My Own Talk
My own talk focused on the architectural implications of open versus closed systems, and lessons from the development of Unix and the Internet. Innovation best flourishes with open architectures, in which any individual can bring something to the party. An open architecture doesn't mean that there are no rules; in fact, it usually means that there are strong, clear rules. And one of the rules is something like "play nicely with others." This is going to be important for the world we're building, in which there will, in effect, be a global network "computer." The architecture of that computer's operating system (the subject of our Emerging Technology conference, Building the Internet Operating System) is the environment in which nanotech research and software will be developed, so it's worth paying attention.
This was a version of my usual stump speech, so I won't report in detail here. For numerous other versions, see my archive.
I kicked off the talk by reading a wonderful email message from Mike O'Dell, the former CTO of UUNET. Mike had sent in this message in response to a discussion of the spread of wireless community networks. Mike's message perfectly captures the main thrust of my talk, and puts it freshly, so it's worth quoting in full:
Ah yes -- once again, we see the power of transition from centralized planning to biological process.
Biology will out-innovate the centralized planning "Department of Sanctified New Ideas" approach every time, if given half a chance. Once the law of large numbers kicks in, even in the presence of Sturgeon's Law ("90% of everything is crap."), enough semi-good things happen that the progeny are at least interesting, and some are quite vigorous.
"Nature works by conducting a zillion experiments in parallel and most of them die, but enough survive to keep the game interesting." (The "most of them die" part was, alas, overlooked by rabid investors in the latter 90s.)
This is how the commercial Internet took off, and this remains the central value of that network --
You don't need permission to innovate.
You get an idea and you can try it quickly. If it fails, fine -- one more bad idea to not reinvent later. But if it works, it takes off and can spread like wildfire. Instant messaging is a good example, even though ultimately complicated by ego and hubris. Various "peer-to-peer" things are an even better one.
Wifi is evolving the same way. A bit of great enabling technology made possible by a fortuitous policy accident in years long past, a few remarkable hacks, a huge perceived value proposition (and I don't mean "free beer"), and presto! You have a real party going on in the Petri dish.
Clarifying note: during my tenure at UUNET, I described the real business as operating a giant Petri dish -- we kept it warm, we pumped in nutrients, and we made it bigger when it filled up. And people paid us money to sit in the dish and see what happened.
And so it goes.
From this launching pad, I reviewed some of the lessons of open source -- create an architecture of participation (a la Unix and the Internet -- new programs can be first-class citizens without anyone's permission), create a modular architecture so things aren't too complicated at any one point, and remember that giving away "intellectual property" can be a strategic choice that advances adoption of a technology. Holding things too close to your vest can hold you back.
In particular, I addressed some of the concerns about the erosion of the public domain that are the subject of Lessig's book, The Future of Ideas. As applied to nanotechnology, it's easy to imagine a world in which the fundamental design of "things" is considered intellectual property. (We already see patents on organisms.) We could be facing a land grab like nothing we've ever seen if the promise of nanotech comes true, and we haven't resolved the IP issues. It's better to imagine a world in which designs for things are swapped a la Napster than one in which you have to pay a tax to someone who filed the paperwork first. And of course, the future is likely to have a mix of both these approaches.
David D. Friedman: Strong Cryptography Meets The Transparent Society
David D. Friedman, a professor at Santa Clara University School of Law, and author of the books The Machinery of Freedom, Hidden Order: The Economics of Everyday Life, Price Theory, and the forthcoming Future Imperfect (a draft of which is online at daviddfriedman.com), gave a fascinating talk about the balance between privacy and transparency.
About eight years ago, Friedman got interested in the ideas of the cypherpunks and wrote a paper, "Strong Privacy: The Promises and Perils of Strong Encryption." Its basic argument was that strong crypto and digital signatures create a world in which we can have online identity without having to reveal real-world identity -- we can have both anonymity and recognition. This set of technologies has some interesting consequences, both good and bad.
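The idea that a key pair can serve as an identity in its own right lends itself to a small illustration. The sketch below is my own addition, not something from the talk: a toy Lamport one-time signature scheme, built from nothing but a hash function in Python's standard library. The public key acts as the pseudonymous "identity" -- anyone can verify that a message was signed by its holder, without ever learning who that holder is in the real world. Function names like `keygen` and `sign` are my own choices, and a production system would of course use an established signature library rather than this sketch.

```python
import hashlib
import secrets

def keygen():
    # Private key: 256 pairs of random 32-byte secrets, one pair per bit
    # of a SHA-256 message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hash of each secret. Publishing pk creates a
    # pseudonymous identity with no link to a real-world name.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _digest_bits(msg):
    # The 256 bits of the message digest, most significant bit first.
    d = hashlib.sha256(msg).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret from each pair, selected by the digest bits.
    # NOTE: a Lamport key is one-time -- signing two messages with the
    # same key leaks enough secrets to allow forgery.
    return [pair[b] for pair, b in zip(sk, _digest_bits(msg))]

def verify(pk, msg, sig):
    # Each revealed secret must hash to the published half of its pair.
    return all(hashlib.sha256(s).digest() == pair[b]
               for pair, b, s in zip(pk, _digest_bits(msg), sig))

sk, pk = keygen()
msg = b"I vouch for this design."
sig = sign(sk, msg)
print(verify(pk, msg, sig))        # a valid signature checks out
print(verify(pk, b"tampered", sig))  # a changed message does not
```

The point of the sketch is Friedman's: recognition comes from the key, not the person. A persistent pseudonym can build a reputation simply by signing with the same key over time.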
David Brin's The Transparent Society, by contrast, portrays a very different world, one in which surveillance has become so capable that there is no privacy. Crypto gives a level of privacy people have never known; surveillance gives a level of transparency we've never known. Privacy through obscurity cannot survive modern data processing -- face and image recognition, for example. Brin's solution is that everyone gets to watch: we watch the authorities as they watch us.
Friedman's question is: how do the crypto and transparency worlds relate? Strong encryption doesn't help if there's a mosquito-sized camera watching my keystrokes -- but we can imagine mechanisms to make that more difficult. That's one of the critical variables. If you can monitor the interface, then crypto isn't any good. But if you can't, then transparency isn't. So if we get to interfaces like a direct brain interface, or maybe even just subvocalization of some kind, surveillance is less effective.
In short, cyberspace privacy and real-world transparency can each defeat the other to some extent. A lot depends on the answer to the question: how big is cyberspace? If all it consists of is sending text messages back and forth, it's not very interesting. But virtual reality (VR) is becoming more significant. VR today works by brute force through the senses; eventually, it will go direct to the brain. You can think of this as cracking the dreaming problem: how does the brain encode sensory signals? We've all had vivid sensory experiences while asleep -- that's "deep VR" (a term Friedman prefers to "full immersion VR").
If we have a protected interface, and most of what happens to us that matters happens in cyberspace, Brin's vision doesn't matter.
A few other random tidbits that came up in digressions from the main talk:
Will paternity testing change human behavior? An awful lot of social mores are rooted in the desire for certainty about paternity.
In an increasingly digital world, one of the limitations of surveillance is the ease of forgery.
The book The Red Queen makes the point that the reason for sex is to keep scrambling the combination to our genetic lock faster than intruders can attack it.
Tim O'Reilly is the founder and CEO of O'Reilly Media, Inc., thought by many to be the best computer book publisher in the world. In addition to Foo Camps ("Friends of O'Reilly" Camps, which gave rise to the "un-conference" movement), O'Reilly Media also hosts conferences on technology topics, including the Web 2.0 Summit, the Web 2.0 Expo, the O'Reilly Open Source Convention, the Gov 2.0 Summit, and the Gov 2.0 Expo. Tim's blog, the O'Reilly Radar, "watches the alpha geeks" to determine emerging technology trends, and serves as a platform for advocacy about issues of importance to the technical community. Tim's long-term vision for his company is to change the world by spreading the knowledge of innovators. In addition to O'Reilly Media, Tim is a founder of Safari Books Online, a pioneering subscription service for accessing books online, and O'Reilly AlphaTech Ventures, an early-stage venture firm.