The weekend of April 27-28, I spoke at the Foresight Senior Associates Gathering in Palo Alto. Foresight focuses on nanotechnology and related topics, including advanced software and human life extension. Here's my recap of the talks I attended:
Christine Peterson, President of the Foresight Institute, introduced the gathering.
The thrust of her talk was captured in her closing lines:
"If you're trying to look ahead long term and it looks like science fiction, you might be wrong. But if you're trying to look ahead long term and it doesn't look like science fiction, you're definitely wrong."
Leading up to that point (which was hammered home by succeeding speakers), she made a number of interesting observations. I've captured the ones I found most interesting (this is not a complete summary of the talk):
The military is often the social group that does the best job of looking ahead long term. It's their job to forecast possible threats, and they are paying a lot of attention to nanotechnology. [However, she hasn't seen any confirmation of the rumors that they are trying to guide or shut down certain areas of research.]
VCs are starting to take notice. [And in fact, she had four VCs, including Steve Jurvetson, on a panel later in the day.]
There are two components to nanotechnology: stuff and bits. Nanotechnology is ultimately about manufacturing "stuff," but manufacturing at this level involves software, and controlling nano stuff will also require a lot of bits. So nanotechnology and next-generation software go hand in hand.
The laws of physics allow us to predict technical advances up to limits allowed by nature. But laws of economics and human nature impose other limits, making it difficult to provide any precise time estimates.
The goal of nanotechnology is direct control down to the molecular level. This is a long term project. But now the term is being used not just for true atomic-level manipulation, and in the VC world, we're seeing it used for a shorter term view of "machines smaller than micro technology" -- some people call this nanoscale bulk technology. [In the next talk, Ralph Merkle called this second class of technologies "nanoscale science and engineering."] This is a change from the way long-term nanotech advocates have been using the term, but Peterson's advice to the audience was to "get over it," since terminology drift is what happens as ideas are embraced by new audiences.
Nanotech involves a paradigm shift; acceptance has a generational component. Like an iceberg, acceptance is below the surface of academia and industry. But as younger people rise to the top of organizations, we'll see this come into the open.
Nanotech looks like chemistry; it takes millions of dollars to set up ... but for machine intelligence (advanced software) -- the control side -- money and visibility levels decrease every year.
People overestimate what will be done in two years, and underestimate what will be done in ten; medium-term predictions are the hardest. This will take a while. So try to have fun.
The next speaker was Ralph Merkle. Before going into his list of recent achievements, he gave a few introductory remarks:
There are three trends in manufacturing: greater flexibility, greater precision, and lower cost. Nanotechnology is about advances in all three.
Two fundamental ideas of nanotech are:
Positional assembly. Experimentally, we can pick up and move molecules -- but this is still noteworthy, rather than routine.
Self-replication. Systems that make copies of themselves. We know this is possible because biological systems do this. Biological systems also show us how small structures differentiate themselves into subsystems of various kinds, and can ultimately make very big structures.
Ralph's "noteworthy rather than routine" comment was a key to his talk. Right now, these are fairly tremendous achievements, but nanotech won't really be on the map until they are a matter of course.
If nanotech achieves its goals, manufacturing costs will be unbelievably low by today's standards. Today, agricultural products are about $1/kg. In the future, almost any product will be about $1/kg.
Ralph went on from there to discuss what he thought were some significant recent achievements. Note that he didn't supply all of the URLs linked to here. I've Googled for what I think are the appropriate links, but I may not always be right.
Manipulation and bond formation of iodobenzene by STM (Scanning Tunneling Microscopy). This research has demonstrated that direct molecular manipulation and bond formation is feasible, but it's still hard; we need to reach a state where this is routine and possible with lots of molecules.
DNA Sequencing with Nanopores. If you can make a very small and very precise hole, you can solve a whole class of problems, of which gene sequencing is only one example.
Modification of Virus shells. When it comes to nanotechnology, viruses are way ahead of us. Recent research piggybacks on this fact by modifying virus shells rather than building complex molecules from scratch. For example, in one experiment, the Cowpea Mosaic Virus was modified to incorporate gold into its shell, perhaps one day enabling a new methodology for mineral extraction. [Ralph didn't mention it, but there was another news item on this front the other day. Scientists have used genetically-engineered viruses to pick up zinc sulfide molecules, with potential application in building semiconductors.]
DNA Motors. Ned Seeman has demonstrated that you can shift DNA from single- to double-stranded by introducing complementary strands, with the result being the development of "sliding struts" -- a three-phase motor. There is also some older work by Montemagno at UCLA -- interfacing biological motors to silicon systems.
Nanotube inverter. An IBM team led by Phaedon Avouris created an inverter -- a basic computer device -- by draping a bucky tube over a prepared surface to make a type of nanoscale transistor.
NRAM from Carbon Nanotubes. Startup Nantero is working on building what they call NRAM out of carbon nanotubes -- "NRAM will be considerably faster and denser than DRAM, have substantially lower power consumption than DRAM or flash, be as portable as flash memory, and be highly resistant to environmental forces (heat, cold, magnetism)."
IBM's Millipede. Merkle commented that "While electronics is interesting, it isn't really quite what we're talking about. We want manufacturing." IBM Zurich's Millipede, which uses an array of STM tips, gives the first inkling that we are building a core of molecular manufacturing capabilities.
Merkle went on from there to discuss what we need to do. We have to start with the goal, then work backwards to what we need to achieve it, much like retrosynthetic analysis in chemistry. He gave the illustration of something he called a respirocyte -- a tiny vessel of compressed oxygen, essentially an artificial red blood cell -- that could theoretically be injected in quantity into the bloodstream and would then provide oxygen for an hour. This is a thing of value that we "could do" once we have the basic tech. We need more examples like this to inspire research on "how to" capabilities.
In particular, we need systems designs for molecular manufacturing. Here are two systems architectures currently being explored:
Exponential assembly (work done at Merkle's company, Zyvex). It's not self-replicating, but does work for large numbers of devices on a surface: first you pick up one, then you pick up two, then you pick up four, and so on.
Convergent assembly. You take parts that are small, put them into successively bigger parts. 30 doublings takes you from one nanometer to one meter.
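The doubling arithmetic behind both architectures is easy to check. A quick sketch of the numbers (my own illustration, not code from the talk):

```python
# Convergent assembly: each stage combines small parts into parts twice
# as large, so 30 doublings take you from a nanometer to beyond a meter.
size_m = 1e-9 * 2 ** 30
print(size_m)  # ~1.07 meters

# Exponential assembly: each cycle doubles the number of working devices,
# so even without true self-replication the device count grows as 2**n.
cycles = 20
print(2 ** cycles)  # 1048576 devices after 20 cycles
```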
But this is just the tip of the iceberg. We need more molecular manufacturing systems designs. The problem is that development times are more than 10 years, most companies' planning horizons are less than 10 years, and research funding is not focused on systems.
Merkle closed with an admonitory story: Babbage had designed a stored program computer back in 1842; computers were not reinvented until a century later. That delay was not fundamental or necessary. You can get long delays if you don't pursue a technology aggressively. Nanomolecular manufacturing is possible, but will we have the will to pursue it?
During the Q&A, someone (from HP, if I recall) asked whether getting to computers didn't depend on vacuum tubes. Merkle replied that Babbage's mechanical design wasn't adopted because it was too slow, and human labor was cheaper. But Babbage missed an available technology that could have made a faster computer, even then: someone might have realized it didn't have to be mechanical, and could have been done with relays. There were other designs within the design space that could have been explored. Technology can describe possibilities, but it can't force people to choose them.
(Frederick Turner, who was sitting next to me, quietly pointed out another "missed opportunity" even longer ago. Hero of Alexandria first invented a steam engine almost 2000 years before the technology was widely adopted.)
Another question: Where will products first appear? Answer: Computers will be the first product that people will focus on. People already understand, accept the idea of big advances there. We should also look to materials science - strong, light materials (buckytubes specifically) - and medical applications.
Ray Kurzweil was up next. He opened with: "I'm an inventor, and that's what made me interested in trend analysis: inventions need to make sense in the world where you finish a project, not the world in which you start the project."
The paradigm shift rate itself is accelerating. Progress through the entire 20th century is equivalent to 20 years at today's rate. Few people really internalize the implications of exponential change. Self-replicating machines will take 100 years at today's rate of progress, but Ray expects them in only 25 because of the exponential increase of progress itself.
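Kurzweil's numbers fall out of assuming that the rate of progress itself grows exponentially. Here's a rough sketch of that arithmetic; the ten-year doubling period is my illustrative assumption, not a figure from the talk:

```python
import math

# If the rate of progress doubles every `d` calendar years, then progress
# accumulated after t calendar years -- measured in "years at today's
# rate" -- is the integral of 2**(x/d) from 0 to t:
def progress(t, d=10.0):
    return d / math.log(2) * (2 ** (t / d) - 1)

# How long until we accumulate 100 "today-rate" years of progress?
t = 0.0
while progress(t) < 100:
    t += 0.01
print(round(t, 1))  # about 30 calendar years -- the same ballpark as Ray's 25
```

A shorter assumed doubling period pulls the answer closer to Kurzweil's 25-year figure; the point is only that exponential acceleration compresses a century of today's progress into a few decades.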
Evolution works by indirection. It creates some products, but then those products create the conditions for the creation of the next round of products. In particular, this happens when there is a means of recording progress.
The first step in evolution took billions of years. Once we got to DNA, the acceleration itself accelerated: the Cambrian explosion took only a few tens of millions of years, Homo sapiens a couple of hundred thousand. Now that product has built new ways of recording and sharing info.
Ray showed slides of lots of different exponential growth curves. While the talk was not identical, a lot of the ideas can be found in a talk that Ray gave at a Business Week conference in December 2001, excerpts of which are found on Ray's site. So I'm not going to repeat the details here.
A key prediction regarding computing was that today, $1000 will buy you a computer with the complexity of a mouse brain. By 2020, $1000 will buy a computer with the complexity of the human brain, and by 2030, perhaps 1000 times that. But actually, the human brain has lots of redundancy, so we probably don't need to match all 20 billion MIPS; cognition uses only about a thousandth of the brain's theoretical capacity. So this may happen sooner.
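The 2020 date is just doubling arithmetic. A sketch of how it works out; the 1000x mouse-to-human complexity ratio and the 18-month price-performance doubling period are my illustrative assumptions, not figures from the talk:

```python
import math

mouse_to_human = 1000   # assumed complexity ratio between the two brains
doubling_years = 1.5    # assumed price-performance doubling period

doublings = math.log2(mouse_to_human)   # ~10 doublings needed
years = doublings * doubling_years
print(round(years, 1))  # ~14.9 years: from 2002, that lands in the late 2010s
```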
He pointed to some work by Lloyd Watts on reverse engineering the human brain.
By 2010 -- computers disappear. Images are piped directly to our retinas, with ubiquitous high bandwidth electronics embedded in the environment.
By 2029 -- reverse engineering of human brain completed. Computers pass the Turing test. Non-biological intelligence combines the subtlety and pattern recognition of human intelligence with the speed, storage capacity, and information-sharing ability of computers. Nanobots provide neural implants that are noninvasive, surgery free, distributed to millions of points in the brain. There will be full-immersion virtual reality.
People will beam their full experience out on the net, a la webcams, including the neurological component of their emotions. By 2050, humans will need to be augmented to keep up.
Q: The spiny echidna and other monotremes actually have the largest frontal cortex [presumably proportionally to the rest of their brain, not in an absolute sense] -- this is the hardware for remembering and processing. In human adolescence, there's a huge amount of pruning that goes on. In order to be able to do something, we need to be able to forget. Analogously, really rich people choose empty space and empty time. In the future you're portraying, won't we have to get better at getting rid of stuff?
Ray: This is an argument for death. :-) But right now, our software is dependent on our hardware. Death is a hardware crash. But software only lives if we care about it. Info only stays alive if continually maintained. Our lives will ultimately be in our own hands, and we'll retain what we care about.
Pattern recognition is the constant destruction of information and the replacement by abstracted patterns. "The ultimate ontological reality is patterns."
Q: I've seen your curves, and I accept them. But if this is inevitable, why should we be working towards it?
Ray didn't answer this question directly, which was a shame. Of course, the answer is because of local optimization. A lot of things may be happening to our society, but the benefits often accrue disproportionately to those who are closest to the heart of the change.
In his talk, Stewart Brand (originally famous for the iconic Whole Earth Catalog) assumed that the audience was already familiar with the Long Now Foundation; his talk focused on why he's created a follow-on Long Bets Foundation. If you aren't familiar with the Long Now, definitely visit their Web site. In addition to the general information there, be sure to check out Brian Eno's account of where the name came from, "The Big Here and the Long Now." (I heard Brian give this as a talk a few years ago, when he was helping publicize Stewart's book, The Clock of the Long Now. Both the article and the book are well worth a read.)
Here's a brief synopsis of the talk.
The goal of the Long Now Foundation is to foster long term thinking. We want to debate intelligently, and remember and revisit the debates. We started to wonder if we could make record-keeping of our thinking self-documenting, and not boring. So we came up with the Long Bets Foundation. Let's make it fun.
The basic idea is that people place long term bets, with arguments for those bets (pro and con), and with the proceeds going to charity. For example, Ray Kurzweil bet Mitch Kapor $10,000 on whether the Turing test would be passed by a computer before 2029. The arguments are what counts. A great way to frame a debate at a particular point in time.
Stewart pointed out that a long bet can be interesting even if no one takes the bet. For example, Martin Rees placed the "open bet" that there will be one million deaths from bioterror by 2020. So far, no one has stepped forward to take the negative side of the bet. This is itself interesting.
Some hacks that make long bets work:
A: The US soccer team will win the World Cup before the Red Sox win world series. (This is really a bet about immigration and globalization, not sports.)
Q: Robin Hanson's Idea Futures seems to have advantages over your system. His users can make money ...
A: I sense a bet there.
I skipped out of the afternoon sessions, to go to Kevin Kelly's birthday party instead.
Neil Jacobstein of the Institute for Molecular Manufacturing gave a report on the Foresight Guidelines for responsible nanotechnology.
Especially with the current backlash from 9/11, and the fear of out-of-control self replication (the "grey goo"), it's important to do some "meme engineering." Nanotechnology advocates are still fighting the old war of convincing people that nano is possible, while the new issue is fighting the idea that it's dangerous and that the risks outweigh the benefits.
According to Jacobstein, the risk of not pursuing nanotechnology research is actually higher. A growing world population with increased affluence requires this for basic maintenance of the environment, if nothing else, since only nano offers a real hope of low-impact affluence.
There's also the risk that if we don't do it others will. Criminalizing nano and biotech (as in stem cell and cloning research) is possible. But the counter-meme is that there's a global market. There's more risk if nano is driven underground.
There is a future risk of abuse by terrorism. Unfortunately there are no guidelines to avoid that. There is always a cycle of attack and defense. However, the focus with regard to terrorism should be accountability, transparency, and so forth, with regard to research. The real way to address the risk is to deal with the horrible conditions many people live in, to alleviate the reasons for terrorism. And if nano lives up to its long-term promise, it may be the best way to do that.
There's a worry that nano will lead to uncertain futures. However, we are facing a known bad future.
There's a worry that we are usurping nature or God. But current technology does a lousy job of interacting with nature, so nano won't be worse. And in fact, the current situation will get worse as more people adopt our current tech.
Here are eight things nanotech advocates have to do:
Make Necessary Distinctions. Get the facts out. For example, we have to distinguish between nanoscale science and engineering (MEMS, carbon nanotubes) vs. true molecular nanotech (molecular assemblers).
Make Tradeoffs Explicit. We talk about material abundance with zero emissions, but we must address security procedures. We talk about cleaning up the environment, but we don't talk about the possibility of interacting with the environment in unknown ways. If MNT (molecular nanotech) enables desktop matter compilation, it will require innovative controls.
Produce Compelling and Specific Benefits. There's a Catch-22: we can be grounded and specific (but boring), or show strong benefits with a science-fiction taint. We hit the extremes: modeling molecular gears today, or describing far-future technology. We need a realistic roadmap of the messy interim.
Look at relative risks and credible safeguards. There are biases in human cognition: people adapt to current risks, but rate unknown risks much higher. E.g., we all have self-replicating critters right now -- bacteria and viruses represent an "ancient molecular manufacturing system." There are also lots of risks that we accept today that could be ameliorated by nanotech. But we do need guidelines. [At this point, Neil walked through the guidelines but since they are on the Web, I won't recap them here.]
Neil noted, though, that these guidelines are good against accidental misuse, but not against terrorism.
Emphasize common context and values. We occupy a very small world with increasing transparency and accountability. We all share the risks of continuing use of today's deadly technologies. There are common values in US -- democracy, pluralism, capitalism, freedom of choice, environment, religious tolerance, technical and economic progress. The vast global majority wants technology to bring them a higher standard of living, health, and safety, and these values are part of the story.
Depolarize the dialog. We have to get out of right vs. wrong, and moral vs. immoral. We can argue that not developing nanotech is immoral. We can't let anti-tech people take the moral high ground. All of this debate is values-driven, on both sides.
Provide opportunities for involvement rather than protest. Let's get opponents involved in the debate, rather than treating them as persona non grata. We all have a lot to learn.
Use professional PR practices. Opponents of technology have figured out how to use professional PR and lobbying techniques. We rely too much on preaching to the converted, and messages that are too complex.
My own talk focused on the architectural implications of open versus closed systems, and lessons from the development of Unix and the Internet. Innovation best flourishes with open architectures, in which any individual can bring something to the party. An open architecture doesn't mean that there are no rules; in fact, it usually means that there are strong, clear rules. And one of the rules is something like "play nicely with others." This is going to be important for the world we're building, in which there will, in effect, be a global network "computer." The architecture of that computer's operating system (the subject of our Emerging Technology conference, Building the Internet Operating System) is the environment in which nanotech research and software will be developed, so it's worth paying attention.
This was a version of my usual stump speech, so I won't report in detail here. For numerous other versions, see my archive.
I kicked off the talk by reading a wonderful email message from Mike O'Dell, the former CTO of UUNET. Mike had sent in this message in response to a discussion of the spread of wireless community networks. Mike's message perfectly captures the main thrust of my talk, and puts it freshly, so it's worth quoting in full:
Ah yes -- once again, we see the power of transition from centralized planning to biological process.
Biology will out-innovate the centralized planning "Department of Sanctified New Ideas" approach every time, if given half a chance. Once the law of large numbers kicks in, even in the presence of Sturgeon's Law ("90% of everything is crap."), enough semi-good things happen that the progeny are at least interesting, and some are quite vigorous.
"Nature works by conducting a zillion experiments in parallel and most of them die, but enough survive to keep the game interesting." (The "most of them die" part was, alas, overlooked by rabid investors in the latter 90s.)
This is how the commercial Internet took off, and this remains the central value of that network --
You don't need permission to innovate.
You get an idea and you can try it quickly. If it fails, fine -- one more bad idea to not reinvent later. But if it works, it takes off and can spread like wildfire. Instant messaging is a good example, even though ultimately complicated by ego and hubris. Various "peer-to-peer" things are an even better one.
Wifi is evolving the same way. A bit of great enabling technology made possible by a fortuitous policy accident in years long past, a few remarkable hacks, a huge perceived value proposition (and I don't mean "free beer"), and presto! You have a real party going on in the Petri dish.
Clarifying note: during my tenure at UUNET, I described the real business as operating a giant Petri dish -- we kept it warm, we pumped in nutrients, and we made it bigger when it filled up. And people paid us money to sit in the dish and see what happened.
And so it goes.
From this launching pad, I reviewed some of the lessons of open source -- create an architecture of participation (a la Unix and the Internet -- new programs can be first-class citizens without anyone's permission), create a modular architecture so things aren't too complicated at any one point, and remember that giving away "intellectual property" can be a strategic choice that advances adoption of a technology. Holding things too close to your vest can hold you back.
In particular, I addressed some of the concerns about the erosion of the public domain that are the subject of Lessig's book, The Future of Ideas. As applied to nanotechnology, it's easy to imagine a world in which the fundamental design of "things" is considered intellectual property. (We already see patents on organisms.) We could be facing a land grab like nothing we've ever seen if the promise of nanotech comes true, and we haven't resolved the IP issues. It's better to imagine a world in which designs for things are swapped a la Napster than one in which you have to pay a tax to someone who filed the paperwork first. And of course, the future is likely to have a mix of both these approaches.
David D. Friedman, a professor at Santa Clara School of Law, and author of the books The Machinery of Freedom, Hidden Order: The Economics of Everyday Life, Price Theory, and the forthcoming Future Imperfect (a draft of which is online at daviddfriedman.com), gave a fascinating talk about the balance between privacy and transparency.
About eight years ago, Friedman got interested in the ideas of the cypherpunks. He wrote a paper -- "Strong Privacy: The Promises and Perils of Strong Encryption." The basic argument was that strong crypto and digital signatures create a world in which we can have online identity without having to reveal real-world identity -- we can have both anonymity and recognition. This set of technologies has some interesting consequences, both good and bad.
But then Brin's Transparent Society portrays a very different world, in which surveillance is so much better that there's no privacy. Crypto gives a level of privacy people have never known. Surveillance gives a level of transparency we've never known; privacy through obscurity cannot survive modern data processing -- face/image recognition. Brin's solution is that everyone gets to watch. (We watch the authorities as they watch us.)
Friedman's question is: how do the crypto and transparency worlds relate? Strong encryption doesn't help if there's a mosquito-sized camera watching my keystrokes -- but we can imagine mechanisms to make that more difficult. That's one of the critical variables. If you can monitor the interface, then crypto isn't any good. But if you can't, then transparency isn't. So if we get to interfaces like a direct brain interface, or maybe even just subvocalization of some kind, surveillance is less effective.
In short, cyberspace privacy and real-world transparency can each defeat the other to some extent. A lot depends on the answer to the question: how big is cyberspace? If all it consists of is sending text messages back and forth, it's not very interesting. But as virtual reality (VR) becomes more significant ... VR today is brute force through the senses; eventually, it will be direct to the brain. You can think of this as cracking the dreaming problem: how does the brain encode sensory signals? We all experience vivid sensory input while we sleep -- that's "deep VR" (a term Friedman prefers to "full immersion VR").
If we have a protected interface, and most of what happens to us that matters happens in cyberspace, Brin's vision doesn't matter.
A few other random tidbits that came up in digressions from the main talk:
Will paternity testing change human behavior? An awful lot of social mores are rooted in the desire for certainty about paternity.
In an increasingly digital world, one of the limitations of surveillance is the ease of forgery.
The book The Red Queen makes the point that the reason for sex is to keep scrambling the combination to our genetic lock faster than intruders can attack it.
Tim O'Reilly is the founder and CEO of O’Reilly Media Inc. Considered by many to be the best computer book publisher in the world, O'Reilly Media also hosts conferences on technology topics, including the O'Reilly Open Source Convention, Strata: The Business of Data, the Velocity Conference on Web Performance and Operations, and many others. Tim's blog, the O'Reilly Radar "watches the alpha geeks" to determine emerging technology trends, and serves as a platform for advocacy about issues of importance to the technical community. Tim is also a partner at O'Reilly AlphaTech Ventures, O'Reilly's early stage venture firm, and is on the board of Safari Books Online, PeerJ, Code for America, and Maker Media, which was recently spun out from O'Reilly Media. Maker Media's Maker Faire has been compared to the West Coast Computer Faire, which launched the personal computer revolution.
Copyright © 2009 O'Reilly Media, Inc.