Related link: http://conferences.oreillynet.com/etcon2002/
Many of my articles (including my preview of the O’Reilly Emerging
Technology Conference) have enthusiastically promoted the hacker
virtues of flexibility and features. But in the real world,
reliability and security matter just as much. These were the main
themes explored in the talks I attended today at the conference.
So robust, the system runs itself
The morning keynote on Autonomic Computing was given by leading
Internet researcher Robert Morris, who now works for IBM. Researchers
there have examined each aspect of system operation and tried to find
ways to make computers more like live organisms: “self-configuring,
self-protecting, self-healing, self-optimizing.” In other words, IBM
is trying to create systems so reliable they hardly ever need human
intervention.
As examples of current systems that do a pretty good job of relieving
the administrator of responsibility, Morris listed RAID, virus
detection filters, and internal database query optimizers. But the
sterling example was the telephone system’s Electronic Switching
System, which works so well most users never experience a failure.
What did Morris suggest for future directions in autonomic computing?
Expanded RAID disks that do mirroring instead of parity, perhaps with
enough extra disk space that they never need to be administered even
after failures take place.
Database query optimizers that check expected results against actual
results, effectively learning from their mistakes.
Massive Web caching that evens out the loads experienced by different
servers.
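Morris's second suggestion, an optimizer that checks expected against actual results, amounts to a feedback loop. Here is a minimal sketch of the idea; the class and method names are my own invention, not IBM's design or any real database's API:

```python
# Sketch of a query optimizer that "learns from its mistakes": it
# records how far its row-count estimates were off for each table
# and scales future estimates accordingly. Illustrative only.

class LearningOptimizer:
    def __init__(self):
        self.corrections = {}  # table name -> running correction factor

    def estimate_rows(self, table, base_estimate):
        """Apply any learned correction to the raw estimate."""
        return base_estimate * self.corrections.get(table, 1.0)

    def record_actual(self, table, estimated, actual):
        """After execution, fold the observed error back into the model."""
        if estimated <= 0:
            return
        error = actual / estimated
        old = self.corrections.get(table, 1.0)
        # Move part of the way toward the observed error (damped learning).
        self.corrections[table] = old * (0.5 + 0.5 * error)

opt = LearningOptimizer()
est = opt.estimate_rows("orders", 1000)   # initial guess: 1000 rows
opt.record_actual("orders", est, 4000)    # reality was 4x higher
print(opt.estimate_rows("orders", 1000))  # next estimate is corrected upward
```

Real optimizers keep far richer statistics, but the loop of predict, observe, and adjust is the essence of what Morris described.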
A lot of his solutions involved the old trick of “add another level of
indirection.” Thus, multiple operating systems could be run on a
robust base so that an operating-system crash is no worse than an
application crash. Clients could be managed over the network. Most
interestingly, we could replace stock recovery procedures (which are
error-prone) with a system that defines goals, that is, a vision of
what a healthy system should look like, and lets the system find the
procedures to return to health itself.
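That last idea is essentially a reconciliation loop: declare the goal state and let the system compute its own path back to it. A toy sketch, assuming a goal expressed as desired instance counts (the format and names are mine, not Morris's):

```python
# Sketch of goal-directed recovery: rather than scripting "if X
# fails, do Y", we declare what healthy looks like and derive the
# actions needed to get back there. Entirely illustrative.

GOAL = {"web": 3, "db": 1}  # desired number of running instances

def reconcile(goal, observed):
    """Return the actions needed to move the observed state to the goal."""
    actions = []
    for service, wanted in goal.items():
        running = observed.get(service, 0)
        if running < wanted:
            actions.append(("start", service, wanted - running))
        elif running > wanted:
            actions.append(("stop", service, running - wanted))
    return actions

# A web instance has crashed and a stray extra db is running:
print(reconcile(GOAL, {"web": 2, "db": 2}))
# -> [('start', 'web', 1), ('stop', 'db', 1)]
```

The appeal is that no one has to enumerate failure cases in advance; any deviation from the goal, however it arose, produces corrective actions.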
The question I did not get a chance to ask Morris concerned
security. Designing systems that fix themselves involves recognizing
repeated patterns and defining predictable solutions. What about human
intruders who figure out the patterns and exploit them? IBM’s
solutions may work for acts of God, but not acts of man, and
protection against these attacks takes up a lot of system resources.
As it happens, a while back I weblogged about an IBM response to
security concerns. It sounded a lot like what Morris was describing,
and I was as suspicious then as I am now.
Full tilt toward pervasive computing
The urge toward adding levels of indirection continued with Michel
Burger’s talk on how to achieve truly pervasive computing. In his
scenario, one could set up multiple sessions, pick them up later on a
different computer, represent oneself with a different identity in
each one, and generally escape what Burger called “the digital
ghetto.” He spoke only of Web sites, but I believe his model could be
extended to any protocol, which is good because I don’t think all
Internet data has to run over HTTP.
The system was extremely complex, involving regular servers (as we now
have), “user servers” that remember what you do across the other
servers (so you can compare all the books you buy, for instance, not
just the books you’ve bought at one site), and “context servers” that
provide your different identities. The full room of attendees seemed
somewhat stunned by the presentation, although a few people were
articulate enough to question aspects of the system.
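As I understood it, the division of labor among the three kinds of servers could be sketched like this. This is a toy model under my own assumptions; all of the class names and behavior here are mine, and Burger's actual design was far more elaborate:

```python
# Toy model of the three server roles Burger described: regular
# sites (not shown), a "user server" that remembers activity across
# all sites, and a "context server" that supplies a distinct
# identity per site. Illustrative only, not Burger's design.

class ContextServer:
    def identity_for(self, user, site):
        # Hand out a different pseudonym for each (user, site) pair,
        # so no single site sees the user's whole profile.
        return f"{user}@{site}-ctx"

class UserServer:
    def __init__(self):
        self.history = []  # (site, action) pairs across ALL sites

    def record(self, site, action):
        self.history.append((site, action))

    def all_purchases(self):
        # The user can compare everything bought anywhere,
        # not just at one site.
        return [a for s, a in self.history if a.startswith("buy:")]

ctx = ContextServer()
me = UserServer()
me.record("books.example", "buy:field-guide")
me.record("tech.example", "buy:reference-manual")
print(ctx.identity_for("alice", "books.example"))
print(me.all_purchases())
```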
The search for true security
Three more talks showed where the world is moving in search of
authentication and trust.
Three experts on reputation, Roger Dingledine (who wrote about it in
O’Reilly’s Peer-to-Peer book), Jim McCoy of MojoNation fame (now with
Hivecache), and Bryce Wilcox-O’Hearn, delivered a mixture of lessons
about both the need for reputation and the difficulties of attaining
it.
Basically, reputation lets users rate other users and then decide whom
to trust. The general impression left by the speakers was that it’s
very hard to get working. The best approach is to keep it really
simple: figure out exactly what you want to measure (such as uptime),
stick to as few variables as possible, seed the system with external
information of proven validity, and offer users an idiot-proof
interface.
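That advice, measure one thing, keep the variables few, and seed with trusted data, can be sketched in a few lines. This is my own toy scoring scheme, not any speaker's system:

```python
# Toy reputation tracker following the speakers' advice: measure
# exactly one variable (uptime), seed the score with externally
# verified data, and keep the math simple. Illustrative only.

class UptimeReputation:
    def __init__(self, seed_score=0.9, seed_weight=10):
        # Seeding with proven external information means newcomers
        # can't trivially game an empty system.
        self.total = seed_score * seed_weight
        self.count = seed_weight

    def report(self, was_up):
        """Record one observation: was the node up (True/False)?"""
        self.total += 1.0 if was_up else 0.0
        self.count += 1

    def score(self):
        return self.total / self.count

node = UptimeReputation()
node.report(True)
node.report(False)
print(round(node.score(), 3))  # the seeded score drifts with observations
```

Even something this simple illustrates the design tension the panel described: the seed weight decides how quickly real observations can override the imported reputation.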
If nothing else, the speakers were honest: they made it clear they
didn’t have general solutions to offer. While Slashdot and eBay are
famous for their reputation systems, Google may perhaps have the best
one in operation today, despite some known ways to game the system. In
theory, distributed systems are more flexible and robust than
centralized ones, but finding good examples is hard.
I mentioned in an earlier article
that this talk would be of interest to builders of community wireless
networks, and indeed McCoy specifically referred to them in a slide
devoted to what he called “ad-hoc networks.” These present special
difficulties too, because when nodes can freely come and go they are
hard to track.
Another talk on security was given by Rima Patel; this concerned the
more conventional Security Assertion Markup Language (SAML). The goal
of SAML is to let users cross between Web sites with single sign-on.
Like many XML initiatives, this adheres to the common approach of
“let’s take known methods, express them in plain text, and wrap them
in angle brackets.” I do not wish to suggest that SAML is
unsophisticated, though. It seems rich enough to be valuable. It
offers, for example, ways to set conditions that apply to security
assertions, such as time limits or restrictions on who can ask for
them. Patel indicated that SAML is flexible enough to be the basis for
other systems, such as Microsoft’s Passport. But it could be run by
any company that wants to get into the authorization game.
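The conditions Patel mentioned amount to a validity check on each assertion. A rough sketch in the spirit of SAML's NotBefore, NotOnOrAfter, and audience-restriction conditions; the dictionary format and function are my own simplification, not a real SAML library:

```python
from datetime import datetime

# Rough sketch of evaluating conditions on a security assertion,
# modeled loosely on SAML's NotBefore / NotOnOrAfter time window
# and AudienceRestriction. Not a real SAML implementation.

def assertion_valid(assertion, audience, now):
    if now < assertion["not_before"]:
        return False                      # not yet valid
    if now >= assertion["not_on_or_after"]:
        return False                      # expired
    allowed = assertion.get("audiences")
    if allowed is not None and audience not in allowed:
        return False                      # wrong relying party
    return True

a = {
    "not_before": datetime(2002, 5, 14, 9, 0),
    "not_on_or_after": datetime(2002, 5, 14, 9, 5),  # 5-minute window
    "audiences": ["store.example"],
}
print(assertion_valid(a, "store.example", datetime(2002, 5, 14, 9, 2)))   # inside window
print(assertion_valid(a, "store.example", datetime(2002, 5, 14, 9, 10)))  # expired
```

Short validity windows like this are one reason single sign-on can be safer than it sounds: a stolen assertion is only useful briefly, and only at the site it names.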
Finally, I heard Richard Forno give a spirited critique of current
Public Key Infrastructure (PKI) systems. He is a traditional security
guy, careful in his investigations and brutal in his conclusions. I
have heard most of his points made elsewhere (such as by noted
security expert Bruce Schneier, who speaks tomorrow) but the rigor of
Forno’s thinking and the clarity of his presentation were impressive.
Would you buy a user interface from this man?
Reliability remained the theme of the afternoon keynote, if you define reliability as “doing what the user expects you to do.”
Perhaps the most audacious presentation of the day came from
Richard Rashid, who has quite the distinction of providing core
technologies to the two leading operating systems of today (he created
Mach, the basis of the Mac OS X kernel, and worked extensively on Windows NT).
Despite my respect for Dr. Rashid’s work, I was disturbed by his talk
about the next generation of operating systems.
Rashid’s work at Microsoft is part of an experimental class of systems
known as adaptive interfaces. I like the idea of a system that can
query me intelligently (an example of which I’ll describe later in
this article), but I do not like one that presumes to know what I want
and decides whether or not it should interrupt me by watching my
gestures or checking whether I’m on the phone. The latter, however, is
the vision that Microsoft has.
The technologies Rashid showed were very impressive. He promoted “not
just document retrieval, but information retrieval,” and showed slides
of a technology called MindNet that could accept natural-language
queries and return excellent answers.
I accept the notion that the traditional building blocks of
computers–such as files and processes–are old-fashioned and do not
correspond to the way people think. I certainly would like a computer
that recognized my speech, gestures, and handwriting. But having a
microphone or camera monitor me all the time?
The more modest monitoring suggested by Rashid–such as checking a
user’s mailing habits and online calendar–seemed like reasonable
tools to “augment” (Rashid used an old term from Doug Engelbart) the
user’s experience. But they also seemed like an excellent way to
further lock users in to using Microsoft tools. If you want your daily
habits taken into account, you’d better use the integrated system
they provide you for everything.
Furthermore, Rashid’s concept of “user-centered computing” sounds very
individualistic. He didn’t suggest any way to combine the knowledge of
colleagues and peers, which I find much more exciting than having a
computer that tells me which of my mail messages I’ll find important.
It was a long day. Tim O’Reilly began it with a discussion of the
trends that came together to produce this conference. For instance, the Internet is assumed to be present, rather than being “an add-on to the PC.” He stressed that
players should not maneuver to control chokepoints and try to set the
rules, but should let everybody contribute and find value in the new community created by new possibilities. Also, new technologies take time
and should be allowed to develop organically.
Overall, he sees the Web as evolving to become a set of
components. For instance, he pointed out that AOL Instant Messenger
would be a wonderful service to offer for automated use, and predicted
that if AOL fails to develop it that way, another company will swoop
in and take that business away from them.
I also heard O’Reilly author Brian McConnell discuss his Worldwide
Lexicon, which he recently described in an article
on our Web site. It is an interesting combination of human and machine
intelligence. It does not attempt to provide machine translation, but
simply accepts definitions of words and phrases from interested users.
It also represents the kind of adaptive interface I said I liked
earlier: it monitors keyboard and mouse behavior in order to figure
out whether it should bother the user by asking questions.
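That idle-detection idea can be sketched roughly as follows. The threshold and class names are my guesses for illustration, not McConnell's implementation:

```python
# Sketch of an interface that only interrupts an idle user, in the
# spirit of the Worldwide Lexicon's approach: watch input activity
# and ask questions only during lulls. The threshold and the
# activity hook are invented for illustration.

import time

class PolitePrompter:
    def __init__(self, threshold=120):
        self.threshold = threshold       # seconds of quiet before asking
        self.last_activity = time.time()

    def note_activity(self):
        """Called whenever keyboard or mouse input is observed."""
        self.last_activity = time.time()

    def may_interrupt(self, now=None):
        """Only bother the user once they've been idle long enough."""
        now = time.time() if now is None else now
        return (now - self.last_activity) >= self.threshold

p = PolitePrompter(threshold=120)
p.last_activity = 1000.0
print(p.may_interrupt(now=1060.0))  # only 60s idle: stay quiet -> False
print(p.may_interrupt(now=1200.0))  # 200s idle: safe to ask -> True
```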
Stay tuned for tomorrow’s events, particularly a Birds-of-a-Feather
session I’m holding on telecom policy.