
Ten Years After the Future Began

by Andy Oram, O'Reilly editor
12/21/2001

Ten years ago this month, a highly unusual Request For Comments (RFC) appeared. In contrast to the usual descriptions of protocol headers or rules for resource allocation, RFC 1287 was audaciously titled "Towards the Future Internet Architecture." And its team of 32 authors and contributors practically encompassed the field of experts who developed (and still develop) the Internet.

Most of the leading Internet designers I've talked to about RFC 1287 didn't know it existed; those who participated in its creation have practically forgotten it. Yet it was no casual, after-hours speculation. It represented the outcome of a year-long effort by leading Internet researchers involving many meetings and discussions, a very serious inquiry into the critical changes that the Internet needed to grow and keep its strength. The IETF, the Internet Activities Board, and the Internet Research Steering Group were all involved. The authors tell me that a large number of follow-up workshops were also held during the subsequent decade on various topics RFC 1287 addressed.

When I discovered RFC 1287, I decided it would be instructive to interview as many of its creators as I could recruit and to look at the RFC from the standpoint of ten years later. I contacted a number of Internet experts through the grapevine and asked such questions as:

  1. Which predictions in RFC 1287 came true? Which look ridiculous now?

  2. Which recommendations in RFC 1287 turned out to be relevant? Have they been carried out?

  3. If a recognized need is still unmet, how has the Internet managed to work around the problem?

  4. What major developments did RFC 1287 fail to predict and prepare for?

My inquiry began as a retrospective on RFC 1287 in the light of subsequent Internet development. The inquiry soon became just as much a retrospective on Internet development in the light of RFC 1287. In other words, reading the thought processes recorded in that RFC tells us a lot about why certain major events have taken place in the design of the Internet.

Predictions and Recommendations

Let's start with some explicit assumptions and predictions made by RFC 1287. This was an age, remember, when:

  • Internet users generally connected from a research facility in a university or corporation. A few services offered dial-up access from home, providing email and a Unix shell account.

  • The vast majority of Internet hosts were located in the United States. Much traffic ran through a single, government-funded backbone provided by the National Science Foundation.

  • The World Wide Web was a text service used by a handful of physics researchers and other curious experimenters. (In fact, December 1991 marks the appearance of the first U.S. Web site.) The really hot technology of the day was Gopher.

  • The need for security was widely recognized (for instance, the Internet worm was released in 1988) and packet-filtering firewalls had been invented. But both attacks and defenses were fairly primitive by today's standards. Network Address Translation (NAT) was mentioned as a research project in RFC 1287.

Those are just a few facts to set the tone. Major assertions about the future in RFC 1287 included:

  • "The TCP/IP and OSI suites will coexist for a long time," and the Internet "will never be comprised of a single network technology." Consequently, the authors predicted that the Internet would have to expand beyond the IP protocol stack to allow a "Multi-Protocol Architecture."

  • The IP address space had to be expanded. "The Internet architecture needs to be able to scale to 10**9 networks." (That means 1 billion networks, and the total number of end-user termination points could enter the trillions; the back-of-the-envelope calculation below suggests why the 32-bit IPv4 address space could not keep up.) Routing had to be simplified, because routers were becoming burdened with the need to remember too many routes.
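
To put those numbers in perspective, here is a quick sketch in a few lines of Python (my own illustration, not taken from RFC 1287) comparing the 32-bit IPv4 address space, and the 128-bit space that the later IPv6 effort adopted, against the RFC's scaling targets.

    # Rough scaling check (illustrative only; the targets paraphrase RFC 1287).
    IPV4_ADDRESSES = 2 ** 32      # about 4.3 billion addresses in total
    IPV6_ADDRESSES = 2 ** 128     # about 3.4 x 10**38 addresses

    TARGET_NETWORKS = 10 ** 9     # "scale to 10**9 networks"
    TARGET_ENDPOINTS = 10 ** 12   # termination points "in the trillions"

    print(f"IPv4 addresses available: {IPV4_ADDRESSES:,}")
    print(f"Endpoints anticipated:    {TARGET_ENDPOINTS:,}")
    print("IPv4 large enough?", IPV4_ADDRESSES >= TARGET_ENDPOINTS)   # False
    print("IPv6 large enough?", IPV6_ADDRESSES >= TARGET_ENDPOINTS)   # True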

Recommendations for the architectural work included:

  • An expanded address space and a more carefully planned system of allocating addresses. This project, of course, evolved into IPv6.

  • Quality-of-Service (QoS) offerings, which would enable time-sensitive transmissions such as video-conferencing.

  • Better security through firewalling (which they called security at the "network or subnetwork" perimeter), protocols that integrated security such as Privacy Enhanced Mail, and certificate systems to provide distributed access control.

  • Coexistence with non-IP networks through "Application interoperability."

  • Support for new applications through a variety of formats and delivery mechanisms for data in those formats.

That was quite a grab bag. For the most part, the relevance of their recommendations inspires admiration ten years later. The authors of the document could tell where the stress points in the Internet were, and they proposed several innovations that became reality. Yet it's also strange how little the Internet community has accomplished toward some of these goals in the ensuing years.

So, let's see how a decade of intensive Internet growth and innovation matches with the predictions and recommendations in the RFC.

Clothing a Straw Man

A sizeable chunk of RFC 1287 is taken up with speculation about how to live with a multiplicity of competing network protocols for an indefinite period in the future.

Judging "the future relevance of TCP/IP with respect to the OSI protocol suite," the authors rejected the suggestion that we "switch to OSI protocols," and proudly waved their successes in the face of the "powerful political and market forces" who were pushing OSI (Open Systems Interconnection). The authors boasted that "the entrenched market position of the TCP/IP protocols means they are very likely to continue in service for the foreseeable future."

But they also bent over backwards to find ways to accommodate OSI. They even proposed "a new definition of the Internet" based on applications rather than on the Internet Protocol. This Internet would include anyone who could reach an Internet system through an email gateway, which would include the users of Prodigy, CompuServe, and other non-Internet services of the time. The IETF would cooperate with developers of other networks to develop upper-layer protocols that all networks could run, and that would communicate, as mail does, through "application relays" or other means.

Nathaniel Borenstein, the author of the MIME protocol, says that all this material was included largely for the sake of politeness and that many of the designers of the Internet tacitly expected much of it to be rendered moot by the Internet's success: "Because the whole focus of Internet protocols was on interoperability, we were planning to support gateways for as long as there were multiple standards. But gateways at best are nondestructive and usually fail to meet even that modest goal. While we talked about planning for a multiprotocol world, many of us believed that such a world would be just an interim step on the way to a world in which the Internet protocols were pretty much universal, as they are now."


O'Reilly & Associates is the premier source for information about technologies that change the world. In addition to authoritative publications, don't miss these upcoming O'Reilly conferences:

  • O'Reilly's Bioinformatics Technology Conference, January 28-31, 2002, in Tucson, Arizona, explores the engineering, software development, and tool-building aspects of bioinformatics. This conference will deliver knowledge from the biotechnology innovators, rendered into useful skills you can take back to improve the way you do research.

  • O'Reilly's Emerging Technology Conference, May 13-16, 2002, in Santa Clara, California, explores the emergence of a new network--distributed, untethered, and adaptive.


In any case, Internet designers continued to show such deference for years afterward. For instance, RFC 1726, which appeared three years later and was titled "Technical Criteria for Choosing IP The Next Generation (IPng)," says "Multi-Protocol operations are required to allow for continued testing, experimentation, and development, and because service providers' customers clearly want to be able to run protocols such as CLNP, DECNET, and Novell over their Internet connections."

The modest assumption that IP-based networks would be just one of many networking systems is the biggest point on which RFC 1287 shows its age. Indeed, a few interesting concepts from OSI remain in circulation today (LDAP, for instance, derives from the OSI standard X.500), but the Internet has effectively swept OSI from the scene. The Prodigys and CompuServes of the world push their Internet access as a key marketing point. Even the standard voice telephone system is threatened with turning into a conduit for the Internet.

A rich multiprotocol environment does exist, but it is layered on top of the Internet. For instance, HTTP has given rise to SOAP and other protocols; embedded applications in HTML pages also function effectively as new protocols. Thus, the Internet has succeeded in fostering innovation to the point where few alternatives to IP are needed.
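
To make that layering concrete, here is a minimal Python sketch of the pattern: a SOAP-style XML envelope travels as the body of an ordinary HTTP POST, so the "new protocol" asks nothing of the network beyond what IP and HTTP already supply. The host, path, and message are hypothetical placeholders, not a real service.

    # Illustrative sketch: a SOAP-style request riding on plain HTTP.
    # The endpoint (example.com/quote) and the GetQuote message are hypothetical.
    import http.client

    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote>IBM</GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    conn = http.client.HTTPConnection("example.com")
    conn.request("POST", "/quote", body=envelope,
                 headers={"Content-Type": "text/xml; charset=utf-8",
                          "SOAPAction": "urn:GetQuote"})
    response = conn.getresponse()
    print(response.status, response.reason)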
