Ten Years After the Future Began

IPv6 Addressing and Its Alternatives

Internet visionaries had already decided by 1991 that the 4 billion individual addresses allowed by the current 32-bit IP address would not be enough. In the future we may face a situation where every DNA molecule in the universe requires its own IP address. In 1991, the threat of address exhaustion loomed even closer than it does now, because of the inflexible system of allocating Class A, Class B, and Class C addresses. The continual subdivision of the address space also began to weigh down routers, which were required to know how to reach large portions of it.
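
To put numbers on the problem, here's a quick back-of-the-envelope calculation (mine, not the RFC's) in Python showing how the rigid class sizes squandered a 32-bit space:

    # Illustrative arithmetic only: the three classful block sizes.
    TOTAL = 2 ** 32          # roughly 4.3 billion IPv4 addresses in all
    CLASS_A = 2 ** 24        # 16,777,216 host addresses per network
    CLASS_B = 2 ** 16        # 65,536
    CLASS_C = 2 ** 8         # 256

    print(f"{TOTAL:,} addresses in total")

    # A site with 2,000 hosts outgrew a Class C, so it was handed a Class B
    # and left most of the block idle -- exhaustion by rigidity.
    print(f"{2_000 / CLASS_B:.1%} of the Class B actually used")   # about 3%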



The RFC 1287 authors pinpointed the problems with address assignment and routing tables that have become a central concern of many Internet researchers. The authors recognized that changing the IP address size and format presented major difficulties and would require transitional measures, but they declared that a thorough overhaul was the only way to solve the problems with address assignment and the proliferation of routes.

IPv6 is the main outcome of this research. Yet a lot of observers suggest, with either scorn or despair, that IPv6 will never be put into practice. Despite important steps forward like the implementation of IPv6 in routers and operating systems, one would be hard put to find an IPv6 user outside of private networks or research environments like the Internet2 networks. Other observers acknowledge that changes on this scale take a long time, but they claim that IPv6 is critical and therefore that its spread is inevitable.

RFC 1287 anticipated that "short-term actions" might be found to give the Internet "some breathing room." And indeed, that is what happened. Classless Inter-Domain Routing extended the class system in a simple way that created the needed flexibility in address assignment. Network Address Translation allowed organizations to make do with displaying a single IP address (or something on the order of a Class C network) to the world.
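
As a rough sketch of what CIDR bought us, Python's ipaddress module can illustrate both the arbitrary prefix lengths and the route aggregation; the prefixes below are purely illustrative:

    import ipaddress

    # CIDR allows prefixes of any length rather than the fixed /8, /16, /24
    # class boundaries, so an allocation can match what a site actually needs.
    site = ipaddress.ip_network("203.0.112.0/21")      # illustrative prefix
    print(site.num_addresses)                          # 2048 addresses

    # CIDR also lets a provider aggregate contiguous customer blocks into a
    # single route announcement, easing the pressure on routing tables.
    customers = [ipaddress.ip_network(f"203.0.{i}.0/24") for i in range(112, 120)]
    print(list(ipaddress.collapse_addresses(customers)))   # [203.0.112.0/21]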

The path of least resistance has won out. The RFC authors themselves recognized the possibility that an old proverb (sometimes reversed) would apply: the good is the enemy of the best. In any case, these issues in RFC 1287 were right on target, and were handled elegantly.

Some Comments on Addressing and Routing

If the plans of the Internet designers and device manufacturers come to fruition, the Internet may become one of the largest and most complex systems ever built. The traditional way to manage size and complexity is through the hierarchical delegation of tasks, and the routing infrastructure on the Internet certainly follows that model. Recent proposals for address allocation reinforce the important role played by the organization that routes the packets to each end user.

Addresses for private networks are a fixture of IP. As NAT shows, wherever a user reaches the Internet through a gateway, tricky addressing schemes can sidestep the need for a large, Internet-wide address space. The results aren't pretty, and they are blamed for holding back a wide range of new applications (especially peer-to-peer), but they still dominate today's networks. Mobile phones always talk to the larger network through a gateway in the cell, so each phone company could create a special mobile-phone address space and translate between it and the addresses that outsiders use to reach a phone.
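
A toy model of the bookkeeping such a gateway performs may make the trade-off clearer. This is only an illustration of the idea, not any real NAT implementation:

    import itertools

    # Many private hosts share one public address by being mapped onto
    # distinct public ports at the gateway.
    PUBLIC_ADDRESS = "192.0.2.1"          # the single address shown to the world
    _next_port = itertools.count(40000)   # arbitrary starting port for mappings
    _outbound = {}                        # (private ip, private port) -> public port
    _inbound = {}                         # public port -> (private ip, private port)

    def translate_outbound(private_ip, private_port):
        """Return the (public address, public port) used for this private socket."""
        key = (private_ip, private_port)
        if key not in _outbound:
            port = next(_next_port)
            _outbound[key] = port
            _inbound[port] = key
        return PUBLIC_ADDRESS, _outbound[key]

    def translate_inbound(public_port):
        """Map a reply arriving on a public port back to the private socket."""
        # Unsolicited inbound traffic has no mapping -- one reason NAT is
        # blamed for holding back peer-to-peer applications.
        return _inbound.get(public_port)

    # Two hosts on a private (RFC 1918) network appear as one public address:
    print(translate_outbound("10.0.0.5", 12345))   # ('192.0.2.1', 40000)
    print(translate_outbound("10.0.0.9", 12345))   # ('192.0.2.1', 40001)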

Some commentators think that mobile phones and other devices will be the force driving the adoption of IPv6. That would probably be beneficial, but phone companies don't have to take that path. They could take the path of address translation instead. The ideal of a monolithic Internet consisting of equal actors has given way to a recognition that users take their place in a hierarchy of gateways and routers.

Security Slowly Congealing on the Internet

In the 1980s, many computer users seemed to accept that security was a binary quantity: either you had a network or you had security. The famous Orange Book from the Department of Defense refused to consider any system attached to a network worthy of certification at fairly minimal levels of security.

Security problems remain the top headline-getter and the central battleground for the Internet today. Among the casualties is one of Tim Berners-Lee's original goals for the World Wide Web. He wanted a system where people could easily edit other people's Web pages as well as read them, and he has mentioned the lack of Internet security as the reason that this goal remains unfulfilled.

RFC 1287 treats security as a major focus and lays out lots of ambitious goals:

  • Confidentiality. We've made great strides in this area. Although few people encrypt their email, there are many VPN users enjoying reasonably secure connections through PPP, L2TP, and SSH. Web sites offer SSL for forms and other sensitive information. The general solution to confidentiality, IPSEC, is gradually appearing in both commercial and free-software VPNs.

  • "Enforcement for integrity (anti-modification, anti-spoof, and anti-replay defenses)." This seems to be offered today by Kerberos and its various commercial implementations and imitations. The VPN solutions mentioned in the previous item also contribute.

  • "Authenticatable distinguished names." This promise lies implicit in digital signatures, but these signatures are not widely deployed except when users download software from major Web sites. A major breakthrough may come with Microsoft's My Services, or its competition.

  • Prevention of denial of service (DoS). Clearly, we have no solution to this problem. Trivial DoS attacks can be thwarted with firewalls and proxies, but in the face of distributed DoS the best (and still faint) hope we have is to persuade ISPs to adopt outgoing filters on certain kinds of traffic. (A sketch of what such filtering means appears after this list.)
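
The "outgoing filters" mentioned in the last item amount to what is commonly called egress or source-address filtering: the ISP drops packets leaving a customer's network whose source addresses could not legitimately have originated there. Here is a toy illustration, with made-up prefixes:

    import ipaddress

    # Drop any packet leaving the customer's network whose source address does
    # not belong to the customer's assigned blocks, so spoofed-source traffic
    # used in many distributed DoS attacks never reaches the wider Internet.
    ASSIGNED_PREFIXES = [
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/25"),
    ]

    def permit_outgoing(source_address: str) -> bool:
        """Allow the packet only if its source address belongs to the customer."""
        src = ipaddress.ip_address(source_address)
        return any(src in prefix for prefix in ASSIGNED_PREFIXES)

    print(permit_outgoing("203.0.113.42"))   # True  -- legitimate source
    print(permit_outgoing("10.9.8.7"))       # False -- spoofed source, dropped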

In general, progress toward security has been steady, but along multiple and haphazard routes. Symptomatic of this unfinished work is that nobody has written RFC 1287's recommended "Security Reference Model." The partial success in turn reflects the difficulty of retrofitting security onto TCP/IP. The RFC authors themselves repeated the common observation that "it is difficult to add security to a protocol suite unless it is built into the architecture from the beginning."

As with IPv6, the slowly evolving state of Internet security can be ascribed to taking the path of least resistance. It's hard for IPSEC to take hold when application-layer approaches to confidentiality and authentication seem adequate.

On top of the Internet's flaws, RFC 1287 seemed to recognize at an early stage that personal computers with their less-than-satisfactory operating systems would require network protection. I believe this recognition underlies the warning, "There are many open questions about network/subnetwork security protection, not the least of which is a potential mismatch between host level (end/end) security methods and methods at the network/subnetwork level."

In the context of this issue, the following "assumption" seems more like a veiled complaint about weak operating system security: "Applying protection at the process level assumes that the underlying scheduling and operating system mechanisms can be trusted not to prevent the application from applying security when appropriate."
