Platform Independent

A Nice Way to Get Network Quality of Service?


by Andy Oram
06/11/2002

This month, at various times, I got stuck in traffic behind a school bus, a food truck, and an elderly transport van. I stayed calm each time, reminding myself that they were carrying out more important missions than whatever I was doing at the time. (I was probably on my way to work to write this article.)

Such generosity is rare on the streets. But on the Internet, it may lead to the long-sought promise of high-bandwidth, interactive media at a relatively low cost. Part of this story involves a technical description of the "scavenger service" and "alternative best-effort service" adopted by the Internet2 QoS Working Group. The other part of the story is the odyssey the working group took in order to adopt such creative solutions.

To push the traffic metaphor a bit further, the working group found they could not reach their destination without placing their entire fleet of vehicles in reverse and looking for a different street. Their boldness is particularly worthy of note because they had to leave behind a road many of them had helped to dig out and pave in the first place.

Reservations About Quality of Service Protocols

Traditional, best-effort packet delivery has not prevented people from sending real-time audio and video over the Internet for the past couple of decades. But jitter, combined with limited bandwidth, left services like CUSeeMe looking as if you were gazing at them through a rain-drenched window.

Sites with sufficient cash on hand tried to solve the problem through over-provisioning. But even if users at two endpoints were willing to install optical fibers on site, they couldn't force their providers to do the same.

In other words, high bandwidth required a coordinated effort between endpoints and backbone. To a large extent, the creation of Internet2 in 1996 sprang from an agreement to upgrade lines and equipment at each level of the network. While Internet2 is many things, one of its aspects could be described as a mutual promise among universities, researchers, and regional backbone providers to create and gainfully employ high-throughput connections.

To make smart use of these lines and maximize their value, Internet2 researchers joined others in looking for ways to nail down bandwidth across a wide area network: to provide guaranteed throughput for specific applications during limited time periods.

At that time, the promise of flexible, guaranteed Quality of Service (QoS) on the Internet was exemplified by the Resource ReSerVation Protocol (RSVP), defined in RFC 2205 in 1997. It embodied the most ambitious goals of the QoS researchers.

Suppose two hosts agree on a session that involves large data flows. Under RSVP, the host meant to receive the data uses reservation messages to contact each router on the route back to the server and reserve bandwidth at that router.

Putting this job at the receiver allows routers to notice when multiple receivers are asking for the same data flow and to merge the requests in a form of multicasting, making more efficient use of the network between the sender and the router. Routers all along the route, once they agree to provide the bandwidth, cooperate to keep packets moving.
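
To make the receiver-driven mechanics concrete, here is a toy Python sketch of the idea. It is not RSVP's actual message format or API; the class and function names are invented for illustration. Each router holds only its own reservation state, and a second request for the same flow merges with the first instead of reserving the bandwidth twice:

    # Toy model of receiver-driven reservation in the spirit of RSVP
    # (RFC 2205). All names here are illustrative, not the real protocol.

    class Router:
        def __init__(self, name):
            self.name = name
            # flow_id -> reserved bandwidth (kbps); one entry can serve
            # every receiver of that flow, because requests merge.
            self.reservations = {}

        def reserve(self, flow_id, kbps):
            if kbps <= self.reservations.get(flow_id, 0):
                # An equal or larger reservation already exists: merge.
                print(f"{self.name}: merged request for {flow_id}")
            else:
                self.reservations[flow_id] = kbps
                print(f"{self.name}: reserved {kbps} kbps for {flow_id}")

    def receiver_reserve(path_to_sender, flow_id, kbps):
        # The receiver walks the route back toward the sender,
        # asking each router in turn to set bandwidth aside.
        for router in path_to_sender:
            router.reserve(flow_id, kbps)

    r1, r2, r3 = Router("R1"), Router("R2"), Router("R3")
    receiver_reserve([r3, r2, r1], "video-42", 500)  # first receiver
    receiver_reserve([r3, r2, r1], "video-42", 500)  # second receiver merges

The merge step is where the multicast-style saving comes from: several receivers of one video stream consume a single reservation on the routers they share, not one apiece.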

In theory, it's awesome. In practice, RSVP has been declared viable only for small networks. According to Ben Teitelbaum, chair of the Internet2 QoS Working Group, recent protocol development on "aggregated RSVP" may overcome the objections that "it doesn't scale."

Still, his group has come to the conclusion that logistical, financial, and organizational barriers will block the way toward any bandwidth guarantees. Here are a few of the daunting problems, summarized from an article by Internet2 researchers Teitelbaum and Stanislav Shalunov:

  • Guaranteed service assumes that every router along the route supports the QoS protocols. As the RSVP RFC points out, non-RSVP nodes not only ignore QoS requests, but might reroute packets so they aren't using the reserved route at all. While the RFC considers this result tolerable, real guarantees would require huge numbers of ISPs to agree to deploy the protocols all at the same time. The Internet does not work that way.

  • ISPs must cooperate in ways that help their competitors more than themselves. In other words, one ISP will be promising a premium service as a way to win customers, then asking competing ISPs to help meet that promise. Such help is not likely to be proffered until ISPs are run by the spiritual descendants of St. Francis of Assisi.

  • New, complex payment mechanisms would have to be put in place. Who pays whom along the route? How much more should QoS cost? What if users want the priority service to kick in only when the network gets congested? Moving from a flat, one-size-fits-all system to a tiered system is always a headache.

  • Complex monitoring systems will have to be put in place along the routes. How do customers know they're getting the throughput they paid for? (Subjective experience is a very poor indicator.) What kinds of penalties can be imposed on ISPs that cheat and get caught only once in a long while? And suppose the ISP cannot meet its promise due to a Denial of Service attack beyond its control?

  • Once ISPs start offering QoS, they have incentives to degrade standard service so as to nudge customers toward paying for the premium service.

In addition to these and other specific criticisms, the premise of premium service runs fundamentally counter to the architecture of the Internet. Consider this: traditional IP routing chooses the best route for each packet based on local considerations. In fact, this practice is the justification for breaking up data into packets in the first place.
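
As a minimal illustration of that local, per-packet decision-making (the router names and prefixes below are invented, not any real configuration), each hop consults only its own forwarding table. When one router changes its table, later packets simply follow the new path, and nothing end-to-end has to be torn down:

    # Hop-by-hop forwarding: each router knows only its own next hop.
    forwarding_tables = {
        "A": {"10.0.0.0/8": "B"},        # A's current best next hop
        "B": {"10.0.0.0/8": "D"},
        "C": {"10.0.0.0/8": "D"},
        "D": {"10.0.0.0/8": "deliver"},  # D delivers locally
    }

    def forward(dest_prefix, start):
        hop, route = start, [start]
        while forwarding_tables[hop][dest_prefix] != "deliver":
            hop = forwarding_tables[hop][dest_prefix]
            route.append(hop)
        return route

    print(forward("10.0.0.0/8", "A"))           # ['A', 'B', 'D']
    forwarding_tables["A"]["10.0.0.0/8"] = "C"  # a link fails; A reroutes
    print(forward("10.0.0.0/8", "A"))           # ['A', 'C', 'D']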

Premium services assume that a route is chosen in advance (although RSVP allows for a limited degree of rerouting). One could ask, "Who needs the Internet for this?"

As Teitelbaum puts it: "The best-effort service model allowed the Internet to become the fast, cheap, and global infrastructure that we know and love. The temptation to teach it new tricks--like offering circuit-like QoS assurances--is very real. Unfortunately, there is a huge risk that in doing so, we would undermine the very properties that have made the Internet so successful."

Having realized that premium services were both impractical and philosophically undesirable, the Internet2 QoS Working Group made an astonishing turnaround. They officially announced they were halting all efforts on premium service, turning their backs on years of impressive research and specification work. Then they demonstrated some true out-of-the-box thinking by looking for efficient bandwidth use in an entirely new direction.
