Published on OpenP2P.com (http://www.openp2p.com/)


Thinking Beyond Scaling

by Simon St. Laurent
04/23/2001

Writing as someone who's quite relieved to be working on smaller-scale systems, I think it's time we got the mantra of "everything must be scalable" off our backs. People who want to build larger systems are welcome to do so, but they shouldn't expect support -- in standards, kudos, or code -- from those of us looking to bring processing closer to users.

If this seems like a radical statement, heresy to the accepted wisdom, let me explain.

Scalability is both a deeply useful concept and an unfortunate excuse for not trying new things. An understanding of scalability keeps people from making bad decisions like building e-commerce Web sites on Microsoft Access, but it also leads some people to dismiss ideas because "that won't scale." Sometimes ideas that flunk the scalability test are worth pursuing anyway, because there are plenty of cases where scalability simply doesn't matter, and other cases where overall performance depends as much on the architecture of a project as on the scalability of technologies used within the project.

Scalability is for centralized systems

Traditional concepts of scalability -- efficient processing without bottlenecks, the ability to grow to meet demand -- have emerged from the world of what I'll call "traditional computer processing." Centralized systems receive new information, accept queries about that information, and process the information, often sending it through predefined pipelines. Bottlenecks in centralized systems can quickly become nightmares, rippling through all the processes that depend on that system. Inefficient processes can tie up resources, and dependencies between processes may mean that a process which ran admirably in a test environment is a catastrophic failure in a production environment. When processing and data storage are centralized, the ability of those systems to respond to growing numbers of requests is critical.

Outside of those centralized architectures, however, there is a lot more tolerance for processes which may take a long time, a lot of resources, or both. Even within centralized systems, there is often some tolerance for slow processes that run only when the system is lightly burdened. These situations offer programmers an opportunity where the decision-making criteria are very different. There are plenty of computing problems that simply require large numbers of processing cycles, network packets, and heavy-duty input/output. Code or protocol reuse becomes much more attractive, reducing the cost of development in exchange for some loss in efficiency.

Developers and managers who build for these systems aren't always concerned with "scalability" per se, because the cost of managing the bottlenecks is lower than the cost of building the system without them. As applications become more and more distributed, the cost of individual bottlenecks may decline. Distributing applications across multiple systems makes it possible to use resources -- processing cycles, memory, and data storage -- which are otherwise wasted by centralized systems. By keeping information close to the users who need it, developers can spend less time hunting for relevant information and more time processing it.

Such approaches also make it possible to build systems in which the computers on individuals' desks perform tasks more directly related to their jobs. Instead of being a mere window on work which takes place in a central processing core, the local computers take control over their own work, communicating with other systems, including (possibly) a central processing core. Bottlenecks stay local, as do the decisions to invest resources in breaking them up.

Cycling through systems

Heard this all before? That's not surprising. Computing seems to cycle through periods of centralization and distribution, as we continue to figure out what the long-term impact of cheap computing is going to be. First PCs infiltrated offices as standalone supplements to existing systems, and then they got wired in as dumb terminals with extra local capabilities. Client-server architectures centralized storage and some processing, but offloaded some kinds of processing and interfaces to local systems. The Web discarded the complex clients of client-server in favor of cheap, easy-to-deploy browsers, and once again most processing was centralized. Today's peer-to-peer architectures are running through that cycle again, decentralizing processing while still retaining some links to centralized systems.

Through all of these cycles, the promoters of centralized systems have expounded on the virtues of information management, the lower maintenance cost of processing and storing information centrally, and the greater stability of systems maintained by paid professionals rather than just "users." Managers and architects of centralized systems are treated as experts, holding the keys to critical business information and processing, while the creators of that information, from cashiers to documentation writers to programmers to secretaries, are just doing their job. Code that gets the job done without the virtues needed to run in centralized systems is commonly dismissed as "hacks," and control, stability, and measurable throughput are regarded as signs of excellent work.

What's good for centralized systems isn't necessarily good for distributed systems, however. Scalability demands compromises from users. All input to a system must conform to specified input formats -- why waste processing cycles on translations? All output will similarly conform, and users just have to accept what they are given. On local systems, however, processing cycles aren't that expensive, and users may not have the power to demand that others send them information in the format their system wants. Similarly, they may have to produce results in multiple formats for multiple recipients. This happens every day in the course of business, and makes a lot of sense in the context of distributed processing with local owners, but it doesn't fit well with the vision of "scalable" promoted by developers of centralized systems.
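As a concrete (and entirely hypothetical) illustration, here is a minimal Python sketch of that everyday chore: the same local record rendered once as CSV for one recipient and once as XML for another. The record and its field names are invented; the point is only that the translation burns a few cycles a desktop machine can easily spare.

    # A minimal sketch (hypothetical record and field names): the same local
    # data rendered for two recipients who each expect a different format.
    # The translation costs a few cycles, but on a desktop handling one
    # record at a time, nobody notices.
    import csv
    import io
    import xml.etree.ElementTree as ET

    record = {"id": "1042", "customer": "Acme Corp", "total": "149.95"}

    def as_csv(rec):
        """Render the record as a header line plus one CSV row."""
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(rec.keys()))
        writer.writeheader()
        writer.writerow(rec)
        return buf.getvalue()

    def as_xml(rec):
        """Render the same record as a small XML document."""
        root = ET.Element("invoice")
        for key, value in rec.items():
            ET.SubElement(root, key).text = value
        return ET.tostring(root, encoding="unicode")

    print(as_csv(record))   # for the recipient who wants a spreadsheet
    print(as_xml(record))   # for the recipient who asked for XML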

Standardization as an option, not a requirement

In centralized systems, standards make processing possible. The demands of scalability require strict adherence to standards for input and output, and greater standardization makes it possible to do more things. If multiple companies share the same format for their data flows, for instance, their large systems can handle enormous flows of information. Having to translate between those formats would create potential bottlenecks, and at least add another layer of processing.

In distributed systems, standards make processing easier. It's really convenient to use off-the-shelf technology like HTTP and XML, but it's not absolutely necessary to standardize every aspect. Translations between formats and bridges between protocols aren't as dangerous when the processing cycles needed to perform them aren't critically needed for other tasks. Delays of a second per request are much better tolerated when there's one request per second instead of ten thousand. Efficient processing is important for compute-intensive tasks, but locally controlled, small-scale processing can get away with much more wasted time than larger systems which need to meet the needs of many more users.
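To make that trade-off concrete, here is a minimal sketch, again in Python, of the kind of bridge described above: pull an XML document over plain HTTP and flatten it into JSON for a local tool that prefers it. The URL and element names are hypothetical. At one request per second the translation is invisible; at ten thousand it would be a bottleneck worth engineering away.

    # A minimal sketch of a small local bridge: fetch an XML feed over HTTP
    # and flatten it into JSON. The endpoint and the "item" element name are
    # hypothetical; substitute whatever the local tool actually consumes.
    import json
    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_and_translate(url):
        """Pull an XML feed and return its items as a JSON string."""
        with urllib.request.urlopen(url) as response:
            tree = ET.fromstring(response.read())
        items = [
            {child.tag: child.text for child in item}
            for item in tree.findall("item")
        ]
        return json.dumps(items, indent=2)

    if __name__ == "__main__":
        # Hypothetical endpoint; a fraction of a second per request is fine here.
        print(fetch_and_translate("http://example.com/feed.xml"))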

It may one day be possible to reconcile these competing needs, but for now it looks like the battles will continue. Developers building systems with a tight focus on scalability will continue to specify things that developers building smaller systems find confusing, annoying, and useless. Developers building smaller systems will use approaches which meet their needs but which deeply offend the sensibilities of developers used to working on much larger and more elegantly designed systems.

Simon St. Laurent is an associate book editor at O'Reilly Media, Inc.



Copyright © 2009 O'Reilly Media, Inc.