I went down to the Cambridge, Massachusetts lab of One Laptop Per Child today to find out what they’re doing with mesh networks. This was a particularly appropriate day for a blog on OLPC, because today they’re launching a fifteen-day-long purchasing opportunity called Give One Get One. You pay them for two of their brightly colored, impressively lightweight computers, and one goes to a child in a developing nation, while the other goes to you.
But the whole point of this blog is that a One Laptop Per Child system has limited value on its own. Its most innovative and powerful features lie in its participation in a mesh network with other laptops. So get your neighbors and workmates to buy them too!
What’s happening at the application level

There must be some kind of lesson in the amusing miscommunications that led up to my visit to the One Laptop Per Child lab. I had been talking to a number of people in Europe, where very advanced open-source mesh networking is taking place. One of them suggested I talk to a developer named Michalis Bletsas, and I elaborately set up an exchange of Skype account names along with an ideal time to call. Once we got connected, I found out he was located a fifteen-minute subway ride from me.
The lab of One Laptop Per Child is sleek, sparse, and entirely different in atmosphere from what I imagine it’s like to be in the places One Laptop Per Child will be used. But wireless networks are the same everywhere (when one ignores weather conditions), so Bletsas’s demonstration of mesh networking was believably authentic.
Although there is interesting work on link-level protocols that I’ll describe later in this blog, Bletsas says the applications are really what require work. The OLPC developers want people to be able to do the most advanced and intensely interactive networking tasks–the same ones we are used to in developed countries–on their laptops, such as:
- Photo exchange
- Streaming video
- Collaborative document editing
OLPC supports all this. (Photos and videos can be taken on the laptop.) Bletsas showed me demos of the easy-to-use collaboration features on OLPC that use mesh networking, along with other OLPC features that you can read about elsewhere (or experience for yourself by joining the Give One Get One program mentioned at the beginning).
It’s easy to see the value of these tools for educational purposes, and their potential to strengthen communities that use them on a local level. I asked what was required to instrument applications so they can work in peer-to-peer fashion over the network. Bletsas mentioned the following.
- Message exchanges
Bletsas says that “if an application is written well,” allowing it to collaborate with other instances across the network is not hard. By “written well” he means that components trigger behavior by the exchange of messages instead of more direct method calls. If the application rigorously uses this form of communication, it can quickly be adapted so that writing sends messages over the network and reading checks the network. OLPC provides a high-level API that hides the details of sockets and TCP/IP from the application.
This message-passing system could simply use the signaling systems used in GTK+ and Qt/KDE. (These have nothing to do with Unix signals, and are much more powerful.) Another implementation would be the componentization I discussed in earlier articles such as The desktop I’d like to see and Applications, User Interfaces, and Servers in the Soup.
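The message-passing discipline Bletsas describes can be sketched in a few lines. The following is a hypothetical illustration, not the OLPC API: components talk only through a message bus, so pointing the bus at a network transport requires no change to the application code itself.

```python
# A minimal sketch (assumed names, not the OLPC API) of the message-passing
# style described above: components communicate only through a bus, so
# swapping a local dispatch for a network transport needs no app changes.

class MessageBus:
    """Routes named messages to subscribers; the transport is pluggable."""
    def __init__(self, transport=None):
        self.handlers = {}
        self.transport = transport  # e.g. a socket wrapper; None = local only

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers.get(topic, []):
            handler(payload)            # local delivery
        if self.transport is not None:
            self.transport.send(topic, payload)  # mirror to network peers


class Editor:
    """A toy collaborative editor: all edits flow through the bus."""
    def __init__(self, bus):
        self.text = ""
        bus.subscribe("edit", self.apply_edit)

    def apply_edit(self, payload):
        # The same code path handles local and remote edits.
        self.text += payload
```

Because `Editor` never calls another component directly, attaching a transport that forwards `publish` calls over the network is all it takes to make it collaborative.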
- Cache consistency
Co-operating applications have to make sure they are all holding the same data, since each instance keeps its own copy in memory on a different system.
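One common way to keep those copies consistent, shown here purely as an illustration rather than OLPC’s actual scheme, is to attach a version number to every update so that a peer can discard stale changes:

```python
# A generic illustration of the cache-consistency problem (not OLPC's
# actual mechanism): every update carries a version number, and a peer
# ignores any update older than the state it already holds.

class SharedCache:
    def __init__(self):
        self.value = None
        self.version = 0

    def apply_update(self, value, version):
        """Accept an update only if it is newer than what we hold."""
        if version > self.version:
            self.value, self.version = value, version
            return True
        return False  # stale update from a slower peer; drop it
```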
- Discovery and presence
Now it starts to get interesting. OLPC has implemented a peer-to-peer discovery system comparable to Jini and Zeroconf. They plan to add a way for applications to identify themselves, so that a laptop knows not only that “my neighbor is Lucia” but “my neighbor is Lucia and is running Abiword and Firefox.” Then the applications can collaborate with less user intervention.
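A toy model of that presence idea, with hypothetical names (OLPC’s real system is comparable to Zeroconf, as noted above): each laptop periodically announces its name and running applications, and neighbors keep an aged registry they can query before initiating collaboration.

```python
# A rough sketch of presence-plus-application discovery. All names and
# fields here are illustrative assumptions, not OLPC's actual protocol.

import time

class PresenceRegistry:
    TIMEOUT = 30.0  # seconds of silence before a neighbor is dropped

    def __init__(self):
        self.neighbors = {}  # name -> (set of running apps, last heard)

    def heard_announcement(self, name, apps, now=None):
        """Record a broadcast like 'Lucia is running Abiword and Firefox'."""
        self.neighbors[name] = (apps, now if now is not None else time.time())

    def who_runs(self, app, now=None):
        """Neighbors currently running `app` -- lets applications
        collaborate with less user intervention."""
        now = now if now is not None else time.time()
        return [name for name, (apps, seen) in self.neighbors.items()
                if app in apps and now - seen < self.TIMEOUT]
```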
Although the applications currently can communicate only with the same applications running on other OLPC laptops, they are based on open source and open standards (such as ODF for documents) and therefore could work with other standards-compliant applications on other systems in the future. Bletsas mentioned the Nokia 8-series as a possible collaborating platform.
Is this mesh networking model valuable for the developed world, or just for places where Internet connections are slow, expensive, or nonexistent? That’s an ongoing debate, and calls for a brief survey of the trends.
It’s well known by now that mesh networking doesn’t scale. A number of engineers are creating new protocols to solve the problem. The protocol that open network developers prefer for discovering and addressing nodes, the Optimized Link State Routing protocol (OLSR), relies on supernodes, a solution similar to the one the Gnutella network was forced to adopt earlier in this decade when it gained millions of users. This supernode, called a multipoint relay (MPR), is just a node chosen by its immediate neighbors to funnel the link-layer broadcasts needed to discover other nodes on the mesh.
A regular node communicates any status change to its chosen MPR instead of to every node that it can reach. A regular node also expects to receive, from its MPR, the topology messages that aid it in routing traffic across the rest of the mesh.
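MPR selection boils down to a set-cover problem: pick the fewest one-hop neighbors that together reach every two-hop neighbor. A simplified greedy version can be sketched as follows (real OLSR adds willingness values and tie-breaking rules, so this is an approximation of the idea, not the specification):

```python
# A simplified greedy version of OLSR's multipoint-relay selection:
# choose a small set of one-hop neighbors that together cover every
# two-hop neighbor, so only those relays re-broadcast topology messages.

def select_mprs(two_hop_via):
    """two_hop_via: dict mapping each one-hop neighbor to the set of
    two-hop neighbors reachable through it. Returns the chosen MPR set."""
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered:
        # Greedily take the neighbor covering the most uncovered nodes.
        best = max(two_hop_via, key=lambda n: len(two_hop_via[n] & uncovered))
        if not two_hop_via[best] & uncovered:
            break  # remaining two-hop nodes are unreachable
        mprs.add(best)
        uncovered -= two_hop_via[best]
    return mprs
```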
It’s also interesting to note that the mesh networking field has settled on packet loss as the most important determinant of network quality. Physical distance and number of hops matter less than the relative number of dropped packets. This measure is known as the Expected Transmission Count, or ETX, a term that has taken on a generic meaning as well as referring to the specific metric developed at MIT and now widely used on mesh networks.
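The ETX arithmetic is simple enough to show directly. A link’s expected transmission count is 1/(df × dr), where df and dr are the measured forward and reverse delivery ratios, and a route’s ETX is the sum over its links, so lossy links are penalized even when hop count or distance looks favorable:

```python
# ETX in miniature: expected transmissions per link, summed along a route.

def link_etx(df, dr):
    """Expected transmissions for one link; df and dr are the forward
    and reverse delivery ratios, each in (0, 1]."""
    return 1.0 / (df * dr)

def route_etx(links):
    """links: list of (df, dr) pairs along the route."""
    return sum(link_etx(df, dr) for df, dr in links)

# A two-hop route over clean links can beat a one-hop route over a lossy
# one: route_etx([(0.9, 0.9), (0.9, 0.9)]) is about 2.47, while a single
# link with 50% delivery each way costs link_etx(0.5, 0.5) = 4.0.
```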
Where are mesh networks valuable?
Another widely accepted observation is that mesh networks make heavier use of the network than client/server communications do. In this respect they resemble peer-to-peer protocols such as BitTorrent. Peer-to-peer systems relieve the load on servers, which is a valuable benefit when you’re trying to save money on servers and avoid spikes in traffic. But the burden is simply redistributed across the network, with extra overhead and data transfers.
One anecdote I heard underscores the tendency to move away from mesh networks when centralized alternatives are available. In the early 1990s, after the Czech Republic was formed and left Communism behind, a huge and highly effective constellation of mesh networks grew to cover the whole country. It was built out of necessity, because DSL connections were either unavailable or prohibitively expensive in most of the country. But just a few years later, DSL became widespread and dropped to a reasonable cost, and the mesh networks became a historical oddity.
Yet Bletsas argues that mesh networking is still useful in developed countries as well as developing ones. Social location applications (finding a store, restaurant, or public transportation facility in the area you happen to be) hold great potential.
So there is still controversy about the general usefulness of mesh networking, but everybody agrees that it has a number of specialized applications:
- Spreading bandwidth usage in underdeveloped areas
This is, of course, the chief application of OLPC. A single expensive Internet POP can serve many more people, and they can engage in local communications such as educational projects without connecting to the outside world.
- Fast communications deployment under emergency conditions
This covers disaster response and many military applications. The US military is very interested in mesh networking.
- Environmental applications
Sensor networks often use mesh networking; one example is Matt Welsh’s CitySense network here in Cambridge. Before Welsh launched this urban network, he created a sensor network to collect geological data on a volcano in Ecuador! So sensor networks have applications everywhere.
I’ve heard that OLPC has uncovered many advances in Linux that benefit other systems, and asked Bletsas for detailed examples, but he mostly waved away the subject by saying, “This is standard with open source.”
There are plenty of questions as to whether OLPC can produce the benefits its developers hope for, whether the laptops can survive the hard use and risk of theft they face, and whether the project can keep costs down. But the laptops will ship: money has just been received for the first orders from Latin American countries, and production units are already coming off the assembly line. And even if OLPC doesn’t make the grade, it will have prompted more worthy discussion of computing’s potential, and of the field’s responsibilities to the wider world, than most of the computing projects that have come and gone over the decades.
November 16 update: I heard some corrections and news from some of my contacts on this story.
Sascha Meinrath, Research Director at the Wireless Future Program of the New America Foundation, writes:
OLSR is the baseline from which new innovations are being developed in Europe. HSLS (Hazy-Sighted Link State) is even more scalable. Better still would be a hybrid of the two.
ETX is currently the route prioritization metric of choice, but a lot of folks are interested in expected transmission time (ETT) as a potentially better solution.
Currently, the limiting factor on mesh throughput tends to be the Internet connection point. In other words, the meshes are faster than people’s DSL/cable modem lines. This may change in the future as faster service speeds become the norm, but wireless mesh speeds are likewise increasing rapidly (through 802.11n-based systems, channel bonding improvements, and UWB), so it could be that mesh provides a way to both increase speeds (among participants) and lower costs (through line-sharing) even in developed economies.
Aaron Kaplan of Vienna’s FunkFeuer network writes:
There are two approaches to mesh network routing over here in Berlin and Vienna: OLSR-NG (next generation), and a new protocol called B.A.T.M.A.N. The newer one, B.A.T.M.A.N., still needs to prove itself in big networks.
The goal of OLSR-NG is to allow the olsrd implementation to scale easily to thousands of nodes. The coding seems more important than the protocol: olsrd needs better algorithms, optimized data structures, and general clean-up. Recently, its implementation of the classic Dijkstra shortest-path calculation was improved from O(n²) (quadratic complexity) to O(n log n).
In Vienna, a single 100-Megabit uplink serves the whole city’s mesh.
802.11n might not work well outdoors.
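The Dijkstra improvement Kaplan mentions is a classic data-structure change: replace the linear scan for the next-closest node with a binary heap. A generic sketch of heap-based Dijkstra (not olsrd’s actual code) looks like this:

```python
# Generic heap-based Dijkstra (an illustration, not olsrd's code):
# using a binary heap to find the next-closest node avoids the O(n^2)
# linear scan and gives roughly O(n log n) behavior on sparse graphs
# like mesh topologies.

import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbor, cost). Returns shortest
    distances from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; node already settled
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist
```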