The early screens, keyboards, disk drives, and other devices were truly computer interfaces. They were a shared ground where the human being and the raw computational power of the machine met. While programmers still have to develop interfaces (there’s no way around it), we now have to reach toward the much more difficult goal of providing a meeting ground for human beings. The computer has to get out of the way as much as possible.
The most talked-about technology at the conference–the new ways of programming the Web introduced by Ajax–gives us an opening toward that goal. Ajax is based on a small enhancement to HTTP, even smaller than the Common Gateway Interface (CGI) that led to dynamic websites and the whole dot-com revolution of online ordering. In its turn, Ajax combines the quick feedback and rich interactions of desktop applications with direct lines to the data being thrust out by servers and by other people throughout the Internet. One could argue that Ajax wasn’t necessary (and I’ll explore that later in the article), but at this moment in computing history, it’s driving the recognition that people are here to interact with each other, not with computers. This article summarizes my experiences at the Open Source convention, covering:
- Keeping up with what “open” means
- Why doesn’t everybody use OpenLaszlo?
- That database on the back end
- Karl Fogel tries to revive the augmentation of human intelligence
- Greg Kroah-Hartman unfolds the Linux Kernel
- Can you use Mono?
- Many hills for open source to climb
- More than software in this world
Linux, databases, and various scripting languages are all represented at the conference too (in fact, one attendee pointed out to me that there are more kernel talks this year than last year). But most of the talks feed directly into the goal of creating a kick-ass web site.
You might assume this means the focus is all on new, sexy technologies such as Ajax frameworks. Plenty of that, yes, but large crowds have also turned up for tutorials on testing web applications, maintaining large web sites, and improving PHP performance. Conference attendees know that server infrastructure is as important as what you see out in front.
Note that web applications run on all platforms and happily unite free software components with proprietary frameworks. So these applications really redefine what open source is; for some people they even call the concept into question.
During the keynotes Wednesday morning, Tim O’Reilly sparred with Michael Tiemann (CTO of Red Hat) over whether open source licenses are obsolete. Open source software requires open source licenses, of course, but nowadays the most interesting software experiences (one cannot even call them applications–and it’s not clear they should be called services) involve web mash-ups or other sites that combine a service with user-contributed content such as blogs, photos, and videos. These require a different approach to maintaining openness.
User-contributed content can be covered by licenses similar to open-source licenses, such as Creative Commons licenses. Web sites providing services can try to standardize around interoperable APIs for data exchange and for invoking operations. Thus, keynoter Mike Olson of Sleepycat said his main take-away advice was to enhance interoperability among services.
I interpret Tim’s comment as an observation that openness can be achieved in many ways besides the heavy club of the law. One of the incentives for companies to open up data and services is the chance to benefit from community contributions. A more negative incentive is peer pressure, or the threat of users to move to a competitor.
Here, openness takes a strictly practical tack. A user may say, “I don’t care whether you reveal the inner workings of your service or not, so long as I can get all the use I want from it. If I can’t, I’ll just use somebody else’s service.” If proponents of free software don’t like such pragmatism, they may have to learn to live with it.
Open source software is by no means irrelevant. To underline this, Stuart Halloway pointed out during his Ajax on Rails tutorial (which drew a nice-sized audience of about 75) that leading companies with the most exciting web sites base their work on open source, not on a canned solution from a vendor. This is because these sites have explored new ideas that weren’t implemented yet by vendors.
And it’s worth pointing out that 60 people attended a talk on copyright and other legal issues by the Apache Software Foundation’s vice president of legal affairs, Cliff Schmidt. It revolved around licenses, but that allowed him to cover several interesting issues, mostly related to copyright and ownership of code.
Modern dynamic web pages switch languages with prolific abandon. HTML templating languages, while they ease maintenance, add yet another syntax and another paradigm. Much of the goal behind modern frameworks is to provide a unifying idiom.
Ruby on Rails removes the need for most of the back-end languages, while other frameworks reduce the number of lines of code you need to provide features. For instance, the frameworks Stuart Halloway introduced in the talk I attended included:

- An added layer to Prototype that provides fancy effects reminiscent of Flash.
Another example is Curl, created by a company spun off from MIT with the endorsement of (and some funding from) Tim Berners-Lee. Though it boasts a unified, powerful language for dynamic web pages, Curl has remained in obscurity. The company has found a profitable niche providing in-house applications for local use within large corporations, where all employees can be instructed to download the Curl plug-in. But it does not have a broader Internet strategy.
Laszlo can display graphics and play sound without invoking an external program. It can read in graphical objects and data, and make RPC and SOAP calls to interact with Web Services. The company also licenses a sample application, Laszlo Mail, which is very fast and loaded with interactive events the user can invoke, like a desktop mail client.
Web developers have never taken to OpenLaszlo, although its developers can boast some large installations and the software has been downloaded 250,000 times. Christian Wenz, an O’Reilly author working on Programming Atlas, and a master of many web programming frameworks, suggests that developers like a framework such as Atlas that is closely tied to their server. This assures extra features that make it much faster to deploy applications, at the expense of leaving the application less portable.
I asked the ActiveState guys how much trouble it will be to upgrade Komodo for Perl 6. Not much, they said–after all, they just upgraded it to support a couple of new languages. The main work will be altering the debugger to understand the Parrot back end.
PHP users have told me version 5 is popular because of its performance improvements, not its new features or improved object handling. I predict that will change, if it hasn’t already. PHP creator Rasmus Lerdorf presented some of the nifty APIs in the most recent version of PHP at the end of a talk on improving PHP performance. A Geospatial library, for instance, hooks in very simply.
Web 2.0 is demonstrating a trend in which object-oriented languages turn web pages into collections of powerful server components, with the HTML of the web page becoming a vestigial shell.
One of the interesting sessions–although I’m not sure how far its value extends beyond intellectual satisfaction–was one given by David Wheeler on implementing object-oriented concepts within SQL. He used PostgreSQL as his reference database, but many of the concepts can be carried out in others as well. For instance:
- Inheritance and aggregation
In object-oriented terminology, inheritance means deriving one class from another class, which is the base class. (Everybody uses the metaphor of deriving a Manager class from an Employee class.) Aggregation is similar but involves embedding one class in another (for instance, embedding a Phone Number class in an Employee class).
If you define a class as a table (so that each row of the table is an object and each column is an attribute of the object), inheritance can be implemented in PostgreSQL by intercepting an insert and using a DO INSTEAD clause to update two tables: the table representing the base class as well as the table representing the derived class.
Aggregation can be handled with a similar hack, letting an insert, update, or delete affect the table representing the embedded class as well as the class cited by the user who does the operation.
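As a minimal sketch of the idea, the following uses SQLite (through Python’s sqlite3 module) rather than PostgreSQL: an INSTEAD OF trigger on a view plays the role Wheeler assigned to the DO INSTEAD clause, so one logical insert into the “derived class” updates both underlying tables. The table and column names are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Base class: every employee row lives here.
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
-- Derived class: extra attributes for managers, keyed to the base row.
CREATE TABLE manager_ext (id INTEGER PRIMARY KEY REFERENCES employee(id),
                          department TEXT);
-- The "Manager class" clients actually talk to is a view joining both.
CREATE VIEW manager AS
  SELECT e.id, e.name, e.salary, m.department
  FROM employee e JOIN manager_ext m ON m.id = e.id;
-- An INSTEAD OF trigger stands in for PostgreSQL's DO INSTEAD rule:
-- one logical insert is redirected to both underlying tables.
CREATE TRIGGER manager_insert INSTEAD OF INSERT ON manager
BEGIN
  INSERT INTO employee (id, name, salary)
    VALUES (NEW.id, NEW.name, NEW.salary);
  INSERT INTO manager_ext (id, department)
    VALUES (NEW.id, NEW.department);
END;
""")
conn.execute(
    "INSERT INTO manager (id, name, salary, department) "
    "VALUES (1, 'Ada', 90000, 'R&D')")
print(conn.execute("SELECT name FROM employee").fetchall())       # base table sees the row
print(conn.execute("SELECT department FROM manager").fetchall())  # so does the derived view
```

The same trigger mechanism extends naturally to UPDATE and DELETE, which is how the aggregation case above would be handled.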
- Encapsulation

Encapsulation means hiding internal details from external users so they can’t muck up the rules governing the object. For instance, if you want to maintain a reference counter that keeps track of how many users are referencing an object, you should keep that counter private so no user can capriciously increment or decrement it.
With SQL, you can use a view (a virtual table that offers data from the real, hidden table) to restrict users so they access only public data. Column-specific permissions can stop nosy users from trying to get around the view to the private columns of the original table. Views and column-specific permissions are available in MySQL as well as the historically more feature-rich databases.
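A tiny sketch of the view half of that technique, again using SQLite for brevity (SQLite has no column-specific permissions, so in PostgreSQL or MySQL you would additionally GRANT access on the view and revoke it on the table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- The real table holds a private reference counter alongside public data.
CREATE TABLE widget (id INTEGER PRIMARY KEY, name TEXT,
                     refcount INTEGER DEFAULT 0);
-- The view is the object's public face: refcount is simply not exposed.
CREATE VIEW widget_public AS SELECT id, name FROM widget;
""")
conn.execute("INSERT INTO widget (id, name, refcount) VALUES (1, 'gear', 3)")
rows = conn.execute("SELECT * FROM widget_public").fetchall()
print(rows)  # the private refcount column never appears
```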
- Constraints

Constraints enforce rules, such as saying that no salary stored in the database can exceed a certain amount. (Every corporation in America would benefit from such a constraint.) Wheeler pointed out that constraints started out in the database world, although they are now widely used in programming languages as well. (Some languages call them assertions.)
In SQL, triggers implement constraints.
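For instance, the salary-cap rule above can be sketched as a trigger (SQLite syntax here; PostgreSQL would use a trigger function, and the cap value is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
-- Trigger-as-constraint: abort any insert whose salary exceeds the cap.
CREATE TRIGGER salary_cap BEFORE INSERT ON employee
WHEN NEW.salary > 500000
BEGIN
  SELECT RAISE(ABORT, 'salary exceeds the cap');
END;
""")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 120000)")  # accepted
try:
    conn.execute("INSERT INTO employee VALUES (2, 'Bob', 9000000)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the trigger fires and the row is never stored
```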
Also of interest in the database space is a powerful storage engine for MySQL called solidDB, put out by a company of the same name. It has lots of modern features, such as hot backups, logging, and checkpointed recovery. Most noteworthy, probably, is multiversioning, which allows both reads and writes to be done without locks. Thus, solidDB is useful when hundreds of users access a database at the same time, making it unusually fast compared to other MySQL storage engines under similar circumstances. It also supports the other features expected of robust databases.
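The essence of multiversioning can be shown in a toy sketch (this is my own illustration of the general idea, not solidDB’s implementation): readers pin a snapshot and never block, while writers install whole new versions atomically.

```python
import threading

class MVStore:
    """Toy multiversion store: readers take a snapshot of the latest
    committed version and never block writers; writers serialize only
    against each other and commit a complete new version at once."""
    def __init__(self, data=None):
        self._versions = [dict(data or {})]  # list of committed versions
        self._lock = threading.Lock()        # coordinates writers only

    def snapshot(self):
        # Readers grab the latest committed version; no lock is held
        # afterward, so a long-running read never stalls a writer.
        return self._versions[-1]

    def commit(self, updates):
        with self._lock:
            new = dict(self._versions[-1])   # copy-on-write of current state
            new.update(updates)
            self._versions.append(new)       # atomic publish of new version

store = MVStore({"balance": 100})
snap = store.snapshot()            # a reader's consistent view
store.commit({"balance": 250})     # a concurrent write commits meanwhile
print(snap["balance"])             # the old reader still sees 100
print(store.snapshot()["balance"]) # new readers see 250
```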
Like many other MySQL add-ons–such as InnoDB, MySQL Cluster, and some of the code contributed by SAP–solidDB began as an independent product from a different source. The first transactional database available on Linux, solidDB has been used in the telephone industry for some time, and has proven itself in applications such as telephone switches in Alaska, where a reboot would require air travel. The solidDB company is keeping control over its product, using the same dual-licensing scheme as MySQL AB, but the two companies have a partnership now and the engine can be obtained from either one.

One computing pioneer, whom I wrote up several months ago, became one of the honored grandparents of modern computing through his 1960s-era experiments at Xerox PARC. He talked of computer use as augmenting human intelligence. I caught a ripple of that ancient tide in a talk by Karl Fogel, who took a giant step beyond the discussion of useful project management tools in his book Producing Open Source Software (O’Reilly) by proposing some almost futuristic AI-like programs.
I caught only the end of this talk, unfortunately. At that point he was not proposing conventional tools such as chat rooms or text editors, but independently active monitors that filter material from mailing lists and other developer resources, in order to present developers with more useful aggregates of information.
One proposal was for a bot that tracks the questions asked on a mailing list and sends mail to all the participants later if the answer is posted. This may be useful when several days or weeks separate the question and the answer, and they happen to be on separate threads. (But false positives, I imagine, could turn this into an annoyance.)
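A minimal sketch of how such a bot might link questions to later answers, matching messages by normalized subject line (the `is_question` flag stands in for whatever heuristic classifier a real bot would need; all names here are hypothetical):

```python
import re
from collections import defaultdict

def norm(subject):
    # Treat "Re: Re: foo" and "foo" as the same thread topic.
    return re.sub(r'^(re:\s*)+', '', subject.strip(), flags=re.I).lower()

class QATracker:
    """Remembers who asked about each topic; when a later non-question
    message arrives on the same topic, reports the earlier askers so
    they can be mailed the answer."""
    def __init__(self):
        self.waiting = defaultdict(list)  # topic -> list of asker addresses

    def on_message(self, sender, subject, is_question):
        topic = norm(subject)
        notified = []
        if not is_question and topic in self.waiting:
            notified = self.waiting.pop(topic)  # a real bot would send mail here
        if is_question:
            self.waiting[topic].append(sender)
        return notified

bot = QATracker()
bot.on_message("alice@example.org", "How do I taint a kernel?", True)
notified = bot.on_message("bob@example.org", "Re: How do I taint a kernel?", False)
print(notified)  # → ['alice@example.org']
```

The subject-matching shortcut also shows where the false positives would come from: any follow-up that merely quotes the question would trigger a notification.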
Another proposal was for a program that extracts key postings from a high-volume mailing list and produces a digest such as the Linux kernel tracker (which was discontinued because the maintainer couldn’t handle the logistics of creating a digest). This could form a valuable information source for people who want more than they can get in the trade press but don’t want to step through every message on the developer list.

Greg Kroah-Hartman, a co-author of Linux Device Drivers who now, while working for Novell, maintains several Linux driver subsystems, came to OSCON to speak on the current state of the kernel. He immediately opened the floor to a question-and-answer session, which worked well because many of the 30 attendees had specific and intense interests.
Greg has recently become aggressively vocal on the question of binary drivers, which are contributed by various teams or companies as Linux modules without providing the source code or putting it under a free license. Linux developers in general hate these modules, refusing to support any kernel tainted by such drivers (like the one on the wireless-enabled laptop I’m writing on right now). Greg pointed out that Novell will no longer distribute non-GPL modules, and went on to claim that “every lawyer” he’s talked to says they infringe on copyright.
It’s particularly unethical to put out a binary module for Linux, according to Greg, because the copyright infringement occurs not when the original developer releases the module, but when a Linux distribution includes the module with Linux. Releasing a binary module “says that their little piece of software is more important than everything else.”
Greg also took on the myth that drivers on Linux are poor quality. He pointed out that Linux supports more hardware than any other operating system.
There are lapses in certain areas, to be sure. What most people complain about are problems with wireless cards–and here perhaps is a good argument for free drivers. I know my Linux driver does not pick up a signal as well as the Windows driver on my laptop, and the continual freeze-ups I experience with my Linux driver when WEP security is in effect are outrageous. A free driver would probably have been fixed by now.
Greg claims that both wireless and other drivers are getting much better. Suspension on laptops remains a flagrant problem, but several people are working on it.
Interestingly, drivers are more secure on Linux than on Windows because the kernel developers are willing to change the APIs used by drivers, whereas Windows must maintain backward compatibility so that binary drivers can be distributed. Therefore, security problems that require driver API changes stay in Windows forever.
The Mono team has achieved tremendous results with a production-ready C# compiler and a range of class libraries. There are two questions inhibiting Mono’s use. The first is how much of the enormous Microsoft ecosystem (some of it still unimplemented by Mono) is needed to run a useful application. Is Mono ready for everybody to port their ASP.NET programs? Kevin quoted Mono contributor Francisco Martinez: “It’s ready if you can do what you need to do.”
The second question involves patents, with Microsoft keeping mum on whether they believe anything in Mono infringes on some patent they hold. Kevin thinks everything implemented so far should be OK, because Microsoft hasn’t complained yet–but that reasoning may not satisfy corporate development teams. Furthermore, the big new Microsoft frameworks, Indigo and Avalon, have not been approached yet.
Stephen O’Grady, an industry analyst, warned us in a keynote that the understanding of open source in the general population is so low that government officials told him they’re worried that terrorists might contribute code to Linux and weaken our computing infrastructure.
The following material regarding John Terpstra’s talk was changed on August 1 following some email discussion with John.–Andy O.
John Terpstra, a Samba team member who now promotes open-source multimedia platforms for AMD, said that many fears prevent media companies and vendors from offering open source content and software. Like the government officials who confided wild fantasies to O’Grady, one media company told Terpstra they were afraid of Linux because so many Linux users are hackers, and releasing such a program would make it easy for them to steal content.
Note how that last bit of paranoia tars the entire field of open source users and developers; we have become The Other in the eyes of the traditional media companies–even as they rush to create their films and animations on Linux systems.
Terpstra cited several concrete barriers as well:

- Licensing requirements for the use of codecs.
- The need to implement DRM, which goes against the open source ethos.
- Requirements for protection of media content within the viewing network. An example is the requirement for a “protected video path” so that at no point can a video data stream be intercepted.
Back in the era of peer-to-peer hype, around 2001 and 2002, a lot of us thought we were on the cusp of a revolution in human-human interface. The tools were not yet ready, though, and a lot of resources got wasted on limit-stretching P2P architectures that foundered on issues of identity, addressing, coordination, and reputation. (See my articles From P2P to Web Services: Addressing and Coordination and From P2P to Web Services: Trust.) With the evolution of more traditional architectures, the moment may have come.
Near the end of the conference I heard James Jones and Jeremy Johnstone lay out Yahoo!’s project to help with disaster recovery after Hurricane Katrina. The hurricane itself led to a notoriously chaotic situation–local churches filling in where disaster agencies were absent, families torn apart because buses would leave before the children could disembark, and so forth. Jones described some of the technological supports and the problems of balancing the need for privacy and identity protection with the desire to reunite families and get them what they needed quickly. The talk was a reminder of how computerization and networking can make real differences in people’s lives.
Damian Conway’s keynote (an introduction to a company that depends entirely on stealing ideas and litigating trivial patents) was a total waste of time, but also a hilarious reminder why we should try to spend some of our lives wasting time.
Along similar lines, the conference sponsored a concert by two excellent musicians playing uniquely eclectic music: mandolin player Mike Marshall and pianist Jovino Santos Neto. Backed by two fine percussionists, they laid out a mix based to a large extent on the choro (an African-Brazilian interpretation of European styles), but with licks and chords from modern jazz and North American folk. As long as people such as Marshall and Santos Neto do music like this, I have faith in the survival of human culture.
On the other hand, it was annoying that some of the attendees had their laptops open during the performance. It’s impossible to notice the subtleties that made the performance work–the antiphonal responses, the clever rhythmic irregularities, the borrowings from other forms of music–while reading your email. I worry about the value of creating all sorts of great open software if we don’t cultivate a human culture that knows how to use the software to enhance life–and I think a pretty basic element of culture is knowing how to listen to music.