Tim Berners-Lee, the Director of the World Wide Web Consortium (W3C), suggests that “the web is no good unless it can be a sound foundation for the semantic web and web services too.” This seems horribly wrong.
Once, long ago, there was an understanding of how the Web was different from the Internet. Web designers spent hours on mailing lists educating people about the differences, noting that the Web was in fact a subset of the Internet, the subset you saw in a Web browser. The IETF and W3C seemed to partition their work along these lines as well.
Over time, this distinction has become pretty badly blurred. Mail clients grew into the browser as a supposed competitive advantage, while Web protocols found themselves reused for a variety of tunneling applications between computers. SOAP, XML-RPC, and related approaches all build on that strand of development. Warnings about problems with these approaches, even when delivered directly, have largely been ignored.
Over the last few years, the W3C has also done its best to blur the distinction. While development of the “traditional Web”, notably in the HTML field, has slowed at both the consortium and at some key software organizations, SOAP-based “Web Services” have been the commercial rage while visions of a “Semantic Web” have charmed a smaller but thoroughly devoted group of developers.
While Web Services and the Semantic Web are popular in some quarters, their relationship to what I’ll call the Traditional Web is pretty distant. SOAP reuses HTTP in ways that are barely familiar to those who’ve worked with the protocol in Traditional Web contexts, while the Semantic Web reuses URLs (now rechristened URIs and topped with a strange dose of philosophy about identifiers) in ways that are barely familiar to developers in other contexts. Both Web Services and the Semantic Web cite the success of the Traditional Web as demonstration that their approaches work, but neither has the patience to work within the confines of the system they claim supports their work.
Berners-Lee’s latest claim seems to overlook the huge group of people who find the Traditional Web more than adequate for their needs. Some of those folks even do Web Services-like things with REST-based approaches that more closely follow the patterns laid down by the Traditional Web. Some of them use XML (and even at times RDF) to exchange semantically rich information between computers without needing the full power of the Semantic Web. The Traditional Web may be no good to the W3C’s director or its members any longer, but it’s still good for a lot of us.
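The contrast between the two approaches is easy to see on the wire. Here’s a minimal sketch contrasting a REST-style request with a SOAP-style one for the same information; the URLs, the endpoint path, and the getBook operation are invented purely for illustration, not taken from any real service:

```python
# REST-style: the HTTP method and URL carry the meaning. A GET on a
# resource URL is self-describing to any HTTP intermediary or cache.
rest_request = (
    "GET /books/0596002637 HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Accept: application/xml\r\n"
    "\r\n"
)

# SOAP-style: every operation tunnels through POST to a single endpoint.
# The real verb ("getBook", hypothetical) hides inside the XML envelope,
# invisible to plain HTTP tools, caches, and proxies.
soap_body = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><getBook><isbn>0596002637</isbn></getBook></soap:Body>"
    "</soap:Envelope>"
)
soap_request = (
    "POST /services/endpoint HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Content-Type: text/xml; charset=utf-8\r\n"
    'SOAPAction: "getBook"\r\n'
    f"Content-Length: {len(soap_body)}\r\n"
    "\r\n"
    f"{soap_body}"
)

# The request line alone tells you what the REST request does;
# the SOAP request line tells you almost nothing.
print(rest_request.splitlines()[0])
print(soap_request.splitlines()[0])
```

In the first request the Traditional Web’s machinery (methods, URLs, caching) does the work; in the second, HTTP is reduced to a transport for something else entirely.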
(If that’s not enough for you, the Internet’s still wide-open for possibilities beyond the Web, of course!)
Is “the web… no good unless it can be a sound foundation for the semantic web and web services too”?