In a recent interview with Rohit Khare, Director of CommerceNet Labs, Jon Udell may have helped introduce a new meme into the noosphere, one that could prove as important in its time as AJAX was in 2004. Khare gave an influential presentation some years ago describing ALIT, which used SOAP messages to transfer events between systems, but in the intervening years his thinking has shifted to a new approach based not upon SOAP but upon RESTful RSS and Atom feeds, for which he has coined the term Syndication Oriented Architecture, or SynOA.
SynOA by itself is not that new - news feeds have been around since the mid-1990s, first launched during the rather laughable hype of push-based syndication servers, then later reincarnated as the foundation for the blogosphere. However, the notion of SynOA as a generalized vehicle for event handling is something that people are only just beginning to gravitate toward, though by many indications it’s already having a major impact.
An argument that I (and many others, most recently Sam Ruby and Leonard Richardson) have made over the past several years is that SOAP-based “web services” aren’t really that useful on the web. They were originally perceived as a good way of implementing RPCs without having to build extensive binary clients on both sides of the pipeline, and as a way of providing a standard messaging mechanism that wasn’t controlled by any one company. Yet these RPCs were generally impractical, if not downright impossible, to call from the web, and while there has been a shift in thinking about the role of SOAP from an RPC envelope to a generalized messaging envelope, in practical terms SOAP/WSDL is still fundamentally bound to the notion of RPCs, and it assumes that a highly specialized API is not only feasible but even desirable.
Yet there’s something that most people (even many programmers) fail to realize about the web. The web is, at its heart, a publishing system. You publish web pages, and your users read those pages. You search those pages using a simple search API (established, in a largely de facto fashion, by Google), and in the end what matters most to you is usually whatever has been most recently posted or worked upon. In other words, the web really likes syndication, which is why syndication keeps getting reinvented every time someone tries to stamp it out.
A syndication feed is a curious beast. At its head is a block of metadata that includes a reference to where the feed comes from, a category or two, publication dates, and enough data to provide a human-readable label for that particular feed. The entries of the feed are similarly blocks of metadata tied to links, each with its own identifier and, potentially, a content block. The purpose of such blocks is generally not to contain bodies of information (which is the way that SOAP works) but rather to contain enough of a description of the given entry that a human agent or a machine agent can figure out what to do with the link.
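To make that shape concrete, here is a minimal sketch in Python (standard library only) that pulls apart a small, entirely hypothetical Atom feed into exactly those two layers: the metadata block at the head, and the per-entry metadata blocks with their links.

```python
# A sketch of the two-layer structure described above; the sample feed
# and its identifiers are hypothetical.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Accident Reports</title>
  <id>urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6</id>
  <updated>2007-10-01T18:30:02Z</updated>
  <category term="claims"/>
  <entry>
    <title>Report 2007-1142</title>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2007-10-01T18:30:02Z</updated>
    <link rel="alternate" href="http://example.com/reports/2007-1142.xml"/>
  </entry>
</feed>"""

root = ET.fromstring(sample_feed)

# The head of the feed: a block of metadata labeling the feed itself.
print("feed:", root.findtext(ATOM + "title"), root.findtext(ATOM + "updated"))

# Each entry: a smaller metadata block whose real payload is a link.
for entry in root.findall(ATOM + "entry"):
    link = entry.find(ATOM + "link")
    print("entry:", entry.findtext(ATOM + "title"), "->", link.get("href"))
```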
With a system like AtomPub, you can also post entries to an AtomPub server wrapped in an <atom:entry> block, and the system will then, based upon ACLs and the entry envelope’s rel or category tags, perform processing on that object to add something derived from it to the server. This information doesn’t have to be HTML blog posts - it can just as readily be objects modelled as blocks of XML (or potentially JSON) data. What’s more, what gets sent in that particular case may not necessarily be the data itself, but instead might be links to that data - in essence, a properly enabled publisher would then be able to reference that linked data at a later time, with the metadata in the Atom <entry> envelope making it possible to determine what specific action needs to be performed on that data.
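A rough sketch of what such a post might look like - the collection URL, namespace, and claim structure here are purely hypothetical - is an <atom:entry> carrying a block of domain XML, sent with a single HTTP POST:

```python
# A hedged sketch of posting a domain object to an AtomPub collection.
# The collection URL and the claims namespace are made up for illustration.
import urllib.request

entry = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Claim 2007-1143</title>
  <category term="claim"/>
  <content type="application/xml">
    <claim xmlns="http://example.com/ns/claims">
      <policy>POL-88-1234</policy>
      <estimate currency="USD">2750.00</estimate>
    </claim>
  </content>
</entry>"""

req = urllib.request.Request(
    "http://example.com/collections/claims",   # hypothetical collection
    data=entry.encode("utf-8"),
    headers={"Content-Type": "application/atom+xml;type=entry"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    # Per AtomPub, a successful create returns 201 plus a Location header
    # pointing at the newly minted member resource.
    print(resp.status, resp.headers.get("Location"))
```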
This is beginning to interest enterprise-level developers, for very good reasons. Consider, for a moment, an insurance investigator who shows up at the site of an accident and starts entering details into an XForms document running against a local server (Google Gears, perhaps). The XForm serializes the information into XML and publishes it to the local server, where it joins a set of entries containing all of the other accident reports captured so far. When the investigator gets within range of a wireless network (I’m assuming he was offline before), the Atom feed gets sent to the central server, which proceeds to empty the feed into its own data store.
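The interesting step is the sync: entries captured offline simply accumulate in the local store, and once the network comes back they are replayed to the central server as ordinary AtomPub posts. A sketch, under the assumption that queued entries sit as files in a local directory (all paths and URLs hypothetical):

```python
# A sketch of the sync step: replay locally queued entries to the central
# AtomPub server once connectivity returns. Paths and URLs are hypothetical.
import pathlib
import urllib.request

QUEUE_DIR = pathlib.Path("pending_entries")   # entries saved while offline
SERVER = "http://claims.example.com/collections/accidents"

def flush_queue():
    for path in sorted(QUEUE_DIR.glob("*.xml")):
        req = urllib.request.Request(
            SERVER,
            data=path.read_bytes(),
            headers={"Content-Type": "application/atom+xml;type=entry"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            if resp.status == 201:             # created on the server
                path.unlink()                  # safe to drop the local copy
```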
Later, a claims adjuster gets online, her system queries the server’s feed, and from her news feed reader she can see seven new claim reports appear. Note that this doesn’t have to be a specialty application - it could be a Firefox reader, the Google home page, or IE’s blogging engine - she’d just need to pass in her authentication information in order to get access to the feed. When the link itself gets pulled, the served page will in turn generate a different view of the XML (read-only, but with spaces for comments and a recommendation), and the adjuster can then sign off on the claim or invalidate it. The comments are themselves just another feed, linked to the initial record through a secondary link, and so could be served up as a second feed.
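Because the feed is just HTTP, the authentication is just HTTP authentication. A hedged sketch of the pull, again in standard-library Python, with a made-up feed URL, realm, and credentials:

```python
# A sketch of an authenticated feed pull; URL, realm, and credentials
# are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
FEED = "http://claims.example.com/feeds/new-claims"

# Ordinary HTTP Basic auth - the same security the web already provides.
auth = urllib.request.HTTPBasicAuthHandler()
auth.add_password("claims", FEED, "adjuster", "secret")
opener = urllib.request.build_opener(auth)

with opener.open(FEED) as resp:
    feed = ET.fromstring(resp.read())

# Each new claim shows up as an entry whose link leads to the full record.
for entry in feed.findall(ATOM + "entry"):
    link = entry.find(ATOM + "link")
    print(entry.findtext(ATOM + "title"), "->", link.get("href"))
```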
The claims manager can then look at the list of approved, deferred, or rejected claims through (you guessed it) another feed, and can drill down from it to see the comments linked to each claim. By exposing queryable options within the URLs of the feed (perhaps through simple web page “search” dialogs), that manager can also control the output based upon keywords or other prospective searches.
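Those queryable options are nothing more exotic than query strings on the feed URL. The parameter names below (status, q) are hypothetical and would depend entirely on the server:

```python
# Building a filtered feed URL; the parameters are illustrative only.
from urllib.parse import urlencode

base = "http://claims.example.com/feeds/claims"
params = {"status": "approved", "q": "rear-end collision"}
print(base + "?" + urlencode(params))
# -> http://claims.example.com/feeds/claims?status=approved&q=rear-end+collision
```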
Finally, those same queryable interfaces can be invoked programmatically. A routine can take all of the approved claims (retrieved as an Atom feed), grab the individual XML records from the links contained within each entry, process the amounts approved by the claims adjuster, and then instruct the accounting systems to write checks to the appropriate people for the amounts indicated (perhaps even through some XML-RPC at that stage, just to show that both systems can interoperate).
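A sketch of that final, fully programmatic stage: walk the “approved” feed, dereference each entry’s link to fetch the claim XML, and hand the amount off to accounting. Every name and element path here is a hypothetical stand-in:

```python
# Walk the approved-claims feed and pay out each claim.
# Feed URL, element names, and the accounting call are all hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
APPROVED = "http://claims.example.com/feeds/claims?status=approved"

def cut_check(payee, amount):
    # Stand-in for the accounting-system call (XML-RPC or otherwise).
    print(f"pay {payee}: ${amount}")

with urllib.request.urlopen(APPROVED) as resp:
    feed = ET.fromstring(resp.read())

for entry in feed.findall(ATOM + "entry"):
    href = entry.find(ATOM + "link").get("href")
    with urllib.request.urlopen(href) as resp:   # the linked claim record
        claim = ET.fromstring(resp.read())
    cut_check(claim.findtext("payee"), claim.findtext("approvedAmount"))
```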
Note that this kind of architecture performs a significant portion of the role of message queues, but shifts such queues away from the processor and towards the data provider. It is, admittedly, coarser grained than a SOAP-based system … in the latter case, the receipt of a SOAP object typically initiates the processing, making it perhaps better for time-sensitive operations, while in the former case the resolution of processing is essentially dependent upon the query frequency of the client - how often it polls the server to see if new information has arrived.
On the other hand, SOAP-based queuing systems are susceptible to “binding,” because it is possible for a message to enter the queue at the same time that other messages are being pulled from it, necessitating another layer of process locking. This kind of problem simply doesn’t occur in a SynOA system, because the feed is invoked by the client at predictable intervals, meaning that any locking that does occur will likely be resolved by the database, which typically has strong locking protection built in.
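The client side of that trade-off is just a polling loop. The sketch below (hypothetical feed URL, stub processing) keeps a watermark of the newest <updated> value it has seen, so each poll handles only the entries that have arrived since the last one:

```python
# A sketch of the polling loop that gives SynOA its queue-like behavior.
# The feed URL and the processing stub are hypothetical.
import time
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
FEED = "http://claims.example.com/feeds/new-claims"
POLL_SECONDS = 300        # resolution of processing = polling frequency

def process(entry):
    # Application-specific handling would go here.
    print("processing", entry.findtext(ATOM + "title"))

watermark = ""            # ISO 8601 timestamps compare correctly as strings

while True:
    with urllib.request.urlopen(FEED) as resp:
        feed = ET.fromstring(resp.read())
    for entry in feed.findall(ATOM + "entry"):
        updated = entry.findtext(ATOM + "updated", "")
        if updated > watermark:       # only entries newer than the watermark
            process(entry)
            watermark = max(watermark, updated)
    time.sleep(POLL_SECONDS)
```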
This is the potential of SynOA, and where I’m going with my own work on x2o (formerly ROX). What makes such systems so compelling is that they are surprisingly simple to build, can work with both robust and minimal clients on any platform, are easily compartmentalized, and rely upon the same security systems that already exist for the web. Given the number of people working in the XQuery space in particular (which is especially amenable to SynOA systems) who are now building variations of the above types of application, I suspect that the web will soon be SynOA’d under with syndicated applications as the advantages of such architectures become known.
Kurt Cagle is an author, information architect and software developer specializing in XML, AJAX and (yes) SynOA-based services, including the upcoming open source x2o server. He lives in Victoria, British Columbia, and is reveling in the cool fall air and the deliciously red and gold maple leaves there.