The following is a transcript (or at least a prescript) of the talk that I gave at the AJAXWorld conference on October 4, 2006, looking at emerging technology in that space and focusing (not surprisingly for me) on the XML side of things. I also have a PowerPoint of the presentation if you want to see how cheesy I can get with my own productions. If you are interested in seeing a video of the presentation itself, check back with AJAXWorldExpo.com - they should have that up shortly (I’ll post a more precise link when they have it loaded).
So, without further ado, AJAX on the Enterprise:
Table of Contents
- Enterprise Matters
- AJAX: The Five Year Mission
- Entering the Holodeck
- Extending Tractor Beams
- The Final Frontier
Scotty, James Montgomery Scott, was my favorite, perhaps inevitably. Spock was always the cool and collected uber-genius, inscrutable and forced into an emotional straitjacket, and while the parallels to the realpolitik of the time are obvious, to me Spock has always been the epitome of the pure ivory tower researcher. Scotty, on the other hand, was the engineer, in many ways the ultimate hacker. Spock may have been able to tell you what properties of dilithium would cause the inducement of warp drive, but Scotty knew exactly how to crack the damn crystals in such a way as to eke out that last 0.5 warp factor necessary to escape the evil baddies chasing the Enterprise.
Scotty knew about estimates - and how much you could pad an estimate in order to ensure that you got exactly the time necessary to complete your work, down to the minute. He was not above a brawl or two, but when it came right down to it, a vacation was the time you could actually get to that stack of Linux Magazines from 2215 that you’ve been putting aside for the last five years and just read.
The Enterprise needed Scotty, far more than it needed Kirk or Spock or even McCoy, yet he was always little more than an odd bit player, the one who was never on the bridge … unless he was repairing one of the computer panels that everyone else kept falling into every time the gravitational system failed, which usually didn’t happen because of anything that Scotty did, but because Kirk seemed to have absolutely no sense of restraint or the cost involved in replacing one of those warp nacelles. And AJAX … let me tell you about AJAX on the Enterprise …
I … oh, I’m sorry … this talk should have been about AJAX in the Enterprise. Oops … Um, okay, the slides are already prepared, and I’m going to be in serious trouble if I have to redo this thing from scratch in front of such an august group of people as yourselves … so how about letting me tell you a little bit about AJAX on the Enterprise, and we’ll see if maybe, just maybe, there are a few nuggets of wisdom (or at least crystals of dilithium) that we can extract from all this when dealing with the issues of AJAX in the enterprise.
In the introduction to the early Star Trek episodes, the hope of NBC (or at least Gene Roddenberry) was fairly clear - the Enterprise was on a five year mission. Unfortunately for them, they managed to get through only three before the axe fell (and not surprisingly, when Patrick Stewart’s stentorian tones introduced The Next Generation two decades later, it had become a “continuing mission”).
However, I believe that there was something about that five year bit that’s actually pretty important in the here and now. In the 1960s, central planning was as much a part of the American economy as it was of the old pre-perestroika Soviet economy, and the five year plan described what was often taken as a convenient metric for how far one could plan before things became too unpredictable.
Five years also seems to be about the lifespan that it takes for “major” technologies to go from being a good idea to becoming foundational. (Note that this differs fairly significantly from product marketing lifecycles, which seem to have about a three year cycle from inception to obsolescence). I believe that we’re at one of those interesting transitional points where things really are changing in radical ways, the end of one “five year mission” and the beginning of another, waiting only for Picard to make it so.
Five years ago, several very interesting things were happening, both in software and in business in general. The tech sector was collapsing, warp shields blowing left, right and center. Now, to someone who’s weathered a few of these five year plans, the tech sector collapsing was really nothing new - it’s an industry that’s built upon promises of miracles, and every so often the bill comes due. People who invest in tech hoping for outsized gains are generally deluding themselves - tech always underperforms in the short term, and overperforms in the long, but in ways that few people can really imagine.
However, in spite of, or more likely because of, this collapse, people who had been hoarding their cool ideas in order to capitalize on the next Bay Area VC suddenly found themselves unemployed and sitting in their parents’ spare bedrooms with time on their hands while they waited for some response — ANY response — from an employer. So they did what computer people always do when the next boom becomes the next bust - they began to network.
Standards groups that had been rushing to get something out the door for their member companies began to slow down and actually take some time thinking about those standards. Several good ones came out between 2000 and 2002 - XSLT, XPath, XML Schema (well, maybe not schema), XForms, XHTML, DocBook (just for a break from the 24th letter of the alphabet), SVG, ebXML, RDF, and a whole host of specialized industry specific languages from XBRL to MathML to HumanML (yup, it’s up there in OASIS - I was a member of that working group for a while).
Meanwhile, Linus Torvalds’ pet project went from an interesting hobbyist effort to increasingly looking like a standard itself, accreting stillborn commercial products that were given new life in the long tail, reinforcing the notion believed by most programmers (and espoused quietly by Scotty himself more than once) that if you get two developers communicating with one another, you get something more than twice as good as what each can develop separately, that three tend to add value proportionately, and so forth.
In other words, those five years of “down” time were the time of real research and development, done not with the hope of getting that next crucial patent (or the million dollar pay-off) but rather because the work represented real needs that had to be met, and it was to everyone’s benefit to do so. Standards matured, projects started and worked and bloomed and died, and out of the remnants came new projects and the further tinkering of standards.
One of those revenant projects was the ghost of Netscape. I’m going to speak heresy here, in San Jose, but Netscape failed because it wasn’t good enough. You give away a perfectly good software product for free against a competitor who has billions more in the bank, and while you will find people cheering you on, they, and you, are idiots. You will fail. Netscape failed.
There is a lesson in that, a lesson that emerged with HTML (and that occasionally needs to be relearned on the XML side — and even on the AJAX side). Simple is good. Indeed, perhaps more than that, simple is so friggin’ essential that your development efforts will most likely fail spectacularly if you do not embrace that fundamental notion.
That message pump - the XMLHttpRequest object, to call it by name - means that you can send information from the client to the server and back from within a web page. Of course, you can do that anyway, but the important distinction is that with the message pump you are not necessarily forced into refreshing the entire page every time you need to change some aspect of it. In programming circles, this means that state management no longer needs to be performed exclusively on the server, but can in fact be significantly offloaded to the client.
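To make that concrete, here’s a minimal sketch of the pump as it looked circa 2006 - the endpoint URL and element id are invented for illustration, and any real application would wrap this in a library:

```javascript
// Minimal AJAX "message pump" sketch (hypothetical URL and element id).
function fetchFragment(url, onLoaded) {
    // Cross-browser construction, circa 2006:
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()                      // Mozilla, Safari, Opera
        : new ActiveXObject("Microsoft.XMLHTTP");   // older Internet Explorer
    xhr.open("GET", url, true);                     // true = asynchronous
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            onLoaded(xhr.responseText);             // hand the fragment back
        }
    };
    xhr.send(null);
}

// Replace one region of the page without refreshing the whole thing:
fetchFragment("/feeds/status.xml", function (text) {
    document.getElementById("status").innerHTML = text;
});
```

The point is not the half-dozen lines of plumbing but the fact that only the “status” region changes - the rest of the page, and whatever state it holds, stays put.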
Now, to take this back into the Star Trek metaphor again (hah, you thought I’d forgotten, hadn’t you!) imagine the Enterprise without transporters (or look at the final series … or maybe not). You want to get valuable medical equipment to the colonists at Rigel 7, you have to launch a shuttlecraft filled to the brim with equipment and fuel, have them descend into a hot steamy jungle with only the vaguest hope of finding a convenient air strip to land on, have to provide armed guards for the shuttle while you unload the likely heavy equipment, then take off from an unfamiliar planet, in hopes of a rendezvous with the mother ship two or three days later. This is web programming circa 1998.
To get back into contemporary terminology, what this means in practice is that rather than creating a single page from fairly complex components on the server and needing to maintain this information on the server, you instead push the components onto the client, and each component in turn becomes responsible for its own interactivity with the server. The state of the application in turn either ends up residing within each component, or within a client-side “model” server which all of the other components interact with.
The distinction between these two forms is important (and I’ll get to them momentarily) but one of the immediate upshots of this is that the server can in fact become dumb - it doesn’t need, in either situation, to retain anywhere near as much state as it did before for that particular session or application. This has a number of immediate consequences:
The server need send each component only once, then let the component handle the presentation layer directly, rather than performing this task itself for each component every time some aspect of state changes. This makes the server side code easier to write and more modular to maintain.
The server can standardize on a single given transport protocol that the various components can use, meaning that you have less need for extensive server side development of “translators” between databases and a whole host of different presentation formats.
The server layer becomes thinner - more a generic conduit between the database and the client than a large set of custom presentation and content pages, and in general this translates into the ability to support more sessions with the same resources.
From the client standpoint, however, things tend to get potentially more complex. (The third law of programming thermodynamics - complexity never disappears, it only moves around). Browsers are perhaps more homogeneous with regards to interfaces than they were five years ago (especially now that Internet Explorer 7 is on the horizon), which in turn means that the amount of specialized code necessary to handle the diversity of browsers has shrunk. It hasn’t disappeared entirely yet, though the good news is that the benefit of maintaining a uniform set of interfaces appears to have sunk in with just about all of the major players.
This point in turn raises another, in many ways a far more crucial one. The AJAX movement is not about calling home without refreshing the page, nor about cool widgets appearing in web pages, displaying the latest feeds from Slashdot or neat drag and drop effects, though certainly all of these have a place. Instead, the primary driving motivation of AJAX is the fundamental belief that the browser is ultimately the last platform, that the web will not truly be universal until browsers can do everything that a standalone desktop environment can do, regardless of whether there’s a multi-colored flag, a fruit, or an aquatic fowl on the start-up screen.
This shouldn’t be a radical point, but somehow it is. Your customers, the employees at your company, and you individually spend a huge amount of time in front of web browsers, which are in turn becoming the primary interfaces for all modalities of communication. I haven’t used a standalone email application in months, most of my IM communication occurs within the browser context, and increasingly my production tools exist as extensions to my web browser. For many people, going from a web interface to a standalone application seems a step backwards, forcing them from their primary point of contact for news, documentation, and communication into an isolated environment where they have to run the browser in the background and click back and forth in order to shift between the two.
AJAX has gained momentum not because someone put a message pump onto a browser with 15% of the market, but because this move has basically catalyzed a reaction among the other browser vendors and projects and caused the web developer sphere to shout “Enough is enough! If certain parties won’t get their act together then we will solve this problem ourselves!”
Movements are funny things, especially in technology. No one takes them seriously at first — there are no press releases, no aging rock stars singing the praises of the product, usually just a handful of people who recognize that there is a problem and that the “market” is not rushing to solve it because there’s no immediate money in it.
Often there is a single event that sparks the whole thing - a programmer getting frustrated because no one can find information about physics papers written at the research center where he works and putting up a small set of tools for free, a grad student who sends out a note saying that because he can’t afford to use the university’s Unix implementation he’s writing his own for free, and would anyone else like to help … events that occur almost daily now, and that are only important in hindsight. People pitch in not for glory or money (because there seldom is much of either) but because most software developers are a lot like Scotty - they do things because those things need to be done and the problems are interesting enough to them to make it worthwhile.
Yet these sparks are almost invariably observable only in retrospect - and what’s more, such sparks are much like those that start a forest fire - there may be dozens or hundreds of them off of a campfire that go nowhere because the conditions aren’t right, but if the weather has been too dry for too long, if the underbrush has become overgrown and primed, then any one of those sparks (or many of them) may be responsible for the raging conflagration.
(This same argument, by the way, is one of the most compelling I’ve seen against software patents, as important as they may seem to CEOs and investors - good ideas can only exist within a proper context - too early and there isn’t enough technology to support the concepts, too late and the ideas become obsolete. Because software developers live in a medium of common (and commonly available) ideas, it is very rare for a truly unique idea to actually occur within this space.)
It is arguable whether this should be called Web 2.0. It’s a nice catch-phrase, and I’ve written a few articles myself on what Web 2.0 really means. However, I think that this tends to mask the fact that what is going on here is essentially a continuation of what happened in the 1990s, after taking a few steps back to rejigger some of the basics … most notably XML.
The argument has been made elsewhere, and it is one that I will repeat here, that the end of the dot-com era occurred because we had pushed the prototype phase of the web too far and thought it was complete. I’m a software developer. I practice a form of development that would likely not be out of place at any of your companies - I start with an idea, a model of where I want to go, and build it much like I would a sculptor’s maquette from clay — add a module here, rewrite a part of an API over there, building something up in pieces until I get about as far as I possibly can. However, and this is the important part, this maquette exists only as a prototype for me to understand what I need to do in the final product. Functionally it’s a mess - the API may not be consistent from one class or structure to the next, the XML may be hideously non-optimal for either performance or updating, the documentation consists mainly of // To Be Written. It will work, indeed, it might work quite well because I’ve been doing this gig for a while, but it will be almost impossible to maintain, and passing it off to another programmer at that point will frankly just be an invitation for him or her to rewrite it.
However, that maquette is important to the sculptor, just as the prototype is important to a software developer. It helps either of them shape their final vision, and having completed the prototype, the developer can then go in and rebuild it right, ensuring that there is fundamental integrity between and within the components, that the application is able to integrate properly, and that the resulting product is not only functional but (this is the critical thing) maintainable.
Your applications will start to become obsolete the moment the programmers stop working on them, because the business cases that the software was intended to solve will change in response to changes in the business environment, and will also change because you have solved the immediate business cases, which in turn opens up possibilities that weren’t open before. What this means is that your applications will spend far more time in a phase of incipient obsolescence than they did in development, which means in turn that they should be designed to age well.
Given all that, we developed the prototype with Web 1.0, and like all too many products out there, the prototype was shipped. Web 2.0 is not a new web; it is what happens after engineers take the crash test dummy from the 100 mph collision with a tanker truck and examine what’s left. AJAX is, as a consequence, a natural evolution of that.
The issue of XML is perhaps fundamental to this whole discussion. XML is more than just a replacement for HTML, and after a decade of XML being out there I’m not going to spend any time digging into what exactly it is. If you don’t know, ask your programmers. If your programmers don’t know … fire them. Seriously. All of your data will eventually be moving around in XML streams of one sort or another if it doesn’t already, your databases are increasingly likely to speak XQuery as well as SQL (and there are MANY MANY benefits to that), chances are that your middleware is increasingly tasked with transporting and manipulating XML, and of course your client applications are increasingly assuming one or another XML dialect in order to render content. That’s not even beginning to talk about the XML services that are out there, or the fact that in your verticals your customers, business partners and competitors are already working with industry specific XML schemas and will be expecting you to do the same. If your programmers don’t know the basics of working with XML, then chances are pretty good that they will be a liability real soon now.
Keep in mind that XML is fundamentally a mechanism for abstraction. It’s not a product - it is not even, technically speaking, a language. It’s simply a set of conventions for structuring data in particular ways and providing means to identify compositional elements within that data. I remember one client nervously viewing the medical landscape and getting alarmed that a particular hospital group was going to XML. I personally was ecstatic - it meant that the application I had developed for the client would be able to work more easily there than with those groups that were still dealing with patient records on paper (or even in SQL databases). It is a key point to this whole Web 2.0 thing - the free flow of information requires a common structural language, and XML, for all its warts, is it.
However, that doesn’t mean that XML by itself is the answer, and more importantly doesn’t mean that XML has not itself been changing to reflect the evolution of the web. In particular, there are several key aspects of XML that will likely loom large in the AJAX world, and you should be looking very carefully at these as you are evaluating technological investment for the next few years.
Syndication is for more than just blogs. Incredible amounts of information in your system, from red shirt security types that are expendable to planets that serve Earl Grey tea, can be thought of as lists which can be presented as syndicated information. Atom is an XML format that’s designed both as a good mechanism for presenting lists of content and one which includes its own (openly available) publishing protocol.
The fact that each Atom entry can also contain a veritable forest of links of varying types and semantics also makes it a good, lightweight alternative to RDF and other relational formats, especially once people start migrating to XQuery enabled databases.
For instance, consider as an example a set of schematic diagrams (say of starships, just to keep in theme here), with each ship being itself one entry in an Atom feed. Each schematic in turn contains a breakdown of the schematic by section, and each section in turn contains a list of callouts that point to specific items of interest within the schematic section. If each of these lists consisted of entries defined with appropriate linkage structures, then this “application” essentially becomes simply a matter of pulling in external “news feeds” that both contain enough data to describe particular nodes in a graph while at the same time providing unique links capable of pointing to subordinate “feeds”. Certainly such information can be expressed as RDF as well, but the fixed commonality of Atom feeds means that there is typically enough there to populate generalized components without requiring “semantics” to seep into the equation.
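As a sketch of what one such entry might look like - every name and URL here is invented for illustration - the linkage is carried by ordinary Atom link elements:

```xml
<!-- Hypothetical Atom entry for one starship schematic; names and URLs invented -->
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:fleet:ncc-1701</id>
  <title>USS Enterprise - Constitution Class Schematic</title>
  <updated>2006-10-04T09:00:00Z</updated>
  <!-- points at the subordinate "feed" of sections within this schematic -->
  <link rel="related" type="application/atom+xml"
        href="/schematics/ncc-1701/sections.atom"/>
  <!-- points at the rendered diagram itself -->
  <link rel="alternate" type="image/svg+xml"
        href="/schematics/ncc-1701/diagram.svg"/>
  <summary>Deck-by-deck schematic, with engineering section callouts.</summary>
</entry>
```

A generic feed viewer can walk those links without knowing anything about starships; that is the whole trick.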
What’s perhaps more compelling about such syndicated feeds is that the system for displaying them assumes that such information changes over time, that the hyperlinked lists are themselves ephemeral and have some form of temporal or thematic relevance. Obviously, news in general fits this bill well, but so does weather information, availability of computer systems, lists of students in a given course and so forth. An Atom list is fundamentally a cohesive “editorial” unit, with all items in the list being tied together by some relevant criterion.
One of the critical issues inherent in the deployment of web services has been the question of determining how to designate list or array content. If you think of an Atom feed as an array in which each entry has a minimal set of “metadata” that can provide some context for the links contained within the feed, then you can do such things as build tools that will display Atom without needing to know what the specific “payload” is, which in turn makes it much easier to componentize such viewers. This is discussed more in the final section of this talk/paper, about bindings and components.
Every era in computing has defined its own paradigms for reading and updating data. If you are reading a relational database to convert into XML, then sending XML up to the server and spending time with the DOM converting it back into SQL, XQuery is for you. XQuery is a lightweight (and non-XML) language for manipulating XML, based in great part on the XPath 2.0 specification, which is going golden this month.
I’ve written two books, and perhaps a dozen articles and blog postings on XQuery. They were, admittedly, too far ahead of the curve - the specification for an XML oriented query language has been underway since before 2000 and even today the formal specification strictly handles only the query (not the update) side of data management. However, one of the most interesting facets of XML Databases has been the fact that they have tried a number of different mechanisms for handling updates, and the most elegant of them seem to tie into the notion of performing such updates in the same query space as used for getting XML requests in the first place.
I think it’s fair to talk about XQuery and XML Databases in the same breath. The two are fundamentally tied together, and are further tied into the notion of data provider abstraction. A significant amount of the work involved in putting together a web application of any complexity involves the translation layer necessary to communicate between the database and the web client. For the most part, such middle tier services involve using some form of data abstraction service such as ODBC, ADO, Spring, etc. in order to read from or write to specific fields in a database, typically using a language such as C# or PHP to handle this work.
Unfortunately, such code is remarkably fragile, is very verbose, always deals with information at the atomic level even when the information may be coming in (or needs to be produced) at a more abstract aggregate level, and all too often is spread out over several different functions or web services, making maintenance costly and cumbersome.
XQuery shifts the processing of such queries (and potentially updates as well) out of the server language and into XQuery scripts. Such scripts live with the data itself, speak XML natively, and can return aggregate structures directly, without a field-by-field translation layer.
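A minimal sketch of what such a script looks like - the collection path and element names are invented, and the collection() function’s behavior varies somewhat by database:

```xquery
(: Hypothetical FLWOR expression; collection path and element names invented :)
for $ship in collection("/db/fleet")//ship
where $ship/class = "Constitution"
order by $ship/registry
return
    <entry>
        <title>{ string($ship/name) }</title>
        <registry>{ string($ship/registry) }</registry>
    </entry>
```

Note that the result is itself XML - the query language and the transport format are the same medium, which is precisely what eliminates the translation layer.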
XML databases are becoming both fast and robust, and there are some interesting update extensions proposed (and integrated into open source projects such as eXist and Sleepycat’s Berkeley DB XML) that handle the update side of XML data query in a clean and seamless manner. From personal experience, such databases can cut your development time significantly in the web application space.
XSLT in the hands of a good programmer is a wonder tool, especially if you can use XSLT 2.0, which goes golden this month as well - it provides a means to perform exhaustive transformations from one form of XML into another, can read from multiple XML streams and produce multiple forms of output, can easily be subclassed to handle variations in formats, and works incredibly well even in bindings (which I’ll talk on shortly).
One additional facet of eXist that I like is the ability to perform XSLT transformations from within an XQuery and then continue processing the results in the same query (including passing the transformation on to a conditional pipeline of other transformations). I cannot stress enough how important XSLT is even now, and how it will perhaps be the dominant mechanism for manipulating XML in the future.
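For a flavor of what a transformation looks like - the source and output element names here are invented for illustration:

```xml
<!-- Minimal sketch: turn a hypothetical <fleet> document into an XHTML list -->
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
  <xsl:template match="/fleet">
    <ul class="fleet">
      <xsl:apply-templates select="ship"/>
    </ul>
  </xsl:template>
  <xsl:template match="ship">
    <li><xsl:value-of select="name"/> (<xsl:value-of select="registry"/>)</li>
  </xsl:template>
</xsl:stylesheet>
```

The same source document, fed through a different stylesheet, becomes an Atom feed or an SVG chart; the transformation, not the data, carries the presentation.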
The distinction may seem minor - XHTML is, for the most part, simply an expression of HTML using XML rules rather than the older SGML rules - but the effects are profound. By shifting to XHTML, you gain all the manipulative tools of XML, including the ability to create arbitrary tags that can be transformed or otherwise bound, the ability to incorporate other namespaces (from the graphically oriented SVG to MathML to RDF for metadata to XForms), and the means to validate such XHTML content quickly and easily.
What’s more, you can incorporate XHTML fragments into transport formats such as Atom, or as secondary documentation within many other formats. Finally, even browsers which don’t formally recognize XHTML (such as Internet Explorer) can still take XHTML as valid HTML with a minor change in the response header - serving the page as text/html rather than application/xhtml+xml.
As a technology evangelist, one of the things that I have to be very sensitive to is recognizing those technologies that are still in their infancy but that have the potential to be much more. XForms definitely fits into this category. XForms, another W3C standard, was originally started to create a somewhat more robust set of controls for the web that could take advantage of the newly emergent XML standard.
One of the first things that came from this was the realization that such components were ultimately mechanisms for handling data-binding (assuming, as has proved prescient, that XML would likely be the ultimate expression of that data). This in turn implied the existence of a data-model, and by extension a control/binding layer that serves both to map content to the controls and to constrain what the content can be.
What emerged from this effort was the XForms standard, which was released as a 1.0 standard in 2003 and is scheduled to be released in a 1.1 form later this year, designed to handle use cases and ambiguities that arose with the original specification. The specification includes support for a number of features (a small sketch follows the list):
XPath Mapping. All components are populated either directly or indirectly using XPath expressions.
New Form Elements. Including <output> and <range> (slider) components.
Repeating Component Groups and Contextual Toggles. Making it possible to generate repeating tables, wizards and similar components.
Add/Delete Capability. Items (and complete substructures) can be added or removed from the data model using components without explicit need for imperative code.
Auto-updating via XML. The XForms data-model will, upon submission, post the relevant portions of the data model as XML to the server automatically without losing the page state.
Constraint Modelling. Constraints are applied to the data model, not the components, though the reaction of components to changes in the value or validity of the model can change the UI.
Abstract Component Model. XForms components are abstract, and are intended to be subclassed or otherwise bound via some form of presentation layer. All implementations provide default bindings, of course, but the XForm should ultimately be “skinnable”.
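Here is a small sketch of those pieces together - the instance data and submission target are invented for illustration, and a real form would live inside an XHTML page:

```xml
<!-- Hypothetical XForms fragment: instance, constraint, control, submission -->
<xf:model xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:instance>
    <crew xmlns="">
      <name/>
      <rank/>
    </crew>
  </xf:instance>
  <!-- the constraint lives on the data model, not on any control -->
  <xf:bind nodeset="/crew/name" required="true()"/>
  <xf:submission id="save" method="post" action="/roster"/>
</xf:model>

<xf:input ref="/crew/name" xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:label>Crew member name</xf:label>
</xf:input>
<xf:submit submission="save" xmlns:xf="http://www.w3.org/2002/xforms">
  <xf:label>Save</xf:label>
</xf:submit>
```

Note that there is no imperative code anywhere in that fragment - the model posts itself as XML when the submit fires, which is the auto-updating behavior described above.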
XForms implementations have been around for a couple of years, with a number of different approaches being taken. Solutions such as IBM’s Workplace Forms (formerly PureEdge) and x-port’s FormsPlayer are stand-alone solutions existing within a separate (albeit embeddable) application space from the normal browser. Orbeon is an open source AJAX implementation, while the Mozilla Foundation is creating an XForms extension which should be natively installed with Mozilla Firefox 3.0.
While there are other XML aspects that play a part in the SOA/AJAX space, ultimately they are all moving towards a form of application in which data (moving around typically as XML, or in the case of lighter-weight messaging as JSON) moves from point to point in a distributed network, one where the client becomes far more important, and one where the same client architecture is likely to be rendered on the fly in response to state contained within the user profile and experience.
Several speakers at this conference have made the pronouncement that while AJAX is powerful, it’s not QUITE there yet. I am inclined to agree, for a number of reasons, though perhaps not the ones that were in mind as these talks were given. Last year (2005), Tim Berners-Lee gave a keynote address at the World Wide Web Conference in Japan where he pointed out that the insecurities plaguing web browsers were a direct result of the introduction of scripting languages within the confines of the web browser.
Understanding why such a blanket statement, one perhaps at odds (though in the long run I don’t think it is) with what’s covered at the AJAX conference, should both be made and be true can go a long way towards understanding where the web itself is going, and certainly where web development is heading. There are a number of key insecurities about immediate scripting that make the web more fragile:
Platform Incompatibilities. Each incompatibility offers both a chance for code to break and for rogue code to get through. While a common framework here would be desirable, vendor differentiation makes that framework difficult to achieve.
Cross-Domain Scripting. As AJAX programmers, we want the freedom of getting out of the sandbox, but the security restrictions involved in such sandboxing make a certain amount of sense.
Lack of Validation. Inline script tends to be cut and paste code, which is far more difficult to validate, because it is typically developed within a production environment at production deadline speed.
Given all these factors, is there any real alternative out there? I believe that there is, and it’s one that many third party vendors have of course twigged onto. This alternative is XML Binding, which can be thought of as the following:
An XML Binding is the association between an XML element in a presentation environment and some form of functionality from an external “binding document” that persists beyond a single presentation generation. Properties for the bound element can in turn be set via attributes, or from associated methods defined by the binding itself. Such a binding is also known as a behavior.
What this means in a nutshell is that you can introduce XML tags into your presentation documents (XHTML or otherwise), define a binding or behavior on those tags, then let this underlying behavioral functionality control the interactions with clients within the application.
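As one concrete flavor of this, here is a sketch in Mozilla’s XBL dialect (Internet Explorer has its own “behaviors”, and each of the frameworks mentioned below has an equivalent) - the tag name, namespace, and file names are all invented:

```xml
<!-- In the page: an arbitrary tag in its own namespace, bound through CSS -->
<style type="text/css">
  @namespace st url("http://example.org/startrek");
  st|warp-gauge { -moz-binding: url("bindings.xml#warp-gauge"); }
</style>
...
<st:warp-gauge xmlns:st="http://example.org/startrek" max="9.5"/>

<!-- In bindings.xml: the behavior that the tag picks up -->
<bindings xmlns="http://www.mozilla.org/xbl">
  <binding id="warp-gauge">
    <implementation>
      <constructor>
        // runs when the element is bound; the page itself stays script-free
        this.max = parseFloat(this.getAttribute("max")) || 10;
      </constructor>
    </implementation>
  </binding>
</bindings>
```

The page author sees only the warp-gauge tag; everything behavioral lives in the binding document, where it can be tested and secured in isolation.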
Such XML Bindings can cover a lot of different functionality. A short list of such bindings includes such things as:
Page Layout Elements. This includes oriented boxes, containers, flexible spring controls, stacked boxes, tabsets, wizards, and similar components.
Data Fed Elements. Undisplayed data providers (both static and dynamically refreshing, such as a news feed reader), dynamic tables, XForms data models, live treeview components and so on.
Graphical and Animated Elements. Defined or complex generated shapes, charts, graphs, bound image components, animated clocks and other widgets.
Secondary Applications. Some tags may also contain whole applications - slideshow widgets, image selectors, IM windows and so forth.
In short, such XML Bindings give you all that you need to effectively create either primary applications (an insurance application) or subordinate infrastructure (such as a notification widget in a web page) and can do so without necessarily requiring an extensive amount of server side programming.
Most commercial and open source XML frameworks are built upon the notion of bindings in one fashion or another, although the ability to define subordinate components from those frameworks is implementation dependent. Languages such as Mozilla’s XUL, Adobe’s Flex, Microsoft’s XAML, and Laszlo Systems’ LZX are all built in this same manner, though being able to build components (which is what XML binding is all about) that can be directly integrated into web page flows varies considerably from app to app.
Such bindings offer a huge win for enterprise development in particular:
Standardized Library. Code can be created in isolation and then added to an enterprise library of useful objects, making it much easier for IT managers to control code workflow.
Separation of Skillsets. A component developer is a considerably more difficult person to find than web designers and developers, so by creating specialized XML bindings you can keep such developers focused on building components rather than spending all of their time marking up code.
Security. Such components, being tested in isolation, can be introduced independent of production timelines and can be made appropriately secure. Moreover, such components can also hide their implementation - bootstrap code that pulls in what should be private code will only function if the page is within the appropriate security context, so people can’t simply follow a URL to get at a source code library.
Code Maintenance. With XML Bindings you can keep your XML contained within a single block without the need for additional inline scripting. This abstracts the application, making it easier both to read and to change under the hood so long as you’re not changing the XML interface, making it much easier to maintain in the long run.
Platform Flexibility. An additional side-effect of such bindings is that for limited capability devices the XML from the source code can be transformed directly on the server, and the bindings themselves can be amended to handle device specific implementation details. This masks the absence of a common development framework, something that currently limits AJAX.
The efforts going on in the web right now are not a radical revision of the past, but rather a refinement and “refactoring” that is at the heart of nearly every software endeavor, and it is this refactoring, far from obvious for those in the thick of it but profound nonetheless, that is ensuring the integrity of the web. We are moving to a model where we have both an imperative model for power and a declarative model for structure, something near and dear to an old engineer’s heart, and it will be both glorious and, more to the point, elegant, when it is done. May you, gentle reader, boldly go where no one has gone before.