While giving a presentation at the Enterprise Architect Summit in Barcelona, on the subject of ESB and other SOA-related infrastructures, I got into a public altercation with a couple of Microsoft guys who were in the audience. It turned into a pretty heated debate over core architectural fundamentals, which we hammered out in front of an audience of about 75 unsuspecting conference-goers. I have to say that in all my years of public speaking, a spontaneous debate has never broken out quite like this. The conversation that ensued brought to light some important and fundamental differences between using an ESB as the foundation for building a Service Oriented Architecture vs. using a combination of Biztalk and WCF (formerly known as Indigo).
As part of defining your strategy for building and deploying a SOA, you may be considering a variety of SOA infrastructure support products. As an enterprise architect you probably face these choices regularly, as vendors offer their wares and position their software as the best foundation available for building Service Oriented Architectures. Keep in mind that there are key differences in how those SOA infrastructure offerings are architected, and those differences have real ramifications for how the infrastructure allows you to build and deploy your SOA across an extended enterprise.
The debate arose out of my discussion of the ESB lightweight service container model, and how that compares and contrasts to a hub-and-spoke integration broker architecture. The focus of my presentation turned toward scalability issues. In my discussion I was talking about how an ESB allows for the selective deployment of specific integration functionality as independently scalable mediation services. I used an example of an XSLT-based transformation service, and talked about how an ESB allows an instance of that transformation service to be separately deployed in its own lightweight ESB container.
It’s no secret that the parsing and manipulation of XML, including an XSLT-based data transformation, can be an expensive operation in terms of consuming computing resources. Using the ESB distributed container model, multiple instances of a particular transformation can be scaled and load-balanced across multiple containers across multiple machines in order to be able to support increased demands on the particular transformation as the transformation becomes more complex or the service invocation traffic increases.
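The idea of scaling one mediation service across containers can be sketched in a few lines of Python. This is a hypothetical illustration, not Sonic ESB's actual API: a plain function stands in for the XSLT engine, and a round-robin dispatcher stands in for the ESB's load balancing across containers on multiple machines.

```python
from itertools import cycle

class TransformService:
    """Stands in for an XSLT engine instance; imagine one per ESB container."""
    def __init__(self, container_id):
        self.container_id = container_id

    def transform(self, message):
        # A real service would apply a compiled XSLT stylesheet here;
        # uppercasing the payload keeps the sketch self-contained.
        return {"payload": message["payload"].upper(), "via": self.container_id}

# Three instances, notionally deployed across three machines.
containers = [TransformService(f"container-{i}") for i in range(3)]
dispatcher = cycle(containers)  # round-robin stands in for ESB load balancing

def send(message):
    """Hand the message to the next available transformation instance."""
    return next(dispatcher).transform(message)
```

The point of the sketch: adding capacity for this one expensive operation means adding entries to the `containers` list, without touching anything else in the integration stack.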
I then talked about the contrast of the EAI/Integration broker approach, which typically employs a monolithic architecture that includes data transformation, messaging and connectivity, routing of messages based on business rules or scripting, application adapters, and process control all in one server implementation (an ESB, by the way, also does all these things, but each capability is separated out into its own separately deployable and independently scalable piece).
In order to scale up the XSLT transformation using the monolithic EAI broker approach, you have to install that EAI broker on a really big machine, or if the EAI broker supports the notion of clustering for scalability purposes, you would have to install that entire EAI broker stack across multiple machines. Keep in mind that we are simply trying to support the scaling up of the one XSLT transform that sits between two popular applications! All the while that transformation will still be trying to compete for computing resources with all the other things the EAI broker is trying to do – business rules, process control, execution of other services, application adapters, etc.
OK, I started this writeup by saying it was about an altercation with a couple of Microsoft guys…
I then said something like – “…even Biztalk and Indigo (WCF), with all their fanfare, still suffer from this problem! Indigo provides a nice Web Services enabled messaging bus, but when you’re doing the rest of the integration piece you need Biztalk, and you can’t selectively deploy and independently scale individual integration components within Biztalk”… I was pretty fired up when I said it too.
I knew there were two Microsoft guys in the room—I was talking to them earlier that day. I didn’t mean to pick on Biztalk per se. I was just using that as an example—this same situation exists with any SOA infrastructure that relies on an EAI broker architecture – TIBCO BusinessWorks, webMethods Integration Server, etc. Sometimes I pick on them too.
Just then Jeromy Carrière, Sr. Technical Evangelist for Microsoft, interrupted and said “Excuse me, I hate to interrupt your talk, but actually you can do that. You can actually selectively deploy individual components in a Biztalk server.” Then the other Microsoft guy, Arvindra Sehmi, Lead Architect, EMEA Developer and Platform Evangelism Group, also chimed in, citing a couple of document numbers that he said were how-to documents. Then he said “Well, it’s your talk, you can say what you want up there…we’ll take this offline. Please continue.” So he was basically saying that I was full of crap, but that’s allowed because I had the floor. I wasn’t happy with that situation, so I decided to stop the rest of my presentation and debate the issue right then and there in front of the rest of the audience.
In the end, we determined that it was apparently possible to strip down a Biztalk server and deploy it with just one transformation engine in it. However, and this is a pretty BIG however, Jeromy did concede in front of the whole room that this stripped-down Biztalk deployment would be a much heavier-weight entity than the ESB container model I was describing, and that there was a cost associated with that. We never did get to talking about whether this pared-down Biztalk server could easily be deployed across multiple machines for the purpose of load balancing an individual mediation component. We also didn’t get to talking about the licensing models of either approach (under the Sonic ESB licensing model you are free to deploy thousands of containers across the extended enterprise without incurring additional license cost). And by the way, how was Arvindra able to cite those document numbers from memory? It must be a pretty hot topic for them lately.
That wasn’t the end of it. Arvindra didn’t like how this conversation was going, so he then decided to attack my example and say that it was unrealistic. He tried to make the point that a separately deployable XSLT transformation service was a completely invalid use case…that data transformation belongs at the application endpoints! How intriguing, I thought to myself. Just then another audience member chimed in, announced himself as someone who had been doing distributed computing since the DCE days, and stated that my argument for a separately deployable and independently scalable transformation service is a very valid deployment scenario for anyone who has ever done any kind of n-tier architecture. Thank goodness! Someone else stood up to support me. It was getting pretty hot up there :).
I have been pondering this portion of the discussion since then. It’s interesting that the Microsoft way of thinking, with transformations co-located with the application endpoints, represents a very endpoint-centric point of view about building a SOA. What exactly did he mean by that anyhow? Was he suggesting putting a Biztalk server with every application? Perhaps he was talking about using a message handler. Plenty of SOA advocates talk all day long about how SOA and web services are all about the SOAP stack, the WSDL interface, and how stuff gets serialized and deserialized across the wire. While those things are really important, they are not what a SOA architect should be thinking about when building a SOA. In fact, I would submit that the entire design center of building a SOA is about the whitespace between the endpoints! When I refer to what’s between the endpoints, I’m not just talking about reliable and secure protocols (which are important too!), I’m talking about mediation. Mediation comes in many forms. It can be provided in the form of intermediary services that perform content-based routing and data transformation. Mediation can also take the form of protocol mediation, for example plugging one application into a SOA using a protocol such as FTP, plugging in another application using an adapter, and plugging yet another application into the SOA using SOAP and web services. The mediation that a SOA infrastructure such as an ESB can provide in such a situation is 1) abstracting the details of the protocol away from the services being implemented, 2) mediation between the interaction models that each connection model might imply (batch, sync RPC, async, event-driven), 3) mediation by providing a unified service invocation model, and 4) a consistent process control mechanism that controls the interactions between the services.
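As a concrete (and entirely hypothetical) illustration of one of the mediation forms above, here is a content-based router sketched in Python: the next destination is chosen by inspecting the message content rather than being hard-wired into the sender. The service names and routing rules are invented for the example.

```python
def route(message, routes, default=None):
    """Return the first destination whose predicate matches the message."""
    for predicate, destination in routes:
        if predicate(message):
            return destination
    return default

# Illustrative routing table; service names are invented for the example.
routes = [
    (lambda m: m.get("doc_type") == "invoice", "invoice-processing-service"),
    (lambda m: m.get("doc_type") == "order",   "order-entry-service"),
]
```

Because the routing rules are data, they can live in an intermediary and be changed without touching either endpoint, which is exactly what makes this mediation rather than endpoint logic.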
I also agree conceptually that data transformation should be logically associated with the application endpoints, in order to allow a canonical data model to represent data as it passes between applications and services, with transformations to and from the particular proprietary formats performed as needed at the “edge”. In fact, I wrote about this concept in my ESB book, and also in other articles on the subject of the VETO pattern (Validate, Enrich, Transform, and Operate). However, the difference of opinion here is that the philosophy of using an ESB to build a SOA is based on the notion that the association should be a logical one. The physical entities should be capable of being deployed anywhere you choose, based on the machine resources you have and the horsepower required to execute each operation. If you wish to have a transformation step co-located with an application endpoint, you should be able to do that, but you should not be forced to do that.
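The VETO pattern can be sketched as a pipeline of independent steps. This is a toy illustration of the concept only, with invented field names; in an ESB each step could be deployed in its own container, co-located with the endpoint or not, while the logical sequence stays the same.

```python
def validate(msg):
    # Reject malformed input before it propagates downstream.
    if "customer_id" not in msg:
        raise ValueError("missing customer_id")
    return msg

def enrich(msg):
    # A real enrich step might consult reference data; hard-coded here.
    return {**msg, "region": "EMEA"}

def transform(msg):
    # Map to the canonical model the target application expects.
    return {"CustomerId": msg["customer_id"], "Region": msg["region"]}

def operate(msg):
    # Invoke the target service; here we just pass the result through.
    return msg

def veto_pipeline(msg, steps=(validate, enrich, transform, operate)):
    """Run the message through Validate, Enrich, Transform, Operate in order."""
    for step in steps:
        msg = step(msg)
    return msg
```

The logical association between steps is the ordered tuple; where each step physically runs is a separate, deployment-time decision.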
In the end, who cares if my XSLT example is valid or not? It doesn’t matter. Substitute another mediation component such as a content-based router service, an application adapter, or a third-party EDI-to-XML translator for that matter. In the ESB model that entity is still deployed in a lightweight service container that is remotely managed, load balanced, and scalable across as many machines as you need in order to support any increase in demand on those particular parts of your SOA. And those containers can be spread across a Linux machine, a Windows machine, a Solaris machine…anything you happen to have available for allocation. The management layer can deal with the fact that these deployment artifacts are distributed, and make them just as easy to configure and deploy as if they were all in one location.
Finally, Arvindra said “Well, if it’s an XSLT transformation service you want, then Indigo provides that for you. You can put a service anywhere you wish using the Indigo architecture.” My reply was “Yes, perhaps, but we’re not talking about Indigo, we’re talking about Biztalk.” At this point, I thought maybe I should end the debate and try to salvage the rest of the allotted time to finish giving my presentation. However, this last comment got me thinking as well. We started out debating whether Biztalk servers could be stripped down and separately deployed, then ended up on the subject of XSLT transformation services deployed in Indigo (WCF). These are very different things when you are using those technologies together. The XSLT transformation service in WCF got me thinking about another issue, which is configuration rather than coding.
Using the WCF framework, you can plug in an XSLT transformation as a separate service. However, it has to be plugged into a known endpoint and coded into place rather than configured. In Sonic ESB, an XSLT transformation service is extracted from the service repository, configured through the tooling, and added to an ESB process definition using visual drag-and-drop just like any other kind of service. You don’t have to code anything. What happens if, further down the road, the requirements for transformation change such that XSLT is not sufficient, and you need to swap in a third-party transformation engine using an adapter? In Sonic ESB you simply change the process definition using the visual configuration tools to swap one service for the other.
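The configuration-versus-coding distinction can be illustrated with a hypothetical sketch (this is not Sonic's or WCF's actual API, and all the names are invented): if the process definition is data and services are looked up by name, then swapping the XSLT engine for a third-party one means editing the definition, not recompiling the caller.

```python
# Invented names for illustration: a registry of transformation services
# and a process definition that references them by name.
service_registry = {
    "xslt-transform":   lambda m: {**m, "transformed_by": "xslt"},
    "vendor-transform": lambda m: {**m, "transformed_by": "vendor"},
}

def run_process(message, definition, registry):
    """Run the message through each service named in the process definition."""
    for service_name in definition:
        message = registry[service_name](message)
    return message

# Swapping engines is a one-line configuration change:
process_definition = ["xslt-transform"]   # later: ["vendor-transform"]
```

The calling code (`run_process`) never changes; only the configuration data does, which is the property being argued for here.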
Back in July of this year I was at the Burton Group Catalyst Conference in San Diego, where the subject of hard-coded services in Indigo came up during a Q&A session with Ari Bixhorn (Indigo PM). A member of the audience asked whether that coding restriction was going to be removed, and whether Indigo/WCF would move more toward a configuration-friendly approach. Ari explained to the audience that the hard-coded approach was by design, and that the decision to keep it that way was based on feedback from the last Microsoft PDC. Well, c’est la vie! …or should I say, c’est la guerre!
What’s really funny about this altercation is that just a week prior to this spontaneous debate, Arvindra posted a blog entry complaining about how he was uninvited to one of our SOA Architect Forums in London. In his blog entry Arvindra said: “Is it that Sonic Software (and perhaps Dave Chappell himself) is going to say something about Microsoft and our strategy that they don’t want me to hear? Are they embarrassed I might challenge them or kick up a stink? Not likely in a public forum!” How ironic is that?
A recording of the whole presentation, including the comments from the audience members, can be found here:
http://www.ftponline.com/channels/arch/reports/easbarc/2005/video/
Look for the presentation entitled “How the Enterprise Service Bus Delivers on the Value of SOA”.
For more information on plugging in WCF services, here’s a great article:
Introduction to Building Windows Communication Foundation Services
or here’s another one, written by my namesake:
Introducing Indigo: An Early Look
David Chappell (the other one)