Article:   XSLT Performance in .NET
Subject:   Missing Bits - Quite a few
Date:      2003-07-17 00:46:23
From:      zarko

This is what I noticed at first glance:


- XmlDocument is used instead of XPathDocument. The version using the COM DLL admittedly has no choice, but the .NET version does. The nature of the former is to facilitate editing; the nature of the latter is to facilitate XPath matching - and thereby XSLT - when written well, of course. A sketch of the switch is below.
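
For illustration, a minimal C# sketch of that switch, assuming the .NET 1.x XslTransform API of the article's era and hypothetical file names:

    using System.IO;
    using System.Xml;
    using System.Xml.XPath;
    using System.Xml.Xsl;

    class TransformSketch
    {
        static void Main()
        {
            // The article's approach: an editable DOM - convenient for updates,
            // but it drags that editing overhead into the transform.
            XmlDocument dom = new XmlDocument();
            dom.Load("data.xml");

            // Read-only, XPath-optimized store - the natural input for a transform.
            XPathDocument doc = new XPathDocument("data.xml");

            XslTransform xslt = new XslTransform();
            xslt.Load("style.xsl");

            StringWriter output = new StringWriter();
            xslt.Transform(doc, null, output);   // feed the XPathDocument, not the DOM
        }
    }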


- Using a StringWriter and calling .ToString() is a very different operation than just casting xslProc.output - almost as if the author wanted to test how wild code can get, sending objects to the GC and copying strings. A small modification - using an explicit StringBuilder instead of the one hidden inside the StringWriter and thereby forced onto the GC - would at least move it a bit further away from a tutorial sample; see the sketch below.
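
A minimal sketch of that modification, assuming the same .NET 1.x API; the buffer size is an arbitrary illustration:

    using System.IO;
    using System.Text;
    using System.Xml.XPath;
    using System.Xml.Xsl;

    class BuilderSketch
    {
        static string Run(XslTransform xslt, XPathDocument doc)
        {
            // Own the StringBuilder explicitly instead of letting StringWriter
            // allocate and hide one; the buffer can be sized up front and
            // reused across calls instead of feeding the GC each time.
            StringBuilder buffer = new StringBuilder(16 * 1024);
            StringWriter writer = new StringWriter(buffer);

            xslt.Transform(doc, null, writer);
            return buffer.ToString();
        }
    }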


- The article states the goal as "to test the performance of the two PARSERS", which is a largely different task from anything that followed.


- It looks like the author insisted on measuring individual small transforms instead of batches of at least 100 or 1000, so he needed a more accurate clock. That, however, means that a large part of the measurement was a measurement of all kinds of jitter in the system; a batching sketch follows.
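
A sketch of what a batched measurement might look like, assuming .NET 1.x (no Stopwatch class yet, so a plain TickCount over a large batch); the iteration count and helper name are illustrative:

    using System;
    using System.IO;
    using System.Xml.XPath;
    using System.Xml.Xsl;

    class BatchTiming
    {
        static double MillisecondsPerTransform(XslTransform xslt, XPathDocument doc, int iterations)
        {
            // Warm-up run keeps JIT and stylesheet compilation out of the numbers.
            xslt.Transform(doc, null, new StringWriter());

            int start = Environment.TickCount;
            for (int i = 0; i < iterations; i++)
            {
                xslt.Transform(doc, null, new StringWriter());
            }
            int elapsed = Environment.TickCount - start;

            // Averaging over 100-1000 runs drowns clock granularity and system jitter.
            return (double)elapsed / iterations;
        }
    }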


- The rate of processing-time increase indicated in that diagram (5-6x for the size jump from 100 to 500) would earn someone a Turing Award - for a linear general-purpose sorting algorithm (5*lg5 ~ 11.6, not 5) in both engines :-) So it looks like it just shows the trend of increased load and doesn't actually sense anything else but noise.


- If the code from the article was actually used, then the timestamps didn't sense any "COM objects must be freed" time - once it's encapsulated in an Interop object, it lives by the same GC rules. So the closest guess as to what that code actually measured would be: the GC's reaction to Interop objects. Also, unless someone comes up with divine prediction logic, loading something like an XML structure has no option but to fragment memory and therefore be subject to GC in one form or another.


- The only thing that timing shows clearly is that the .NET GC gets more retentive when faced with Interop objects - probably because it knows what a hairy bulk is rolling behind the pointer, but more realistically because a component, a foreign body, is not expected to be trashed every 10 ms. What that means in turn is that such an MSXML-based application would be very unstable - the GC would explode in rather large bursts that could melt down the server. At the same time, a .NET-native object gets the GC working more steadily all the time.


- The XML format is too simplistic, very close to a DB rowset dump, so the measurement becomes one of how these two XSLT engines handle a SQL-esque task they were not meant for in the first place.


- The XSLT is too simplistic and mimics a procedural way of processing, particularly reflected in picking/selecting instead of following the tree - to the point of calling a for-each loop "some actual processing". One normally doesn't have to touch things like sort in XSLT work at all - except for some small, standalone work - or book samples :-)


- That long rename-tags-to-TD XSLT in the article, for example, is equivalent to two very simple and fully declarative templates which a good XSLT compiler can optimize - that list of value-of/selects no one can - they are the spaghetti pointers of the XSLT world. A sketch of the declarative form is below.
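
A sketch of what those two templates might look like, wrapped in C# for loading; the element names (row and its children) are hypothetical stand-ins for the article's actual markup:

    using System.IO;
    using System.Xml;
    using System.Xml.XPath;
    using System.Xml.Xsl;

    class DeclarativeSketch
    {
        // Two match templates instead of a list of value-of/selects; a real
        // stylesheet would also wrap the rows in a table element.
        const string Stylesheet = @"<?xml version='1.0'?>
    <xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>
      <xsl:template match='row'>
        <tr><xsl:apply-templates/></tr>
      </xsl:template>
      <xsl:template match='row/*'>
        <td><xsl:value-of select='.'/></td>
      </xsl:template>
    </xsl:stylesheet>";

        static void Main()
        {
            XslTransform xslt = new XslTransform();
            xslt.Load(new XmlTextReader(new StringReader(Stylesheet)));

            XPathDocument doc = new XPathDocument("data.xml");   // hypothetical input
            xslt.Transform(doc, null, new StringWriter());
        }
    }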


The reason I mentioned these points is that just by switching to XPathDocument I saw a 2x to 10x performance improvement in actual XSLT processing (not simulating a SQL query), and even for the most inefficient constructs, like for-each, XPathDocument suffered at worst a 2x performance hit compared to efficient XSLT code.


The reason is that XPathDocument expects to be queried, only queried and nothing but queried, in accordance with the basic principle of the XML standard (tags are the principal entities; attributes and textual content are secondary). For example, if one were really faced with XML as simplistic as the one in this article, he would simply get things done with a few XPaths, without firing a transform at all - as in the sketch below.
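
A minimal sketch of that shortcut, using the same hypothetical row layout as above:

    using System;
    using System.Xml.XPath;

    class XPathOnly
    {
        static void Main()
        {
            XPathDocument doc = new XPathDocument("data.xml");
            XPathNavigator nav = doc.CreateNavigator();

            // Pull the values straight out - no stylesheet, no output tree.
            XPathNodeIterator rows = nav.Select("/rows/row");
            while (rows.MoveNext())
            {
                Console.WriteLine(rows.Current.Value);
            }

            // Aggregates are a single expression away (price is a hypothetical field).
            double total = (double)nav.Evaluate("sum(/rows/row/price)");
            Console.WriteLine(total);
        }
    }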


XML has an intrinsic threshold below which it becomes terribly inefficient for anything at all, and if we measure below that threshold we just measure the color of the noise. XSLT has a very wide area of use and can actually be very fast - if handled with care and not pushed out of its natural domain: structural transformation.


As for MSXML used standalone (straight COM, as the other reader mentioned), it fared something like an order of magnitude worse in a close-to-real-world scenario and mostly couldn't tell the difference between efficient and wild XSLT - meaning that its XSLT compiler hardly does any optimization work.


A good part of that, of course, comes from the intrinsic problem that it really isn't a stand-alone animal but tries to act like one, so it essentially has to create and then trash everything it has - the nature of a coarse-grained component model.


I'm sure that, except as an exercise, no one would suggest writing a whole Web site or processing engine in C/C++ and ATL just in order to give MSXML a "kosher" environment. A noble cause otherwise, and an interesting one, but that seems to be the work the industry at large is giving to Perl, Java and C#. MSXML always ends up being handled either by a script engine or a separate binary, just to be able to talk to the rest of the world.


ASPX "pages" can, in my experience, actually be a much more efficient vehicle than ASP - but only after those using them make their peace with the paradigm shift and start looking at the framework as pieces serving very different needs, not a monolithic engine to apply verbatim to any problem.