It’s been a while since I’ve written a “non-directed” blog for xml.com, so while I will be covering a few XML topics here, if you’re not interested in economic systems theory you might as well skip this.
As I write this, it’s about eight hours before the financial markets open in New York. The markets were closed today, Monday the 21st of January, for Martin Luther King, Jr. Day, which may prove to be a bad thing in the morning. Today the average market loss globally was about 7% - here in Canada, the drop translated to a 605-point loss, or about 4.75%, on the TSX. India’s Sensex fell 11% in two minutes before trading was halted. I can pull out other figures, but they say much the same thing.
It’s hard to say what will happen in the morning - I’m not even going to try, though I have my suspicions. Enough fire control may have been put into place to keep the US markets from getting too badly singed (though I have NO doubt that few people at any brokerage firm in the country were allowed to stay at home today), but what you’re seeing here is something that we’ve not seen in a long time … the start of a worldwide stock crash.
Crashes happen, and they usually seem to come without any direct “initiating” event. In the days after 9/11, there was a lot of effort put into the market to keep it from crashing, but I’ve long suspected that what you saw was an overreaction - the markets outside of the tech sector (which had already crashed) were actually fairly robust, and the drop that occurred had more to do with fear reactions than with any major problem in the economic firmament itself. The credit poured into the system in turn served to prime a number of bubbles - real estate, hedge funds, exotic speculative funds, and of course the dramatic growth in the security industry that made it possible, a couple of years later, for mercenary companies like Blackwater, Halliburton and Kellogg Brown & Root to put almost as many men and women into combat as the Department of Defense did.
Not surprisingly, things got out of control, but they did so almost invisibly. An interesting book on chaotic systems by Mark Buchanan, Ubiquity: Why Catastrophes Happen, points to a theory called “fingers of instability.”
According to this idea, suppose you have a system made up of a number of independent actors (such as grains of sand), each of which can nonetheless exert pressure, fully or partially, on those underneath it. When you first drop sand onto a relatively smooth surface, it tends to distribute itself in single layers, but eventually friction with the ground underneath acts to anchor the sand. Add more sand, and the weight of the new grains is not enough to cause the layers underneath to give way, so the sand begins building up. This process can go on for some time, but eventually, through random positioning, you’ll begin to see cascades forming, where the weight of the sand is enough to cause a shelf of sand to collapse. These collapses occur all the time and are generally unpredictable, but they also usually tend to be fairly limited in scope. The sand pile remains quite stable, all things considered.
Where things get interesting is when you increase the friction of the sand. If the grains consisted of perfectly smooth ball bearings, the ability to build piles would stay very limited. Increase the friction of the sand (make it coarser, for instance), and the height of the sand pile rises pretty dramatically - you can add more and more sand to the pile. However, this also means that you end up with zones of instability, where the weight is not perfectly distributed, and the addition of even a single grain is enough to cause a sidewall to collapse. You can think of these as “trouble spots” within the sandpile that represent a higher than normal probability of collapse.
Normally these collapses don’t really have that much of an effect on the overall pile, other than to make it appear more sculpted. Yet if the sand continues to come down, the zones of instability begin to create “fingers” through the pile, in essence causing faults and fractures. The greater the adhesion of the particles, the greater the stresses that occur, and the more such fault-lines honeycomb through the structure.
What this means is that at some point, a collapse will occur in one part of the sand pile, and that collapse will cascade through other parts until you get a major avalanche, scooping out a huge part of the pile in the process. This catastrophic collapse doesn’t stop until the stressors acting within the system are relieved.
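(For the programmers in the audience: the canonical formalization of this is the Bak-Tang-Wiesenfeld sandpile model, and it’s simple enough to sketch in a few lines of Python. The grid size, toppling threshold and drop count below are arbitrary choices for illustration, but the qualitative behavior - long stretches of tiny collapses punctuated by the occasional pile-spanning avalanche - is the point.)

```python
import random

SIZE = 20        # grid dimension (arbitrary, for illustration)
THRESHOLD = 4    # a cell topples once it holds this many grains

grid = [[0] * SIZE for _ in range(SIZE)]

def relax(grid):
    """Topple every over-threshold cell until the pile is stable.
    Returns the number of topplings - the size of the avalanche."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for y in range(SIZE):
            for x in range(SIZE):
                if grid[y][x] >= THRESHOLD:
                    grid[y][x] -= THRESHOLD
                    avalanche += 1
                    unstable = True
                    # Shed one grain to each neighbor; grains that
                    # fall off the edge simply leave the system.
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < SIZE and 0 <= nx < SIZE:
                            grid[ny][nx] += 1
    return avalanche

# Drop grains one at a time, recording the avalanche each one causes.
sizes = []
for _ in range(10000):
    y, x = random.randrange(SIZE), random.randrange(SIZE)
    grid[y][x] += 1
    sizes.append(relax(grid))

# Most drops cause nothing; a rare few trigger huge cascades.
print("largest avalanche:", max(sizes))
print("drops causing no collapse:", sum(1 for s in sizes if s == 0))
```

Run it and you’ll find there is no “typical” avalanche size - which is exactly why the big one always seems to come out of nowhere.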
Professor Graciela Chichilnisky of Columbia University and Ho-Mou Wu of National Taiwan University wrote a 2006 paper for the Journal of Mathematical Economics entitled “General Equilibrium with Endogenous Uncertainty and Default” that laid out how economic systems exhibit most of the same characteristics as the sand pile described above, showing how a leveraged system of autonomous lenders and debtors - who are in turn themselves also lenders and debtors - has the potential to induce the same fingers of instability into the market.
The glue in this case is the degree of trust each lender has that the borrower can in fact pay back the money borrowed. When that degree of trust is low, loans tend to be made less frequently and at higher rates, and the velocity of money, consequently, is also low, meaning that economic growth remains slow. However, decrease the cost of borrowing, decrease the amount of oversight on the loans, and reduce the requirement that lenders retain a certain “security” in the form of a fixed capital percentage, and the lenders can lend more money, can relax their standards of creditworthiness, and can in turn borrow against reinsurers who are themselves essentially superbanks. Apparent trust goes up because the penalty due to default appears to go down (the banks are insured against that, after all), and the velocity of money increases. This is why the banks love it when central banks reduce their prime lending rates.
Unfortunately, as in all such networks, fingers of instability begin to creep into the system. Mortgages are made at deceptively low teaser rates to people who can ill afford them, and the institutions making the mortgages then sell them to brokerage firms, pocketing origination fees in the process. To them, it doesn’t matter whether the mortgages are good or not, so they can act with impunity (and can force housing prices up by encouraging appraisers to appraise at higher and higher values - introducing points of instability). The ones buying the mortgages (and other loans) in turn package them together into collections that “in the aggregate” are safe, because the inherent risks are bundled together like so much sausage (more points of instability), and the new investment vehicles are then used as collateral for hedge funds, which are in essence bets on market direction using significant multipliers on the collateral (even more instability).
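To see how little it takes for a chain like that to unravel, here’s a deliberately toy sketch - my own illustration, not the Chichilnisky/Wu model itself, with every institution and number invented. Each institution holds some capital plus claims on its borrowers; when a borrower defaults, its lenders write off those claims, and any lender pushed below zero capital defaults in turn.

```python
# A hypothetical lending network. All names and figures are invented.
capital = {"A": 5, "B": 2, "C": 2, "D": 4}
# loans[lender] = {borrower: amount the borrower owes that lender}
loans = {
    "A": {"B": 4, "C": 2},
    "B": {"C": 3, "D": 1},
    "C": {"D": 3},
    "D": {},
}

def cascade(first_default):
    """Propagate a default: each lender writes off its claims on a
    failed borrower, and lenders driven below zero capital fail too."""
    failed = set()
    queue = [first_default]
    while queue:
        debtor = queue.pop()
        if debtor in failed:
            continue
        failed.add(debtor)
        for lender, book in loans.items():
            if lender not in failed and debtor in book:
                capital[lender] -= book[debtor]  # write off the loan
                if capital[lender] < 0:
                    queue.append(lender)         # the lender fails too
    return failed

print(cascade("C"))  # C's default takes B down, and B's failure takes A down
```

Notice that A survives C’s default directly (a small write-off against ample capital) but fails anyway when B, whose exposure A could not see, goes under. That’s a finger of instability: invisible until a single grain lands in the wrong place.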
Throw into this mix a few other factors. The first is inflation, largely due to the initial stimulus of low, low interest rates left in place for too long, coupled with high global demand for just about every resource class, from oil to metals to wood and corn. The second factor is growing financial inequality, which has meant that real wages for 95% of the population have been stagnant or declining over the last three decades while earnings for the top 5% have skyrocketed. This has generally meant that the savings rate (at least in the US) has collapsed, while most people with any discretionary income at all have either spent it on goods and services or invested it in the highly speculative markets, and for the most part have made almost nothing relative to inflation (of course, saving would have done the same thing, since inflation has been above interest rates for more than a decade). Instead, they’ve tapped their mortgages - easy to do when housing prices are rising, far harder when housing prices are falling dramatically. These make for even more instability.
Another piece in this puzzle comes in the form of currency speculation. Because of real-time communication, vast sums of money can move through the currency arbitrage system overnight. Keep in mind that George Soros was able to use speculation of this sort against the pound more than a decade ago, effectively forcing Britain out of the European Exchange Rate Mechanism. This also makes it increasingly difficult for central banks to cut rates in times like this, because the effect is to significantly devalue that government’s currency as arbitrage speculators (including sovereign nations) seek higher interest rates elsewhere.
Now, the US in particular is reaping the rewards of twenty years of outsourcing. Normally, a weak currency tends to work in a country’s favor in that it increases exports of manufactured goods, hence spurring the manufacturing sector. The problem is that America’s manufacturing sector is now in China, and the cost of bringing that manufacturing home is far above the cost of moving it out in the first place. Instead, what’s happening now is that countries all over the globe are actually purchasing American companies at what are increasingly fire-sale prices, primarily for their assets rather than their production capabilities. (This won’t last - after today, a lot of non-American investors who were long in their own markets and took a bath are going to need funds to cover their positions, which means that we may see a major liquidation of the US markets in the next few days as they sell stocks and everything else they can in a market where everyone else is also selling.)
The funny thing about systems theory is that at times it seems like quantum entanglement, where measuring the state of one electron appears to instantly determine the state of its coupled partner, even at a distance greater than light could cross in that time. Seemingly unrelated things all seem to go bad at once. This is precisely due to these fingers of instability - they are often buried deeply, and are often triggered by events that appear to have nothing to do with the final escalation, unless you could see the interconnected chaotic networks within the sandpile.
What’s happened recently is that the stickiness of lender/borrower transactions has all but disappeared. Banks are afraid of lending because they have lost confidence in their ability to determine risk; people are cutting back on their consumption, both because of increased inflation in energy and food costs and because of increased fears about their ability to keep bringing in wages; financial insurance companies (which at the end of the day sell risk management) are disappearing as they end up on the losing ends of bets; and businesses either can’t borrow at all or are paying much more for their loans, meaning that they are becoming more conservative in their plans. This is what happens in a credit crunch, and it keeps on happening as risk realignment makes its way through the system.
It’s likely that for the equity markets January will prove to be the worst month on record since the 1930s, though it’s important to understand that the economy as it exists today is several orders of magnitude larger than it was then - this means that in some areas it’s more resilient, but it also means that in others, there’s far more that can in fact go wrong. It will impact IT, though on the jobs front not as gravely as 2000-2004, at least for a couple of years. Where it will hurt worse, though, is in just about every other sector - manufacturing, retail sales, construction (both home and business), transportation, food production (to a lesser extent), even the energy sector. A lot of people will be thrown out of work - not just in manufacturing but in pretty much every sector (and likely disproportionately among well-educated middle-class wage earners with strong employment track records).
My guess is that there will also be a lot of effort to force-feed money into people’s pockets, either through income tax rebates or through payroll deduction rebates (or other similar means). In all likelihood this will have almost no real impact in the short term, and in fact will probably be a waste of money longer term as well. I’m as keen as the next person on receiving free money, but getting it at this stage is much like falling down a ski slope and getting a hand up to the ski lift just in time to see the entire mountain start to avalanche. I’m hearing figures like $150 billion in rebates - but to put that into perspective, today the TSX (which is a fairly small Canadian exchange) lost $90 billion in one day. If the Dow follows form tomorrow, you could be talking about $500 billion or more (possibly much more) of wealth lost in just a week, with significant portions of that in the retirement funds and portfolios that most people use to secure their own pensions.
One final note here on economics before I move on to other issues. I’ve occasionally been castigated in the past by readers for not talking about XML (or at least programming) related topics on this site. One of the mandates I feel is important in writing for this site, however, is that I write on topics germane to my audience. What’s happening here is not a discussion of partisan politics - for all that I think Bush mismanaged some of this, the seeds of the current debacle have been pretty evenly sown between Democrats and Republicans - but rather both a look at systems theory in a very applied domain and a warning that, if you have not already done so, battening down the hatches in terms of jobs and personal economic management would be a good thing to start doing.
Some thoughts on modeling and systems theory
Systems theory is something that goes into and out of vogue, especially in computing circles. The 1960s and 1970s saw the first wave of systems theorists and chaos theorists - people who were interested both in understanding the way that complex, interconnected systems worked and who were coming to recognize that the moment the equations became nonlinear, the underlying systems modeled by those equations could go chaotic. Indeed, there’s a great deal of overlap between systems theory and modeling theory - to the extent that even today most “old school” systems theorists are primarily interested in modeling complex non-linear ecosystems: economies, ecologies, traffic patterns, networks and so forth.
I recently discussed with some colleagues (thank you Peter, Chris, Anne, Joe and Guy!) the rather frustrating fact that while we have developed a fairly sophisticated infrastructure for systems modeling of computer hardware within an organization, systems theory has not, in general, made its way cohesively into either data or application modeling. My suspicion is that part of this comes about from the siloing of information that we’ve maintained over the last several decades - data is distinct from applications, which are distinct from networks, which are distinct from business requirements, and each particular silo as a consequence has its own “modeling” methodology that in general is not all that compatible with methodologies in a different silo.
Modeling is, to a great degree, what all systems architects are responsible for, and it is also, to a similar degree, what few of them actually do. Instead, what ends up happening is that the architect becomes responsible for choosing a particular application framework, produces a UML document that outlines each of the classes to implement, and lets data modeling in particular end up in the province of the database developer.
Where until fairly recently the application stemmed from the database developer - who established the general tenor of both the model and data access, with those at successively farther removes up the stack having less and less control over the degree to which they could affect the data model (usually making for poor user interfaces as a consequence) - the rise of SOA-based systems and mediated XML or JSON messaging has proceeded to wreak havoc upon any such concentrated data modeling strategy. This is why I see systems theory beginning to gain in importance in programming circles.
N-tier systems are generally too simple for all but the most elementary systems-theory constructs. Communication is (for all intents and purposes) synchronous and proceeds cleanly from one tier to the next. In other words, most basic n-tier systems are linear - the modeling that can be done on them can be made with the assumption that there is a clear, definitive workflow and a mediating framework for handling that workflow. It’s my suspicion that frameworks tend to work best in this particular model.
Unfortunately, things are changing. SOA-based systems introduce a number of critical factors that turn simple systems into complex ones. Synchronicity is no longer guaranteed and in fact can become a significant liability. Data streams may originate from different systems and be mediated through processors other than the ones in any obvious tier. What’s more, the data may be more complex, may be distributed across multiple systems, and may not in fact be completely validatable (indeed, “business logic” itself - including determining data fitness - becomes externally imposed on the data stream rather than being an intrinsic property of the data “object”).
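To make that concrete, here’s a minimal sketch of such a mediated, asynchronous pipeline, using Python’s asyncio for brevity - the source names, message shape and fitness rule are all invented for illustration. The points to notice are that arrival order is not guaranteed, and that the “business logic” is a function applied to the stream from outside rather than a property of the messages themselves.

```python
import asyncio
import random

async def source(name, queue):
    """Simulate an independent system emitting messages on its own clock."""
    for i in range(3):
        await asyncio.sleep(random.random())  # arrival order not guaranteed
        await queue.put({"source": name, "seq": i,
                         "amount": random.randint(-5, 100)})

def is_fit(msg):
    """'Business logic' imposed on the stream from outside - not an
    intrinsic property of the data object itself."""
    return msg["amount"] >= 0

async def mediator(queue, expected):
    """Pull messages from whichever source produced them, in whatever
    order they arrive, and apply the externally defined fitness check."""
    for _ in range(expected):
        msg = await queue.get()
        verdict = "accept" if is_fit(msg) else "reject"
        print(verdict, msg)

async def main():
    queue = asyncio.Queue()
    feeds = [source(n, queue) for n in ("orders", "inventory", "billing")]
    await asyncio.gather(*feeds, mediator(queue, expected=9))

asyncio.run(main())
```

The same sketch in a classic n-tier system would be a straight line of synchronous calls; here, nothing in the mediator knows or cares which tier a message came from, and the validation rule could be swapped out without touching the data model at all.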
What this means in practice is that developing the data model and the application model (and often the corresponding user interfaces) becomes a task that bears a lot of similarity to creating … well, systems models - non-linear, asynchronous, highly networked, dealing more with “energy” flows than with application context. System modeling is harder in some respects - you lose the comfort of dealing with intrinsic semantics (meaning that you spend more time supporting generalization mechanisms) but what you typically gain is a more flexible architecture that is pliable in the face of change.
At this stage, building systemically is an option, a design approach with benefits and trade-offs. However, as applications continue to become more network centric and more oriented towards the movement of data streams (and data processors) in a heterogeneous, asynchronous environment, such system-oriented development will go from being an alternative approach to the software development process to becoming a necessity.
Curiously, there’s a takeaway lesson from the first part of this blog: instabilities tend to arise because most people are only aware of their own local “neighborhood” - and that neighborhood usually looks overwhelmingly linear. The non-linearities become evident only when you stop looking at the system as a set of linear patches and begin to see it instead as an overall, generally non-linear surface. Yet to shift to that larger perspective, you have to move out of the immediate silos of thought concerning local semantics (whether that’s a bank’s sale of a mortgage to an unqualified borrower or the creation of an n-tier system for deploying an accounting application) and migrate to a more semantically neutral viewpoint, giving up some local control in exchange for broader-scale tools for managing complex, heterogeneous “pieces”. It’s rather debatable whether this can be accomplished in the economy (not at this stage, anyway), but it is certainly possible to see how such abstraction will affect the next generation of both data-centric and application-centric development.
Kurt Cagle is an author, analyst and information architect specializing in XML and web technologies. He lives in Victoria, British Columbia with his wife and daughters, and is learning (slowly) to say “eh”.