It’s blank slate time. Anyone who’s ever written for a living knows what I’m talking about. Before you is a piece of paper or a computer screen, marred perhaps only by the faint blinking line of the caret. An infinite number of potentialities, yet the moment you touch pen to paper or hand to keyboard those potentialities become reduced, the potential becoming the real. That is perhaps the real joy of writing or programming - in both cases you are dealing with the pure act of creation, making from the myriad possibilities the very real (and by nature, flawed) application, idea or concept.
It is often decried that the fine arts of reading and writing are disappearing … that fewer people are reading newspapers and more are watching the endless parade of images instead. I’m not sure that’s necessarily true. That written discourse is changing is certain, as is the medium by which we readily express the thoughts inherent in that discourse.
A hundred years ago, the bulk of all manuscripts were written by hand, with only the very wealthiest able to afford that marvel of the steam age, the typewriter, and most published works were set by the nimble fingers of typesetters moving blocks of wood and lead.
Twelve years ago, publishing jumped to the web in the form of HTML pages, and while the words may have been somewhat fuzzy and jagged, the future of publishing came to be measured not in the number of printed pages but in the number of websites.
Six years ago, the XML revolution began, as mankind nearly universally agreed upon a single format for the expression of documents and increasingly the expression of data and interrelationships.
Three years ago, blogging, the marriage of web page publishing with syndication formats, took off, igniting a firestorm that has rocked the field of journalism and is once again reshaping our relationship with words on the page.
Today, this idea is being pushed even further as we recognize that publishing is a form of data manipulation: we now have the infrastructure to combine collective search data (which is of course critical to the notion of research) with our own musings, and to distribute this information via a complex network of channels. Established entities become increasingly irrelevant in the face of all of this. My XForms website, for instance, takes feeds from a couple of dozen different sources, filters out the ones that are not immediately germane to my audience, then combines them into my own, very narrowcast newsfeed targeting the requirements specific to the XForms community, adding to that articles, questions and commentary from the site itself.
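The filter-and-combine step behind such a narrowcast feed is simple enough to sketch. Here is a minimal Python version using only the standard library; the sample feed contents and the keyword list are purely hypothetical, and a real aggregator would of course also fetch the feeds, deduplicate entries, and handle Atom as well as RSS:

```python
import xml.etree.ElementTree as ET

def relevant_items(rss_text, keywords):
    """Parse an RSS 2.0 document and keep only the items whose title or
    description mentions at least one of the keywords (case-insensitive)."""
    root = ET.fromstring(rss_text)
    kept = []
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        desc = item.findtext("description") or ""
        haystack = (title + " " + desc).lower()
        if any(k.lower() in haystack for k in keywords):
            kept.append({"title": title, "link": item.findtext("link")})
    return kept

def merge_feeds(feed_texts, keywords):
    """Combine several upstream feeds into one narrowcast item list."""
    merged = []
    for text in feed_texts:
        merged.extend(relevant_items(text, keywords))
    return merged
```

Each upstream source contributes only the items that survive the keyword filter, and the merged list becomes the body of the republished feed.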
The various news syndication services such as UPI, Reuters, and AP are finding their hold on the creation of content increasingly lost within the surge and spray of millions of such microfeeds. The privileged position of the journalist is going the way of the typesetter and the author - the ones surviving being typically not the Edward R. Murrows or Walter Cronkites of the world, but the pretty-boy anchormen, reading from scripts prepared as much to promote a political agenda as to inform or enlighten. This is unfortunate in several ways, because a great number of talented writers and investigative journalists are forced to compete on the ground with legions of enthusiastic but far more poorly trained amateurs.
Yet the interesting aspect of this is that it parallels the period in the 1990s when the level of graphic art declined dramatically as the field opened to amateurs through such tools as Photoshop, PageMaker and FreeHand. In both cases, many of the amateurs quickly realized that they really weren’t all that good as graphic designers, or as writers and journalists, but there was also a core of amateurs who mastered the new media and were able to push the technology in ways that significantly advanced the field. In other words, the amateurs learned, and in time they became the new wave of professionals, the ones who understood the art, the philosophy and the mechanics of their media. Moreover, the wise ones then went back to the eras preceding theirs and re-examined that history, separating the things that were important and permanent from the mechanical minutiae.
The wheel turns and turns again. Today I use a small add-on to Firefox called ScribeFire (formerly Performancing) to write most of my more philosophical pieces (and use the more IDE-like Oxygen for my XML development work), but ScribeFire is really nothing all that special - a dozen or so buttons for functionality, a way of storing older material and working offline, and a way of connecting to the various blogging channels that I write to. Other people have their own tools, with greater or lesser functionality, but the interesting thing about most of them is that they are in their way as different from products like Microsoft Word or even Open Office as those were from their early forebears in the mainframe days.
The primary difference with most of these tools is that they recognize that a fundamental shift has taken place in the way that we work with documents. I can set up a blogging system that uses an Atom or RSS feed, can work with something like Google’s GData (which is a set of APIs and formats for the encoding of their own document and data structures), and can effectively work in a clean, transparent, collaborative manner with other people … without the heavy infrastructure that comes with most of the larger desktop authoring packages. Authoring becomes a mashup, an activity that exists as part of the moment, tied to the Internet but with enough of a local storage capability to allow you to work when not connected.
The trick here is to recognize that we’ve created a few dozen public publishing APIs, which are even now beginning to consolidate into two or three. My suspicion is that one of these will almost certainly be the Atom Publishing Protocol (APP), which was introduced in tandem with the Atom feed format with the tacit recognition that reading published content and publishing that content in the first place are simply aspects of the same publishing domain. Atom is the foundation upon which Google’s GData is based, among other uses, and I find that the number of tools that recognize Atom as the default (or at least a default) publishing format has been growing fairly dramatically.
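As a sketch of what publishing through APP looks like in practice, the following Python builds the minimal Atom entry a client would POST to a collection URI; the collection URI in the comments is illustrative, not from any particular server:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def make_entry(title, body):
    """Build a minimal Atom entry document - the payload an APP client
    POSTs to a collection URI in order to publish a new post."""
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    content = ET.SubElement(entry, "{%s}content" % ATOM)
    content.set("type", "text")
    content.text = body
    return ET.tostring(entry, encoding="unicode")

# Publishing is then a single HTTP request against the collection
# (the path here is hypothetical):
#
#   POST /myblog/entries HTTP/1.1
#   Content-Type: application/atom+xml;type=entry
#
#   <entry xmlns="http://www.w3.org/2005/Atom">...</entry>
#
# On success the server answers 201 Created, with a Location header
# pointing at the newly created member entry - the same document you
# later GET to read, edit, and PUT back.
```

The symmetry in that last comment is the point of the prose above: the entry you publish and the entry a reader’s aggregator fetches are the same Atom document.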
In the face of this, what then happens to MS Word? A few years ago, Word documents represented the de facto representation of the publishing art; any serious work of any merit was either in a Word document (if meant to be editable) or a PDF (if not). Yet today, more of those documents are being written in XHTML, or Wiki markup, or perhaps BBS format, often through a simple interface such as ScribeFire, or through inline FCKEditors within web pages directly. Given that these typically have about the same level of markup as is typically used within an MS Word or Open Office document, is this necessarily any worse an approach to take for creating such content?
My own writings of late, including such things as my blogs, my professional documents for clients and my fiction, are all moving increasingly into the post-and-comment model, with enough workflow in the base to walk a written work through a process of creation, modification, review and publishing. Our workflows are becoming increasingly distributed, moving away from passing a thing - a document - from person to person and toward the notion of touching a repository data source through different “filters”, customized to our own particular needs, requirements, and access privileges. The “document” as such doesn’t really exist as a distinct entity; instead it devolves into a set of relationships between data structures and tables, perhaps located in very disparate locations. This document-as-process model is a fairly recent (and remarkably profound) concept, and one which again raises the question of the relevance of desktop application suites in general.
These solutions, admittedly, do not work entirely well within an unconnected world … what has rapidly become known as the Airplane Principle. I’ve had a number of clients state that a “nice-to-have” feature in any content management system is that we should be able to work on airplanes as readily as at our desks (or coffee shops), even though few airlines offer wireless service (yet … that too is changing, a point I’ll revisit momentarily). Now, personally, I’ve found that being packed into a cattle car in coach with two other business travelers is not generally conducive to getting work done, and a notepad and pencil or a good technical magazine is generally far more useful on a long flight than trying to eke out a few minutes on my laptop; but being “disconnected” is nonetheless something that sends business travelers (especially marketing people, for some reason) into an absolute tizzy.
Connectedness is improving, however, even on airplanes. What’s more, recent solutions to the issue of online/offline document creation and management are increasingly designed intelligently to work online when possible and offline when necessary … and most tellingly, to minimize the distinction between the two. This ultimately comes back to recognizing that there is a fundamental distinction between a document and a file - the document exists as an abstraction; the file (or stream) is simply its anchor in the background. Indeed, I think this is one of the fundamental benefits of the anonymous (Google-style) search, as found in such things as Beagle, Google Desktop and the Windows Live Search capability. Such search shifts our thinking away from trying to formally organize our document structures (something that’s becoming onerous when you have upwards of one hundred thousand folders on a contemporary hard drive) and increasingly towards relevance - does the file contain a relevant phrase, was it made around a certain period, is it in a specific medium? In this context, the question “where is a file?” becomes less relevant, save perhaps in the context of “is this document available to me now?”
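The shift from location to relevance can be illustrated with a toy search - a stand-in for what Beagle or Google Desktop does at scale with proper indexing. This is a minimal sketch using only the Python standard library; the directory and file names in the usage are hypothetical:

```python
import os

def find_relevant(root, phrase, newer_than=None):
    """Walk a directory tree and return the paths of text files that
    contain the phrase (case-insensitive), optionally restricted to
    files modified after the given timestamp. Where a file lives in
    the tree is irrelevant to the query - only its content and age
    matter, which is the point of relevance-based search."""
    phrase = phrase.lower()
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if newer_than is not None and os.path.getmtime(path) < newer_than:
                    continue
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if phrase in f.read().lower():
                        hits.append(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
    return hits
```

A real desktop search engine builds an inverted index rather than re-reading every file per query, but the interface it answers - phrase, time window, media type - is the same.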
Given this, it is perhaps not that surprising to see that there’s a sea change going on in publishing … and in the software that supports it. Selling the office suite (and Open Office is no more immune to this than Microsoft Office) is becoming increasingly difficult in a world where publishing becomes a just-in-time proposition, where the document becomes an abstraction that can be rendered in any of dozens of formats as needed without losing the fidelity of the abstraction, and where relationships between documents exist as links that can be rendered independent of platform and incorporated seamlessly into whatever format is most useful at the moment. This is not the suite spot. Office suites existed to provide, for a given set of formats, tools that manipulate those formats and integrate with one another. In an age where XML has gained ubiquity, such integration is only an XSLT transformation away.
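To make the “one transformation away” point concrete, here is the moral equivalent of such a transformation - an Atom entry rendered as an XHTML fragment - sketched in Python with the standard library rather than in XSLT itself, so it can be run as-is:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def entry_to_xhtml(entry_xml):
    """Render an Atom entry as an XHTML fragment: the kind of one-step,
    structure-to-structure transformation that XSLT makes routine once
    both ends of the pipeline are XML."""
    entry = ET.fromstring(entry_xml)
    div = ET.Element("div", {"class": "post"})
    h1 = ET.SubElement(div, "h1")
    h1.text = entry.findtext(ATOM + "title", default="")
    p = ET.SubElement(div, "p")
    p.text = entry.findtext(ATOM + "content", default="")
    return ET.tostring(div, encoding="unicode")
```

Swap the output element names and the same entry becomes a print stylesheet’s input, a wiki page, or a mail body - the abstraction survives; only the rendering changes.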
Watch GData closely. The apps that GData spawns look to me a lot more like the future of publishing than anything coming from the Microsoft Office team. No doubt Microsoft is watching closely as well, and there are signs that it too recognizes the shape of publishing to come, one in which “Office” will have to become smaller, thinner, lighter, and more integrated with open standards. The question to me is whether that step will be taken while Microsoft still has market dominance, or only as its market begins to noticeably fade. It’s still early, of course; this is a scenario that will play out over the course of the next four to five years, but play out it most certainly will.