Layperson’s guide to competition and regulation in new communications media (802.11, 802.16, VoIP, etc.). Written as an end-of-the-year piece for the American Reporter.
Related link: http://www.xmldatabases.org/WK/blog/1094?t=item
Kimbro Staken skewers a terribly contrived example of OO-to-the-max. It’s great fun at first glance, but in the end I find the treatment somewhat troubling.
I’ve myself undergone a conversion from mainstream OO orthodoxy to what I consider post-OO programming sensibility, and I have posted my own criticisms of the OO mainstream (recent example: “Objects. Encapsulation. XML?”). But there is good OO and bad OO, and I think that even OO advocates would scoff at the article Kimbro quotes as an unworthy straw man. I certainly hope I never perpetrated anything so ugly in all my years in the OO mainstream.
I think the stronger argument is that even when I think my designs were well considered, I could have done things better with dynamic, declarative and data-driven (D4) methodologies, mixing in OO in small doses only where it is clearly the best model. Aside: In my struggles to find good terminology for my recent thinking, “D4” == “Agile programming” == “Post-OO”, where agile programming is not the same thing as agile process, such as Extreme Programming (which can be used with non-agile programming languages such as Java).
Anyway, Kimbro does point out the sore fact that OO often impairs maintenance and code reuse, two of its advertised benefits. Then he goes on to present an alternative solution in Python and XML to the example from the original article. XML really drives this example, and the fact that Python is the host language for the use of XPath appears purely incidental. It leads to my second worry about the blog item: that it seems to advocate reflex use of XML.
I’m a huge XML advocate, and I think it is the main catalyst, if not the quintessence, of the growing mainstream acceptance that it is okay to deviate from pure OO. But XML is certainly no panacea, and if you find yourself thinking that XML is overkill for a certain task, it probably is. Unfortunately, Kimbro says precisely that XML is overkill in his code, but proceeds to use it anyway.
As it happens, using only Kimbro’s code as evidence, I agree that XML is overkill. A Python dictionary would be a much better data representation, and if persistence is needed, pickling would do the trick. Kimbro advocates XPath for its simplicity, and I would agree if XML were a given. But one must consider that XPath is much more complex than Python dictionary lookup.
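To make the comparison concrete, here is a minimal sketch of the dictionary-plus-pickle approach (my own illustration, not code from Kimbro’s post; the data and file names are invented):

    import pickle

    # the kind of record that might otherwise be modeled in XML
    book = {
        "title": "Design Patterns",
        "authors": ["Gamma", "Helm", "Johnson", "Vlissides"],
        "year": 1994,
    }

    # lookup is a single expression, with no XPath engine in sight
    print(book["title"])

    # if persistence is needed, pickling does the trick
    with open("book.pkl", "wb") as f:
        pickle.dump(book, f)

    with open("book.pkl", "rb") as f:
        restored = pickle.load(f)
    assert restored == book

Compare that lookup with the equivalent XPath expression, plus the parse step needed to support it, and the complexity argument makes itself.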
Additional context might justify the use of XML; for example, perhaps the XML example is a standard interchange format. Of course, I would have little patience for such an XML interchange format. It reminds me far too much of Apple’s XML property lists, which rank among the ugliest uses of XML I’ve found.
In this case the key to fixing the mess presented by OO extremity is not data-driven extremity, but rather the dynamism of languages such as Python. When I first got into REXX, my first agile programming language, there was no XML, but it was very clear to me how REXX’s expressiveness was superior to the rigid object hierarchies I’d become used to creating.
This does not mean that “Dynamic” is the most important part of D4. If you go from toy examples to real-world problem solving, declarative and data-driven programming soon show their importance. If XML itself is not part of the solution, much of the mind-set that XML represents (setting aside reflex-OO tools such as SAX, DOM and parts of W3C XML Schema) helps bring sanity back to programming.
I don’t want to leave off without saying that OO is not always evil. It can be a useful way to package modest but strongly-cohesive bits of code (basically Abstract Data Types without declarative axioms). It can also make sense in systems-wide design if it naturally reflects the real world problem space the system simulates. I do think that such cases are rare, though (the original article was an example of how rigid OO can force you to invent all sorts of daft contrivances that have nothing to do with the actual problem space). The world is much too rich to be shoved into PIE.
What are your experiences with extremity in either OO, agile languages or XML?
A year ago I went to war. Besides the stuff I always have on me (10 meters of parachute cord, Leatherman multi-tool, infra-red chemical lights, and various weapons and ordnance), I took a lot more to the Iraqi desert to keep me busy during the long stretches of inactivity. Luckily, we have plenty of diesel fuel and the generators that use it.
I started this war with a 20 Gb iPod, and it could not hold everything I had, although 15 Gb was recorded episodes of This American Life (http://www.thislife.org). We often end up far away from our home base for a couple of days, so the iPod is a great way to take my entire music collection with me without sacrificing space for other important things, like food and water. With Audio Hijack, my wife records my favorite NPR programs and sends them to me as MP3 files.
Since then Apple has released even larger iPods, and I hear that other companies make models even larger than Apple’s.
Laptops with DVD drives
We often have long periods of nothing to do but wait. I could buy a $200 dedicated DVD player, but that only plays DVDs. Besides a larger screen (up to 17 inches now), a laptop can also play games, work with email, organize photos, play music, and do a lot of other things to pass the time.
A lot of people have laptops, but we do not carry around cables and routers. I can turn on my laptop’s AirPort to create a computer-to-computer network. Some people use access points for multi-player games. It is quick, easy, and works between different platforms.
USB memory keys
I do not get to use my own computer on the Army’s network. I get stuck
with approved computers. Even if I could use my laptop, I do not carry
it with me, and I never know when I might get to use a computer. I,
and a lot of other people, carry thumb-sized USB devices that
store hundreds of megabytes. We write email to send later so we make
the most of our limited network time. I installed Windows software
for SSH terminals, web site suckers, and a lot of other things I like
to use but cannot install on public computers. Windows and Mac OS X software live peacefully together on the same device.
Everyone seems to be passing around CDs of photos, starting from events before the war to stuff that happened last week. We cannot remember where some of the CDs came from or which units were involved, but we have some awesome pictures.
Indeed, digital cameras have become so useful that we carry them almost constantly to document events that may be important later, including pictures of people we meet, the cars they drive, and the neighborhoods they live in. Cameras are a cheap and portable copy machine.
Burning CDs is the easiest way for us to share photos and anything
else that we want to pass around. We run the risk that our disk
drives could take a bullet, although the dust and heat seem more
dangerous, so back-ups are more urgent. A lot of disk drives have
taken a beating in the desert.
We can also send CDs home for free, which is a great way to share
full-size photos with friends and families, especially since our
bandwidth is often very limited.
For most of the adventure I have carried a mini-disc recorder, and have recorded close to 200 hours of audio diaries and sound effects. I can make personal recordings that I send back to my wife, and keep track of what I am doing for other projects. I can transfer the audio to my PowerBook with AudioX or Peak, rip it with iTunes, and burn it to a CD to send home, although I usually just mail the discs themselves.
What else might you need when you go? Solar powered battery chargers, rechargeable batteries (not just for the laptop!), power adapters that work with car batteries, 220V<->110V transformers, and duct tape.
What would you take to the remotest places on earth?
The big story in the media today (in the absence of the usual Al Qaida
assassination attempt) is the twentieth anniversary of the release of
the movie The Big Chill. And the media is actually
celebrating the movie. The Big Chill was supremely
exploitative and alienating even for a film industry totally
characterized by those traits. Historically it appears as a pathetic
attempt to scrape the “Me Decade” activities of the 1970s together into
a way of being, or a “life style” to use the degenerate terminology of
that earlier decade.
Who are the key figures in The Big Chill?
A businessman all boned up about manipulating the market.
A TV star who used to care about his work and now cares only about his fame.
A woman so obsessed with wanting to have a child that she’ll even bed down with a smarmy brat with the heart and mind of a ten-year-old in pursuit of her goal.
Cannily, the film’s writers thus concretized the key actors in Reagan’s
society: the unscrupulous CEOs, the solipsistic media, and the
re-oppressed women. The fourth main character is the shadowy Alex, of
whom only one cut wrist is seen. Alex of course represents the
characters’ 1960s idealism, and his suicide is supposed to show the
folly of maintaining ideals. Those still living in the movie laugh at
each others’ excesses and occasionally their own, but the verdict is
in: there is no alternative. One has to give up one’s hope of changing
society, buy in to its corruption and temptations, and if possible get
rich off of the manipulation of the crowd.
It is a sign of how exhausted, battered, and hopeless the American
public felt, in the wake of the Reagan counter-revolution, that so
many people could claim The Big Chill “spoke to” them. Many
critics, to their credit, saw through it. Unfortunately, some failed
to go beyond the surface; I even heard The Big Chill compared
obscenely to John Sayles’s Return of the Secaucus Seven, a
truly sensitive, humane, meaningful (and low-budget!) film.
To the fortunate few, the 1960s was only a moment within a long chain
of activism. From the vantage point of 2003, it’s clear that 60s
activists underestimated–yes, underestimated–the venality
of society and the urgency of challenging its very precepts and
foundations. The goal we set for ourselves was to maintain our
idealism and get smart about it.
Who finds that The Big Chill “speaks to” them?
Back in the 1980s, several large firms (who later shrunk a great deal)
launched lawsuits against their competitors, on the basis that the
competitors had created products that deliberately looked and operated
like the originals. Had these lawsuits succeeded, there could be no
OpenOffice today with its cloning of Microsoft keystrokes and other
behaviors. The lawsuits helped to spawn the
League for Programming Freedom
in response. (Its other issue, software patents, is still a major concern.)
Well, the look-and-feel lawsuit seems to be back. This time it’s SCO
claiming that Linux is infringing on its copyright because Linux uses
published, standardized interfaces that were invented as part of Unix.
See the commentary by BSD leader Kirk McKusick, referenced at the top.
To my non-lawyer mind, the fate of the earlier lawsuits should dictate a quick dismissal of SCO’s claims, but apparently they’re hoping to fare better this time.
I like Microsoft Word. It’s bloated, but it’s a good word processor. If I were creating paper documents all day, I’d be happy with it. There’s just one thing I can’t forgive it for.
Word actively dumbs down the design sensibilities of those who use it. Word makes it frighteningly easy for the casual business user to create bad documents. For example, Word ships with that memo template with “MEMO” in huge letters, as if “Memo” is the most important thing for the reader to see first.
The most annoying Wordism that gets out, though, is the randomly-aligned, too-small, all caps, Times New Roman sign. Here’s a photo from CNN.com, showing a perfect example of this design troll. There’s an important message to be conveyed: The park is closed because of increased security. I imagine someone called down from on high saying “We need a sign to tell people the park is closed. Bob, can’t you do that in Word or something?” Why didn’t Bob just get a black magic marker and write it out? It would certainly be more visible from a distance.
With just a few small changes, the sign could be a good deal more useful.
If I could hand out copies of
Robin Williams‘ marvelous
The Non-Designer’s Design Book, which just came out in a 2nd edition, I would. With a cover price of only $20, it’s one of the best buys in the computer industry today. It should be required reading for anyone who creates documents of any kind.
Related link: http://www.nytimes.com/2003/12/21/national/21BRAU.html
Harold von Braunhut, the man who “invented” Sea Monkeys and X-Ray Specs, is dead at the age of 77.
To all Internetworking and Security Professionals.
The DNSEXT Working Group at IETF would like to urge you to review and comment on the DNSSEC document set. This specification has significant impact on the DNS and it is important that the specifications are unambiguous and implementable. The working group last call will conclude Jan 15 2004.
DNSSEC Document Set:
Extensive background information about the DNS Security Extensions can be found on the DNSSEC.net website.
Current search engines–even the constantly surprising Google–seem
unable to leap the next big barrier in search: the trillions of bytes
of dynamically generated data created by individual Web sites around
the world, or what some researchers call the “deep web.” You can’t
look up the status of a Federal Express package without going to the
Federal Express site, or the details on an eBay item without checking
the eBay site. Dynamically generated data can’t be spidered.
But the article cited above shows how this barrier is slowly
cracking. Now I can enter “fedex 791725670102” into Google (not
Federal Express) and discover that the jigsaw puzzle I mailed to an
author in Australia was signed for by him.
Of course, Google has to send me to the Federal Express site (which takes an extra click) to complete the search, but the principle is established: a search at Google can kick off a deep search on another site.
The burn-out of the dot-com era left a smoldering envy of those few
dot-commers that managed to stay alive. Google is foremost among
these. If they can continue pulling in dynamic data from more and more
sites, their dominance may well continue–for access to dynamic data
is indeed the key to the next big improvement in search.
A generalization of the Google/FedEx collaboration would lead to what is commonly called metasearch: a peer-to-peer solution to the search problem that involves a radically different architecture from any of the current popular engines. I said different, not new. The idea of peer-to-peer search was aired at least as far back as early 2000. I described it in my article on peer-to-peer systems in May of that year:
Gnutella is a fairly simple protocol. It defines only how a string
is passed from one site to another, not how each site interprets
the string. One site might handle the string by simply running
fgrep on a bunch of files, while another might insert it
into an SQL query, and yet another might assume that it’s a set of
Japanese words and return rough English equivalents, which the
original requester may then use for further searching. This
flexibility allows each site to contribute to a distributed search
in the most sophisticated way it can. Would it be pompous to
suggest that Gnutella could become the medium through which search
engines operate in the 21st century?
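A rough sketch of that flexibility, mine rather than anything from the column (the file and database names are invented), might look like this in Python, with each site supplying its own interpreter for the same query string:

    import sqlite3

    def grep_site(query):
        # one site might handle the string by scanning local files
        with open("notes.txt") as f:
            return [line.strip() for line in f if query in line]

    def sql_site(query):
        # another might insert it into an SQL query
        conn = sqlite3.connect("items.db")
        rows = conn.execute(
            "SELECT title FROM items WHERE title LIKE ?",
            ("%" + query + "%",))
        return [title for (title,) in rows]

    def handle_search(query, interpret):
        # the protocol passes only the string; interpretation is local
        return interpret(query)

The distributed-search layer never needs to know which interpretation a site chose; that independence is the whole point.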
What’s holding back metasearch is the lack of standards for
categorizing data and knowing what to search for. It’s easy to guess
that “fedex 791725670102″ should be interpreted as a search for a
Federal Express package, but anything less strictly defined is a big problem.
A lot of people have dumped on the ideal of metadata, notably Cory Doctorow in a well-known article on the subject.
So the waters of the deep web will be slow to stir, but as the
benefits become clear, more and more sites may emerge.
What business model would drive metasearch? That question is classic
in peer-to-peer systems, because distributed systems typically have
problems generating and distributing income. Sites could be motivated
to solve the metadata problem because they’d draw more traffic by
joining the system, and expose more of their data to people’s searches.
As for the aggregating site–Google or a competitor–it would
potentially have an easier road to profitability than Google has
now. The aggregating site could continue to derive revenue from ads
and from the sale of search software. Since the computing resources it
needed would be vastly less than the current Google, it would need
less revenue from ads and sales. And since the use of its software
would be a prerequisite to joining (although one hopes it would
tolerate the use of compatible, competing software) it should be able
to land more sales.
Can metasearch become widespread?
I’ve never liked the metaphor of software development as manufacturing. For one thing, it’s emotionally disturbing to hear programmers alluded to as assembly line workers. More seriously, it confuses intangible software with tangible items.
Reeves has it right: the source code is the design. That’s not intended as an excuse for cowboy coding, nor is it a proto-justification for agile development (though it is clearly connected in many intelligent ways). It’s just the fundamental nature of software — perfect duplication is easy!
As my friend Jim Shore likes to point out, it’s the compiler that actually builds software. There’s your assembly line. The most important part of the assembly line, as I see the metaphor, is the perfect duplication of a physical product.
Of course, the people on the other side of this debate seem to prefer the cheap hordes of replaceable labor as the important point of the image. I think trying to build software this way dooms you to mediocrity, at best, and spectacular, if unremarkable, failures.
As always, I could be wrong, though I managed not to use the epithet “Taylorism”. What do you think?
Related link: http://www.phpmag.net/
PHP Magazine has a free issue coming up on 15 Dec 2003 to celebrate the new monthly version of their magazine to be published in PDF format. A few weeks ago, I was asked to write the cover article, an offer I happily accepted.
My article discusses sessions. After covering some basics about HTTP, maintaining state, and cookies, I spend
the rest of the time discussing impersonation attacks and methods of prevention. My approach is to give readers the background information they need to make educated decisions about the techniques they employ, and then to contrast a few suggested techniques with the steps necessary to subvert them. I think this contrast provides a nice metric by which to measure the strength of each approach.
One important point that I mention in the article is that there is no perfect solution. While I introduce a few different techniques that can be used to complicate impersonation, I am hoping that my readers will think of many more and be willing to share them. If you have a favorite technique for securing your sessions, please contact me and describe it. In exchange, I will send you a reply with my review of your implementation, and I will also compile my favorites and share them in my blog or as a future (free) article.
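As a taste of the kind of technique the article contrasts, here is a minimal sketch of one of them: binding a session to a fingerprint of the client. This is my own illustration, written in Python rather than the article’s PHP (the idea is language-neutral), and it assumes a dict-like session store:

    import hashlib

    SECRET = "some-server-side-secret"  # assumption: a value the client never sees

    def fingerprint(user_agent):
        # hash the User-Agent with a server secret so it cannot be trivially forged
        return hashlib.md5((user_agent + SECRET).encode()).hexdigest()

    def check_session(session, headers):
        expected = session.get("fingerprint")
        actual = fingerprint(headers.get("User-Agent", ""))
        if expected is None:
            session["fingerprint"] = actual  # first request: record the fingerprint
            return True
        return expected == actual  # a mismatch suggests impersonation

True to the point above, this only complicates impersonation; an attacker who can capture a session identifier can often capture the User-Agent header as well.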
What are your favorite techniques for securing sessions?
I’m party to a lawsuit in small claims court. This guy named Paul signed a contract to buy our house back in June, and backed out of the contract, declaring it null and void. Now he wants his $500 earnest money back, and is suing me in small claims court for it.
So we both showed up in the courtroom on the appointed date, and the judge called us up. Paul and I stood in front of the judge, with our stacks of legal documents ready to go. We were all set to start in with our arguments, and the judge opened with a line of questions that seemed strange at the time:
Judge: “Are you Andrew Lester?”
Me: “Yes, your honor.”
Judge: “Do you have the $500?”
Me: “It’s in escrow, but yes.”
Judge: “Is he entitled to it?”
Me: “No, sir, he broke the contract.”
Judge: “I’m going to set a trial date for December Nth. As an alternative, you can work with the arbitration office to help settle this dispute.”
Well, of COURSE I have the money, and of COURSE he’s not entitled to it. Otherwise we wouldn’t be here, right? But what if those questions hadn’t been answered the way I answered them? What if I didn’t actually have the money, and I was being sued for no reason? What if I had been willing to return it to him, but had just never been asked? Without those opening questions, we’d have gone to trial for no reason, when everything could have been cleared up that day.
How many times have you been on a software project like that? Where the programmers are just ready to go, like me and Paul with our manila folders of paper, waiting to be unleashed?
It’s a wise project manager who asks the dumb questions at the start, and who offers potentially less-expensive alternatives to the stakeholders.
What lessons have you learned from unlikely places?
Related link: http://www.artima.com/intv/garden.html
Lately, I have been thinking about useful metaphors for programming. What other activity accurately reflects the real work of software development? I think quite naturally, many developers look to architecture and engineering for inspiration. Perhaps many of us have even been inspired by the architectural underpinnings of design patterns. However, I recently came across this article on artima.com: http://www.artima.com/intv/garden.html. The article recounts an interview with Andrew Hunt and David Thomas, authors of The Pragmatic Programmer: From Journeyman to Master. In this excerpt, they compare software development to gardening:
There is a persistent notion in a lot of literature that software development should be like engineering… We paint a different picture. Instead of that very neat and orderly procession, which doesn’t happen even in the real world with buildings, software is much more like gardening. You do plan. You plan you’re going to make a plot this big. You’re going to prepare the soil. You bring in a landscape person who says to put the big plants in the back and short ones in the front. You’ve got a great plan, a whole design.
But when you plant the bulbs and the seeds, what happens? The garden doesn’t quite come up the way you drew the picture. This plant gets a lot bigger than you thought it would. You’ve got to prune it. You’ve got to split it. You’ve got to move it around the garden. This big plant in the back died. You’ve got to dig it up and throw it into the compost pile. These colors ended up not looking like they did on the package. They don’t look good next to each other. You’ve got to transplant this one over to the other side of the garden.
Related link: http://www.wired.com/news/politics/0,1283,61527,00.html
I’ve been following the recent news on the
World Summit on the Information Society,
and it’s getting really bizarre. The Wired article cited above is one example of the out-of-this-world coverage of the World Summit;
I heard a similar spin yesterday on a radio show that often shares
material with the BBC (but I haven’t seen a story about WSIS on the
BBC Web site yet).
What king or dictator or bureaucrat has signed the document giving power over the Internet to one organization or another? Did I miss the announcement?
One laughable aspect of news reportage is that the founders and
leaders of ICANN always avowed, with the utmost unction, that they
were not trying to make policy decisions and were simply tinkering
with technical functions on the Internet. Of course, there is rarely
such a thing as a merely technical function, and that truth has been
borne out by the effects of ICANN’s policies on “intellectual
property” and on the allocation of domain names in general. Perhaps
it’s good for people to be talking openly of ruling the Internet.
But, in whatever ways ICANN has managed to wield its three-pronged
fork (domain names, addresses, and assigned numbers such as
protocols), it has never come close to being master of the Internet.
Now that the mainstream media have announced that the Internet is up
for grabs, they are presenting the debate falsely as a two-sided fight
between ICANN and the
International Telecommunications Union (ITU).
That body, which has regulated telecommunications for over a century, eventually came under the auspices of the United Nations, and it has been searching for several years for a way to gain new relevancy in the Internet era. (I wrote an article on one of their forays some time ago: http://www.praxagora.com/andyo/ar/govern_itu.html.) It has never gotten anywhere.
The WSIS meeting has generated the most news coverage of the ITU I’ve
ever seen, so it must already be a success for them. If they can bully
the U.S. government and ICANN enough to wrest some piece of the ICANN
treasure from its grasp, I suppose they will consider the summit even
more of a success.
So what is up for grabs? Certainly the right to define new
top-level domain names (anybody visited a .museum site lately?) and to
hand out to various favored organizations the plum of domain name
registration (which really should be a nearly pure technical
function, and has been turned into a heavy-weight, politicized
activity by the “intellectual property” interests). But that’s not
really very much.
The fears that seem to be circulating around the domain name fight are that governments or other organizations will use control over domain names to censor the Internet. Ironically, the biggest threat to freedom in the use of domain names has been from the private sector, specifically the “intellectual property” interests. But the danger is present that governments will catch on (China seems to be doing so) and manipulate the system to restrict free speech. Still, with
search engines becoming more popular and more powerful all the time,
domain names are not the prime prizes they seemed in the late 1990s.
IP addresses are also a potential source of control that Internet
users should be conscious about, if not worried about. Addressing can
be abused mainly in a context of scarcity, and there has been debate
for years over whether IP addresses are getting scarce. (They’re
certainly scarce when you ask the average local ISP for more than
one!) A vigorous campaign to adopt IPv6 would remove most of the
worry over this potential choke-hold.
And who ultimately is in charge of the Domain Name System? You
are. You determine what domains you view. Somewhere on your personal
computer is a configuration option that determines where you go to
resolve top-level domains, and you can go far beyond what ICANN would
like you to see. Visit the
Open Root Server Confederation.
Well, I don’t really mean to say that the Domain Name System is
totally open and that nobody has control over it. ICANN is still
enthroned. The ORSC is mostly a form of protest, not a model for the
future. (It doesn’t solve the problem of name collisions, for example.)
My point is that the Internet is a subtle ecology that has always
rested on the cooperation of multiple parties. This cooperation spans
a spectrum from the individual home user on his PC to the peering
agreements between major backbone owners. As these peering
arrangements and the history of ICANN show, systems have evolved
historically in a rough, unsystematized way, and some participants do
not like the terms of cooperation.
For instance, underdeveloped countries complain about the
interconnection fees they have to pay to more powerful backbone
operators in developed countries. Expanding interconnection points is
a way to bring down costs without trying to change the politics of
peering, but a review of the politics would also be pertinent.
While ICANN has bumbled many tasks and exceeded its authority on
others, its leaders have a sense of the fragility of the Internet
ecology. The ITU, in contrast, is tromping all over the grounds just
in the process of mapping it. I find it amusing that, in their search
for a boogie man, they have ceded to ICANN far more authority than
anyone else has.
(The U.S. government reviews its contract with ICANN every year or
two. It’s generally unhappy with what it sees and gives ICANN a
tongue-lashing each time. But so far no one in the government has had
the guts to propose something new. Given the problems of dealing with
Internet ecology, I can understand their reticence.)
There are so many people who have spent years fighting within and
outside ICANN to change the policies on domain names, that the view of
Internet policy as ICANN vs. ITU is truly insulting.
Anyway, it’s time for some responsible journalists to untangle the
mess caused by the current spin.
What’s up for grabs at WSIS?
Smart predicts the coming of the Linguistic User Interface (LUI) for around 2020. Microsoft Research is already working on it; in fact, voice recognition, the first step towards a LUI, will be an integral part of Longhorn. Same at SAP, where voice can be used as one of the many channels to interact with their applications.
But slow down: I already hate it when people are yapping on their cell phones in a public space, although, I admit, sometimes I am one of them. Same in cubicle country; it is super distracting when people are on the phone in the office, and that is usually only one or two people at a time. With the LUI it would be everyone. Give me a break.
Maybe we can leapfrog that stage, or leave it somewhere in the privacy of our homes. The data rate of 160 words per minute is far better than the 40 to 60 that you get on a keyboard, but how about going directly to a brain user interface (BUI)?
I was getting hopeful when I saw professor Kevin Warwick, self-proclaimed first cyborg, as he is the first human to have a chip implanted in his body (you have to discount all the people with pacemakers, but that is a minor detail).
He presented his research at Stanford a couple of weeks back. Professor
Warwick connected a chip to a nerve fiber of his left arm and was able
to send signals through that nerve to a computer, as well as getting signals
back from the computer to his brain. After a bit of training he could manipulate
a robotic arm even over the Internet.
The video that he showed looked to me as if the robotic hand movement was only binary, on or off. I could not discern fine-motor movements. When I asked him, he assured us that while blindfolded he was able to grab a raw egg without breaking it.
He also showed a video of an experiment, where he was blindfolded and the electrode
in his arm was hooked up to movement sensors attached to his head. When his
assistants would go towards him with a large piece of cardboard he would get
such a strong signal, that he would jump back.
I was sure that the human body would reject such a foreign object, but to my astonishment he said that it was tough to get the device out of his body because the tissue had grown around it.
I remember once seeing a documentary about the first flight of the Wright Brothers.
Back then I thought, what’s the big deal, they barely left the ground. I didn’t
realize back then that it was a big deal, because humans for the first time
left the ground powered by an engine. (Which if you have not heard happened
100 years ago this month: December 17th. The picture to the right has to be
over 100 years old and can’t be copyrighted anymore, or am I wrong? I will take
it down otherwise.)
Kevin Warwick’s findings felt a bit like this "barely off the ground",
total baby steps, but the possibilities are humongous. One of them is the BUI:
You formulate your email in your brain and like magic it appears on the screen.
Researching his work, or ahem, Googling him, I realized that you have to take what he says with a grain of salt. The Register even calls him Captain Cyborg and has a whole list of articles dedicated to his publicity stunts.
The LUI will probably arrive before the BUI.
Related link: http://www.localfeeds.com/
I just discovered Localfeeds, a search engine for feeds where the searches are based on geographic location. This seemed interesting enough, so I typed in my ZIP (10001) and was shown the most recent blogs within 50 miles of 10001 (New York City). Sure enough, there are a lot of people talking about the big snow storm we’re having here. Neat.
The current trend seems to be that people interested in a particular topic tend to read the same blogs. While this can be good in that you explore the perspectives related to a particular topic from people all around the world, it is pretty fun to see what random people who live near you are talking about. I would never think of writing about the current snow storm, for example, because most people who read my blog are interested in PHP or Web development, but it was cool to read blogs of people who did just that.
Not wanting to be left out, I went back to the first page to see how to get added to such a thing. Is your site ready for Localfeeds? I typed in http://shiflett.org/ to find out. I was shown the checklist for shiflett.org, which was much different than what you will see now. I did not properly indicate the coordinates for where I live, which I learned must be expressed in a meta tag:
<meta name="ICBM" content="40.750422,-73.996328" />
After adding this and returning to the checklist, I found everything to be in order, and I was told to click a link to notify Localfeeds and GeoURL. I then visited GeoURL, out of curiosity, and I saw my site listed:
Chris Shiflett: Home (near New York, USA. see neighbors)
Very cool. Of course, I feel like the last to know about this stuff, but maybe this will introduce it to someone new.
I finished my R&R leave two days ago and have finally made it back to the Middle East. At least I do not have to dread that anymore.
Not everyone on leave was so fortunate. The domestic airline, United Airlines, was code-sharing with a foreign carrier and somehow horked the reservations of a lot of soldiers. I was able to get my state National Guard headquarters involved to fix my problem since I had called up the airline the day before to verify my reservation. Soldiers who simply showed up at the airport ended up stranded at various places, and not just at their points of origin, despite several empty seats on the flights.
We do not have to wait for that Terminator 3 moment when the machines take over, at least not for United Airlines, because they already let their machines rule them. Almost every customer service person blamed the computer in some fashion: insufficient access privileges, my record is locked, it is somebody else’s system, unscheduled updates, and so on. When that failed, they just told me “We do not do that.” The person checking in next to me at O’Hare, a German fellow I think, was having the same problem. They kept telling him that the computer said the opposite of what his ticket said. Snags are not so bad—stuff happens—but the ticket agent just kept saying “But the computer says…” without even listening to him.
My problem got fixed by force of will. The State just called the airline and said “Look, this is how it is going to be, I do not care what your computer says”. That fixed that. The stranded soldiers do not have any way to bring that force to bear when business hours are over, though. In my experience, United Airlines would rather believe its computers, and stick by that, than actually help the customer. I was actually surprised at how hostile some of the representatives were, especially since I normally just say “active duty in Iraq” and companies fall over themselves trying to help me.
When that is the way the business runs, someone needs to take the computer aside and give it a spanking. The computers should work for us, not us for them. Customer service people should not be simply data entry technicians, and gate agents are not just ticket tearers, that is, unless they let the computers be the boss.
I should not be surprised at that though. I have seen a lot of places where people work within the limits of the computer vendor they have locked themselves into, rather than using what actually works for them. It still boggles me, though.
Have airline computers horked your travel plans?
Like many, I’m a happy slimp3 owner, and while I covet their new squeezebox appliance, my real joy is having the slimp3 server software. It is fantastic. Considering the recent interview here on O’Reilly, I felt that readers might like to get a picture of some of the benefits of the slimp3 server software for those who don’t own a slim player.
About 3 years ago, I ripped my entire CD collection of about 400 CDs onto a networked drive at my house. I promptly began to hate my archive. Due to poor planning, the tags stank and titles were just barely accurate. The archive was hard to enjoy and use. As it grew with my collection, it became less useful. Less useful, that is, until I installed the slimp3 server software on my server.
The slimp3 software serves not just the slimp3 hardware but every standard MP3 stream-playing program out there, which as you know covers every platform that matters (sorry, Wang VS users!) and, most importantly, provides an extremely usable web-based interface for my archive. I can stream different music to multiple machines in the house at once and find music very quickly by genre, artist, song, etc., and via the search mechanism.
So, if you have a large archive of music, go and download the slimp3 server software, you won’t regret it. The only caveat is that you’ll really want to buy a squeezebox to feed if you do.
All hail the Slimp3 server software.
Related link: http://sourceforge.net/projects/plucker/
I used to think that AvantGo on my Treo was pretty keen, if troubled. Imagine my happy surprise when I tried Plucker. Plucker, like AvantGo, is an offline web reader for Palm-based PDAs. It reads and processes the websites you want to browse offline and presents them to you in a palmish way on your PDA.
It really is a remarkable tool. I spend altogether too much time on planes, so it is very handy for me to be able to read, for instance, O’Reilly Network articles on my PDA when waiting in one of the many lines that typify travel in America today.
AvantGo has some subtle annoyances, and not what you are thinking, either. I’m not really bothered by ads, but AvantGo allows some ads that are larger than the screen, so you have to restart the app to get an ad small enough to fit on a 160×160 screen like the Treo has. Also, the AvantGo program is always going to its home server, which is odd, as I thought it was all offline storage. It makes me nervous when programs call home, and AvantGo seems to work when I shut off the wireless networking on the phone, so I don’t know what is up with that. Also, AvantGo costs money if you want a feed larger than 2 MB. I have no similar restriction with Plucker. I can use as much or as little as I want. I’m cheap, I guess.
Plucker allows for some very nice configurability, letting you choose how deep you want it to spider your favorite sites, whether or not you want images, and whether the data is stored in main RAM or on an add-on card. The only way that AvantGo surpasses it is that AvantGo can update on the PDA itself via the GPRS TCP/IP connection, but that is almost not worth doing.
Plucker isn’t perfect: it doesn’t understand the Treo’s 5-way pad, and its interface has rough edges, but it gets the information to you, and that is what matters. Anyhow, for those of you with groovy Palm-based PDAs, you should really check out Plucker.
Plucker, Plucker, Plucker
The season has come around again. Presidential candidates are barking
insults at each other, and there’s a shadow of a hope for drawing some
attention to issues of true importance.
In the spirit of stirring up debate around what really matters for our
future, therefore, I am modestly offering a few of my own creative
solutions to the problems that the national campaigns should be addressing.
The energy crisis
There are so many simple ways people could cut down on the appalling
waste of energy in this country that it’s hardly fun to propose
anything new. But I have an initiative to offer, centered on the
crucial task of making public transportation appealing to Americans.
The terms “public transportation” and “appealing” sound so absurd
together as to be almost an oxymoron, in a culture like the United
States that handles public transportation as just another of the many
ways to punish poor people for being poor. The idea that public
transportation could be appealing didn’t come to me until I saw it in
action in other countries. And what I want to see in the United States
is even grander than what I’ve seen in Berlin, Rouen, and
Tokyo–something befitting an immensely rich and self-pampering society.
Why not present public transportation as an indulgence? Backed up with
the right resources, such a campaign might succeed. Who would want to
spend an hour driving himself to the office when he could sit in
luxury while someone else does the work?
This means buses (because stringing track is an expensive investment
that doesn’t pay off in the short term) that have comfortable seats
facing individualized media centers that offer news and educational
videos. Shuttles would run short routes on a frequent basis, and
customers would get to know their drivers. Comfortable waiting
stations would contain electronic maps showing the best way to get to
any local destination, and would show the exact location of each
vehicle as it makes its way through town–because people taking public
transportation like to have information in return for what they feel
is a loss of control.
Terrorism

Ultimately, of course, one can eliminate terror only by offering, to
the wide strata of poor and angry people whose environments give rise
to terrorists, a life better than that offered by the terrorists
themselves. Since the terrorists offer nothing but violence,
destitution, and grinding oppression, I can’t quite see why the rulers
of this world find it so hard to come up with a competing proposal.
But in the mean time we need to do something to improve our vigilance.
This past September, student Nathaniel Heatwole planted several
dangerous objects on commercial airplanes and notified the proper
authorities. They reacted with alacrity by fixing the problem a month
later, then arresting Heatwole for lack of better ideas of what to
do. And in Britain, Ryan Parry of the tabloid Daily Mirror obtained
easy access to Buckingham Palace, including the room where George
W. Bush is staying.
I can accept the argument that what these people did was both
dangerous and unnecessary, but we should examine the incidents for
possible merits. After all, we’re a competitive society with the
fervent belief that competition–along with accountability–brings out
the best in people and institutions. So let’s institutionalize
breaches of security, and accountability for them.
I wouldn’t reward someone for bringing actual weapons into airplanes,
nuclear facilities, state capital buildings, etc. But we could
encourage proxy violations, such as smuggling in inert metal rods
without being detected. Special, harmless, substances with certain
resemblances to weapons could be sold to people who want to try their
hand at the big sweepstakes. And institutions could be required by law
to set aside part of their budgets to actually pay bonuses when people
succeed in getting these materials past security.
It’s hard to say what institutions should join the initiative, because
you often don’t know you’re a soft target until you become one. But
every institution that was required to pay someone when its security
was breached would sure as hell spend money to improve security. This
initiative in fact would leverage the risk-based security philosophy
recently espoused by a well-known security expert.
Health care

Turn over the country’s health care system to Fidel Castro, who has presided for forty years over one of the world’s best health care systems, one that recently developed an important new vaccine for meningitis and pneumonia. Castro could perhaps be induced to make a swap and give up being dictator of one country in order to become health care tsar of another, much larger one.
The digital divide
Access to online information is increasingly determining one’s ability
to understand the world politically, gain access to educational
materials, get a job, and even keep in touch with far-flung relatives
in societies where people are increasingly separated by thousands of miles.
As with the other issues in this article, much ink and screen space
has been spent on debates over how much help the public needs and how
much the government should do. I will suggest one modest initiative
here that I think all could agree on.
Remember bookmobiles, those libraries on wheels that (even today, in
some places) bring reading materials to neighborhoods where people
don’t have the time or transportation facilities to reach traditional
libraries? We should do the same with Internet access.
Every day, at a predictable time, a datamobile would show up in a
neighborhood. Sporting a satellite dish on the roof, it would offer
high-speed Internet access to terminals inside the datamobile as well
as a wireless LAN hub that would make such access available to people
in surrounding homes.
In the short term, the datamobiles would help people get the
information they need for one day–perhaps throwing in a VoIP phone
call or two–and make them comfortable using the Internet. But the
initiative would be good for the long term too. It would create demand
for more permanent and available solutions. Perhaps neighborhoods
would band together to string wire, and people who thought they
couldn’t afford computers would scrape together the means to buy them.
Well, that’s it for my proposed campaign planks this year. Admittedly,
some presidential candidates may offer a platform that is easier to
implement, but I don’t think they’ll offer one that does more for
us. Anyway, I have to hold out the hope for a 2004 campaign that
consists of more than sound bites about gay marriage.
What solutions haven’t been thought of before?
i got the opportunity to attend
ApacheCon 2003 in Las
Vegas (Vegas baby! <g>) two weeks ago. i thought i’d blog my notes so that
you could get a feel for what was presented and how it was received. given
BEA’s growing commitment to open-source and Apache, i was looking forward to an
interesting conference (and i wasn’t disappointed). oh, there’s also a wiki you can check out, too.
this was a (quite good) overview of how the ASF (Apache Software Foundation) works for those people who aren’t already members, covering a range of topics.
i was a bit worried when this talk started that it was going to be a pure marketing pitch for Sun because one of the first slides was a list of Sun’s “strategic initiatives”. but that slide to the contrary, it was a pretty good talk.
the goal of Tapestry is to create an O-O framework for building web-sites.
having not played with Tapestry myself, i don’t know how well it succeeds on
its goals (of which there are many), but it seems pretty interesting
one thing i liked about what i saw was the fact that Tapestry enforced a
separation between layout (HTML) and code (Java). this allows you to use
whichever HTML design tool you prefer to edit the UI template, unlike JSP pages
where WYSIWYG tools must play many, many tricks to deliver a similar experience
the presentation also included a long list of Tapestry’s goals and attributes
one interesting question was about a painfully slow development experience one
of the developers was having. it turns out that Tapestry has a cache which can
take a long time to heat up. in production, this is fine, but if you’re
developing content, the time taken to reheat the cache after each edit can be
pretty painful. there’s a way to disable this that you can find in the FAQ. i
looked but didn’t see it, but i probably just missed it…
Onno Kluyt is Sun’s Director of the Java Community
Process. in addition to providing lunch for everyone, he gave a
presentation on the Java Community Process. he talked about the membership of
the JCP, like the fact that there are now more individual members than
corporations, and about how the JCP evolves, like the fact that
JSR 215 is in final
approval (it should be approved within the next week or so)
it was interesting to hear about the upcoming mods to the JCP process brought
on by JSR 215. one change has to do with transparency. from now on, JSRs will
be made public during community-review, instead of just at public-review. the
reason for this is that the feedback being produced was excellent, but it
was coming too late to be used. by public-review, the spec is pretty much
baked. now, feedback will come in time to have real impact
there are some other changes coming as well. you can read about the whole thing
online. according to Onno, these changes will take effect in the Jan/Feb timeframe
and then the battery on his PowerBook died, and since he didn’t have his power
supply, his presentation became “old school”, where he had to make points by
simply speaking. very retro <g>
one rather interesting question that came at the end had to do with the use of
NDAs (Non-Disclosure Agreements) within the JCP. several people in the audience
objected to their use, and pointed out that the ASF does not use them. Onno
replied that NDAs would always be used in the JCP, the reason being that
corporate participants would be unwilling to disclose their reasons for seeking
changes to a JSR if they know that any competitor would have access to their
comments. if you think about it, this issue illustrates one of the key
differences between a standards process at Apache (if you can call Apache a
standards body) and at the JCP. interesting…
this was a “must see” presentation for me, what with working for
BEA and all. and it seems like i wasn’t the only
one who felt that way, as this presentation was packed
first thing discussed was “why another Open Source Java App Server?”. there
were several reasons given for this: no current open-source JAS is provided via a BSD-derived license, there are already several pieces of the puzzle being provided by Apache projects, and no open-source JAS is currently J2EE certified.
next up was a review of status of the various pieces. i hope i didn’t miss a
piece while taking notes (unfortunately the presentation given wasn’t exactly
the same as the one on the conference disk, so i’m doing this all from my
notes). here goes:
other tidbits: they are currently in the Apache Incubator (or “probation for
newbies” as they called it <g>), their target for release of their first
version is one year from when they started (Aug 6th), and they invite people
to get involved
and of course, they got a question about the current dust-up between them and JBoss. their reply was “no comment”, but for those of you interested in learning about what’s going on, this is the letter that the JBoss Group’s lawyers sent to the ASF. it’s interesting reading,
and (IMO) shows that our entire IP rights system is, without a doubt, totally
and completely fucked-up
i should also mention that during the presentation, for all the individual area
status reports, a different person stood up to deliver the status, said person
being the owner/driver for that area. it was really quite impressive. and
wandering around the resort, where you saw one of them, you usually saw a whole
group of them, talking, hacking, laughing. David Bau and i spoke with them
over some beers Monday evening, and you can tell that they are all very proud
of what they’ve accomplished so far, and hungry to do a lot more. this is a
project to watch for sure
before you read my notes on this presentation, i need to proffer a disclaimer.
not like i’m a real journalist or anything, but still. ok, here goes:
David Bau is a very good friend of mine, has worked for me off and on
over the last 8 years (wow, has it really been that long David?), and was
working for me all during the development of XMLBeans, which was done here at
BEA where he (and i) continue to work. i think that XMLBeans is one of the
coolest things i’ve ever had the chance to be involved with (not like i wrote
any of the code or anything, i’m just a PHM) and i’m sure this colors my
judgment. ok, end of disclaimer
so what is XMLBeans? it’s a system for allowing Java developers complete object
support for XML instances whose type is defined by
XML Schema. in other words, if
someone has defined an XML type-system using XML Schema, and you want to read or
write XML types within that system, XMLBeans is the answer. and unlike many
XML-to-Java systems out there, it supports 100% of XML Schema and 100% of the
XML infoset (that’s all the information that can be represented by a given XML
instance). let me say that again, 100%. period. end of story. stick a
fork in it <g>
early on in the design of XMLBeans, David decided that in order to really make
the power of XML Schema available to the Java developer, you really needed a
100% solution. anything less led to this horrible system where developers would
need to inspect the schema of an instance they wanted to read/write, and then
pick the system that allowed them full access to that type. the world would be
so much better a place if the developer needed to learn just one thing, and
could use that whenever needed. so that’s what he and the guys built
ok, enough of the high-level, what is XMLBeans? well, it’s really 3 pieces
XMLBeans are currently in the Apache Incubator, and hope to get out of
incubation soon. and they are looking for help, so if you’re interested, get
involved! v1 is complete and usable today, and v2 work is just starting
i have to admit, this was the last presentation i attended on Monday, and i was getting pretty tired. so my note taking really suffered at this point. you’ll have to forgive the brevity.
Cocoon is a web-publishing framework for portals. and it’s really, really big.
it has a pipeline architecture, and runs inside Java web-servers as a servlet
and at that point, my brain froze for the day, and it was time to close the
laptop and crack a cold one. sorry for the short-shrift on this presentation. i
spoke with Steven a couple of times during the day, and he’s a really smart guy
that’s right, what we need is wireless power. otherwise a laptop just can’t
make it through the day. since we’ll probably be waiting a long time for this
little innovation to show up, it’d be great if conference organizers would add
power outlets to the list of “things geeks need to be happy”. it’s pretty much
de rigueur that conferences provide 802.11b, but they must think that
all attendees are lugging a knapsack full of batteries because they sure
weren’t providing power-taps. so at the beginning of each session, you could
see people unscrewing those brass floor plates to get at power outlets
PowerBooks are everywhere
the number of PowerBooks present was stunning. i’d say at least 1/3 of the
laptops present were Macs, maybe more. the 15″ was the most common, with a
strong showing of 17″ PBs as well. i’m sure i must be the zillionth person to
say this, but the PowerBook is becoming the laptop of choice for the
alpha-geek. people keep talking about Linux on the desktop as the trend that
Microsoft needs to worry about. forget it. Linux is the powerhouse on the
middle-tier and back-end. on the desktop (and laptop) the trend that
should be keeping Microsoft up is the Mac. we’ll see…
and the winner is…
Workshop 8.1 (yes, the product i worked on) won PC Magazine’s 20th Annual
Tech Excellence Award in the
Tools category. and it (along with the other Tech Excellence Awards)
was presented Monday night at a party held at the Venetian. had a few cocktails
before the awards were announced, and had more than a few after winning <g>.
it was really great. David Bau and i got to go up on stage to accept for the
team. did i mention it was really great?
ok, back to ApacheCon 2003…
this presentation was very well attended. it started out with a description of
what log4j is. for those who’ve never used a structured logging facility, it’s
a system (represented by an API) that allows developers to add calls to log4j
throughout a body of code, and then control the information that flows from
these calls. it can be sent to files, syslog, SMTP, the console, you name it.
as a matter of fact, we use it here at BEA throughout WebLogic Workshop. i
can’t tell you how many times during a development cycle i’ve received a bug
report containing both a stack-trace and a log file, without which i
wouldn’t have been able to diagnose the failure. logging is a good thing
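for the curious, here’s a minimal sketch of the log4j 1.x idiom (the class name is invented for illustration, and a real app would configure its appenders from a properties or XML file rather than BasicConfigurator):

    import org.apache.log4j.BasicConfigurator;
    import org.apache.log4j.Logger;

    public class OrderProcessor {
        // one logger per class is the usual convention
        private static final Logger log = Logger.getLogger(OrderProcessor.class);

        public static void main(String[] args) {
            // console appender for the sketch only
            BasicConfigurator.configure();
            log.info("starting up");
            try {
                throw new IllegalStateException("simulated failure");
            } catch (IllegalStateException e) {
                // logs the message plus the full stack trace
                log.error("order processing failed", e);
            }
        }
    }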
coming in version 1.3 are domains (new way of organizing logs), more
sophisticated log rollover, improvements in speed and memory size, plug-in
model, external receiver model (for generating events into the log4j world),
watchdogs, interop foundation for integrating with .NET, C++ and Perl, a whole
new Chainsaw. any way you slice it, there’s a ton coming in the new version.
however, there is no current date set for delivery. it’ll be ready when it’s
ready
one request that came up during the Q&A session was for a “TRACE” level to be
supported. at this point there was a chorus of agreement from the audience. it
turns out this is in the process of being voted on, and so may be present in
future versions. stay tuned…
you know how when you wake up in the middle of the night, there are no lights
on, and yet you can still see? i don’t mean you can read a book, but you can
see the bed, the door, the desk, and can navigate without killing yourself.
and then suddenly, without warning, your spouse/sigot turns the light on and
even though it’s just a 100 watt bulb, or a couple of 60 watt bulbs, you
suddenly find that you can’t look directly at anything anymore, much less the
light itself? it’s not the intensity but the contrast that’s so jarring
well, that pretty much sums up my initial reaction to the first 5 minutes of
Chris’ presentation. don’t get me wrong, i really liked his presentation, his
delivery style, etc. this is a guy who loves to be the center of attention (and
it takes one to know one <g>). but after 2 days of solid geek-style
sessions delivered in passionate and yet muted tones, Chris’ delivery was a bit
of a shock, albeit a pleasant one
ok, enough of that, what did he talk about? well, that the use of email as a
tool for marketing is over, and will be/should be replaced by RSS. restated,
that opt-in/opt-out distribution list messaging via SMTP has too many problems,
and should be replaced by RSS via HTTP. among the problems with opt-in/opt-out
messaging over SMTP:
all of these problems are overcome by using RSS. since the user decides which
feeds to subscribe to, the user is in complete control. if the signal-to-noise
ratio on a given feed gets too low, you just stop monitoring. thus the
marketer’s job focuses on two things, a) making users aware of their existence
and b) keeping their feed relevant to the users who are monitoring it
overall, i agree with Chris and think this is the direction that things will be
moving to in the very near future. however, there is one point where i disagree
with Chris, and this is an assertion that RSS is by its very nature
unspammable. or i should more properly say, i agree with Chris that RSS is
unspammable in the case where all feed info goes one way, from marketer to
user. however, many RSS-based news systems are adding discussion groups and
traceback capabilities, and support RSS on those news sources. well, now the
spammers have a place where they can jump in and wreak havoc
for example, let’s say i decide to monitor an RSS feed from Apple about product
announcements. and let’s also say that Apple allows people to comment and rate
those products. i’d want to subscribe to a feed that contains both streams of
info, appropriately threaded. so that if Apple came out with a new iPod and i
was thinking of buying one for my wife, i could learn from the experience of
people who’d bought one. and this all sounds great right up until some user
posts a comment titled “defect in my brand new iPod” and i start to read the
body of the post and it turns out to be spam about removing unwanted body
hair. now of course, Apple would remove this post fairly quickly, but my RSS
reader might have already downloaded it. you see where this leads
but while this is an important problem to deal with, i agree with Chris that
RSS is a significant step forward that should be taken with alacrity. i had
wanted to ask Chris about this issue in his talk, but he ran out of time and
wound up telling people to email him with any questions. so i’m going to do
that and see if he’s thought through this wrinkle
Novell sponsored lunch on Tuesday to get the word out on its participation in
and commitment to open source and the ASF. Novell is involved in the development
of Apache, Tomcat, Perl and PHP, as well as MySQL and Ximian. in fact, Novell
has purchased Ximian. they are also a big believer and supporter of AMP
(Apache, MySQL and Perl/PHP)
they are also in the process of purchasing SuSE Linux. overall they are trying
to do what IBM is doing with Linux, but where IBM is doing it on the
middle-tier and backend, Novell is trying to do it on the desktop. they know
that in order to really make inroads onto the desktop, someone needs to produce
an integrated, easy-to-use experience. many have tried this of course, but
Novell thinks it has an edge because it can control and direct an entire “Linux
stack”. it’s a gutsy move, but if someone is going to pick something other than
Windows for the desktop, it’s hard to see another choice besides Mac OS X. but
i’m glad they’re stepping up. this is what free-market types call “animal
spirits”
they did a demo of some admin functions on Ximian to show the inroads they’ve
made in ease-of-use. it was good work, but as Mark Igra likes to say, “if you
want someone to switch, you can’t just be 10% better, or even 100% better, you
really need to be 10x better”. Mark, i apologize if i ruined your quote <g>.
but regardless, you get the idea
another demo they did was of them wrapping lots of “conf” file editing (more
admin tasks) with a browser-based app scheme. it turns the whole thing into
forms, etc. it’s a really nice idea, esp. for remote admin. if they bake this
architecture throughout Linux, it could give them an interesting leg up on the
competition (Microsoft and Apple, IMO)
they wrapped things up with a discussion and demo of
Mono (that’s moe-no,
not mon-o, as they point out). it’s an implementation of .NET that runs
on *nix, as well as on NT/XP and Win9x. they were immediately questioned about Microsoft’s
response to this product, either legal or strategic, but none of the speakers
from Novell could answer this question. it must be the first time any of these
presenters has demoed Mono, for i can’t imagine this question not coming up
at every demo. on the other hand one of their presenters felt the
need to explain to the audience that GC stood for “garbage collection”
on a block diagram of the VM architecture <g>. so it may be that they’d
never presented to an alpha-geek crowd before
but their demo was quite impressive. they showed C#, VB and ASPX files all
running on their VM. i wish they’d had someone really technical from the
Mono project present, as there are a bunch of questions that i’d have loved to
ask about. one fairly wacky idea i had was to focus Mono on being a Java
cross-compiler, so that instead of building a VM, you built a compiler and RTL
that mapped to the Java VM and RTL. i’m sure there are good reasons not to do
it this way (i can think of several myself). but it would have been cool to
hear about it from the perspective of someone faced with the job
all in all, Novell seems to be setting some extremely tough goals for
itself, but if even one of them succeeds, the rewards would appear to be great.
it’ll be interesting to watch this one unfold
Zeroconf is cool. this presentation was
about adding support for Zeroconf to Apache, but began with a brief overview of
Zeroconf and its uses
the one sentence description of Zeroconf can be found on the org’s website,
“The charter of the Zeroconf Working Group is to enable Zero Configuration
IP Networking”. what a simple and powerful goal. why? well, consider some
interesting scenarios, like you’ve got two computers that want to talk to one
another to play a game, swap some music or just trade some files. you’d like to
plug a crossover cable between them and just have things work. your
computer would get an IP address, as would the other, they would both find one
another and communication would ensue
but wait a second, how do they each get IP addresses? neither is running DHCP.
and how do they find each other? neither is running DNS. the answer is
Zeroconf. and more and more systems are taking advantage of it. one popular
example of this is iTunes music sharing. if you enable music sharing in iTunes,
you’ll see a list of all the other iTunes users who have also enabled music
sharing. there was no coordinator involved making this happen, it was
Zeroconf allowing them to discover one another. in fact, Apple switched from
AppleTalk to Zeroconf (they call it Rendezvous) in the Jaguar version of Mac OS X
here’s (roughly) how it works. first off, Zeroconf coexists with traditional IP
services like DHCP and DNS. so this isn’t some either-or decision. but assuming
that DHCP is not available when the device wants to communicate, it will first
pick a random IP address from the link-local address range (169.254.*.*). then
it probes with ARP (Address Resolution Protocol) messages so that an already
in-use address gets a chance to defend itself
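here’s a rough Java sketch of just that address-picking step (real implementations follow the Zeroconf drafts and also probe with ARP, retrying on conflict):

    import java.util.Random;

    public class LinkLocalPick {
        public static void main(String[] args) {
            Random rand = new Random();
            int b3, b4;
            // 169.254.0.x and 169.254.255.x are reserved, so re-roll if we land there
            do {
                b3 = rand.nextInt(256);
                b4 = rand.nextInt(256);
            } while (b3 == 0 || b3 == 255);
            System.out.println("candidate link-local address: 169.254." + b3 + "." + b4);
        }
    }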
next, use mDNS (Multicast DNS) to
discover the address of other services. for example, a game can look for other
games, etc. it runs on a different port than DNS, and on every host. in this
model, machines name themselves. but there’s one thing missing, and that’s
dns-sd (DNS Service Discovery). this
provides for actual network service browsing via either DNS or mDNS. this is
how the list of iTunes shares is populated
the plan is to enable Zeroconf in Apache httpd 2.0 so that virtual hosts are
registered with the mDNS responder. this will allow other Zeroconf services to
browse and connect to services proffered by any instance of httpd
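for a taste of what registering a service looks like in code, here’s a hypothetical sketch using the JmDNS library (this is not part of httpd itself, and the vhost name is invented):

    import java.net.InetAddress;
    import javax.jmdns.JmDNS;
    import javax.jmdns.ServiceInfo;

    public class RegisterVhost {
        public static void main(String[] args) throws Exception {
            JmDNS mdns = JmDNS.create(InetAddress.getLocalHost());
            // advertise an HTTP service under a (hypothetical) virtual host name
            ServiceInfo info = ServiceInfo.create(
                    "_http._tcp.local.", "my-vhost", 80, "a demo virtual host");
            mdns.registerService(info);
            System.out.println("registered; browse for _http._tcp services to see it");
            Thread.sleep(60000);  // keep the JVM alive so the registration stays visible
            mdns.close();
        }
    }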
that pretty much did it for my first ApacheCon. all in all, it was a really
good trip and i learned a lot about what’s going on in Apache. there was some
very good energy at the conference. i’m looking forward to next year, and i
expect it to be even bigger
were you there? did i make any mistakes in my notes that you could help
correct? let me know!
Related link: http://www.fcc.gov/ipwg/
Pushed by vendors and states to make policy in the area of Voice over IP, the FCC held a session today which filled every available room and was broadcast on CNN. Clearly, the Internet is going to be occupying an increasing amount of the FCC’s time. Just after the VoIP forum, therefore, it announced a new Internet Policy Working Group. Now, let’s not hear any kvetching from the libertarian set about how the Internet should stay unregulated–there are important issues that government has to address and this development is a good thing. The current FCC is, if anything, weighted against regulation. It’s up to Netheads to keep it well informed and steer it toward rulings that are good for the public.
What should the FCC look at?
I was enjoying my morning, listening to Wendy Carlos’s Brandenburg
Concertos while the cats ran up and down the halls, and I was
half-heartedly flipping through the liner notes. I like her because I like Bach, and you may have already
heard her stuff as the soundtracks to Clockwork
Orange (http://us.imdb.com/title/tt0066921/soundtrack) and
Tron (http://us.imdb.com/title/tt0084827/soundtrack).
Glenn Gould’s name in the notes caught my eye. Included in the notes
is a short article on Wendy’s efforts. If he is talking about Wendy,
he is talking about hacking. He liked to think that the artist should
be completely removed from market pressure (i.e. the audience). He
tended to see the world a lot like some open source software people do.
He takes a stab at the professional musicians when he points
out Carlos’s technical kung-fu (indeed, she studied Physics as well as
music in college). The Moog synthesizer she used was not
some out-of-the-box, shiny thing—it looked a lot like the early
computers with all of its knobs and exposed wiring.
And the “performer” for Switched-On Bach is no
professional virtuoso taking time out from the winter tour for a visit
to the recording studio, but a young audio engineer named Wendy
Carlos, who, over a period of many months, produced, performed and
conceived the extraordinary revelations afforded by this disc in her
living room.
She was a hacker and believed in her right to innovate. Her
biography (http://www.wendycarlos.com/biog.html) is more techie
and musical. She virtually sat in her living room with a Moog
synthesizer and electronics she created herself and re-interpreted
J.S. Bach, on her own and off the grid, so to speak. She took her
Moog and works not covered by copyright, and created something fresh
and amazing. She was working in the middle of the creative commons,
and she believed in what she was doing. She did it soup to nuts too.
She did not need somebody else’s studio. If she had
an idea, she could just try it, just like a lot of open source
software people do. She could hack the innards, especially since
Bob Moog, the inventor of the device, did not have today’s litigious
mindset. From Gould’s essay:
For the real revelation of the disc is its total
acceptance of the recording ethic—the belief in an end so
incontrovertibly convincing that any means, no matter how foreign to
the adjudicative process of the concert hall and even if the master is
white with splicing-tape as this one must have been, is justified.
The result of all this hacking and freedom to innovate? Glenn Gould
concludes [in the liner notes, not the essay]:
Carlos’s realization of the Fourth Brandenburg Concerto
is, to put it bluntly, the finest performance of any of the
Brandenburgs—live, canned, or intuited—I’ve ever heard.
PHP has one of the largest developer communities in the world, yet we have
no community gathering place like those you can find for other languages
(Perl has http://use.perl.org/, for example).
Want to help change this?
I am coordinating the development of a Web site that is built by and for
the PHP community. Its features will include such things as:
More importantly, the features will be driven by the needs of the
community and not any one person. This list is just an example of the most
common features found on other community sites.
Will this site seek to eliminate other PHP sites that offer one or more of
these features already? Absolutely not. My hope is to help bring the
community together, both the people in the community as well as all
related Web sites.
I have spoken with O’Reilly, and they have agreed to support us in this
endeavor with servers, bandwidth, administration, and anything else we
need. All we have to do is provide the people to develop and maintain the
site and its content.
You don’t have to be an expert to help out. I need people to fill the
following roles (multiple people can fill the same role and a single
person can fill multiple roles):
There are likely many other roles to be filled. If you think you can help
out a lot, please consider the first role, site management and global
vision. If you want to help but don’t feel like you fit into any specific
role, don’t worry about it (any help is very much appreciated).
There will be mailing lists, CVS, and other tools available to assist in
the creation of this site. More information about these things will be
given to those who are interested in being involved.
Please send me an email at firstname.lastname@example.org if you are willing to help.
Mention where you are interested in helping and any information about
yourself that you think is important. This information is not intended to
determine whether anyone is “worthy” or any silliness like that, but it is
rather to help organize the contributors so that everyone is doing
something they enjoy.
What features would you like to see in a PHP community site?
Related link: http://www.perladvent.org/2003/
CPAN, the Comprehensive Perl Archive Network, is a treasure trove of submitted code from across the Perl community, and is one of the reasons for Perl’s great popularity. There are modules for everything from
traversing web pages like a web browser, to
handling bibliographic data for libraries, to
checking your Perl documentation for syntactic correctness, and that’s just the stuff I maintain myself. There are hundreds of other active contributors, and over 2500 modules in the CPAN module list.
With so many modules, it can be daunting for a Perl beginner to know what’s worth noting. (Heck, it’s daunting for us experts, too.) Plus, since so many modules are built on other modules (WWW::Mechanize is built on LWP::UserAgent and HTML::Form, and Test::Pod is built on Pod::Simple and Test::Builder), it’s important for module authors to know which modules are best-of-breed.
One source for direction is Mark Fowler’s Perl Advent Calendar. Each day in the month of December, Mark reveals a new module in his calendar, including an overview and mini-tutorial in its use. Today’s module is CGI::Untaint, which encapsulates the validation of CGI parameters in handy functions. If you get antsy waiting for the next 24 days, you can see the calendars for the previous years.
Have you heard of similar projects for other languages?
REST, or Representational State Transfer, is an architectural style for building distributed applications. When applied to the world of web services, REST most commonly refers to the transmission of XML over HTTP, and the identification of XML resources via URIs. According to REST, HTTP, XML and URIs provide all the infrastructure needed to build robust web services, and most developers can therefore safely skip the pain of learning SOAP and WSDL. If you are new to REST, check out Paul Prescod’s excellent REST articles on xml.com.
A major element of web services is planning for when things go wrong, and propagating error messages back to client applications. However, unlike SOAP, REST-based web services do not have a well-defined convention for returning error messages. In fact, a survey of REST-based web services in the wild turns up four different alternatives for handling errors. Below, I outline the four alternatives, and then provide my opinion on which option or combination of options is best.
Option 1: Stick to HTTP Error Codes
In this scenario, the web service propagates error messages via standard HTTP error codes. For example, assume we have a URL that looks up a single resource by ID; if the resource does not exist, the service replies with 404 Not Found rather than a 200 response wrapping an error payload.
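As a hypothetical sketch of this approach (the servlet class, the id parameter, and the helper method are all invented for illustration):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class UserServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            String id = req.getParameter("id");   // e.g. /users?id=42 (hypothetical)
            String userXml = findUserAsXml(id);   // hypothetical lookup helper
            if (userXml == null) {
                // Option 1: the error travels as a plain HTTP status code
                res.sendError(HttpServletResponse.SC_NOT_FOUND, "no such user: " + id);
                return;
            }
            res.setContentType("text/xml");
            res.getWriter().write(userXml);
        }

        private String findUserAsXml(String id) {
            return null;  // stub so the sketch compiles; a real service would query a store
        }
    }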
Option 2: Return an Empty Set
In this scenario, the web service always returns back an XML document which can have 0 or more subelements. If some error occurs, an XML document with zero elements is returned. The O’Reilly Meerkat news service currently uses this approach. For example, the following URL connects to Meerkat and requests all Linux related articles from the past two days, and formats the results in RSS 0.91:
In this case, Meerkat returns an RSS document with zero
item elements. This indicates that there are no matching results, but it does not indicate whether this is a valid category ID which contains no news items, or an invalid category ID.
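To see the ambiguity from the client’s point of view, here is a minimal sketch (the feed URL is hypothetical):

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class CountItems {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse("http://example.com/feed?category=123");  // hypothetical URL
            int items = doc.getElementsByTagName("item").getLength();
            if (items == 0) {
                // empty category? bad category id? transient failure? no way to tell
                System.out.println("no items, for some unknowable reason");
            }
        }
    }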
Option 3: Use Extended HTTP Headers
In this scenario, the web service always returns an HTTP 200 OK Status Code, but specifies an application specific error code within a separate HTTP Header. The Distributed Annotation System (DAS) currently uses this alternative approach. For example, the following URL requests sequence data from Human Chromosome 1 from the Ensembl DAS Server:
If you click on the link above, you will see an empty page. However, if you have a network sniffer, you can see the following HTTP response:
HTTP/1.1 200 OK
Server: Resin/2.0.5
Content-Encoding: gzip
X-DAS-Version: 1.5
X-DAS-Server: DazzleServer/0.98 (20030508; BioJava 1.3)
X-DAS-Capabilities: dsn/1.0; dna/1.0; types/1.0; stylesheet/1.0; features/1.0; encoding-dasgff/1.0; encoding-xff/1.0; entry_points/1.0; error-segment/1.0; unknown-segment/1.0; component/1.0; sequence/1.0
X-DAS-Status: 403
Content-Type: text/plain
Content-Length: 10
Date: Sun, 30 Nov 2003 21:02:13 GMT
As you can see, the DAS server has returned an HTTP 200 OK status code along with the required X-DAS-Status header. In this case, the code 403 is a DAS-specific error code: “Bad reference object (reference sequence unknown)”.
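A client in this scheme has to ignore the HTTP status line and read the extended header instead. A minimal sketch using the standard java.net classes:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DasStatus {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(args[0]).openConnection();
            // the transport-level status is always 200 in this scheme...
            System.out.println("HTTP status: " + conn.getResponseCode());
            // ...so the real verdict lives in the extended header
            System.out.println("DAS status:  " + conn.getHeaderField("X-DAS-Status"));
        }
    }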
Option 4: Return an XML Error Document
In this scenario, the web service always returns an HTTP Status Code of 200, but also includes an XML document containing an application specific error message. The XooMLe application currently uses this approach (XooMLe provides a RESTful API wrapper to the existing SOAP based Google API). For example, the Google API requires that you specify a valid developer token. If you specify an invalid token, XooMLe returns an XML error document. As the XooMLe documentation puts it, “If you do something wrong, XooMLe will tell you in a nice, tasty little XML-package.” For example, try this URL:
<?xml version="1.0" ?> <xoomleError> <method>doGoogleSearch</method> <errorString>Invalid Google API key supplied</errorString> <arguments> <hl>en</hl> <ie>ISO-8859-1</ie> <key></key> <q>oreilly php</q> </arguments> </xoomleError>
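On the client side, every response then needs to be sniffed for an error document before normal processing. A minimal sketch, assuming the xoomleError format shown above:

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class SniffError {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(args[0]);  // URL or file of the response
            NodeList errs = doc.getElementsByTagName("errorString");
            if (errs.getLength() > 0) {
                System.err.println("service error: " + errs.item(0).getTextContent());
                return;
            }
            // otherwise proceed with normal result processing
        }
    }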
Best Practices for REST Error Handling
Assuming you are busy implementing a REST-based web service, which error handling option do you choose? I don’t believe there are (yet) any best practices for REST error handling (for an overview of other REST best practices, see Paul Costello’s REST presentation, in particular Slide 59).
Nonetheless, here are my votes for most important criteria:
<?xml version="1.0" encoding="UTF-8" ?> <error> <error_code>1001</error_code> <error_msg>Invalid Google API key supplied</error_msg> </error>
What is the best option for propagating error messages from REST-based web services? Are there other options beyond the four described here?