Slides from many of the sessions at OSCON 2004 are now available, including “Advice For Open Source Job Seekers” that I presented with Bill Odom.
The best thing that I can say about this conference was that I stayed the entire time. I’ve bailed early on the last three O’Reilly conferences because I just couldn’t stand them after three days, but I stayed for the full five days this time.
I think part of this was my new conference mindset: don’t volunteer for anything and don’t join any new projects. Talking to people about getting things done is about as useful as wearing my scuba gear in my apartment: I might really want to dive in, but that’s only going to happen if I do it myself without waiting for other people.
Some of us checked out a BBQ place on the other side of the river. Don’t waste your time: it’s Oregon, which is just another way of saying !South. They had neither corn bread, corn on the cob, nor hush puppies. They didn’t even have pulled pork.
We wanted to see the submarine at the Oregon Museum of Science and Industry (where Tom Phoenix used to try to set ceramic tiles on fire with a blowtorch), but there was a three-hour wait for the next available tour. Bummer. Randal ended up programming a simple computer bit by bit so its display spelled out “OSCON 2004”. (There are other photos on my moblog, until they scroll off.)
After that the day was over. I needed a nap and Randal disappeared to his part of the house to do whatever he needed to do (I think it involved email and SG1). I woke up a bit later, played with Kwiki (cursing its lack of documentation in the code), and figured out how to get the Perl Curses module to compile on Darwin (the hints file is messed up: longname takes no arguments, and touch takes three arguments, not four). Maybe I should take another look at Konfabulator, but then, I couldn’t do my personal Bloomberg application from any terminal with Konfabulator.
Randal and I went to a steak dinner and met up with a business acquaintance who happened to be in the same place. After a few drinks he was feeling his oats, and as a manly sort of competition thing wanted to see what it would take to embarrass me. Well, let’s just say I’m no homophobe and that’s what he was counting on. Gimme a break. I lived in Manhattan for five years and my wife is an opera singer. I run into more gay guys than straight guys. I shut him down with “That’s all you got? Bring it on <expletive>”, but he didn’t have the guts for it. That might work with the Java weenies who have to wear ties to work, but I’m open source all the way. All we have to do is call the bluff. :)
So that’s it. I’m leaving on a jet plane, don’t know when I’ll be back again. Sometime next year at a place they haven’t picked out yet, but probably isn’t Chicago.
Day 3 was mostly given to hallway sessions, talking about testing
and WWW::Mechanize, and talking about The Perl Foundation to booth
visitors. It was also great to be able to put faces to the names
of all the folks at O’Reilly and oreillynet.com who I’ve worked or
corresponded with over the years, like Marsee Henon and
Derrick Story. Derrick turns out to be very tall. I’m glad
he fights for the forces of Good.
Another person I’ve known for a while, but never met, is Pat Eyler.
We’re both active on the perl4lib mailing list for libraries that
use Perl. He gave a talk on Koha, the open source library automation
system that’s a competitor to the products that my former employer,
and corporate sibling, Follett Software Company sells. It was scary
hearing about all the cool stuff that they’re doing, but Pat also
pointed out that free software, and especially library software,
is more like a free puppy than free beer. It echoed Tim O’Reilly’s
keynote, where he noted that when software becomes commoditized,
the service will be the differentiator. For the sake of
my own profit sharing check, I’m glad FSC has always pushed quality
service as much as quality software.
The Perl Lightning Talks were a lot less frantic and comedic this
year, and half the length. I gave my little lightning talk on why
you should use the prove utility, and a couple of people told me
afterwards that they were glad to have heard about it. Where past
years had a full 90 minutes of talks, this year Geoff Avery used
the second 45 minutes for a session called “Works In Progress.” It
was a chance for people to get up and talk about projects they have
going on, and call for volunteers if they want them. It’s a great idea
that I’d like to see done at future OSCONs.
Thursday night, I detoured from the Stonehenge party and went to
Powell’s with Marsee, O’Reilly’s user group contact. I think
Powell’s grew since last year. It just seems larger than last time
I looked. I ran into someone from OSCON, and talked about career
management, and then someone else who was looking for a book about
automated unit testing, which I was only too glad to discuss as
well. It’s just as well that the store closed before I was done
talking: I would only have bought a dozen books to stuff in my
suitcase for the trip home.
Finally, on Thursday was my “Open Source Hiring Tips” talk with
Bill Odom. About 100 folks sat and listened to us discuss how to
present yourself through all stages of the job hunt. It was an
honor to have Larry Wall and Damian Conway sitting and listening,
and when I noted that I am indeed hiring, Larry piped up “I’m looking
for a job.” What a hiring coup that would be!
I’m now at the gate at PDX, pleased with how many electrical outlets
there are. O’Hare has one per gate, so far as I can tell, and the
seats around it are usually already taken by someone else with a
laptop. I’m surprised, however, that there’s no wireless. For
such a wired city, I’d think that it would be available here, even
at the airport.
A final note: I finally got to meet Tom Limoncelli, and he’s one
of the nicest people you could ever want to know. My list of
“swell people I’ve met at OSCON” gets bigger every year…
Paul Graham’s talk was excellent, save for the audio set-up. Randal and I, both sound-guy wannabes, bet that the awesome AV guys were gouging their own eyes out in the back. Besides that, Paul played the audience well. He speaks the geek language without sounding like it. Now I feel guilty about not hacking on anything this week. I have to track him down to get him to write “What Perlers Should Know About Lisp” for The Perl Review.
Most people don’t realize what it takes to make the AV portion of a big show work. I’ve done a lot of big shows, including COMDEX, and the people who do the O’Reilly shows are the absolute best. I don’t know the crew this year, but David Hooper is still around. I think I’ve seen him at almost every O’Reilly conference.
If you hang out in a room early in the morning, you may get to hear the AV crew talk about what they have to do, which is a lot more than you probably think it is. They are a lot more than just cable pullers. Take a look around at the conference tomorrow. You’ll probably see AV guys moving quickly and with a purpose. Hang out within earshot and you’ll hear them swapping stories about users, just like system administrators do. If they look calm and collected, it’s because they’re good, not because nothing is happening. My wife is a big-shot opera singer, but when I talk to people about what I’ve seen backstage, I tell them about the union and trade people who make everything look effortless. They are doing a good job when you don’t think about them.
However, there are some things these supermen of AV aren’t going to be able to fix. For instance, don’t walk on stage with a laptop they have never seen before and that you have never tested with their projection setup. They will try to fix things in real time, but then your audience has to see them. They end up looking bad if they can’t get your goofy computer patched in instantaneously. If you are a speaker, sign up for an AV rehearsal slot, or show up really early in the morning when they aren’t fighting fires.
Here’s something that will freak them out: I challenge everyone at the conference to go up to one of the AV guys tomorrow and just say “thank you, you guys are awesome”. You don’t have to buy them beer unless you want extra credit. I bet if enough people do that, someone is going to ask Gina what the hotel put in the muffins.
Anyway, I ducked out before Damian’s talk (which is excellent: I saw it in zero time). He stepped into the slot of the Internet Quiz Show since Jon Orwant couldn’t attend (I hope everyone is sending him “Get Well” messages). I had to meet up with Randal after he set up a bunch of access points and NoCat for Bar 71. I still don’t know everything that is going to go on there, but I think there is going to be some live webcasts, and maybe something to do with The Perl Foundation. Bill is still being really secretive about the party: not even Randal knows the whole plan, and he’s paying for it. Bill did let me hold the check made out to the bar, and all the ink on it made it really heavy. It’s free drinks and food, and that costs a lot. Randal flew in some heavy-hitting musical help tonight. I can’t say who it is, but she’s not Prince, Michael Jackson, or Madonna.
I got back to Randal’s place to find my audio equipment had shown up. Apparently Bill stopped by Stonehenge today, saw the packages for me and took them back to Randal’s house. That’s good news for me and saves me a trip beyond the wireless range of the hotel. I should be walking around the hotel with a big microphone tomorrow (a Beyer M58 for you audio weenies). Find me if you want to be part of my new project “Talk About Perl”. I’d like to get people to simply tell me “What is Perl?” for an audio collage, but I’ll take unsolicited statements and rants and raves too. I want to create an oral history of Perl over the next couple of years.
My moblog seems to be working pretty well. The camera doesn’t take the greatest pictures, but I love the fact that I can email as many as I like from my phone for only $5/month. I just skip one Starbucks coffee a month and I’m all set. I can’t send photos from the lower level of the hotel, but I discovered that if I get on the escalator and hit send at the bottom, about half way up I get enough GPRS goodness to connect, and by the time I hit the top, the image is mostly sent. By the time I turn the corner to hit the down escalator, the outbox is empty and the image is on the way to my server in New York. Sadly, I can’t figure out how to make server-side includes use a different time zone, so I need to add a hack tonight to set the file mod times to three hours earlier. I really don’t eat lunch at 3pm.
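That mod-time hack only takes a few lines. Here is a rough sketch in Python, not the script I actually ran: the directory name is made up, and the three-hour offset is the Pacific-to-Eastern difference.

```python
# Shift each image's modification time back three hours so the
# server-side includes on the New York server show Pacific time.
# The directory path is hypothetical.
import os

OFFSET = 3 * 60 * 60  # three hours, in seconds

def backdate(directory):
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            st = os.stat(path)
            # keep the access time, move the mod time three hours earlier
            os.utime(path, (st.st_atime, st.st_mtime - OFFSET))
```

Running something like `backdate("/www/moblog/images")` from cron after each upload would do the trick.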
Remember: freak out the AV guys by saying “Thank you”.
Press kits seem to be a godsend to a lot of journalists. They can take the text in them and put it right into their articles. No one calls them onto the carpet about it because no one wants to admit that the press releases are a bunch of crap and that no one really understands them. Let’s just pretend that we know what all these things are, and that technology reporters aren’t just data entry clerks for businesses.
Let’s look at the ones that vendors put out in the OSCON press room. I only open their folder and read the top page. Anything beyond that is too much work. If I don’t understand what they are by then, there isn’t much hope for me understanding what they are trying to sell me.
Gluecode Software: “Gluecode Software(TM) is an open source application infrastructure company. The company’s enterprise software solutions include business process management and enterprise portal.”
I have no idea what these people do, but they said “enterprise” twice in the same sentence, and they said “open source”. I’ve checked that second sentence a couple of times because it sounds wrong: is there a missing indefinite article in front of “enterprise portal”? Did anyone proofread this before they spent the money for the four-color glossy flyer?
They don’t have a product, they have a “solution”, which linguist Geoffrey Nunberg says companies say when they want to make us think we have a problem. When marketeers tell me they are “complete solution providers”, I tell them I’m a “complete problem provider”.
ObjectWeb: ObjectWeb is an open not-for-profit consortium of leading companies from around the world who join forces to create open-source, standard-based solutions for middleware and distributed software infrastructures.
There are so many things that disturb me about this paragraph at the top of their page. It’s their most important statement because it’s white text on a blue rectangle. This is what they want me to see. Aside from being overly wordy (doesn’t consortium mean they have already joined forces?), what is middleware? I know what it is supposed to mean, but what does it really mean? What’s the “solution” here? And what about these leading companies? Everyone is always a leader in their own press releases. Does anyone have the nerve to say “We’re a small company that hopes to make a big difference”? Out of over 30 company logos on this page, I’ve only ever heard of NEC, SuSE, Red Hat, and France Telecom. None of those have particularly seemed leader-like to me, unless leading means being behind the people in front of you.
MySQL: MySQL AB develops and markets a family of superior, affordable database servers and tools.
Any questions? I thought not.
M1 Global: M1 Global’s MDE Studio is an integrated model-driven development environment. Running on the Eclipse platform, MDE Studio transforms a simple UML model of an application into the majority of the implementation by executing MetaPrograms.
These sentences sound worse than they are. No, maybe they do sound as bad as they are. The page does have a side bar that defines Eclipse, UML, and MetaProgram. I started to scratch my head at “simple UML model”. If it’s simple, why do I need this thing? What if it is complex? Does it not work right? If I don’t know about any of these things, why should I even care about you? No fear: below the folder pocket it mentions Java and Windows, so I don’t even need to worry about it. Shouldn’t Java things run everywhere though? It also says it minimizes modeling and maximizes code generation. Most people try to maximize modeling and reduce code generation. Better design with less code seems like a good thing, but maybe they can’t sell that.
ActiveState: ActiveState Komodo is the award-winning, professional integrated development environment (IDE) for dynamic languages, providing a powerful workspace for editing, debugging, and testing your programs.
Well okay then. Besides the bluster about “award-winning” and the near-meaningless-through-overuse “professional”, I know what these guys are selling and what it does. It edits, debugs, and tests my programs. That seems pretty simple.
Now here’s an interesting tidbit. I thought MySQL AB and ActiveState had clear statements. MySQL AB is in Seattle and ActiveState is in Vancouver, so it must be something about the water or the fresh air up there.
The afternoon was mostly me trying to stay awake. I’m getting too old for these week long conferences and all night parties. This is why you’re supposed to go to college right after high school—otherwise it would kill you.
I chatted with Andy Lester at the end of the day. I keep asking him when he’s going to write a book on Perl testing, and he doesn’t have the answer yet. Somebody is going to write the book, and Andy is a good person for it, but the opportunity is not going to be around forever.
Randal, Brenda Brewer, and I ended up at Bar 71 for dinner. You may think it’s just a bar. You may think it is the bar where Stonehenge had its party. You may think it’s the bar across the street from the homeless shelter. You probably aren’t thinking about its baked Mac & Cheese, but you should. It’s the best I’ve had anywhere, and I’ve tried a lot of Mac & Cheese around the world.
After dinner Randal and I went to the mod_perl BOF. Stas Bekman wants to get the word out about mod_perl. The Netcraft and Securityspace trends reports show mod_perl penetration at a plateau, which doesn’t mean that it is holding its ground, but that it’s not keeping up with new installations. The crowd had a lot of ideas about fixing this. David Wheeler wants mod_perl 2.0 so the community has a new version and new development branch to rally around. Various people talked about setting up a position within The Perl Foundation to oversee mod_perl advocacy, and a lot of the crowd agreed that mod_perl could use some good old public relations work (although it might be too pricey).
I think a devious path could help mod_perl: publicise mod_perl working when Microsoft IIS fails horribly enough to get on the news. I also think we need to hack the social network to personally get our message in front of mainstream media like The Economist and PRI’s Marketplace. I think we need to forget about the tech and think about the decision-maker level. You have probably seen “Oracle makes Linux Unbreakable”. That’s a full-page ad in almost every Economist. Imagine the same ad as “mod_perl makes Apache unbreakable”. I want to write a non-technical article about mod_perl performance: “Ten times the hits, one-tenth the resources”. Other people came up with other devious paths too. Think MoveOn.org and Meetup.org style action.
Some party was going on in the bar, and I ended up over there. I don’t know whose party it was, what sort of product they have, or anything like that. At least the Stonehenge party made just about everyone wear the Stonehenge shirt (and there were ads above the urinals too, so you’re going to see the name). It was just free beer so that’s where everyone was: that is, if you could get in the door. They were counting people to keep within the fire code, which meant people had to leave before more could go in.
I ended up chatting with Dale Dougherty about his new magazine Make, and we talked about The Perl Review for a bit. A lot of people are giving me a lot of tips and hints about magazine publishing, and I could use all the help anyone wants to give because I’m new at this.
After that I chatted with Dick Hardt about his new thing, sxip, which is going to be a Microsoft Passport sort of thing, but done right, and done in a distributed manner. No one should have to store your personal information ever again, and you could have a single sign-on to everything on the web. I like the sound of that because I’d only have to update my personal info in one place.
I left the party for The Perl Foundation auction a room over. Robert Spier and Schwern ran the show while Ann Barcomb, Bill Odom, and Jim Brandt rounded up the money. The auction was going pretty slow until Michel Rodriguez and I got in a bidding war for Peter Scott’s out-of-print Perl Debugged. For the hell of it I bid $65 at the last moment, then it was game on. We went back and forth with the crowd going wild. Michel bid $90, and I wanted to slow it down, so I bid $91. He jumped it up to $100, and I yielded. But hold on! Robert produced a second copy of the book and announced it was a Dutch auction, so Michel got his for $100 and I got mine for $91. Overall, they wanted to raise $2,000, so that was just about 10% of that. My bid was an ugly number, so I wanted to round it up to $100. I challenged Schwern to auction the shirt off his back, and started the bidding at $9. The next bid was $10 for Schwern not to take his shirt off. Chaos ensued, and people lined up on different sides of the room collectively bidding for Schwern to strip or to not strip. It was pretty close: a hundred or so dollars on each side. What the hell, I bid another $100 for the strip side. It’s for charity! Think of the children! The non-strip side tried to match that, but they couldn’t come up with the money. Now I have the shirt, and TPF has a bunch of money. They ended up with $3,300 at the end of the night, and Schwern went home without a shirt.
I think Schwern’s shirt should go on a world-wide tour to the Perl Mongers groups. For a small donation to The Perl Foundation, it can make an appearance at your user group meeting. Who wants it next?
I discovered late last night that the GPRS service around here really sucks, so I had a lot of trouble updating my moblog. I also discovered that once my phone can’t connect to GPRS after several tries, something happens that just keeps it from creating a connection even when it should be able to connect. If I reboot the phone, though, it connects immediately. Great, now I have to reboot my phone to get it to work, as if computers weren’t already a pain in the butt.
Related link: http://www.real.com/beta/harmony.html
Real Networks wants to use its Harmony stuff to let iPod owners put stuff on their iPods. Apple is being a big poopyhead about it.
This reminds me that I’d really like to stream Real content over the Airport Express. I guess Apple is going to be a poopyhead about that too, now.
Related link: http://groups.yahoo.com/
You may remember that the words Hack and Hacking are banned from Yahoo group descriptions.
Today they took it one step further: a double X is not allowed [see the picture]. I guess they are fighting the naughty side of the business. But fans of the Vauxhall Trixx will have to find another place to gather.
The funny thing is that they cut into their own business, because the ‘xx’ in my group description is in a link to a Yahoo Map that points to the SAP Palo Alto location. Thankfully there is also MapQuest, without an X.
Sometimes I really wonder why I still use Yahoo Groups.
Which other groups are Xed from Yahoo?
Related link: http://make.oreilly.com
Make: technology on your time is a new magazine from O’Reilly.
The promotional postcard has three stories. On the back it says:
Make: is a technology project magazine for the Digital Age. Coming in early 2005, this full-color quarterly is loaded with exciting projects that make the most of your technology, at home and away from home.
Where did I leave off? I think I was in the middle of the afternoon in the last entry.
Most of the Perl talks are over-subscribed, it seems. People sit in the aisles, stand in the door, or try to peer around the corner. The rooms are a bit small, but from talking to people, I think even with bigger rooms the talks would be just as crowded. A lot of people go do something else when they see the overflowing crowd.
In the evening I ended up sitting next to Brian Ingerson (Ingy) in the press room, and since it was just the two of us we decided to get dinner in the sports bar upstairs. Apropos of that, we started talking about cycling. He used to live in Chicago, and I now live in Chicago so I was picking his brain for good routes and shops. We both bemoaned the boring riding situation around Chicago. Most of the midwest is flat, so I don’t get to ride the hills like I could in Southern California, or Ingy could when he was here in Portland. I think it’s the first time I’ve talked to him about something other than technology, and it was a good subject break in the middle of the conference.
After that we wandered up to Nat’s “Presidential Suite” (that’s what it says on the door) for the annual P5P thing. There is a wonderful view of the river from the 16th floor, and I ended up standing next to Michel Rodriguez on the balcony. He has an article in the current issue of The Perl Review, and we talked about that while Ingy explained the aerodynamic properties of the American carrot versus the Australian carrot, in reference to glide ratios and the distance to the street from Nat’s balcony.
That party got hopping, and I got a chance to talk to Chris Nandor (pudge) for a bit. He explained to me why Derek Jeter is a jerk, how I should have set my TiVo to record sports events (like the Tour de France) by recording several hours past the advertised end time so I don’t miss anything when it goes long, and why a whole new generation of kids won’t know who the real “Pudge” is.
After that I got a tech support call from my wife, and I solved it by sending her shell commands over iChat. That problem solved, I talked to her for a while since I have been neglecting her this week. This made me a bit late to the Stonehenge party, but I’m not a drinker so I didn’t need to get started early on the free drinks.
The party was amazing. The bar was packed and overflowing into the streets. It was so packed that Stonehenge and the pub renegotiated the price a bit so the bar would not lose money on all of the extra people who showed up. I’m not a great party person, and I got there when most people were a bit worse for drink, so I’ll leave the myths and legends to someone else.
The party was so hoppin’ that The Perl Foundation auction decided it was too crazy to try a simultaneous global web-cast. It was just far too noisy. I think they are going to try again tonight around 9ish, somewhere in the hotel. They have boxes and boxes and boxes of donated swag as well as donated services, like an hour of pair programming with Ward Cunningham. They also have “First Copy” of The Perl Review. Keep checking the bulletin boards and announcements.
I got to drive Randal home, which was a bit of a treat for me. His car is really, really fast, and red. He kept telling me “this is a six speed!” and I kept telling him “but four is all I can handle!” Not only am I tired, but it’s dark, I’m in a foreign country, and everyone else in the car (save Paul Blair, who only needed a ride back to his car at the hotel) was feeling how people feel after these sorts of things. Randal’s instructions to his house were “Blah blah blah south, turn right, go through 18 lights, turn right.”
This morning I got up late. I stayed up too late last night, not just because the party ran late but because I decided to work on some things back at Randal’s place too. I ended up at the hotel around 11 am, just in time to get in line to interview Tim O’Reilly. He pushed back our original time to see a talk on Make, a new O’Reilly venture that is going to be the digital age version of Ready Made. It’s not available yet, and there is no official announcement, but look for it this year, I think. I blew off the talk because I thought it was going to be about make(1), which seems like a boring topic for an Emerging Topics track (also known as the “Things Nat Likes” track). I wish I had seen the talk now. It ran long without anyone getting anxious to go to lunch, and I ended up chatting with Suzanne Axtell and Nat while we waited for it to end.
I grabbed Tim on his way out of this talk, having brought him a selection of bag lunches to save him a bit of time. Andy Gurevich showed up with a camera crew to interview Tim for an open source movie he’s working on for Feel (This) Films, Inc., in Portland too (geez, Portland is a hot area). The details are developing, but not only is the movie about open source, he might use shared footage too. I didn’t get much chance to talk to him since we were competing for Tim’s time. I yielded to him so he could get a quick interview before my longer one. I’m not sure where the film is going, but their flyer shows a person holding a sign that says “Software patents makes God cry”.
My interview with Tim went well. I might upload the raw audio soon, depending on whether or not I should embarrass myself with the juxtaposition of some of my fumbling questions next to his excellent answers. As I said earlier, I’m starting a project to capture the history of Perl in audio form, but not the dates and places and facts. The story is not the code. I’ll have more details on this later. I plan to interview a lot more people for this project.
After that I came back to the press room, where I am now sitting. I started at a table by myself, in the usual seat I’ve taken this week. Jay Lyman from NewsForge normally sits on the other side of the table, but I haven’t seen him today, and Steve Mallet and his crew take up the table at the back, although he’s already left. As I’m pondering my loneliness, Dr. Freeman Dyson comes in and sits in Jay’s spot and starts reading Tim O’Reilly in a Nutshell. I’m not making this up. I have a picture. I haven’t said anything to him because I figure he’s like me: he’s hiding out so he’s not mobbed by people asking him the same old question that everyone asks. Before I could get up the nerve to say “Hi”, a couple of interviewers showed up and started talking to him. No wonder he’s here.
I offered to get lost so I didn’t bother the interview with my existence, but they didn’t mind so I’m still sitting at the table. I’m at a live interview of Dr. Dyson talking about wireless networking, missions to Mars, the legal environment of innovation, cheap space travel, the social hierarchy of Princeton, the Kyoto protocol, global warming, the profitable business of protest, and raising children. Holy cannoli.
Related link: http://conferences.oreillynet.com/os2004/
Kernel developer and author Robert Love wants to take all the fun out
of Linux. Someday you won’t have to specify /dev pathnames to
utilities when installing new devices, or configure CUPS and download
a driver to make a printer visible, or determine which application can
read data from a consumer device (unless you choose to do it the
old-fashioned way). Instead, when you plug in something, it will just
work, OS X style.
Love aired some even more extravagant fantasies, but we’ll stick to
plug and play for now. One of the pleasures of hearing Love’s talk is
that he brought alive the details of kernel and userspace
implementation for these features, a collection of interfaces known as
Project Utopia. Back-to-back with a talk from
freedesktop.org developer Havoc
Pennington on the messaging component called D-BUS, Love showed how
Linux will hopefully soon make the leap to the ordinary consumer.
Love’s and Pennington’s appearances also highlighted how the free
software community is pulling together around Project Utopia. Love
started working for Ximian around the time it was bought by Novell;
Pennington works for Red Hat. Love has deep roots in the Linux
community, Pennington among X developers.
The hardest parts of plug and play are actually implemented already:
detecting a plugged-in device, figuring out what driver handles it,
and loading that driver. Given that all this has been in the Linux
kernel for some time, the rest of the key steps toward plug and play
lie in a handful of newer components.
The first of these is sysfs. This kernel subsystem creates a
directory named /sys and populates it with pseudo-files (there’s
probably a fancy, formal term for that, but I’m not going to waste
search engine time on it) that contain all the information the kernel
knows about every device on the system: memory size, addresses, USB
information, and so forth. The sysfs facility is similar to the
familiar /proc directory and is gradually replacing it.
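To get a feel for what a sysfs consumer does, here is a small Python sketch that collects a device directory’s attributes. The function name is mine, the /sys path in the comment is just a typical example, and the code works on any directory of small text files:

```python
# Read the pseudo-files in one device directory, the way a program
# layered on sysfs might. Paths under /sys vary by kernel and hardware.
import os

def read_attributes(device_dir):
    """Return {attribute_name: text} for the plain files in device_dir."""
    attrs = {}
    for name in os.listdir(device_dir):
        path = os.path.join(device_dir, name)
        if os.path.isfile(path):
            try:
                with open(path) as f:
                    attrs[name] = f.read().strip()
            except OSError:
                pass  # some sysfs attributes are write-only
    return attrs

# e.g. read_attributes("/sys/class/net/lo") might include an "mtu" entry
```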
Next comes HAL, a common industry term standing for “hardware
abstraction layer.” On Linux, it consults sysfs and presents the
information about devices and memory in a structured form convenient
for programs to query. (If I understand this right, the lower layer,
sysfs, is visible both through an API and as human-readable text,
while the upper layer, HAL, is visible only through an API. This
seems a little odd to me.)
The third piece is D-BUS, which has nothing to do with hardware
buses. It presents a publish-and-subscribe interface that lets
programs be notified when devices are added, and lets devices be
automatically associated with programs. For instance, when you plug
your digital camera into a USB port, a film editing program might pop
up on the computer monitor. The publish-and-subscribe model is
familiar to modern GUI programmers and appears in many other
programming environments. In this model, the process being monitored
sends messages indicating events that occur within the process, while
processes that want to do the monitoring register themselves to
receive these events through asynchronous notification.
D-BUS consists of two components:
A library that implements the publication and registration.
A daemon that listens for events from publishers and sends them to
subscribers. Different users may want different tools to respond to
events such as the plugging in of a consumer device. Therefore, one
instance of the D-BUS daemon handles events that should be responded
to on a system-wide basis, while multiple instances potentially run
for each login session.
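The publish-and-subscribe flow is easy to sketch in miniature. This toy Python bus (all names here are my own invention; the real D-BUS works across processes through its daemon) shows the registration and notification steps described above:

```python
class MessageBus:
    """A toy publish-and-subscribe bus. Real D-BUS is far richer:
    it crosses process boundaries and runs per-system and per-session."""
    def __init__(self):
        self._subscribers = {}   # event name -> list of callbacks

    def subscribe(self, event, callback):
        # a program registers itself to be notified of an event
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        # the monitored process announces an event to all subscribers
        for callback in self._subscribers.get(event, []):
            callback(payload)

bus = MessageBus()
seen = []
# a "film editing program" registers interest in device hot-plug events
bus.subscribe('device-added', seen.append)
# the hotplug layer announces a newly plugged-in device
bus.publish('device-added', 'usb-camera')
print(seen)   # ['usb-camera']
```

The asynchronous-notification part is what the daemon provides in the real system; this sketch only shows the registration and dispatch logic.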
Desktop tools. These provide a convenient and appealing user interface to devices and
events; an example is GNOME Volume Manager.
sysfs, HAL, and GNOME Volume Manager are implemented now; D-BUS is a
work in progress. Another facility, hotplug, which lets the kernel
handle devices when they are plugged in, is also implemented now. (I
did not attend a related talk today, Greg Kroah-Hartman’s
session on udev.)
Another sort of internals was the subject of another session at the
conference today: Brian Aker’s “Building Your Own Storage Engine for
MySQL.” The talk was a tribute to MySQL’s clean, modular design and
carefully thought-out interfaces. The very existence of multiple
storage engines shows MySQL’s adaptability to different needs; it’s
quite impressive to learn on top of this that users can reasonably
expect to be able to create their own storage engines–and in fact
that some users seem to find this useful in order to accommodate
legacy data with particular needs.
How far is Linux from Plug ‘n Play?
Related link: http://www.oreillynet.com/oscon2004/
Wednesday morning I just missed the MAX train and arrived ten minutes late for
Tim O’Reilly’s speech. I decided to spend the time organizing my files and
setting up Monday’s report.
During the morning break there was a brief press conference with Tim O’Reilly
and Nathan Torkington. Paul Blair came with me to take pictures. Most of the
initial dialogue related to the talks I had just missed. Things were a little
rushed because Tim had to attend a panel; I also wanted to go to a speech in
the next slot. Before I left I asked Nathan about the possibility of a future
European OSCON. I remembered hearing about one before the economic downturn
and wondered if there was a chance of the idea reviving. Nathan indicated that
2005 was a possibility, then began talking about the difference between American
and European hackers. Although I was interested in the subject, I left to see
Megan Conklin’s talk entitled ‘Do the Rich Get Richer? The Impact of Power Laws’.
In the past I have primarily attended talks in the Perl track, supplemented with
a few on tools I use, such as Postgres. This year I have developed an interest
in analysis of social dynamics, which was why I found Monday night so intriguing.
Megan explained the basic scale-free model as akin to an airport hub system:
airports (nodes) which already have a lot of traffic (routes/links) get more traffic,
unless they change by becoming less appealing (lose fitness). In open source
development, projects are nodes and developers are links. Her initial theory
was that open source development would follow this model.
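The “rich get richer” dynamic Megan described can be simulated in a few lines. In this sketch (the function name and parameters are my own), each new link attaches to an existing node with probability proportional to the node’s current link count:

```python
import random

def preferential_attachment(n_links, n_nodes, seed=1):
    """Simulate the scale-free model: popular nodes attract new
    links in proportion to the links they already have."""
    random.seed(seed)
    links = [1] * n_nodes                # every node starts with one link
    pool = list(range(n_nodes))          # one pool entry per existing link
    for _ in range(n_links):
        node = random.choice(pool)       # popular nodes appear more often
        links[node] += 1
        pool.append(node)                # ...and become more popular still
    return links

counts = preferential_attachment(n_links=10000, n_nodes=200)
# a few hub "airports" end up with far more links than the median node
print(max(counts), sorted(counts)[len(counts) // 2])
```

Run against Megan’s Sourceforge numbers, the interesting question is why the real data shows a much flatter distribution than this model predicts.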
Using Sourceforge as a source of data, she graphed the number of developers per
project. There were thousands of projects with only one developer, and, at the
other end of the scale, one project with 180 developers. Most projects had no
more than a dozen developers. The ratio remained consistent even if inactive
projects were discarded. Furthermore, even highly successful projects–such
as Apache–failed to form a monopoly when compared with other projects of a
similar type. Therefore, Megan proposed, there are barriers in the open
source world which prevent this. She made several suggestions: projects
are divided into small
teams, so that they seem less monopolistic; one node simply can’t accept all the
links because there are too many links; there is a mutual selection process where
the node can choose to reject or re-deploy a link. Finally, the developers themselves
discourage a monopoly through a desire to be innovative and distinct.
The term ‘innovative’ reminds me of the comparison between programmers and
cats: trying to get all developers to do the same thing at the same time is
like herding cats. I’m not at all surprised that there are a limited number
of opinionated people who can work together on the same project.
Lunch was provided by Microsoft, but I opted for Indian food instead. After
lunch I went to the exhibition hall and collected stamps, swag and information.
Yahoo had the nicest freebies: pens that lit up if you tapped on them, purple
coffee containers, and playing cards. I also liked the bookplates from
Developer’s Library. It was interesting to see some local flavor in the form
of two booths promoting Oregon.
In the afternoon I attended the Perl Lightning Talks, hosted by Geoffrey
Avery. The network in the room was unbearably slow, to the point of affecting
Schwern’s talk. There were 8 talks in total. Thomas (who did not provide
his surname) gave a talk about his NGO in a box project, which provides free
software for non-profit organizations in a format they can access. David
Turner argued about Parrot licensing in a humorous fashion.
In the main, the talks were less amusing than they have been in previous years.
I was especially disappointed that Allison Randal didn’t follow her success
in 2002 (with a Dr. Seuss inspired rhyme on Perl 6) and her 2003 singing effort
by performing the Perl 6 Elements song (based upon the periodic-table
Elements song), despite the fact that I hinted heavily that
I would like to hear it.
I decided to leave early in order to catch up on writing and get enough sleep.
After my exhaustion Tuesday, and slight weariness Wednesday, I wanted to enjoy
Thursday with more energy.
I got several compliments on the lucky-find shirt I bought.
It reads: "Putting the RIOT back in patriot".
Related link: http://www.oreillynet.com/oscon2004/
After only four hours of sleep, I was naturally a little tired Tuesday.
First I attended the tutorial on ‘RT Developer Training’ by Jesse Vincent and
Robert Spier. I was interested in the subject because I recently spent six
months convincing my now-former employer to use RT in place of emails for
bug and request tracking. After they decided to use it, I spent two months
twisting it to their specifications in my spare time, and was pleased that I
was able to handle all the requirements without even touching the code. However,
because I never had to amend the overlay code, I wanted to hear how best
to do so should the need ever arise.
At one point Jesse named some of the ways in which RT was being used which differed
greatly from what he originally expected when writing it. In particular, he
cited a project in Germany to assist troubled teens which uses RT to track
interaction and has built in special features such as sending an automatic
message to all staff members whenever a transaction mentioning suicide is
logged.
I was so tired by midday that I skipped lunch and accepted Allison Randal’s
offer to let me take a nap in her room. This refreshed me somewhat.
Rocco Caputo’s POE tutorial was next. As with the RT talk, I was familiar
with the introductory section. Having not had the opportunity to use POE
for some time, I was interested in how it is progressing. POE is still an
innovative solution for time-slicing, and from the sound of things, it has
only gotten easier to use.
Several of us wandered down the pier for dinner. I got back just in time
for Larry Wall’s talk on screensavers. In his usual fashion, he drew seemingly
unrelated threads into a tale about Perl and its hackers.
My favorite part of the talk was when he described diving. He saw a school
of cuttlefish swimming and changing colours in unison. For no obvious reason
they split into two groups, each with its own colour scheme. He made the
parallel between this and some open source projects, noting that for all he
knew, the squid may have split over the colour choice, and, if so, they wouldn’t
have been the first group to do so.
Larry was followed by Paul Graham, who gave a talk about what makes a great
hacker and what the best environment is for innovation. Most of us appreciate
talks about ourselves (although I do not profess to be a great hacker), so this
look at the mindset of the hacker was fairly well received. I can’t imagine
that anyone–hacker or not–likes to work in a cubicle, however.
Next David Adler presented the White Camel awards. Earlier he had asked me
to hold up his laptop to display pictures of the two absent recipients, and
had suggested presenting it as if it were a famous awards ceremony by wearing
formal clothing. David had a tie, and I put on a gown. In the end we spent
more time getting dressed than on stage, but that was the intention. I displayed
the pictures of Dave Cross and Jon Orwant, and I understand the camera technicians
were able to zoom in on the laptop to display the pictures on the larger screens.
Because Jon Orwant was not present, the infamous and amusing internet quiz show
was canceled. Damian Conway filled in with his ‘Life, the Universe, and
Everything’ talk. It was the second time I saw the talk, but it was still
funny. At least one attendee vowed on IRC to switch to Perl simply for larger
doses of Damian. If you have never seen this talk, do not pass up a future
opportunity to learn about Perl, the game of Life and Klingon programming.
I left with the intention of going to bed early. I just managed to make it to
bed by midnight, in part because I stopped to purchase an extension cable. I
think this valuable piece of equipment will become a part of my standard kit.
I’ve been hollering this for years now (softly counseling in the case of my clients), and I’m glad to hear others giving the same advice. As no less a sage than Mike Kay says:
“I wonder whether [creating huge XML files] is a wise way of using XML. Even with XML databases, most databases are optimized to handle large numbers of small/medium documents rather than a single gigantic one. I don’t think that using an XML document as a replacement for a database is a particularly good idea. It’s not the job it was designed for.”
Yes, folks. XML is not designed to be a monolithic database instance implementation. If you’re dealing with gigabyte XML files, I can almost guarantee your design is broken somewhere. Between modern file systems and modern archive formats and tools, there is no reason not to decompose XML into reasonable chunks.
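One way to avoid treating XML as a monolithic database is to stream it one record at a time rather than parse the whole document into memory. A minimal Python sketch using the standard library’s iterparse (the element names here are hypothetical):

```python
import io
import xml.etree.ElementTree as ET

def stream_records(source, tag):
    """Yield one value per record from a potentially huge XML stream,
    clearing each element after use so memory stays flat."""
    for _, elem in ET.iterparse(source, events=('end',)):
        if elem.tag == tag:
            yield elem.findtext('name')
            elem.clear()            # discard the processed chunk

doc = io.BytesIO(b"<db><record><name>a</name></record>"
                 b"<record><name>b</name></record></db>")
records = list(stream_records(doc, 'record'))
print(records)  # ['a', 'b']
```

The same idea applies to splitting a gigabyte file into per-record chunks on disk: process each element as it completes and never hold the whole tree.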
Update: for a bonus, see Kay’s argument against some overcooked RDBMS dogma. I strongly agree with him here, as well, even though I’d guess Fabian Pascal and gang are still looking for scalps of such heretics.
Related link: http://zephyrfalcon.org/weblog2/arch_e10_00590.html#e591
Good old Daily Python-URL led me to Hans Nowak’s musings on controlling the operation of generators from the “outside” once they had started running. Interestingly enough, his technique is precisely the same as one I (successfully) experimented with in my recently-announced Scimitar, an ISO Schematron implementation in Python.
Warning: This article is soaked in the inevitably arcane ideas introduced by the exotic code control flow opened up by Python’s generators. I do think these ideas are well worth soaking in if you’d like to harness Python’s full power. Way back when, I was very excited to hear of the inclusion of generators in Python because I had many plans to use them to supercharge XML processing in Python. I’m starting to harvest a good lot of that fruit for the market.
In my own work towards more flexible generators I started with Dave Mertz’s brilliant explorations in his article “Generator-based state machines”. The Mertz technique simulates co-routines (and Mertz has extended the technique to simulate microthreads). This is done by having generators yield a control cookie to a central scheduler in order to indicate which generator is the co-routine to which control should be passed. The generators also yield a global value (which Mertz called “cargo”) that can be seen within the body of subsequently invoked generators. No summary can do justice to this idea. If you haven’t read the article linked earlier in this paragraph, consider opening a new browser tab or window and giving it a quick read.
The Mertz technique is certainly sound, but what Hans and I separately worked out was the refinement of encapsulating the generators in a class, and using instance values to share cargo between generators. This tweak not only makes the code a little cleaner, but it provides some allowance for re-entrance. I also found that in most cases, you can ditch the scheduler and lose very little of the power of the Mertz technique using only semi-co-routines (roughly, code that invokes generators which can invoke further generators).
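A minimal sketch of that refinement, with invented names: the generators live in a class, an instance attribute plays the role of Mertz’s “cargo”, and one generator delegates directly to another with no central scheduler (a semi-co-routine):

```python
class LineScanner:
    """Generators encapsulated in a class; self.cargo is the shared
    value that Mertz passed globally, and _words delegates straight
    to _classify instead of going through a scheduler."""
    def __init__(self):
        self.cargo = None

    def tokens(self, text):
        return list(self._words(text))

    def _words(self, text):
        for word in text.split():
            self.cargo = word              # share state via the instance
            for item in self._classify():  # invoke the sub-generator
                yield item

    def _classify(self):
        word = self.cargo                  # read the shared cargo
        yield (word, 'number' if word.isdigit() else 'word')

print(LineScanner().tokens("route 66"))
# [('route', 'word'), ('66', 'number')]
```

Because the cargo lives on the instance rather than in a global, two scanners can run side by side, which is the re-entrance allowance mentioned above.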
My experiments led to a pretty powerful mechanism for taming the notorious complexity of SAX processing. SAX involves having the XML parser call back into user code to handle elements, text, etc., not unlike classic event-driven GUI programming. This typically calls for complex state machines to stitch the various snippets of handler code into coherent logic. In parallel to my experiments with generators, I’ve been experimenting with techniques for helping automate (no pun intended) the state machine design. See my latest article, “Decomposition, Process, Recomposition” for an example of how such state machine factories can boost XML processing. The _state_machine class in listing 2 is the crux.
In Scimitar, rather than making the state machine easier to architect, I went for largely eliminating it through the co-routine approach (as I mentioned, semi-co-routines seem to work just as well in practice). Specifically, I passed SAX events to generators which would then effectively knit the separate call-backs into one smooth run of code, allowing me to use local variables and the like to make state management a cinch.
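A toy version of that approach (all names are my own invention; Scimitar itself is more elaborate): each SAX callback stores its event in an instance attribute and advances a single generator, so the handling logic reads as one straight run of code instead of scattered callbacks:

```python
import xml.sax

class GenHandler(xml.sax.ContentHandler):
    """Feed SAX callbacks into one generator via a shared 'event'
    attribute (the cargo trick), knitting the call-backs together."""
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.event = None
        self.names = []
        self._logic = self._collect()
        next(self._logic)              # run up to the first yield

    def startElement(self, name, attrs):
        self.event = ('start', name)
        next(self._logic)              # hand the event to the generator

    def endElement(self, name):
        self.event = ('end', name)
        next(self._logic)

    def _collect(self):
        # One coherent loop instead of scattered handler snippets;
        # ordinary local variables persist here across events.
        while True:
            yield                      # wait for the next SAX event
            kind, name = self.event
            if kind == 'start':
                self.names.append(name)

handler = GenHandler()
xml.sax.parseString(b"<a><b/><c/></a>", handler)
print(handler.names)   # ['a', 'b', 'c']
```

The payoff grows with complexity: nested-element logic that would need an explicit state machine becomes nested loops and locals inside the generator.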
I am very excited by these fresh directions in Pythonic XML processing, and I expect to marry the self-assembling state machine technique with the semi-co-routine technique in future Python/XML columns, and in future releases of running code. I do want to remind readers that Scimitar provides a very practical example of these techniques, implementing a Schematron to Python compiler in less than 500 lines of Python/SAX code. Even if you’re not interested in XML validation using Schematron, have a look at the code and let me know what you think of the basic technique. I already have plans for streamlining it, but that’s for another release.
I must say that from my perspective it’s hard to imagine a more exciting time to be processing XML using Python.
Have you tried co-routines or semi-co-routines in practice?
I’ve been walking around with my microphone asking people “You’re at a party and someone asks you what Perl is. What do you say?” I thought this would be an easy question for everyone, but more than a couple of people have immediately gone deer-in-the-headlights at the question. Surely everyone has gotten this question before.
The ones who don’t immediately go into shock fall into some distinct categories. There is the “maps my mind” crowd. I started to wonder if everyone had read the same cue cards. Is there a memo that I missed?
The clever crowd talks about oysters. One guy told me that his wife bit into a raw oyster to discover a real-life pearl. He ended up setting it in a ring.
Then there are the salesmen, whom we might have called evangelists long ago; they have since polished their pitch and can speak in complete sentences without steaming at the ears. I’ll come back to that later.
Chris DiBona assures me that the computer language Perl comes from a tawdry romance that Larry Wall had with Minnie Pearl (ayep, there really is a miniperl, too). She suffered a debilitating stroke in 1991: the same year that O’Reilly & Associates published Programming Perl, the first book about Perl. Coincidence? Or not? No one seems to have any answers for this. What is everyone hiding?
There is an undercurrent of the B word here, although I dare not say its treasonous name. Most of the Perl people I talk to are not going to the Perl talks, and most of them give the same reason for it. When I ask people what they want to see, they tell me about things in the Emerging Technology track. Today’s favorite seems to be “Darwinian Software Programming”. One Perl hacker ended up there because he had the wrong room, but he stayed because he liked the talk. Despite the many talks with “Perl 6” in the title, the Perl track seems ho-hum. I’ll have to catch Nicholas Clark’s “Perl 5.8.5 Was Boring (And Why You Should Be Excited by This)” later today.
Today’s free lunch is from Microsoft, and it’s one of those carbohydrate-hysteria-wrap menus. I don’t think they get to choose the menu, but leave it to them to put their name on crap. What I could really use is a grilled cheese sandwich on sourdough bread. This is the left coast after all.
It’s time to walk around again. I’m the silly guy with the microphone and the headphones. If you want to tell me something about Perl, just come up to me and I’ll listen (and record). :)
Related link: http://conferences.oreillynet.com/os2004/
The morning dawned cool and overcast, much to the relief of everyone
after the previous week’s heat wave. The industrial-strength hotel
air-conditioning was almost too intense.
I started the morning with Damian Conway’s ‘Best Practices’ tutorial.
I recently finished participating in the lengthy process of setting guidelines
for an organization’s code and wanted to see if Damian had some interesting
ideas we hadn’t considered.
Damian dove straight in with the issue of style. In my experience, style
is a battleground and the issue is best saved for the end, when everyone
is worn out from discussing the important issues. Damian, however, is braver,
and suggested that because which style you select matters less than
choosing one and sticking with it, the issue should be tackled right away.
I was pleased to learn that God (according to Damian) and Damian both like
K & R style, a 4-space tab, and vi. Although I’m not sure I agree with
everything this pair supports, some arguments were quite persuasive. I will
certainly try things Damian’s way at least once.
It was when Damian approached the subject of naming schemes that I knew why
I was there. He put into words the rules I’d internalized but couldn’t
express. For example, a subroutine name should give an indication of what
it does, so appropriate naming templates include verb_noun
(set_surname) and verb_noun_preposition (fetch_user_by).
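The templates carry over to any language; a hypothetical Python sketch (the data shapes are my own) of the two naming patterns:

```python
# Hypothetical names following the templates: the verb says what the
# routine does, the noun says what it acts on.
def set_surname(user, surname):            # verb_noun
    user['surname'] = surname

def fetch_user_by(users, surname):         # verb_noun_preposition
    return [u for u in users if u.get('surname') == surname]

alice = {'name': 'Alice'}
set_surname(alice, 'Liddell')
print(fetch_user_by([alice], 'Liddell'))
```

The preposition form reads naturally at the call site, since the argument completes the phrase: fetch_user_by(users, surname).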
After lunch I attended Tim Bunce’s ‘Advanced DBI’ tutorial. Having previously
limited my DBI complexity to transactions, I was eager to learn more about
what I could do with it.
Tim addressed optimization, error handling, and debugging. His tutorial,
which is also available on CPAN, is lighter reading than the full DBI
documentation. I look forward to the updated version of his book.
I joined a group of people who were going into town for dinner, then had
the pleasure of browsing in two of Portland’s nicer shops: one selling
used and new books, and Buffalo Exchange (used clothing). We concluded
the tour with Mio Gelato (new ice cream). All these establishments are
located near 11th and Burnside and are recommended. I’ll be wearing one
of my purchases Wednesday.
After returning to the hotel, Michael Schwern, Bill Odom, and David Adler
offered to help me write this article. We went to the hotel bar, which,
in retrospect, was a poor choice for productivity.
I was on the third paragraph when a woman from a nearby table came over
and introduced herself as Patty. She explained that she was a retired
educator and hoped we could answer some questions about OSCON attendees.
She was puzzled by the fact that she saw people sitting near one another,
concentrating on their laptops. In her eyes, this showed a disturbing lack
of human interaction. We quickly explained that most of the evening had
been spent with our laptops tucked away, but now we were working. As for
many of the people she’d seen, this early in the conference it was likely
that they were speakers polishing their presentations.
This led to further talk on the nature of communication, the merits of
face-to-face and online interaction, and the purpose of conferences within
the open-source and free-software communities. I think that by the end,
Patty was convinced that we aren’t antisocial and isolated. When we
talk with like-minded people around the globe, collaborate with someone
without knowing her or his given name, and share our code with complete
strangers we’re communicating. By explaining more of the ideals of the
community I was able to think about some of the assumptions of what we
consider normal interaction, and what someone like Patty considers normal.
The most difficult concept to explain was the meritocracy and the
position speakers occupy. Patty had overheard two people on the elevator
talking about using IRC to chat about a speaker during his or her
presentation. Patty considered this unbelievably rude, and was incredulous
when we spoke of people who speak with IRC projected behind them.
We were unable to persuade her to give a lightning talk to present her
opinions on audience courtesy, in part because she left Tuesday.
It was interesting to note that although Patty peppered her questions
with phrases about not being part of our world she exhibited characteristics
that are valued in this world. She could have observed us and left the
bar with a poor opinion of us, instead of asking questions and engaging in
dialogue. It was such an interesting experience–and certainly the first
time in quite a while that I’ve encountered someone with no knowledge of
the world I live in–that I am not only writing this Tuesday, I didn’t
return to Château Poe (a friend’s house) until 03:00 and still
only regretted drinks, not the lack of sleep.
I’m glad that I don’t have to help out too much with the Stonehenge party: it sounds like it’s a really big production. It’s coming down to the wire and the details are working themselves out as they always do: at the last minute.
I started off sitting with a couple of people talking about open source employees. One person brought up the fear in his company that when people gave things away, they would later leave and try to sell the company updates to the very stuff they had locked it into. I can understand that. Some places even have rules against this sort of thing: you can’t become a contractor until after some long period that removes any advantage you might have from working there.
Other places want to own your mind, your body, and your soul. They aren’t interested in hiring you unless you will work for them for twenty years. They see employees as an investment rather than a trade. Anything that gets you out from under their control, such as participation in the open source community, frightens them. They can’t control you when you have opportunities. It amazes me how companies want to hire mediocre people just so they won’t have to hire again in a couple years. They don’t see any advantage to retaining employees by keeping them happy.
On the other hand, some places expect that you are going to change employers every couple of years. They recognize this and adapt to reality. If they can’t help their employees move forward in their own lives, they know that the employees are going to walk out. Instead of sticking their heads in the sand, they make happy employees. I know several big companies that have insanely loyal employees for just that reason. Paul Graham said as much in his talk last night: good people care more about how they are treated and what they work on than how much they get paid.
At the moment I am back in the press room, which is right next to the big room where the big talks are going on. I saw part of Tim O’Reilly’s talk, but since I tend to keep up with Tim’s writings and talks, I knew most of the talk already. However, I can still hear it while I sit across the hall. Not only that, I have subethaedit open, so I can read the transcript as it happens.
Subethaedit collaboration is this amazing thing (and Randal has been bugging me to start using it with him). If students can bring laptops into the classroom, this is going to be their killer app. Conference attendees are taking notes in the same document (each author gets a color), so they can add things that the other people may have missed. They end up with a very good set of notes, in real time. Authors fade in and out (probably chatting on IM) and other authors join in. And everyone, including me, can read it.
The subethaedit notes tend to read like a screenplay, as the authors include notes not only on content, but also on presentation and environmental details. If a picture shows up on the screen, they describe it inline:
blah blah blah
[picture: exterior nature scene]
blah blah blah
[picture: Sam Adams beer]
I think that this could even be the new boring-meeting game. Everyone looks like they are typing or adjusting numbers in their spreadsheets, but they are really just typing back and forth to each other. Maybe they have another document open with the real notes and everyone is taking one set of notes.
This could also be a really good tele-conference tool. I have always tried to convince my employers (back in the day when I had a job) that we should be on IM when we are on conference calls with clients. We could warn each other about things we shouldn’t say, remind each other of things we should say, or tell people to stop talking immediately to avoid conversational land mines. Add subethaedit and you don’t have to designate a scribe because everyone can take notes.
Now, related to all that subethaedit raving, I have also noticed that the terminal room (sponsored by Apple) is almost always nearly empty. Most everyone seems to have wireless, the coverage on the conference floor is good, and even my aluminum PowerBook gets four bars. We’ve come a long way since the first YAPC in Pittsburgh, when wireless was a novelty. There are a lot of PowerBooks and iBooks here, and I bet most of those people bought AirPort cards when they got the computer. The PC people seem to favor the Lucent (or whatever their name is now) cards.
In the fireside chat with Tim O’Reilly and Nat Torkington, Nat hinted at a possible OSCON in Europe, but he didn’t narrow it down more. Besides bringing the conference closer to a faraway market, it would accommodate the many people who don’t want to travel to the US for political reasons. That’s all he would let on. I’d be happy if they could just make it as far east as Chicago.
Related link: http://conferences.oreillynet.com/os2004/
The second annual Portland kernel hackers’ BOF took place last night,
bringing in seven programmers (five men and two women) to speak to a
dozen serious, hard-core Linux enthusiasts. Portland is a major, if
little-appreciated, computing site, home to a large number of Linux kernel
developers (mostly working on Linux drivers and driver-related
subsystems) who meet socially once a month and are employed by OSDL,
IBM, Intel, and a variety of other local companies.
It was a pretty self-assured and convivial bunch.
Copyright issues raised their head, too, as one would expect. (A Moot
Court on Monday night, led by professor Pamela Samuelson, laid out
both sides of the SCO v. IBM case, not necessarily with equal respect
for each side, and led to a lively discussion of the legal risks in
free software development.) The kernel developers agree that one
should ask contributors to verify that they have the right to donate
their code, but think that the safeguards put in place by major
contributors (notably IBM) are more than sufficient to ensure the code
is clean.
This year’s White Camel Awards, which Perl Mongers started in 1999, go to:
Me! Holy cannoli! No wonder Dave wouldn’t let me in on the secret.
The other recipients are Jon Orwant and Dave Cross, who couldn’t be here this year. Geez, all the cool kids skipped, and I’m the lone dork who has to walk up to the stage.
I’ll have to write more on this later. I have to go meet Randal who’s setting up wireless hot spots for the Stonehenge party.
Fewer sessions today, but far more hallway meetings. I’m still surprised by the long-term effects of the things I’ve said in the past, and amazed at what a well-connected group we are.
This morning started with Joe Celko’s Advanced SQL tutorial, a big disappointment. I’m a huge fan of his SQL For Smarties, but the presentation didn’t have any direction. I bailed after 45 minutes to go have a late breakfast / early lunch and crank out slides for my second Lightning Talk.
Somewhere in the halls, someone stopped me and said “Hi, I remember you from last year. You talked about your
10 Favorite Non-O’Reilly Books, and I picked up Seven Habits Of Highly Effective People. You’re right, it’s great.” It’s exactly why I gave that five-minute talk, and yet it surprises me (and pleases me) that it actually worked.
I’ve barely talked to brian d foy, but reading his blog entries still amazes me. I never realized quite how well-known he is. Between him, Bill Odom and Tom Limoncelli, I think they know everyone in this building. And I thought I was sort of well-connected…
I got my shipments of The Perl Review this afternoon, and a lot of my afternoon was talking to magazine people in the press room. There are a lot of experienced publishing people here, and they have all been very kind and willing to help. I have to talk to these people again after the conference so I can take advantage of their experience. I’m glad that I knocked myself out to get TPR ready for OSCON because it has let the right people know that I am serious about this thing, and they are here to meet me and offer their help. I have one copy I’m keeping all of my notes in.
Besides that, I talked to, or rather, listened to Brian Ingerson tell me how Kwiki can help TPR editors and authors work together. I’m sold. I didn’t have a use for Kwiki before, but now I do. It all sits on top of version control stuff, which is one of my big requirements. We also talked a lot about IO::All, which abstracts most input/output so it looks the same. I think I mentioned this before, but I keep running into Ingy as he toils away on his kwiki stuff.
I had another good steak at Sanders (down by the river and around the corner). I really am not that much of a red meat eater, but I’ve been a bit decadent lately. This obsession will pass, and maybe in several years my arteries will unclog too.
This evening’s activities included Eric Raymond announcing some new Open Source awards, saying that we need such institutions in the community to get the attention of the non-technical world. This is not the new thing he wants to make it out to be: Perl Mongers started the White Camel awards in 1999, and gave out cash prizes to the recipients. They have given out the awards every year since, and during that time, other communities, like the Apache folks, gave out awards too. This reminds me of all of the press releases in my email this week. Every one of them claims to be the leader of their field. How many leading companies can there actually be?
Larry Wall is giving his “State of the Onion” talk, but without much state or onion: mostly screensavers and his take on them. Still, Larry says that according to Sturgeon’s Law, which holds that 90% of everything is crap, since this is his eighth talk he still has two more chances.
While I am half-listening to Larry’s talk, I wanted to transfer some images from my phone to my computer via Bluetooth. I told my phone to scan for Bluetooth devices so I could connect to my computer. I had to wait a while, since all the PowerBooks around me apparently have Bluetooth, as does Tim Bunce’s phone, Nat Torkington’s something-whose-name’s-too-long-to-fit, several Macs, and lots of other computers. I stopped scanning after I found my computer, but by that time I had already found over 50 other devices. Bluetooth has been around for a while, but all of a sudden it’s everywhere.
Paul Graham is talking now. He gets to follow Larry Wall, poor chap. I’ll wait to write about that later since only two people are still reading at this point. The rest are probably actually listening to Paul explain why Dilbert is not just commentary, but office anthropology, and why big companies squander their technological talent.
Randal needed to sleep in today, so we skipped the early morning portion of the conference. We thought we would beat rush hour driving into town, but apparently the construction of the highway has most of the Oregonians confused. Randal says nobody has ever taught them how to merge. He kept yelling at other cars “It’s like a zipper! One of them then one of you!” Randal drives a really, really fast car in a slow car world.
He dropped me off at the hotel then had to rush off to take care of some Stonehenge party business. I think they are going to have about 15 people working this party, counting security and the bartenders. Bill was up early getting ready for the party load-in. I don’t think either of those guys is going to get much sleep until Thursday.
It seems that a lot of people at the tutorials are still doing phone support for the people back at work. There are usually one or two people in each of the hallways explaining over the crappy cellular connections how to log in and run various commands. It makes me glad not to have a real job. Wasn’t life much better before the telegraph?
All sorts of O’Reilly people are happy to see that I am alive. Really. It’s kinda spooky, actually. Tim O’Reilly came up to me today for a quick chat, which is quite the honor considering he has people for that sort of thing. He’s usually so busy that he’s practically jogging in the hallways. I guess everyone is happy that I made it back from Iraq, although CNN has infected their minds with dangerous stories that make them more scared of the war than I was when I was there. Still, I like the attention. I should do that every other year. Or not.
Steve Mallet went out geocaching. This was a new thing to me: apparently people give out some clues along with an earth coordinate, and you have to search for the little prize. It’s not as simple as just showing up at the right spot: you have to find the object, which may be hung from, stuck under, or on the other side of something. I’d like to try this out with the Magellan GPS Springboard module I have for my Handspring Edge. Geo-anything seems to be a really hot topic. The trick is to get the data. The US has TIGER, and I hear that Denmark has a lot of geo data available. Other places are kinda spotty. I read once that France is the most mapped country because the artillery folks of the various wars needed elevation data to accurately blast each other to bits.
I talked with Brian Jepson (O’Reilly editor and Rhode Island resident) about the Salem Witch trials and the Rhode Island vampire Mercy Brown (who died in 1892, but keeps getting dug up in various vampiric scares), haunted houses in Rhode Island, and kids who have multiple cell phones among which they pass around a single SIM card.
And now the news:
The Perl Review is here. I have them. They are sitting next to me. We have 500 for the conference, leaving 500 back in Chicago for subscribers. Most of those are already spoken for, so if you want one, you need to subscribe. If you don’t want to do that, you can bid for one of the first ten copies to come off the press when Allison Randal auctions them off during The Perl Foundation fund-raiser.
O’Reilly is hiring: Can’t get O’Reilly to accept your book proposal? Take one of their advertised software engineer jobs and write programs to take over other people’s books. If you get the job, see if you can help me out with my bookshelf idea.
Lots of cool things at Powell’s: See the Powell’s booth in the exhibition hall (down there in the bowels of the hotel) for a free Portland Walking Map, a 20% discount coupon good at the main store during the conference, a “Carpe Noctem” t-shirt, and a Python Quick Reference laminated card that is either a really fat bookmark or a small piece of paper. Peter Scott (author of Perl Medic) will be speaking in the Powell’s booth on Thursday at 12:20pm. Buy his book: it’s really cool.
Runners unite: I brought my running shoes with me, but I probably won’t have time to go out with [robertom AT sas DOT upenn DOT edu] who has a note on the announcement board. He’s looking for running partners for 2 to 4 mile runs.
Rampant exhibitors: I think I need a hard hat to be on the lower level of the hotel. The exhibitors are loading in their displays, fancy booths, laser light shows, and giant inflatable penguins. At least, that’s what I think is in all of those boxes behind the black curtains. Watch yourself when you are down there.
White Camel Awards: The Perl Foundation is giving out this year’s White Camel Awards, which Perl Mongers started, um, a long time ago. David Adler still won’t tell me anything about it even when I show him my fancy, all-access purple press badge. I tried to get the news out of Allison Randal, but apparently she’s under an oath of secrecy too. The White Camel Awards happen right after Paul Graham tells everyone about painters and hackers and probably Lisp too.
Free O’Reilly books: I haven’t used O’Reilly’s Safari service so far because I’m really cheap and I don’t have time to read all the real O’Reilly books I already have. Safari is a lot more than just the O’Reilly books though: most of the major publishers seem to have their books there. It looks like there is already a 14-day trial period anyway, but with that one I would have to give up a credit card number. The OSCON preview just wants my address. So far it looks interesting, although I don’t think I’d give up real books for it.
New Sea Otter book: I hear rumors that David N. Blank-Edelman has a new Perl for System Administration in the works. Since he first got the sea otter as his cover animal, I’ve almost lost all interest in being an O’Reilly author. It’s even worse now that there is a kitten on the cover of some other book. Are all the good animals already taken?
John Kerry Town Meeting on Friday: Not really. I just made that up. Supposedly Bush and Kerry are running very close in the polls in Oregon, but Kerry has a prior engagement. His staff have not returned my calls, but that’s normal.
Related link: http://www.panix.com/~comdog/moblog/
The night before I left for OSCON I decided to get email service for my phone. It’s cheap: t-zones is $5 a month for POP email and some other lame services. If I had known that earlier, I would have gotten it straightaway, but I was confused by the several internet plans T-Mobile has.
After working out a few issues, like finding a mail server to send the pictures through, I have it working on my personal site. After the conference, I’ll explain what I did to make it work, but it’s pretty close to the photo blog Wireless Hack.
After lunch with Rael, I decided to wander around for a bit.
Schwern showed me the Aegis source control system he’s using for some of his stuff. It has some cool ideas: every change is a branch, you have to declare what the change is going to accomplish before you start, you have to write a test that passes with the change and fails without it, and a reviewer checks it before it merges back into the main branch. Sounds great, right? Well, it has some issues, like not knowing anything about networks. Schwern says he’s had enough and is going to implement the same cool things with cvswrappers.
I ran into Adam Turoff and Jesse Vincent (Best Practical, the fine folks that make RT), and we chatted about this and about that. I brought up collaborative filtering for search.cpan.org and cpanratings.perl.org with the idea that we will never be able to keep crap and spam out of it, so we should just make it irrelevant. Instead of each rating having the same weight, a user can pick out reviewers or personalities and assign weights to them. Then that user sees ratings geared towards them as modules that their chosen reviewers like float to the top of the list. Someone mentioned mining CPAN to see which modules the popular authors actually use in their modules, so someone could ask questions like “Which modules does Tim Bunce like?” (Answer: it seems he wrote most of the good ones, so he likes his own.)
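The weighting idea from that conversation can be sketched in a few lines. This is only an illustration of the scheme we were kicking around, not anything that exists on search.cpan.org or cpanratings.perl.org; the reviewer names and numbers are made up, and it’s in Python rather than Perl just to keep it short:

```python
def personalized_score(ratings, my_weights, default_weight=0.0):
    """Average a module's ratings, weighting each reviewer by how much
    this particular user trusts them.  Reviewers the user never picked
    (including spammers) get default_weight, so their votes count for
    nothing.  Returns None if no trusted reviewer rated the module."""
    total_weight = 0.0
    weighted_sum = 0.0
    for reviewer, stars in ratings.items():
        w = my_weights.get(reviewer, default_weight)
        total_weight += w
        weighted_sum += w * stars
    return weighted_sum / total_weight if total_weight else None

# Hypothetical data: two reviewers I trust, one I've never heard of.
my_weights = {"timb": 3.0, "schwern": 1.0}
ratings = {"timb": 5, "schwern": 4, "spammer": 1}
print(personalized_score(ratings, my_weights))  # 4.75
```

The point is that the spam never has to be deleted; a rating from someone you haven’t chosen simply carries zero weight for you.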
David Adler (NY.pm despot) is the social butterfly of the conference, but I seem to keep missing him this year. A lot of Perl Mongers would have never happened without his help. I think he’s the third person I met at a Perl Conference, right after Adam Turoff and Clay Irving, two other New Yorkers. If you need a New York restaurant or bar recommendation, David is the guy to see. He practically runs the city. He can also fill you in on the finer points of Buffy, at least until the London crew get into town.
Jim Brandt wants to go to a Maple Leafs game in Buffalo. I guess the Canadians figured out that it is easier to get tickets for the away games, so they have been taking over the crowd. Damn it, it’s time to fight back.
I met for the first time a lot of people I already know, and although they tell me we haven’t met, I somehow can’t believe it. Email is this evil thing that makes me think I already know people before they ever come up to me at a conference. Odd, that.
A lot of people have been making noise about the noise on CPAN. Everyone seems to want to create new modules rather than working to improve and extend existing modules. Certainly diversity is good, but how many option parsers, config readers, and DBI abstractions do we really need? Instead of a lot of great modules, we have gazillions of almost-good modules. I get to nudge people in the right direction as a PAUSE admin, but I can’t really stop people from doing their own thing.
For the rest of the afternoon, I holed up in the press room and talked to some other magazine people about The Perl Review. Five hundred copies shipped today; I have the tracking numbers, and they were scanned in Kentucky about an hour ago. They should be in Portland tomorrow, and if you didn’t make it to the conference, you can still subscribe.
Towards the middle of the afternoon, I saw Larry Wall wearing a martial arts outfit and walking away from Damian’s Aikido talk. I didn’t see Damian come out of the room, nor did I see him for the rest of the evening. I hope he’s okay, and that those paramedics were for something else.
After the normal day was over, Randal and I went to Peter Scott’s (author of Perl Medic) annual Perl Trainers BOF. We got to meet some new people, but the story is the same: how do we drum up more business? Everyone hurt for business over the past two years, and only Stonehenge seems to see an uptick at the moment. We talked about Perl 6 for a while, but no one seemed to think it was important to worry about yet. A conservative estimate put it four years off for training, and sometime after Randal updates Learning Perl for the new major version.
The SCO Moot Court looked popular, and the O’Reilly staff was holding off a near riot of people waiting to get into the room. This may not be as dangerous as the Democratic National Convention, but I think SCO does not have many friends here.
Someone walked off with R. Geoffrey Avery’s bag, including his computer and camera. Bad karma indeed. Just return it to the front desk and life won’t kick you between the legs later on.
There is a rumor going around that Jon Orwant is pregnant, which is why he can’t make it to this year’s conference. Nat Torkington called it an ectopic pregnancy, but I’m not sure that is what it is. Apparently this is not a new thing, so Jon won’t make it into the record books.
Randal and I had an appointment with a steak dinner at one of his favorite restaurants, so off we went. I have been back in the States for about 3 months now, and I have been looking for the perfect steak. I have been disappointed until tonight. This place knows what “rare” means, and served up a tasty cut of cow. Those vegetarians that decided to skip out on us should really think about what they are missing. Again, Randal knows all of the staff and the waitresses, and he was recruiting more people to help out with the Stonehenge party. I think the party might have a staff to rival the official conference staff, and maybe even be just as expensive. If you want to party with the big boys, you have to talk to Bill Harp.
Back at the Stonehenge Ranch, Bill has his party crew getting ready for the Stonehenge party on Wednesday. He showed me some schematics of where everything will be, but we didn’t have much time to talk because he was supervising the people loading everything into the van. I guess they are starting the load-in tomorrow to be ready for the next day. I asked if we could get dancing chihuahuas for the party, and after thinking about it for a moment, Bill said “next year, I promise.” I guess it’s too late to change the plans now. Remember, get the Stonehenge t-shirt to get into the party for free.
So far I have not joined any new projects or exciting opportunities that seem to be all the rage at conferences. I have come to realize that they are really just a symptom of mass hysteria that dissolves naturally after three weeks, leaving only a half-designed website and an idle mailing list. Nat told all the “oldbies” to get out of the way to make room for fresh blood anyway, so all of us thirty-somethings will just browse the web and use IRC when people start talking about new things for Perl.
Tomorrow I get to sleep in, which suits me just fine. I want to hack on my OSCON moblog, so I will be up pretty late.
Woke up at 7am, which was just fine with me, since it was a nice late 9am Chicago time.
Damian Conway’s “Best Practice Perl” was a godsend. It’s a good conference when the first tutorial on the first day pays for itself. The handouts were a much-needed “coding standards for Perl” that goes far beyond the perlstyle man page. I’ve been working on departmental coding standards, but most of the heavy lifting is in the handout. Of course, he couldn’t be perfect, and I couldn’t disagree with him more about not using inline POD.
The afternoon brought Damian’s “Presentation Aikido,” specific and useful information on giving presentations for people like me. It was funny to see how much of his talk echoed my hiring talk on Friday, since job interviews are a subclass of presentations.
Tomorrow: Joe Celko on SQL!
First, the announcements:
Ingy Shirts: Find Brian Ingerson to get your Ingy shirt. They come in pink or blue and only cost $10. Ingy is going to show off his cool IO::All module that makes input and output operations look the same: sending email, uploading a file with FTP, saving to a local file, and much more.
subethaedit: Get version 2 of this collaborative authoring software for the Mac and turn on Rendezvous. Watch everyone else taking notes of the sessions they are in.
PGP key-signing party: Meet at the Lower Level 2 stairs at 6:05pm on Wednesday. All the geeks are here, so why not?
I started today by sitting at the back of the room while Randal and Tom Phoenix got ready to give their Alpaca talk. Jim Brandt came by to give Randal the “Festivus (Software for the Rest of Us)” video that premiered at YAPC. If you haven’t seen it, then you should have paid to come to the conference (or just download it).
After that I got sucked into a language discussion with Adam Turoff, Ward Cunningham, and Schwern. Adam explained some of the finer points of Lisp, I said that my impression of Java is that people do it because they get paid to, and Ward talked about Camp Smalltalk that went on last week in Portland. Adam’s question for everyone he meets this week is “What is the programming language of the future going to look like?” I say it will look a lot like what we have now unless we run into some interesting new problem that needs a new approach.
Everyone seemed to agree that people are taking a lot of things from Lisp and Smalltalk and using them in their own language. If you aren’t at the conference and within Adam’s grasp, you can post your own answers here.
Steve Mallet is making videos of normal, everyday Portland people weighing in on the standard open source debates, even though they have no idea that Perl doesn’t have an “a” or that Java is not coffee.
I had lunch with Rael Dornfest (editor of the Hacks series). His website, MobileWhack, makes enough money to get him and his fellow site owners a new gadget every couple of months. At lunch I had my HandSpring Edge with a Magellan GPS and my Nokia 3650 with a Bluetooth Jabra headset. He had an iPaq with GPRS, 802.11, and Bluetooth, which is definitely drool-worthy. His PDA is even a wireless stumbler. I also learned that the radiation patterns of the AirPort Extremes are apparently very flat, so you have to turn them just right to get signal where you want it. Besides that, Rael told me about a lot of cool things that he has in the works, and I promised not to repeat any of them. They are really cool though.
I tried to go to Damian Conway’s “Presentation Aikido” talk, but people were already sitting in the aisles when I got there. For some reason he got a smaller room for this afternoon.
Powell’s, the famous independent bookstore, is running the book concession this year. They have the O’Reilly books and a lot more, including a free guide to the O’Reilly animals.
A new company called sxip is the first advertiser on the jobs board. The advertisement is not long on information, but the company is based in Vancouver. “That must be a Dick Hardt thing”, I thought. Indeed it is. He’s moved on from ActiveState to something new. Good luck, Dick!
Randal picked me up at the airport, and then we went to the Marriott to register and pick up our badges. After going up and down several escalators, then trying a couple of elevators, we found the right combination of rooms and people to pick up speaker badges on the 16th floor and my press kit downstairs.
Some things are new this year. Gone are the associates of O’Reilly & Associates, and the big story is where they went. The latest rumor has them buried Jimmy Hoffa style under the new O’Reilly campus. It is now O’Reilly Omni-Media. No, wait, that’s Martha Stewart Omnimedia. It’s just O’Reilly Media, Inc., and they have a new book: Tim O’Reilly in a Nutshell. It’s a real book that is a collection of essays from Tim. It doesn’t have an ISBN, so this might be only a conference giveaway.
It looks like the conference will be packed this year. The hotel is smallish, and I hear that there are around 1,500 paid attendees (and various hangers-on like myself) which is an increase from last year. All of those people will compete for wireless signal and power outlets on the conference floor. Maybe this signals an improving economy.
We met a bunch of people in the registration room. David Adler (NY.pm reigning despot), this year’s White Camel Award committee chairman, won’t let on any advance news about who gets the trophies this year. Paul Blair (Portland.pm founder, now of Wisconsin) grabbed a bunch of people and herded them to the sports bar in the hotel. I ended up between Randal and Vivek Khera, and across from Jim Brandt (this year’s YAPC::NA organizer). Jim and I mostly drooled over Lance Armstrong’s bike and the latest line of Trek carbon-fiber road bikes. Jim chided me for ordering the pub’s buffalo wings. Andy Lester (Phalanx despot, testing guru, “How to Get a Job” speaker) showed up, but sat on the other end of the table.
I hung out with Randal for the rest of the night. The sports bar party fragmented, and part of it (including Paul Blair, former Portland resident) moved to a local pub where Randal wanted to recruit people to help with the Stonehenge party on Wednesday night.
The Stonehenge party is always a huge event: last year, Bill Harp (Stonehenge’s event planner) managed to get live alpacas into Bar 71. This year the party is going to be even more insane, but Bill is not letting on with his secrets. He does have a room full of boxes for the party giveaways, and it looks like he’s expecting close to a thousand people. He won’t let on how much it costs, but I know how much business Stonehenge has to book to cover the costs, so I can make a guess. To get any fancier, he’d need Dennis Kozlowski’s level of decadence. It’s free liquor and food all night and into the morning, with free luxury transportation from the hotel to the bar, so there is no reason not to go. Bring your conference bag so you can carry away all of the freebies.
Randal and I split off from that group and made one more stop before heading back to his place. He needed to get even more people to help with the party, so he stopped at his favorite pub in Beaverton (right next to Portland). Somehow he started explaining the Schwartzian Transform to one of the waitresses, starting with “Imagine you have these teddy bears…”. I hope it’s just conference fever; otherwise these might be the waitresses who know the most about Perl without actually using it.
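For anyone who missed the teddy bears: the Schwartzian Transform is the decorate-sort-undecorate idiom, which in Perl is the classic map/sort/map chain. Here is the same idea sketched in Python (the bears and the sort key are invented for the example), so you can read the three steps top to bottom:

```python
# Decorate-sort-undecorate: compute each item's sort key once,
# sort on the precomputed key, then throw the keys away.
bears = ["Winnie", "Paddington", "Ted"]

decorated = [(len(name), name) for name in bears]  # decorate with the key
decorated.sort()                                   # sort on the key
shortest_first = [name for _, name in decorated]   # undecorate

print(shortest_first)  # ['Ted', 'Winnie', 'Paddington']
```

The payoff is that an expensive key function runs once per item instead of once per comparison, which is the whole point of the Perl original.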
So much for Day -1. I did get to see a lot of the local area.
Related link: http://conferences.oreillynet.com/os2004/
I adroitly sidestepped Logan Airport, where thousands of Democratic
conventioneers were disembarking, and drove for an hour and a half to
Manchester, New Hampshire to fly away from the political circus and
reach the easy-going, no-standing-on-ceremony city of Portland,
Oregon. Here I will spend a week with the
O’Reilly Open Source convention,
a community and a pursuit very different from the Democratic National
Convention.
The explicit thrusts of both the Open Source convention and the DNC are
technical. OSCon pursues the technical goals of creating and deploying
software. The DNC tries to put in place the technical measures to
defeat an incumbent president. But behind the technical foci of both
OSCon and the DNC lie some essential principles.
What does it mean for a group of intrepid computer hackers, rampant
individuals who celebrate feats of extraordinary programmatic
cleverness, to endorse the existence of the Commons and give away
everything they do?
Even more amazingly, what does it mean for IBM, Novell, Sun, and so
many other major companies to contribute to these public efforts, to
gladly draw software from these efforts to bolster their products, and
to offer the fruits of the efforts to other major corporations?
None of these individuals or companies believe in subordinating
themselves humbly to the community at large. They have learned that
pulling their oars together takes them farther than rowing along one
by one. But there’s a deeper principle at work too.
They all want individuals to strive and succeed as individuals, but
they recognize that good infrastructure fosters individual
achievement. They each plan on erecting their own proud edifices,
but free software is the firm platform on which they build.
I think we all instinctively understand that we need infrastructure in
order to achieve personal success. The infrastructure includes, for
starters, a government that recognizes danger in time and protects us
from criminal acts. It may include an educational system affordable by
all talented individuals. Someday we may decide it includes the
things discussed in my most recent blog.
It is harder to find a principle behind the Democrats. Yesterday’s New
York Times Magazine (July 25, 2004) documented how liberals are giving
up hope of finding a worldview and set of policy proposals they can
live with in the Democratic Party and are concentrating more and more
on independent organizations for these things. The much-publicized 527
mechanism of the McCain-Feingold election law (a clause that
encourages parallel organizations to intervene in election campaigns
without coordinating with political parties), which was generally
thought to help the Democratic party, may actually weaken it in the
long run while strengthening the progressive movement as a whole.
As with the free software movement, the Democrats glorify the rights
of the individual. They uphold most of the recent tax gifts to the
wealthy, let companies treat both their accounting books and their
workers pretty much the way they want, and allow educational and
career rewards to stay funneled to the usual winners. There will not
be much change for a long time in who runs the global economy or how
it is run, regardless of whether the Democrats or the Republicans
dominate the U.S. government.
But along with this endorsement of laissez-faire, the
Democrats offer a sense that something must be done to handle the
negative consequences of this choice. Somehow or other, laid-off
workers need opportunities to jump back into the work force, poor
people need to get their blood pressure treated, the gutting of the
environment must be repaired.
One can see why the liberal movers and shakers profiled in the New
York Times Magazine find little excitement and inspiration in this
program. It cannot even talk about, much less solve, the looming
problems of resource depletion, racism and xenophobia, national
financial insolvency, and outrageous injustice. It is on the one hand
wonderfully thoughtful, concerned, and subtle, but at the same time
infuriatingly patrician, detached, and leery of commitment. No one
should be surprised to find that the party has chosen a chief
spokesman with all those qualities.
But once again, there is a respect for infrastructure in this
program. There is a sense that the present is the basis for the future,
and that the future must be respected.
I barely regret giving up the press pass I was offered so I could join
the Democrats inside their convention center. I regret slightly more
giving up my chance to be searched by police (not the same police that
nearly disrupted the festivities) for the privilege of riding the
subway, or my chance of joining protesters in the plastic-wrapped
metal cage aptly called a “free speech zone.” I would rather check on
the progress of the revolution in social infrastructure called the
open source movement. But there is something similar in the air at both gatherings.
Where will free software take us?
Someone needs to invent a way to prevent the seats on
planes from being tilted back into my lap. Maybe it’s just a well-placed bolt. As it is, I’m typing this with the bottom half of my iBook lying on my stomach.
Thank goodness for touch typing.
I tried watching “Fearless”, from my stack of “DVDs I’ve
bought, but never actually watched”. Unfortunately, I didn’t realize it
starts out with the aftermath of a plane crash, charred bodies
and all, so I didn’t want to disturb my seat mates.
Besides, the DVD player is a huge suck on battery life.
Fortunately, iTunes piped through the headphone jack doesn’t
seem to have any real extra drain on resources while I play
nethack. I do so love “Candy-O” by the Cars.
Fellow OSCON attendees Arthur Bergman and James Duncan are
about 4 rows back. James will be giving his talk
on Enterprise Perl, which he also gave at YAPC::NA. It was as
much a discussion of coding standards as anything else. He
says he’s made changes since Buffalo, so I’m eager to hear it again.
My own talk, “How To Get Hired”, hasn’t changed any, because
I haven’t gotten together with Bill yet about what we’re going
to squeeze out. At YAPC, we had twice the time slot that we
will have on Thursday, so we have some trimming to do. If you saw us at Buffalo and have any
specific parts that you feel must not be removed, we’d love to hear from you.
I’m still terrible with names and faces. People have been saying hi to me, shaking my hand, and I have to be very apologetic and say “I’m so sorry, what was your name?”
That reminds me, I need to be carrying my little brown sign-in book around with me…
It’s two days to OSCON, but it seems like the conference has already started. I’ve iChatted with some of the people already at the conference hotel, I’ve been deluged with press releases since I am on the press list, and Randal Schwartz and I are already making plans for tomorrow night.
Tonight I have to pack. For the past month there has been a large box in the corner of my office that has gradually filled up with the things I have to remember to take to Portland. At the bottom are some goodies I brought back from Iraq as presents for the kind people who sent me things while I was deployed. There is a lot of Stonehenge Consulting stuff in the middle: my Stonehenge ball cap, polo shirts, and “Ask me about Perl Training” button. On the top is all of my airplane reading (Apocalypse 12, anyone?), my online check-in plane tickets, and a hard copy of the conference schedule with notes by the things I want to see and question marks next to the things I probably should see.
I figure I should bring along BUGS in Writing, which I’m reading now at Nat Torkington’s suggestion, or bring Refactoring by Martin Fowler which I want to read again and more carefully this time. However, I know those books will end up as ballast, especially since I am staying at Randal’s house in the woods. For the same reason, I probably won’t bring any DVDs with me either.
My luggage is the least of the stuff I am bringing to the conference, though, since most of that stuff I am not even bringing myself. FedEx should deliver a big box of The Perl Review which people can get from me or one of my guerilla distributors (hopefully for a small donation so we can pay the authors). I have some new audio equipment somewhere in the UPS pipeline because I want to walk around with a microphone recording people talking about Perl for my new audio Perl anthropology project. I’m sending some other stuff ahead too, and I have a page full of phone numbers and tracking numbers that I’m checking each day. Somehow these things always end up being a last minute deal.
Although I am going to talk to a lot of Perl and technology people, these conferences are the chance for me to meet lots of behind-the-scenes people, like the editors and publicists for the books I want to review in TPR, or the nice people at O’Reilly who make the conference work (shout outs to Nat, Vee, Gina, and Angela) or put up with my never-ending requests (Suzanne, Denise, Betsy, and Marsee).
It will probably be the same story this year though: they’ll roll their eyes when I try to pitch my idea for an O’Reilly Bookshelf: not a collection of books, but actual shelves with little brackets you could slide onto the edges to demarcate the green books from the blue ones from the magenta ones. The sides of the shelves have actual etchings of the pictures on the cover. Imagine buying a ready-made library: just take off the cardboard of the shipping box and you have a complete collection of O’Reilly books (Unix, Windows, or Mac flavors) already sorted and shelved. I know the technology for this exists, because there is at least one company that does it for school libraries. Okay, so maybe you think it’s a dumb idea too. :)
I won’t actually start making a plan for the conference until I get on the plane, but I’m not going to try too hard at that because things change. I know I will be at the Stonehenge party (which I think is Wednesday, but I don’t know where), and at the Dyson lectures (just to say I saw three generations of Dysons), but beyond that it’s probably me wandering the halls talking to people.
I might try to keep a moblog, which means I should set up email on my Nokia 3650. What would really be interesting is an audio moblog, since my phone also takes dictation. Hmmm… as usual, I’m setting myself up for too much at the conference. I always want to do too much.
I do have a lot of notes stored up on my experiences with Fedora Core 2, and I need to hurry up and get them posted, but I wanted to push out these notes on my experience reading pictures and movies from my Canon PowerShot A60 using the USB (1.1) connection to my Dell Inspiron 8600 laptop running Fedora Core 2 Linux.
First of all, the obligatory system specs:
$ uname -a
Linux borgia 2.6.6-1.435custom #1 Tue Jun 15 23:25:40 MDT 2004 i686 i686 i386 GNU/Linux
I’m not using a standard FC2 kernel because I ran into this bug and built my own kernel with the “ugly” patch Pekka Pietikäinen worked up.
Anyway, I plugged my camera into my USB port. Using the
dmesg command, I found the device messages:
usb 1-1: new full speed USB device using address 5
It turned out that it’s a good thing I glanced at this (like many Linux gearheads I glance at dmesg from time to time), because that number “5” turns out to be a key to getting things working.
I knew I’d used gphoto in the past, but I just opened up the GNOME desktop menu to see what I could find to use with the camera. I was pleased to find the unmistakable entry “Digital Camera Tool” and clicked this, launching a program called “gtkam”.
I’d turned on the camera and set it to the preview mode. I know the camera has to be turned on, but I’m not sure what mode it has to be set to for data transfer (too lazy to RTFM). I guess the mode is not all that important because the camera seems to go into some special mode once a computer has started to talk to it, but I’m coming to that part.
I chose the “Camera -> Add Camera” menu item and clicked “Detect” in the resulting dialog box. It instantly changed the detected camera to the exact model, so I figured I was home free, but when I tried to click “OK”, I got an “unable to initialize camera” error. After a bit of fiddling and googling I found this somewhat outdated page, which provided the key clue: I didn’t have permission to access the USB device. I checked the file “/proc/bus/usb/devices”, which provided somewhat cryptic information about the attached USB devices. I did at least glean that the camera is on bus 1 (my guess is that it would be bus 1 in most cases), and the device message I mentioned indicated it’s device 5. I was then able to allow myself access to the camera by executing (as root):
chmod a+w /proc/bus/usb/001/005
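As a quick sanity check (a sketch, assuming the usbfs layout FC2 uses under /proc/bus/usb), the “T:” topology lines in /proc/bus/usb/devices carry both the bus and device numbers, so you can confirm you’re pointing the chmod at the right pseudo-file:

```shell
# Each attached USB device gets a "T:" topology line in
# /proc/bus/usb/devices, something like:
#   T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  5 Spd=12  MxCh= 0
# Pull out just the Bus and Dev# fields for every device:
grep '^T:' /proc/bus/usb/devices |
  sed 's/.*Bus=\([0-9]*\).*Dev#= *\([0-9]*\).*/bus \1 device \2/'
```

The device 5 on bus 1 reported there is the one that lives at /proc/bus/usb/001/005.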
From this point gtkam worked like a champ, and the following image shows it in operation browsing the pictures on my camera’s CompactFlash card.
I was able to use the flexible gtkam “File -> Save Photos -> All” dialog to grab the images and AVIs. The only thing I missed was the option to, say, select the “DCIM” folder and just have it save the whole shebang to my hard drive in a mirror of the structure on the camera. As it is I had to select and save, in turn, each folder that actually held pictures: 100CANON, 102CANON, etc.
Once I transferred the files they were ready fodder for whatever Linux graphics and video tools I pointed at them. The video clips the camera records are AVIs and played flawlessly in mplayer and videolan client (you’ll need to use yum or apt-get to add these to your FC2 installation).
One issue: I found that the camera shuts down after a pretty brief period of inactivity (probably a setting I could tweak on the camera), and when it does, gtkam gets flustered (mostly it gives an error message, but once it “unexpectedly quit”). When this happens I just turn the camera back on and check dmesg again. The hotplug logic in kernel 2.6 increments the device number each time, so after the first shut-down it comes back as device 6, and so on. I found I have to do the chmod each time on the corresponding /proc… pseudo-file. I expect there are probably tools to automate all this for the user, and if you know of any, please comment with links and other pointers. I was happy enough with the simple formula that worked for me that I didn’t do any more digging.
In general the exercise was quite painless, but I wanted to give others a few pointers around the minor potholes I encountered.
Side note: thanks to Matt Biddulph for recommending this great camera.
Do you have tips and experiences of your own to share with digital camera users on Linux?
Related link: http://www.archive.org/texts/bookmobile.php
In my “To Do” list, the item “Internet Bookmobile” never went away, mostly because I had no idea why I made that entry.
It turns out that there really is an Internet Bookmobile, and it drives around carrying a digital library and the tools to turn its content into real books.
Now that’s cool.
Related link: http://www.theperlreview.com
The first print issue of The Perl Review is coming off the press now. I just about worked myself to death to finish it in time to have it at OSCON. That’s the problem with tangible products: I can’t just wait until the last minute to finish it and upload it. I really wanted to give some out at OSCON, though, so I don’t mind the lost sleep. So far, all work has been done by volunteers using open source tools. Some day I have to write about that, but not now, because I need to get ready for OSCON.
I’ll be sending it out to subscribers as soon as I can, and a PDF version for subscribers will be available soon.
Articles in this issue:
Test Driven Development — David Kosykh
Extending XML::XPath — Michel Rodriguez
Magick Tile Puzzles — Grant McLean
and some other things…
OSCON is next week, and I can’t wait to see Tom Limoncelli’s
Time Management For System Administrators tutorial. Tom’s written one of my “Ten Great Non-O’Reilly Books”, the excellent The Practice Of System And Network Administration, so I’m eager to hear his talk on time management. I’m sure it will apply to programmers just as much as sysadmins.
Tom’s sent me some preview materials, and he digs up the old chestnut about how “no one ever died thinking ‘Gosh, I wish I had spent more time at the office.’”
It needs a corollary: “But many folks die wishing they’d accomplished more.” Time management isn’t about spending more time, but about making the most of the time you do spend.
If you’re going to be at OSCON next week, let me know and say howdy. I’ll be giving a talk with Bill Odom on
“How To Get Hired” that’s as much about career management as getting the job in the first place. I hope to see you all there.
Going to Portland? Don’t have time for it?
The notion of universal service in communications has great
staying power. Although the term “universal service” itself
has fallen into disfavor–I’ll explore why in just a
minute–the commitment to the concept remains high, even in
our troubled economic and political times. Just try going to
the Thomas legislative information site
and doing a search for bills containing the word “broadband.”
Most of these bills are striving for some form of universal
service, such as high-speed Internet in rural areas.
But a parallel political universe in universal service has
also arisen. A number of researchers in recent years, mostly
on the political right, have critiqued the long-standing
ideal of providing everybody with communications.
In the 1990s, Milton Mueller published a series of papers,
followed by the book Universal Service: Competition,
Interconnection and Monopoly in the Making of the American
Telephone System, that presented a bold claim–and an
economic analysis to back it up–that the universal
service policies undertaken by the phone company from the
very start did nothing to improve actual phone system
penetration.
A policy analysis
for the Cato Institute by Lawrence Gasman argues that the
problems in providing phone coverage have been
exaggerated, and that the policies intended to create
universal service have been counterproductive because they
prop up outmoded networks.
Economist Hal Varian has also stated that geographic
subsidies should not be created toward the goal of
universal service, because the availability of
communications should be treated like the many other
factors people use in choosing where to live and work. No
one gets a parking garage subsidy from the government for
choosing to live in a major city, so Varian asks why they
should get a communications subsidy for choosing to live
in an isolated rural area.
Most damaging of all, perhaps, are
reports of fraud and abuse
in the one explicit universal service program mandated by
law in the United States, the E-rate program for schools
and libraries. (The law also provided funds for rural
health clinics, but that was spun off into a separate
program.)
These critiques offer serious food for thought and a chance
to re-engineer programs toward what’s most effective. That
the spirit in which they are offered is in no sense
constructive does not reduce their importance. It would be
easy to argue that the attacks are part of an ideological,
corporation-friendly campaign to paint everything
governments try to do for their citizens as bureaucratic,
wasteful, and pointless. But approached with open eyes, the
critiques can lead us closer to universal service.
FCC Chair Michael Powell, consistent with his free-market
views, has cast aspersions on the universal service ideal,
most famously with his joking complaint about suffering from
a “Mercedes divide.” But in other comments, he’s suggested
that there’s value in policies aimed at getting advanced
communications to people who lag behind.
The key lesson from surveying the available history is this:
universal service programs that enforce a narrow strategy,
and that distort economic realities to favor that strategy,
do indeed risk the kinds of failures claimed by political
opponents of universal service. Such programs can reward the
wrong things and set up an environment ripe for abuse and
fraud.
On the other hand, flexible strategies that reward creative
thinking and keep everyone’s focus on the prize can be
surprisingly effective. Let’s look at the principles of
universal service and at some recent efforts to find out
what should continue and what should be discarded.
Transit systems are routinely subsidized in countries around
the world–and the United States is no exception. While the
Federal, state, and local governments pour most of their
transit billions into automobile traffic (with airlines
getting handouts too since the September 11 attacks), there
is also substantial funding for buses and train lines.
Governments clearly see a social good in transportation.
And the reasons for the importance of transportation go
quite deep. Mobility is key to the modern employment market.
People who can travel easily can find work in new places and
still keep in touch with their families. Businesses benefit
from transportation too: they can open offices in other
cities and keep in close touch with suppliers and customers.
None of these actors could budget for the entire
infrastructure of modern transportation and factor it into
their career or business plans. Government support for a
robust universal transportation system enables economic
diversity and social unity.
Note that universal transportation doesn’t try to be rigidly
egalitarian. No one says that traveling from the remote Alps
of Northern Italy to Rome should be as easy as going from
Milan to Rome. Geographic and demographic differences help
determine the level of service that makes sense.
All the arguments for government-funded transit apply even
more strongly to communications, because it’s so much easier
to move photons than people and because the uses to which
communications can be put are so much more varied than
those of transportation.
Access to communications has impacts that no one can budget
for in advance. As recent examples, look at the life-style
changes wrought by the Web and by cell phones. Universal
access (or more accurately, near-universal access, where the
percentage of population using the system reaches a tipping
point) has even greater social effects than the sum of
its individual uses.
Universal service is not a luxury, as Chairman Powell
indicated with his “Mercedes divide” wisecrack. In an age
where people deal daily with large, impersonal institutions
(government agencies, insurance companies, multinational
retailers)–an age of global trade and development, where
money and goods travel around the world, and people of all
economic classes do as well–an age where people seek new
vendors and services more and more frequently, and where
information mutates so incessantly that no durable medium
can keep up to date–universal service is becoming a
necessity.
A substantial body of research indicates that private
enterprise is inherently efficient. The people who wrote
that research appear to work at think tanks, however, not
in private enterprises.
Anyone who has worked in a private enterprise knows what
really goes on there. In any enterprise of more than a few
dozen people, bureaucratic barriers and pockets of
unproductivity crop up and stay around for long periods of
time. A bumbling but politically astute manager can hire
incompetent staff and maintain a whole department of dead
weight, dragging down the efforts of others. Companies are
irrational entities: they refuse to acknowledge errors
promptly and pour good money after bad.
In short, all the failings attributed to government happen
in private enterprises too. These failings are a fixture of
human nature and organizational dynamics.
Grossly inefficient companies do get shoved out of the
market eventually by more efficient ones; in that way
private enterprise has an advantage over government in terms
of efficiency. But such processes take decades and just
restart the cycle, because each new company obeys the same
laws of human nature and organizational dynamics.
Technological innovation may be speeding up the cycle, but
if businesses were truly efficient, rises in labor
productivity would come much closer to the technological
and social changes that drive them.
Certain independent variables sometimes render government
services more expensive than private services. Most
significantly, government tends to pay good wages and
benefits, a humane approach to the workforce that private
industry could learn from. Governments also create numerous
regulations, such as those regarding procurement, that may
get in the way of fast action, but that also have something
to teach private industry about honest financial dealings.
Innovation, dynamism, and creativity can be found in
governments. Some governments present excellent models of
entrepreneurial activity in the form of communications
services structured as public utilities, a trend I
documented several years ago in my article
Echo of the TVA Comes Over Municipal Data Networks.
The press has recently had a field day covering the trend
(which has been ripening for a long time) toward outsourcing
services from developed to underdeveloped countries. But few
writers point out that the whole phenomenon depends on the
availability of high-speed communications. What lessons can
developed countries learn from this? A tiny American
municipality such as
cannot be blamed for wanting the same economic opportunities
as remote call centers in India or the Philippines. Thus the
movement for municipal networks.
Municipal networks show that government agencies can be
efficient, entrepreneurial, and innovative. The goal of such
networks is to provide every citizen in a town or city with
the option to join a high-speed network. The range of
solutions is vast.
Some networks are pure fiber; most are a mixture of fiber
and copper; many of the new ones involve wireless too. The
Wireless MAN or WiMAX standards (based on IEEE 802.16) will
probably make wireless even more of a factor. Municipal
wireless hotspots were
praised by a very highly placed government official
this past June.
A stray thought: people seem to be willing to pay for WiFi
equipment but not for WiFi service. Perhaps, then, a
value-added tax could be levied on wireless equipment in
order to fund universal wireless service.
Some networks offer Internet access on top of raw network
connectivity; most are limited to offering connectivity and
open up the network to bids from competing Internet service
providers. This promotes competition far more than the
oligopolistic provisions of the 1996 Telecommunications Act;
it creates an environment where entrepreneurial small
businesses have a chance.
Telephone companies fear municipal networks. On the surface
their anxiety appears misplaced, because the two types of
systems are not in competition. Most towns started municipal
networks only after trying and failing to get bids for
private cable or high-speed fiber networks. The private
companies flatly refused, submitted ridiculously
unaffordable bids, or failed to provide acceptable service.
Most municipal networks, in short, began as acts of
desperation.
But now municipal networks are proving their value and
viability. So the telcos pull strings and get state
legislatures to pass laws prohibiting the networks, or
putting in place restrictions to make it difficult for such
networks to start up.
The Telecom Act says that “any entity” must be allowed to
compete in the communications marketplace. This would seem
on the face of it to protect municipal networks. But the
Supreme Court recently upheld the state laws by declaring
that “any entity” refers only to private companies.
Thus, the court accepts the telcos’ view of citizens as
helpless consumers who must simply wait for a telco to offer
them services under conditions chosen by the telco. And
perhaps the court has judged Congress’s intent rightly. The
Telecom Act is widely understood to be a boondoggle for
large communications companies; new competitors barely have
a chance. (It’s worth noting, though, that not all courts
have swallowed the telco line.)
And as the telcos go, so do the anti-regulators. While the
laws prohibiting municipal networks are an explicitly
burdensome form of regulation, they have never been
criticized by the supposedly anti-regulatory crowd.
The Cato Institute has not taken a stand on municipal
networks. But it has complained that
cable companies are effectively underregulated monopolies
and that municipalities regulate content and other aspects of cable franchises
beyond the minimal considerations of public safety. These
arguments are an indictment of the current, obsolete cable
system. With a broadband network of video-width capability,
there would be no need for picking and choosing cable
operators.
The city and town employees I’ve talked to in my research of
municipal networks seem just as thoughtful, just as
resourceful, and just as rich in vision as innovators in the
private arena. These employees put their talents to the
benefit of their citizens rather than to making a profit,
which does not mean they’re superior to private firms but
simply that they can carry out projects that private firms
don’t want to risk or can’t justify economically.
Municipal networks are not a total solution to universal
service. There are still rural residents too far from a
Point of Presence to benefit from those solutions; other
cutting-edge options such as satellite Internet may bridge
the gap for them. The digital divide is also exacerbated by
the widespread need for more education and hardware.
Finally, in many areas, private solutions serve most
people’s needs, so government may do best by keeping its
distance.
Thus, it is not only the actual histories of municipal
networks, but the general lessons we can draw from the
impetus behind them, that illuminate the path forward.
The E-Rate has provided the latest cautionary tale in the
history of government subsidies for communications
development. But as I pointed out in an article titled
An Expanding Universe for a Universal Service Program,
the universal service fund is far bigger
than its failures. Tens of thousands of institutions
have received Internet access thanks to the fund.
Critics of government efforts call for the abolition of the
fund, citing mismanagement as their reason. Using the same
logic, one could call for the abolition of stock markets
worldwide, on the basis of the destructive criminality of
Enron, WorldCom, Parmalat, and other companies that dwarfs
the abuses of the universal service fund.
Nonetheless, we can learn a lot by seeing what went wrong
with the E-Rate. I analyze the failings as follows.
The FCC built assumptions based on existing, widespread
models into its regulations, and thus required that new
installations be “more of the same”; this benefited
incumbent providers.
In particular, regulations prevented the use of funds for
the purchase of external lines or wireless equipment,
which would have been a low-cost, long-term solution for
many schools and libraries.
Schools and libraries were not given practical goals, but
simply instructed to spend as much of other people’s money
as they could. In other words, their goal was to spend the
available money on easily obtainable equipment, not
necessarily to make the best possible use of the
money. They had no encouragement to be creative.
The law provided for only telecom equipment and networking
services. It did not consider other useful things one
could ask for to achieve Internet access, such as
computers, or trained teachers and staff.
The second point deserves a bit more attention, because its
causes and effects are complicated.
The FCC, of course, did not explicitly say, “We will pervert
the E-Rate to funnel money into incumbent phone companies
and to deny the schools and libraries control over their own
networks.” Instead, the FCC imposed a complex and arbitrary
set of technical regulations that led to these results.
According to Dave Hughes, owner of
Old Colorado City Communications
and a long-term master of community networking using
wireless Internet, FCC regulations permitted money to be
spent on leasing lines and services, and on equipment used
on an institution’s own right of way. Funds could not be
used to purchase equipment whose range crossed a right of
way, such as a public street or a piece of somebody else’s
property.
First of all, these regulations made wireless networks
impossible. They’re too free and messy for those sorts of
regulations, as my next-door neighbor found out when I let
her know I was jacking in on her wireless LAN. A wireless
network can extend for miles, which is one of its great
strengths.
Second, the regulations discouraged schools from investing
in their own copper or fiber, a “customer empowered network”
of the sort developed by
which is fairly cheap to acquire because of left-over fiber
from the dot-com boom, and which would provide a lasting
infrastructure. Instead, the schools funneled their money
into services leased from local telephone companies, the
only expenditures covered by the FCC’s interpretation of the
E-Rate. If Congress decides to take away the subsidy,
schools will be left with the choice of throwing more funds
at the leased lines every year or losing their Internet
access.
In fact, schools want their local area networks to extend
outside their walls. They want to talk to other schools in
their districts, and Hughes has pushed them to provide
access to students and parents at their homes in
neighborhoods around the schools. Wireless extends the power
of the E-Rate–but the FCC treated that as a drawback rather
than an asset.
In 1996, wireless Internet was still a rather experimental,
fringe technology. Now it represents an obvious and gaping
failure in the FCC implementation of the school and library
fund. The option of buying fiber directly has also become
more affordable since the Telecom Act was passed, largely
because the WorldComs of the world strung too much fiber
during the dot-com boom and it’s now going for fire-sale
prices. It’s not too late to revise the provisions
surrounding the E-Rate.
The key lesson of the school and library fund is that
government action should be structured around results. The
E-Rate was oriented instead toward equipment. Once the
school or library got its money, it simply spent through
this money until it got as much equipment as possible. The
process did not deal with the question of whether the
purchase represented the most effective solution to the
problem–in fact, it didn’t try to define the problem.
As mentioned, other provisions of the law or its
implementation reinforced uncreative spending. The
excruciatingly spelled-out bidding process mirrored the way
schools and libraries had previously achieved Internet
connectivity and therefore led them to order more from their
current provider (usually a local incumbent Bell company).
And the “right of way” regulation ruled out the options that
would have been most cost-effective and powerful for many
schools and libraries.
Let’s contrast this with the success stories I
mentioned for municipal networks. Success was achieved because:
Municipal employees started with a clear definition
of the problem.
The problem was very broadly defined, with reasonable
parameters but no artificial constraints (other than those
imposed by enemies).
The employees were responsible for the budget, and
therefore had strong incentives to use their creativity to
keep costs down.
A lot of factors go into determining whether it’s worth
spending a lot of government money on a project that runs
counter to market values. A well-established technology that
is likely to remain useful for a long time–such as
electrical wiring–is a better candidate for universal
service than a technology that is still subject to
disruptive new influences.
And once the vast majority of a population has something, it
might make sense to subsidize the remaining few percent that
need it. In contrast, we should question mass undertakings
that try to spread something that has only recently caught
on.
Perhaps these considerations can illuminate the discussion
around a popular bipartisan bill for bringing broadband to
rural and “underserved” areas, designed by the same senators
Olympia Snowe and John Rockefeller who proposed the E-Rate
in the 1996 Telecom Act.
There is no question that the bill will, to some extent,
throw money at large telecommunications providers. Insofar
as it encourages the extension of old models to new areas,
it would just prop up obsolete networks.
However, the definition of telecommunications in the bill is
quite broad and includes wireless options. If it leads to
new networks, and–even better–the entry of new companies,
it may be a progressive force.
We need much more research into what has worked in
communications, and more education for others interested in
that solution. For instance, a non-profit organization
called the Center for Civic Networking has organized seminars
on municipal networks and written guides for IT staff and
city officials interested in trying them. What we need is a
community and culture of people devoted to universal
service. We should not be afraid to cross ideological lines
and combine elements of different models in the pursuit of
access for all.
What social changes would universal service in high-bandwidth networking lead to?
Something that I always find fascinating is watching people experience a product, technology, or service for the first time, when they have no idea how it works. I will always remember, when I was younger, trying to explain to my Nanna how to use the (then amazing) Grolier Encyclopedia on CD-ROM. I wanted to show her that I could learn things for school without necessarily having my nose stuck in a book. I sat there and explained how you type in what you want to know, and the program searches for it on the CD and presents you with some content to view. Being 13 and not a usability expert, I never even considered that she didn’t know how to use a mouse; it seemed so obvious to me.
I sat there and moved the mouse back and forth, explaining that when you move the mouse forward the pointer goes up, and when you move it back the pointer goes down. It appeared as if my patient instructions went in one ear and out the other: as she re-arranged her teeth, she moved the physical mouse up and down in the air. My Nanna could not map the forward movement of the mouse to the upward movement of the pointer; she knew nothing of how the mouse worked, and could not perceive that a mouse only works on a flat surface. My Nanna was an intelligent person who had read more books than you can imagine, but it seemed that the mouse was fundamentally unintuitive to her.
This story jumped to the front of my mind when I was reading Preston Gralla’s O’Reilly Weblog entry about using Linux for the first time. Preston is obviously an intelligent chap and he knows Windows XP inside out, but when he used Linux it just didn’t sit right with him. Although it probably made technical sense to him, there was something about the system that did not enthuse him to dump Windows XP, burn all copies of his book, and tattoo a penguin on his posterior. This was the same kind of effect I saw many years back when that mouse just did not sit right with my Nanna. It makes me wonder what determines people’s preferences and choices.
The Linux desktop
Five years ago I remember reading predictions that 2000 would be the year of the Linux desktop. These prophets foresaw the demise of the mighty Microsoft and the rise of Linux as a dominant system on the corporate and even consumer desktop. They were wrong. The Linux desktop is still in a heady time of rampant development, re-focus, and innovation. I have been particularly impressed with many of the developments occurring on the desktop side of the fence, such as freedesktop.org, X.org, GNOME, and Project Utopia, but we still have a way to go before all of this integrates into a single unified vision that sits right for the user.
Something I have rambled on about before is how the Linux community is great at creating frameworks; you only have to look at KDE for an example of this. There are DCOP, KParts, IOSlaves, and aRts, to name a few. What is interesting about KDE is that the developers seem to have a single-shot vision of how the desktop should work: when you install tarballs of KDE, there is little difference between that KDE and the KDE included with distributions. The KDE team set forth to create a single unified desktop that should work across distributions. This makes sense, but is it the best way forward?
We have already established that people react to technology in different ways. If we take away any kind of political agenda and put the perfect installation of KDE in front of people on a nice fast machine, the desktop will be great for some but not others. Others may prefer GNOME, AfterStep, Windows, Mac OS X, a command line, a VR headset and glove, or may simply not take to computers at all. The problem is that the entire KDE desktop and all of its software is designed around this single-shot approach to usability; if someone doesn’t like KDE, that software is pretty much useless to them. Yes, you can run KDE applications without KDE, but they are slower to load, look different, and behave differently.
I am not picking on KDE for any particular reason; all of these concepts apply to every interface in question. I just picked KDE randomly. The point is that the functionality of an application should be broken down into a series of interactions that are fundamentally based upon the interface you are accessing the application from. The current desktops are not actually that integral to the applications that run on them. If you load the GIMP on GNOME, there is no functional difference from running the GIMP in KDE. If you load an image file, you still use the file open dialog box that belongs to the GIMP, not the KDE equivalent - the application still looks and behaves differently.
If you look at a typical application, you can extract the fundamental concepts of interaction from that application and put these interactions into clearly defined theoretical boxes. These interactions and visual representations should really be abstracted out of the application into a generic means of recreating that application in the desktop of choice. This way you could run any application with this technology in any interface and the application would adjust to the native desktop that you are familiar with. This could include changing the native font, using the native theme, changing the icons, using native dialog boxes, responding to interactions in ways that are common with the host environment and respecting usability guidelines. You could theoretically take this concept and apply it to Project Looking Glass too; a 2D representation of a button could be natively represented in Project Looking Glass as a 3D button. The kind of functionality I am discussing here needs to be implemented at a toolkit level; this would ensure that the native binding of interaction and visual representation of the application could be applied to all software written in that toolkit.
Making it happen
In many ways, this kind of flexibility is exactly what the freedesktop.org project is there for, and it would greatly reduce the sheer amount of redundancy between applications developed for different desktops. Many people stick to the applications that fit their desktop environment because they feel more integrated. As an example, I love Quanta and I also love Bluefish, but I should be able to have both fit into my native desktop and look and feel like native applications. As a user, why should I care that one is written in Qt and the other in GTK?
There is a lot of rhetoric and discussion going on about the merits of Open Source usability at the moment, and if we don’t try to abstract applications out to merge into native desktops, this usability is going to be largely lost in most cases. This is not Windows or Mac OS X; we don’t have a single graphical interface for our operating system, and as such we need to adjust our software so that we can support these different graphical environments. This will not only make all of the software more integrated, but it will put software in people’s hands in an environment that is familiar to them. Sure, there are some serious technical challenges to this approach, but if there is one thing we have in this community, it is technical ability.
Valid points or pointless rubbish? Share your thoughts and discuss the merits of unified interaction…
I remember the last time the Macworld Expo was in Boston (1997) like it was yesterday. I was a student in high school, and I considered myself to be a diehard Mac user. The weather was absolutely beautiful. The atmosphere was electric: the grandeur of the expo was felt at the World Trade Center in Boston, at the Bayside Expo Center, and all around the city. Yes, the expo required two convention centers to accommodate all the exhibitors. That Macworld Expo will forever live in Mac lore. Power Macintoshes (there were even the infamous “Mac clones”), multimedia technologies, developer tools, and games (the anticipated release of “Quake”) gave Mac fanatics many reasons to cheer. We were all a part of history when Steve Jobs, returning to the helm of the company that he helped found, gave his keynote speech with Bill Gates on the jumbo screen above.
Seven years and many Apple innovations later, the Macworld Expo has returned to its humble roots in Boston. The face of Apple Computer has changed, from its logo to its business. Even the expo has changed. No longer is the expo just an expo; it now incorporates a conference as well. No longer does the convention require two convention centers – it is now held at the new and towering Boston Convention and Exposition Center, which looks like a massive airport terminal.
The theme at this year’s Macworld was reflecting on the Mac’s past, and looking forward. The morning presentation on the first day of the expo by David Pogue, and a reunion of the founding fathers of Apple Computer (Jef Raskin, Andy Hertzfeld, Bill Atkinson, Jerry Manock), provided a humble celebration of the little machine that could, as well as the company that started it all. Each person also spoke of their experiences working with Steve Jobs, their present life after Apple, and their insights on the future of the company. The next day, Rick Smolan, head of the “America 24/7” and the “A Day in the Life of” series, reflected on how the Mac had contributed to his projects and successes for decades.
One noticeable spotlight at the conference was the attention on digital music and the iPod, which seems to be the future direction of Apple.
So there was no Apple, no Microsoft, no Adobe, no Macromedia, and none of a host of others normally at the Macworld Expo. It did not dampen the spirit of the Macworld Expo and Conference. Not even the bad weather or the fact that the Boston Convention and Exposition Center looked like a ghost town (only two conferences were there) made a difference. One thing has not changed during the last seven years: the Apple community – the Mac fanatics, and those who keep the legacy of the Mac and Apple products alive.
Macworld 2004 in Boston marked the convention’s humble return to its home. I am satisfied that Macworld has returned to Boston, my home and a city synonymous with the Macworld Expo, and I am confident that this is the start of something insanely great in the years to come.
Are you looking forward to Macworld 2005 in Boston?
I used to think one of my biggest problems was finishing projects. Now I realize that the real problem is lack of specific progress toward a goal. I may never really finish all (or any) of my projects, but if they’re worth doing, I can work on them more productively.
Working productively means that I need to work most on the pieces that will have the biggest or most important impact. That’s not to say that I’ll avoid maintenance work, but that I need to balance maintenance work with progress work. I may only scrub my kitchen floor once every two weeks, but I spend a few minutes keeping it clean every couple of days.
Also, it’s more valuable to do something than to plan to do something. If there’s something I can do in a minute or two, it’s worth doing then and there, rather than scheduling it somewhere in my mental task list where it can crowd out other important things until future important things bury it.
Finally, regular progress is vitally important. If I let my office go too long without tidying, it’s a big job. If I tidy a little bit every day, it’s much, much nicer than if I did a big cleaning every couple of months. It’s also easier for me to focus on progress goals if my maintenance goals are in good shape.
Generalizing this to free software projects and communities is easy.
Without further ado, here are my seven rules for Just Finally Doing It!
How do *you* meet your goals?
When I recommended one of my pet Python debugging tricks to my brother, I realized that others might have missed these odd but useful bits of the Python standard library.
The inspect module provides all sorts of juicy introspection goodies, including the current call stack, and pprint is the key to making visual sense of the __repr__ of a large dictionary, tuple, or list. I often use pprint to tame the stack-trace list output from inspect.stack(). You can do the same sort of thing with exceptions and the traceback module, but I think inspect+pprint is the quickest way to figure out where those rogue calls to CrashMe() are coming from. Here’s an example:
>>> import inspect
>>> import pprint
>>> def a():
...     b()
...
>>> def b():
...     c()
...
>>> def c():
...     pprint.pprint(inspect.stack())
...
>>> a()
[(<frame object at 0x...>, '<stdin>', 2, 'c', None, None),
 (<frame object at 0x...>, '<stdin>', 2, 'b', None, None),
 (<frame object at 0x...>, '<stdin>', 2, 'a', None, None),
 (<frame object at 0x...>, '<stdin>', 1, '?', None, None)]
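The same trick works just as well in a script as in the interactive interpreter. Here is a small self-contained sketch (the function names a, b, and c are just placeholders for whatever chain of calls you are trying to trace):

```python
import inspect
import pprint

def c():
    # inspect.stack() returns records of (frame, filename, lineno,
    # function, code_context, index); grab just the function names,
    # innermost frame first.
    names = [record[3] for record in inspect.stack()]
    pprint.pprint(names)
    return names

def b():
    return c()

def a():
    return b()

calls = a()
# The first three entries trace the call chain: 'c', 'b', 'a'.
```

Dropping those three lines of c() into a function that is being called mysteriously often tells you immediately who the culprit is.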
Related link: http://platform.progeny.com/componentized/index.html
A few months ago Ian Murdock announced his intent to build a componentized Linux. Though I read his announcement, I have to admit I still came away a little confused. He seemed to be describing what I thought Debian and Gentoo already were, minimal systems that let you tightly control what you put on top of them.
Now it’s a couple of months later and Progeny, Ian’s company, is releasing a Developer’s edition that delivers on his idea. I’ll have to download it and check it out.
I like the idea of rolling my own Linux distro. But to me the idea needs to go one step further. Don’t just make it easy for users to decide which packages they want (don’t all the distros do that?), or make it flexible for users to change certain components, like making it easy to rev KDE or GNOME (something hard to do in Lycoris, Lindows, and Xandros); instead, make it easy for the user to choose which hardware-detection subsystem they want to use, which package-installation subsystem, which GUI- or text-based installer, which init system, and so on.
Let the user really roll their own Linux distro. That can be useful in businesses and universities that want to provide a customized Linux that completely suits their needs.
Maybe that is what Ian’s new project will do. I guess I’ll understand more once I download the ISOs and give it a shot.
Related link: http://www.python.org/2.4/
Marching onwards towards Python 3000. The first alpha of Python 2.4 is out. The 2.4 release is mostly about optimization and library enhancements.
The biggest new feature is generator expressions, a natural follow-on to 2.0’s list comprehensions that mixes in the potential memory efficiency of 2.2’s generators. Some people complain that list comprehensions and now generator expressions are symptoms of the complication of Python, but IMHO, Guido and co. have still not lost a bit of their knack for picking and folding in the best suggested enhancements to the language. Python gets better with every release.
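For readers who haven’t met them yet, here is a quick sketch of the difference (using modern syntax; the numbers are just an arbitrary example):

```python
# A list comprehension materializes the whole list in memory before
# sum() ever sees it; a generator expression yields one value at a
# time, so memory use stays flat no matter how long the sequence is.
squares_list = [n * n for n in range(10)]    # builds all ten values now
squares_gen = (n * n for n in range(10))     # lazy: nothing computed yet

total_from_list = sum(squares_list)
total_from_gen = sum(squares_gen)

# Same answer either way; the difference is how much lives in memory
# at once, which matters when range(10) becomes range(10_000_000).
assert total_from_list == total_from_gen == 285
```

Note also that a generator expression inside a single call, as in sum(n * n for n in range(10)), needs no extra parentheses of its own.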
One of the updates I’m happiest with is the inclusion of Hye-Shik Chang’s wonderful CJKCodecs, a unified Unicode codec set for Chinese, Japanese, and Korean encodings. Now our friends in the Pacific Rim will no longer need to make a separate download to get local encoding support. Personally, I’d be happy with the inclusion of a full-blown iconv database with Python (providing support for all commonly used encodings), despite the larger download size. Good internationalization is immensely important.
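With the codecs bundled in, round-tripping East Asian text through a legacy encoding becomes a one-liner. A small sketch in modern Python syntax (the sample string is arbitrary):

```python
# CJKCodecs supplies codecs such as euc_jp, shift_jis, gb2312, and
# euc_kr, so legacy-encoded text needs no third-party download.
text = "日本語"                      # "Japanese language"

as_euc_jp = text.encode("euc_jp")   # legacy Japanese encoding
as_utf8 = text.encode("utf-8")

# Both encodings round-trip back to the same string...
assert as_euc_jp.decode("euc_jp") == text
assert as_utf8.decode("utf-8") == text

# ...even though the byte sequences on the wire are different.
assert as_euc_jp != as_utf8
```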
Are you looking forward to Python 2.4?
I was talking with a friend and for some reason we got on the subject of blogs. We both sometimes write things in our weblogs so we can store the info. Instead of post-its on the monitor, bookmarks in browsers, or to-do items in PDAs, we have blogs.
I already “blog for Google”, which is the same thing as the old Usenet practice of writing a post about some problem I encountered and how I solved it. These entries are not really for discussion, but more for the archives, so that the next poor soul can find them. Randal Schwartz tells me this is how it was back in the day, when he could read all of Usenet in half an hour.
Someday, when we get our heads wrapped around unstructured data stores, there may be Perl modules called DBI::bloxsom and DBI::PerlMonks to bring together this stuff. Until then we have blogs and Google, and blogging about blogs.
Security, especially as it pertains to technology, has been a foremost issue in the media and in politics, and it was no surprise that it was a major theme at this year’s USENIX ’04 Annual Technical Conference in Boston.
The general consensus on security was clear: the issue is sensitive, difficult, and political. There can never be 100% security; demanding it is too extreme. An example presented was the airline industry: the only way to make the airline industry 100% safe is not to fly planes at all. Of course, that would not be feasible by any means, nor would it be worth it. We need to understand and accept risks. Security involves trade-offs.
Unfortunately, there is a poor perception of security risks, which downplays the issues significantly. For one, the media is to blame for exaggerating many of the issues. Much of the time, computer security in the news is “boring” and does not cover the bigger picture of the problems. Technology also has a share of the blame – for hiding how things really work. Many times, the risks of using technology are hardly realized or understood. Finally, the IT industry has a share of the blame for spending little time, energy, money, and leadership on explaining security issues to the public.
So what are the direct consequences that we see now? The conclusion from both sides of the debate on “Is an Operating System Monoculture a Threat to Security?” explains our current state the best: we have dug ourselves into a deep hole, and we need to find a way to dig ourselves out of the hole. We have little or no control over day-to-day security mechanisms. In many cases, individual rights are trumped in the name of security (e.g. Digital Millennium Copyright Act (DMCA) and Patriot Act). There are a handful of groups that have a major influence on our government and are successful in creating agendas favorable for themselves. Our situation reflects back on history: we are caught up in the circus of politics and the media.
So what can we, the technical community, do? Two words: be involved. It is important to continue to tell the truth about how technologies work. Most importantly, educating the public is critical, so that the majority have a common basis for understanding the benefits and risks of technology. Attacking systems is a necessary part of security, and it is an integral part of educating the public. The public should not perceive the notion of breaking into systems as “bad” (or that “we” are crazy): breaking systems does not mean encouraging people to break things or to commit malicious acts. Finally, it is crucial to be partisan and to work with those who are curious.
Security affects our lives. However, we are not powerless. We can start digging ourselves out of a complex and sensitive hole by “stepping up to the plate” and making a difference for those who need it the most.
Have we dug ourselves into a deep hole in computer security matters? Can there be better communication between IT experts and the general public?
We are facing an interesting time in the Open Source desktop world. Not only are a number of interesting technologies being developed to make our computers work more transparently, but a technology has recently been Open Sourced that provides a playground for a new way of thinking about the Open Source desktop; this technology is Project Looking Glass.
Project Looking Glass (PLG) is a technology created by Sun to build a 3D desktop environment. The environment gives you the ability to perform simple operations such as flipping windows, changing the perspective and view of an object, and other functions. Sun created the software to explore the possibilities of 3D-based applications and a 3D-based desktop, and although fairly useless in its current incarnation, the prototype provides a usable framework for creating 3D applications and experimenting with a new way of interacting with software.
The aim of this article is to discuss some ideas and concepts for making use of a 3D environment. Before I continue, however, a few disclaimers. First of all, I am no usability expert, and I am actually fairly cynical about certain aspects of usability theory. As such, you should take my ideas here as simply ideas - they were in no way researched and are not backed up with data to prove their usefulness. Secondly, the ideas here can apply to any 3D environment or software, not specifically PLG. Feel free to make use of these ideas in your own 3D environment.
I believe that a 3D environment could be useful. There has been much discussion on the net about the worth of a 3D interface, particularly considering that it is confined within the remit of your 2D screen and typical 2D input devices: keyboard and mouse. Although I share that cynicism to a point, I also believe that people can perceive 3D sufficiently on a screen to interact with it. You only have to look at how we perceive 3D in video games and movies to see this. I think the biggest challenges we face are not with perception, but with the input and architecture of the environment.
I think it is fair to say that it is unreasonable to expect users of a 3D interface to go out and buy a special input device for their computer. We are not aiming to build a Minority Report type system here; the aim is to create a level of useful 3D interaction that is as familiar and intuitive as possible. I do believe that the mouse is useful here.
3D interfaces are built around three axes: x (horizontal), y (vertical), and z (depth).
When considering our input mechanism, we need to take into account these axis requirements. In addition, we need to consider the selection requirements. I believe that selection will be as simple in the 3D space as it is in a 2D space; you need to be able to select something (such as loading an application by double-clicking an icon) and you need to be able to hold something (such as dragging an icon by single-clicking, dragging, and releasing). The only other possible requirement is a context menu, but I am rather skeptical of these, and I think a better solution can be achieved in the 3D space with semi-transparent overlays.
With these considerations, one choice of input could be:
Although I have suggested which button can do what, these combinations can obviously be changed. The main point I am making is that you need a selection button and a means to control each axis. Some people have suggested using a Shift/Ctrl/Alt key in combination with the mouse, but I think this feels a little clumsy.
3D representation of objects
The 3D interface will never amount to anything if we don’t consider some specific use cases and how the interface can be best used. I think the key to defining ‘best used’ is to clearly separate out 2D and 3D functions. I see no point in making everything 3D; some things are inherently 2D (such as creating a word processed document) and the interface should allow you to edit your document in a 2D window as if you were using KDE/GNOME.
I think the true value of 3D comes in when we consider how we interact with objects. A while back a friend of mine told me about John Siracusa’s analysis of the spatial Finder, and I found his commentary on how we interact with objects interesting. A 3D interface really allows us to take this concept to the next level - in the 3D space we can truly interact with the object itself, not simply with iconic representations of objects.
Let us take, for example, a file. In most current GUIs, a file is represented by an icon. This icon can be interacted with in the sense of moving it to different locations and clicking on it to load the file into a viewer. In the 3D space this file could literally be an accurate representation of the file itself. In this sense we could represent some of the following types of file:
Some icons will obviously be 2D by their very nature. A .png or .jpg image is obviously a flat 2D image and is represented as such, but the key is in providing a realistic representation to the user of what type of content the object is. As an example, the user needs to see an intrinsic link between the document they type into and the document that comes out of their printer.
Application use cases
Before we can consider any kind of development effort, we need to come up with some ideas for how 3D applications will work. We need to formulate these ideas into use cases that can be clearly discussed and debated over. Here are some ideas:
If there is one thing that humans seem to have no problem understanding, it is drawers, cupboards, fridges, and other square boxes with a door on the front. We also understand pigeonholes, boxes, containers, and other methods of putting one object in another. We also innately understand that if you put two objects in a box, you only need to move the box to move both objects. This can be useful for dealing with directories and moving files around.
I think what we need to create in this kind of interface is a number of visual representations of real-world storage containers. As an example, a hard disk could be represented as an office/storage room (we need to visually suggest that the hard disk is bigger than anything on it, so we represent the disk itself as a larger room). Within this room we then have a number of storage cabinets (directories) in which files can be stored. Moving a cabinet from one room to another should be as simple as dragging it over from one room to the other. With the metaphor of cabinets we can also have different types of storage container for different types of information. A typical My Documents directory could be a filing cabinet, for example.
With this kind of metaphor I want to steer clear of someone walking into a 3D room in a Doom III-style manner and moving a hand around to pick up files. This whole metaphor is based on iconic meaning tied in with a real-world relationship between the objects. Here is a use case:
Creating content. E.g. burning a CD
The concept of burning a CD follows my ideas for creating any type of simple content. For this we need to identify the core components of the object we are creating, and put on the screen a simple template that allows the user to click on the relevant part of the object to change it. For a burnable CD we will typically have the CD itself and a cover. We may also have a cover for the back of a CD case. Here is the use case:
What could be useful for this case is that when the user buys some new CDs, he/she is encouraged to add them to the media store - this way the computer can let the user know when he/she is running out of media. This is particularly useful if the computer checks whether the CDs are working or damaged when the burning process is finished.
When a user plugs in a device, it should be visually represented on the screen. This will make an intrinsic link between the physical device and the virtual device, although they may look different physically (this is the biggest problem). With this device on screen, the user should be able to interact with it in a similar way to the real device. Let us assume we are plugging in a digital camera:
This system is not radically different from the current method of viewing pictures on a drive, but we are connecting the concept of pictures on the device with actually dragging them somewhere useful.
These use cases are not necessarily the right way to do things, but they provide a starting point for discussion. With more consideration and some prototypes we can better target the 3D aspects of the interface in the applications and make these use cases more representative of how we physically interact with the world.
I firmly believe that the 3D desktop environment has some great potential, but it needs to combine the best elements of the 2D methods we currently use and the innovative 3D ideas we will consider in the desktop of the future. This article has been written to hopefully pique the interest and ideas of people to think about how we can create an interface that is far easier to use and more representative of the real world.
The biggest challenge when implementing an interface such as this is how far you represent reality. As an example, when you plug your camera in and look at the pictures on the virtual screen, you should really be able to use the functions on the camera as if it was the physical device, but the software limits this potential to merely grabbing pictures and maybe taking a few shots. In this sense the physical representation cannot be fully imitated - we simply need to get a good batting average.
I would love to hear your thoughts on all of this, so feel free to get in touch with me or scribe your thoughts down in the comments box below. I am as interested in learning new ideas as in coming up with my own; this could really mark a new wave in the Open Source desktop revolution.
What do you think? Feel free to share your ideas, views, opinions and money below…
I’m watching the live coverage of the prologue individual time trial of the Tour de France. Since Lance Armstrong is trying for a record sixth win (and all in a row, even), he will ride last, which gives the TV folks plenty of time to cut back to him as everyone else rides.
Now Lance is warming up on a stationary trainer and he has the signature white ear bud headphones dangling from his ear. Apparently his iPod shows up a lot in OLN’s The Lance Chronicles, too, says mblog: The Apple Project, which speculates that Lance’s girlfriend, Sheryl Crow, is the one who gave the iPod to Lance.
Related link: http://shiflett.org/archive/46
There has been a lot of discussion lately about scalability, brought about by Friendster’s move to PHP. Once again, I am amazed at how many people don’t understand what scalability means (even though I’m glad to see fewer and fewer people misspelling it). Scalability means “How well a solution to some problem will work when the size of the problem increases” (from Dictionary.com). This is interpreted in drastically different ways, and you can find my interpretation in What Is Scalability?.
Before I continue, let’s look at some of the clueful comments from Joyce Park’s blog entry:
Rasmus Lerdorf writes:
Scalability is gained by using a shared-nothing architecture where you can scale horizontally infinitely. A typical Java application will make use of the fact that it is running under a JVM in which you can store session and state data very easily and you can effectively write a web application very much the same way you would write a desktop application. This is very convenient, but it doesn’t scale. To scale this you then have to add other mechanisms to do intra-JVM message passing which adds another level of complexity and performance issues. There are of course ways to avoid this, but the typical first Java implementation of something will fall into this trap.
PHP has no scalability issues of this nature. Each request is completely sandboxed from every other request and there is nothing in the language that leads people towards writing applications that don’t scale.
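Rasmus’s shared-nothing point can be illustrated with a toy sketch (in Python rather than PHP or Java, and with made-up names): two “server” instances behind a round-robin balancer, where in-process session state is only visible to the instance that stored it, while state externalized to a shared store is visible to all.

```python
# Toy model of shared-nothing scaling: the shared_store dict stands in
# for a database or cache that every instance can reach; each Server's
# local_sessions is the in-JVM-style state Rasmus warns about.

class Server:
    def __init__(self):
        self.local_sessions = {}          # in-process state: stuck on one box

    def handle(self, user, shared_store):
        count = shared_store.get(user, 0) + 1
        shared_store[user] = count        # shared-nothing: state lives outside
        self.local_sessions[user] = count
        return count

servers = [Server(), Server()]
shared_store = {}

# Round-robin four requests from the same user across both instances.
results = [servers[i % 2].handle("alice", shared_store) for i in range(4)]

assert results == [1, 2, 3, 4]                   # shared store saw every hit
assert servers[0].local_sessions["alice"] == 3   # each box saw only half
assert servers[1].local_sessions["alice"] == 4
```

Because nothing essential lives inside any one instance, you can keep adding instances behind the balancer; that is the horizontal scaling Rasmus describes.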
Harry Fuecks writes (in response to someone citing performance benchmarks to support a scalability argument):
But performance != scalability.
Joyce Park writes (in response to someone suggesting that Friendster’s Java developers must have been sub-par):
1) We had not one but TWO guys here who had written bestselling JSP books. Not that this necessarily means they’re great Java devs, but I actually think our guys were as good as any team.
2) We tried rewriting the site in Java twice, using MVC and all available best practices. It actually got slower. Anyway, what does MVC have to do with speed or scalability? I thought it was a design cleanliness and maintainability thing.
3) We tried different app servers, different JVMs, different machines.
4) Anything that money could do, it did.
There has been a lot of discussion elsewhere, too. Harry Fuecks explains that The J2EE guy still doesn’t get PHP and discusses Why PHP Scales. Harry understands what scalability means and takes the time to try to explain it to everyone else. If you have read The PHP Scalability Myth or think that scalability is a measure of performance (or both), please take the time to read what Harry has written.
Jeff Moore, in The PHP scalability saga continues, writes:
I think I’ll end this post with heresy. The field of web development seems to have a mental model of application development forged from the dot-com boom era. We operate with the vision that our applications are going to experience exponential usage growth. Perhaps this leads to an unhealthy focus on scalability in web applications versus other requirements. Perhaps this also leads us to employ optimizations prematurely before we can even understand their impact or even have a need for them. Perhaps these premature optimizations even hurt scalability and performance and needlessly complicate our applications.
Perhaps the Java Culture is more infected with “dot-com-itis” than the php culture?
Technical details aside, I think PHP can be made to scale because so many people think that it can’t. This skepticism means that people buy into the fact that it takes hard work and intelligent design to make a PHP-based system work right. ‘Intelligent design’ doesn’t mean adhering to MVC or design patterns, writing OO code or assembler. It means looking at your system as a whole, figuring out what it needs to do, and then devising a plan for doing that as cheaply as possible. The critical bit, of course, is that you need to put that sort of work into any large architecture; PHP doesn’t magically scale ‘naturally’, but neither will planting a Java Bean in your backyard create a magic scalable beanstalk.
His entire “answer” is very informative, even if most of it is obvious. Sometimes what people need is for someone to stand up and state the obvious, and I think now is such a time.
Of course, there are plenty of people who aren’t as clueful as Harry and George. Unfortunately, it’s difficult to know who to listen to. John Lim says “High Performance, High Scalability PHP is a Lie”. I assume that he just wanted a nice headline, but his statement couldn’t be further from the truth.
Last October, I briefly answered the question What Is Scalability?. Perhaps my use of Big O notation wasn’t the best approach, since most people who truly understand my point likely already know what scalability means. A simpler explanation might be better. In fact, we need to eliminate computers from the explanation altogether, because that alone seems to confuse people.
Compare a truck and a tractor (hypothetically). To simplify our comparison, let’s assume that both have the exact same towing capacity (this might be unrealistic, but such is the beauty of hypothetical situations). With no load, the truck has a maximum speed of 125 mph (about 200 kph), and the tractor has a maximum speed of 15 mph (about 25 kph). With a load equivalent to their maximum towing capacity, the truck has a maximum speed of 45 mph (about 70 kph), and the tractor has a maximum speed of 10 mph (about 15 kph). Which scales better? If you think the truck does, you’re wrong. Although the truck is faster in all cases (loaded, it is even faster than the tractor with no load), it slows down the most under load, proportionately.
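To make the proportions concrete, here is the arithmetic from the comparison above as a quick sketch:

```python
# Scalability here is about proportional degradation under load, not
# absolute speed: what fraction of its unloaded speed does each
# vehicle keep when fully loaded?
def retained_fraction(unloaded_mph, loaded_mph):
    return loaded_mph / unloaded_mph

truck = retained_fraction(125, 45)     # keeps 36% of its speed
tractor = retained_fraction(15, 10)    # keeps about 67% of its speed

assert truck < tractor                 # the tractor degrades less under load
assert round(truck, 2) == 0.36
assert round(tractor, 2) == 0.67
```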
If you’re only concerned with speed, you should choose a Ferrari Modena rather than decide between the truck and the tractor. If you’re only concerned with scalability (which is highly unlikely), you should choose the tractor. If you’re concerned with the best combination of speed and scalability, the truck is a good choice.
So how does scalability apply to the Web? First, you should ask yourself whether the Web’s fundamental architecture is scalable. The answer is yes. Some people will describe HTTP’s statelessness in a derogatory manner. The more enlightened people, however, understand that this is one of the key characteristics that make HTTP such a scalable protocol. What makes it scalable? With every HTTP transaction being completely independent, the amount of resources necessary grows linearly with the number of requests received. In a system that does not scale (where “does not scale” means that it scales poorly), the amount of resources necessary would increase at a higher rate than the number of requests. While HTTP has its flaws (the proper spelling of referrer being one), there’s no arguing that it scales, and this is one of the things that made the Web’s early explosive growth less painful than it would have otherwise been.
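A toy model of why independence yields linear growth (my own simplification, not anything from the HTTP specification):

```python
# If every request is independent, total cost is proportional to the
# request count; if each request must also coordinate with every other
# live request, total cost grows faster than linearly.
def stateless_cost(requests, cost_per_request=1):
    return requests * cost_per_request

def coordinated_cost(requests, cost_per_request=1):
    # Each request additionally pays to sync with every other
    # concurrent request -- an extra (requests - 1) units apiece.
    return requests * cost_per_request + requests * (requests - 1)

# Doubling the load exactly doubles the stateless cost...
assert stateless_cost(200) == 2 * stateless_cost(100)
# ...but more than doubles the coordinated cost.
assert coordinated_cost(200) > 2 * coordinated_cost(100)
```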
The present discussion is about developing Web applications that scale well, and whether particular languages, technologies, and platforms are more appropriate than others. My opinion is that some things scale more naturally than others, and Rasmus’s explanation above touches on this. PHP, when compiled as an Apache module (mod_php), fits nicely into the basic Web paradigm. In fact, it might be easier to imagine PHP as a new skill that Apache can learn. HTTP requests are still handled by Apache, and unless your programming logic specifically requires interaction with another source (database, filesystem, network), your application will scale as well as Apache (with a decrease in performance based upon the complexity of your programming logic). This is why PHP naturally scales. The caveat I mention is why your PHP application may not scale.
A common (and somewhat trite) argument being tossed around is that scalability has nothing to do with the programming language. While it is true that language syntax is irrelevant, the environments in which languages typically operate can vary drastically, and this makes a big difference. PHP is much different than ColdFusion or JSP. In terms of scalability, PHP has an advantage, but it loses a few features that some developers miss (which is why there are efforts to create application servers for PHP). The PHP versus JSP argument should focus on environment, otherwise the point gets lost.
I actually disagree with George’s statement, “PHP doesn’t magically scale ‘naturally’”. Of course, I understand and agree with the spirit of what he’s trying to say, which is that using PHP isn’t going to make your applications magically scale well, but I do believe that PHP has a natural advantage, as I just described. Rasmus seems to agree with me, and George might also agree, despite his statement.
I think PHP scales well because Apache scales well because the Web scales well. PHP doesn’t try to reinvent the wheel; it simply tries to fit into the existing paradigm, and this is the beauty of it.
How do you define scalability?
Related link: http://mstation.org/parrot_games.php
Author John Littler recently interviewed me about my work on Parrot bindings to SDL. Read more on Mstation.org.
This year I’m attending several conferences. In July I’ll be teaching at the O’Reilly Open Source Convention. In October I’ll be at Perl Whirl, teaching again. If all the stars align I’ll also be speaking at YAPC Europe. From the outside it looks like I’m going to conferences to teach.
But I’m also going to conferences to learn. I’m attending a number of tutorials at OSCON, as well as dozens of sessions throughout the conference season.
Of course there will be play. It’s not possible to say that Perl Whirl in the Western Mediterranean is a purely business trip. Not with a straight face it isn’t.
None of these reasons are as good as the underlying theme of any conference, however. We all participate in something more important, more useful. Conferences offer the highest potential to advance our field. Why? Face time.
Throw two hundred like-minded people in a room and you’re bound to get something good out of it. Conversation is the inspiration for innovation. When you attend a conference the opportunity to contribute to technology is huge.
Conferences like OSCON present a unique opportunity for innovation due to the vast melting pot of expertise and interest. When Ruby programmers talk to Perl programmers the results can be impressive. Similarly, if SPF developers are talking with Microsoft Caller-ID developers… right, that already happened, now we have the New SPF, and a better technology.
To those who are attending a conference that interests them, I encourage you to get out and talk to people. You have unique knowledge and unique interests that can pollinate new ideas. Parties, more than conference sessions, offer a low barrier to entry for talking to like-minded geeks. Consider the sessions as the jumping-off point for new and interesting conversation. Sessions are designed to make you think. Conferences are designed to make you think together. That’s where The Next Big Thing lies in wait.
To those who are not attending conferences, but have the means, I offer a truism. Anyone can sit at home reading books and consume knowledge. Likewise, anyone can approach the world and provide knowledge. Choose to give back, that’s how we grow.
Finally, to employers who use Open Source technologies: sending representatives to conferences such as OSCON is very important. Your company can learn from tutorials and sessions. More importantly, your developers can get face time with the leaders of many projects that are important to you. Have a gripe or an interest? Your geeks can buy someone a drink and talk it over.
To conclude this thought, go to conferences to teach, learn, and play. But also, please, go to make things happen, so that technology can benefit from you today.