Let’s see if we can complete the “webring.”
I think that brings us full circle, don’t you?
I wanted to make a TV muter shaped like a gun. There, that’s free, you can use that.
It doesn’t sound so unique.
Related link: http://moogmusic.com/?cat_id=83
Bob Moog asked me for directions once. Which was pretty ironic, seeing as he blazed the path in my field, electronic music. But we were simply waiting for a bus outside a trade show. When I looked up to see who’d asked, I was shocked that the man whose name is synonymous with synthesis would be talking to me. (“We’re not worthy! We’re not worthy!”) I admitted that I had no clue if we were lined up for the right bus either, but he just smiled.
I think it was at that same trade show that Moog jumped up on stage during a Keith Emerson concert to perform a wonderfully raunchy theremin solo. The theremin, of course, is an instrument you play by waving your hands near two antennas—one for pitch and one for volume. Dr. Moog, grinning devilishly, white hair flapping, played the upright antenna like a…well, let’s just say the mostly male audience roared with laughter.
But that was Moog’s amazing gift: transforming seemingly soulless electronics into living musical instruments. I learned synthesis on a monstrous Moog 55. Even though you could hear the pitch of the oscillators drift across tape splices, even though patches were literally made with fistfuls of patch cords and “saved” by scrawling flow charts on paper, that towering instrument was hypnotic. Some of its circuit boards were built in 1969. Many of us have far newer instruments—especially digital ones—gathering dust in closets.
A few years back, I sat in the front row as Moog gave a lecture contrasting analog and digital synthesizers. No surprise which side he came down on: His life’s work had been building investment-quality analog instruments—unique, creative partners that could support a lifelong relationship. Digital synths, Moog said, were inexpensive, consistent, and offered a vast sound palette, but didn’t really raise his antenna.
So I was surprised later to see that he had written the foreword to Jim Aikin’s book Software Synthesizers. When I read it, I had to admire the line Moog drew, and it made me think again about the digital detritus in my closet. “My current laptop bristles with software emulations,” Moog wrote. “At times, all of this capability has a bittersweet flavor for me. My present laptop replaced a computer that was five years old and hopelessly obsolete. All my software is new too. No matter how wonderful my current software is, I don’t think I should become too attached to it, because I will soon abandon it in favor of the Next Big Thing. But then I realize that today’s technology is not about permanence. It’s about constantly learning and exploring.”
Then last fall, my co-author Kelli Richards bumped into Moog at the AES convention and asked him if he’d consider writing a testimonial for our book, The Art of Digital Music, which was about to go to press. I thought that was an odd choice, but the point of the book is not to praise digital technology; it’s to explore technology’s effect on artists and music. So it could be seen as an extension of Moog’s lecture.
Surprisingly, Moog agreed to look through the manuscript. He sent us a gracious note several days later, saying he’d spent quite a bit of time with the text, but felt the subject was so far outside his area of comfort and expertise that a quote wouldn’t be appropriate.
I wrote back, thanking him for taking the time to examine the manuscript and agreeing that a testimonial from him would have looked strange. I also told him that I’d enjoyed his foreword to Software Synthesizers and noticed that he was endorsing the Arturia software Moogs, so I thought asking had been worth a shot. Moog replied,
Jim Aikin is an old friend. The Arturia is an emulation of an analog entity, and we evaluated and licensed it accordingly. It was a tough decision, at least for me. However, I did understand what it was SUPPOSED to do, which is more than I am able to say about most of the contents of your book.
I still smile when I read that. Because as much as I enjoy digital music-making—for the very reasons Moog mentioned: accessibility, power, and the thrill of exploration—I know in my heart that he’s right. It’s the analog things that really matter in the end.
Bob Moog signed his e-mail “Chief Technical Kahuna,” but to me and untold thousands of others, he’ll always be the consummate artist. Thanks, Bob. Thanks for showing us the direction.
What was your brush with Bob Moog’s greatness?
Technology always affects art, and search technology is no exception. Listening to an NPR profile today of a band called Tilly And The Wall, I wondered if, consciously or not, their style might have been influenced by search-engine optimization. This after all is a band that features a glockenspiel player and a tap dancer.
In the networked world of hyper-abundant music choices, we get ultra-differentiation in marketing. How else to get noticed when Google reports “Results… of about 2,310,000 for ‘two guitars bass drums’”?
Although the universe of hyper-targeted fans may not be huge. According to Google: “Results… of about 22 for ‘glockenspiel NEAR ‘tap dancer’”.
So maybe instead we’ll also see more bands rushing to stake out searches linking music and more mass-interest items. With porn, it’s of course already too late. (With porn, the human brain can’t think nearly fast enough for it not to have already been done.) But how about an indie band called, say, The Barbecue Mitts? Featuring a drummer who plays, say, a Weber Smokey Mountain Cooker and a Ranch Kettle:
Google says: “Results… of about 701 for ‘barbecue grill AND “pop band”‘”.
Now that seems like a sweet spot–
Wait. Good lord, I should have known. They’ve already thought of it. The top-ranked result for ‘barbecue grill AND “pop band”‘ is mega-selling country act Lonestar, as profiled in Country Music Today:
People, we have no chance. I for one welcome our new Marketing Department masters.
Who’s in your band?
Related link: http://www.fatman.com/fatlabs.htm
BeigeBat, who should be admired for, if nothing else, getting away with that name, wrote:
Whither General MIDI?
2005-08-09 18:33:59 BeigeBat
Or is that wither, General MIDI? At one point, you led a crusade (http://www.fatman.com/fatlabs.htm) to improve General MIDI synthesizers. Are they now as good as they’re going to get? How are composers of interactive music using GM now? Or MIDI in general?
Are General MIDI synthesizers as good as they are going to get?
I think you’ll be surprised to find that the concept of GM is still very much alive and supported by cutting-edge synthesizers. The tones and libraries are improving as fast as any tones and libraries anywhere. Here’s a link to a colossal GM set called…Colossus.
So, the tones are FANTASTIC.
Now then. Regarding the stuff that Fat Labs concerned itself with, the compatibility aspect of GM, I don’t think things are going to improve. They’re pretty good, but they are not really splendid, and certainly not up to the level of quality that will be demanded by people who use the Colossus library or anything like it.
General MIDI was never a standard. It was merely a set of suggested practices, and when it first was used, the attack times and volumes of the individual instruments varied so much from one device to another that GM files could not be used from machine to machine with any kind of dependable, musical results. The Catch-22 was that in order to define volume levels, one would have to identify a soundset to use as a standard, and that was politically impossible in the competitive environment of Musical Instrument manufacture. MI companies pride themselves on their unique sounds, and any comparison, even in volume, with a competitor’s tones was frowned upon. In other words, General MIDI was not general when it came out, nor was it likely to become so.
I swear to you, I had no idea that this was the case when I created the first GM soundtrack for a game, The 7th Guest. There was only one GM device in existence…how could I know? When I finally realized the issue, I felt responsible for the fact that a lot of people would hear my soundtrack and expect finished work, but get merely a rough mix. So, along with Team Fat, I started Fat Labs, a company that would provide a compatibility testing service to manufacturers of GM devices. For a while, it was impossible to sell a GM chip to a Korean sound card manufacturer if it didn’t have the “Fat Seal.”
Whee! Nice ego trip, no money. I never really had a good head for extortion.
After years of doing Fat Labs tests (details can be found here) with many major clients, we got enough GM sets close enough to compatibility that I think a certain momentum grew around using the Roland Sound Canvas sound set as a standard for attack times and perceived volumes of instruments. After much friction, Mark Miller, my good friend and then chairman of the MIDI Manufacturers Association, reluctantly mumbled at a meeting something like “OK, George, the Sound Canvas is the standard.” I was never able to get him to make a formal statement to me, but perhaps one exists somewhere in the MMA archives.
So, I would say that GM is still not a standard. To call it such would be an insult to the MIDI Manufacturers Association. The MMA, the standards organization that is officially custodian of MIDI, is very serious and thorough about their standards. When they say that something is a standard, like MIDI, it by-God is a standard, and it works every time. MIDI (not GM) is particularly wonderful in this way.
Though it’s not a true standard, I will say that GM is a very handy tool. In other words, it might be convenient for me to write a tune on a Sound Canvas and pass it over as a GM file to Team Fat’s orchestrator/composer Dave Govett to tweak in Colossus. BUT it is important that we tweak. We cannot depend too heavily on GM files sounding satisfactory when composed on one GM device and then translated to another.
I hope this answers the first part of your question. I will address the rest of your question in a separate entry.
Bring it ON!
Related link: http://www.cycling74.com/products/m.html
Quality of Life and a music question…
2005-08-09 19:32:32 SMR
Many moons ago, there was a great software company called Intelligent Music who made a great piece of software called “M,” which was later distributed by Voyetra. Do you know it? Do you have any industry pull to get the code open-sourced? I actually have a copy of the source, however I can’t (won’t) do anything with it, because while the author gave me a copy, he didn’t have the authority to open up the license. It was GREAT. I MISS IT.
The Fat Man answers:
As far as getting it open-sourced in order to use the program again, I submit that buying a Mac and using the updated, supported version would be less trouble.
Since you loved “M,” I bet that you’ll find some other wonderful, weird music programs that are very much to your taste on the Cycling74 website. Try using the PC version of the incredibly brilliant audio-tool-building-tool MAX to hook up your own audio weirdness and leave the oldness of “M” behind.
Also, if you are such the Protoss (see Starcraft) that you have sworn on George Bush’s grave that you will never buy a MAC, then using MAX on PC lets you save face.
AND Thank you. You have inspired me to write the world’s best (OK, “newest”) engineer joke:
Q: Why did the engineer stick his head into the microwave and turn it on?
A: Because he figured out a way to do it.
Smilies make everything all right on the Internet.
The Oracle offer is still open. Post your best story, or ask me a baffling and incisive Metal-Hits-The-Meat question.
Related link: http://www.oreillynet.com/pub/wlg/7594
The other day, I happened across brian d foy’s post about the need for a better address book for the Motorola Razr. I agree. It also needs a better web browser, and I am hopeful that either Cingular, my carrier, will offer the new Opera mobile or that I’ll be able to buy a decent browser capable of HTML, rather than just WAP viewing. I mean, my PSP can browse the web. Why can’t my cell phone?
In any case, I started searching for little mobile Java apps built for mobile phones that would run on the Razr. I’d like a better email client than the built-in one, which seems to check only one email account at a time. There’s a little Java email client that is available for free from Cingular, but it only supports Yahoo! mail. I could always set up a generic Yahoo! mail address and POP all my other email accounts into it, but…ugh. I need something that can handle a good four accounts.
A better web browser and address book would be nice too. So, I did a basic Google search for “Razr applications” and found a whole slew of sites and programs. The only problem is that nearly every program that fit the bill is no longer supported by its original developer (or relied on a pay service), so it just doesn’t work anymore.
So, as I am new to this whole scene, I was wondering if anyone out there could point me in the direction of some good Java mobile phone apps. Please share in the comments.
What apps have you found for your Razr and where’d you find them?
O’Reilly has published an article of mine on recording an album I produced in Nashville, working with digital audio tools such as Steinberg Nuendo–and great musicians. MP3 examples let you hear tracks in isolation, showing how they come together to form a mix.
Read it here…
Related link: http://songcarver.com/C151287817/E1050038590/index.html
I got a tip about Plasq’s comic-book creation software Comic Life today, and noticed that the company founders were all musicians. So I started exploring their personal sites. On one, programmer Keith Lang poses the intriguing question, “What if computers talked out loud to each other? What if we could learn that language?”
He talks about listening to his modem and wondering what it would be like to address computers idiomatically, the way a composer would write differently for a French horn and a guitar even though the pitch ranges overlap. He even includes sound files and frequency plots of potential conversations.
I’ve read that telephone voice-recognition systems decrease in accuracy as callers try to speak more precisely, because the computers are trained to decipher sloppy speech. Ironically, speaking more precisely is the natural reaction when someone doesn’t understand you.
Seems to me that we already go out of our way to talk to computers on their own terms. Perhaps the answer is to make them more active listeners, asking more explicitly for the information they need and working with a range of answers instead of rejecting all but the “right” one. Ray Kurzweil told me [Amazon “search inside this book” link] that the hardest thing for a musician is to be an accompanist. “To be an intelligent accompanist means really understanding what the singer or violinist is doing, and then trying to strengthen them if they’re reaching a weak point,” he explained.
The message is clear: Not only must computers and humans speak more clearly, both need to listen harder as well.
How do you get your computer to listen to you?
RSS began its life as a really simple way for content providers to syndicate their content and for content consumers to subscribe to their favorite providers. When the blogosphere emerged, RSS really took off. Now, just as its “simple” technology cousin, HTML, provided the underpinnings of the Web 1.0 technology platform, RSS is emerging as a platform for delivering the broadband and mobile ready applications of a Web 2.0 enabled world.
From this vantage point, RSS evolves beyond simple publish and subscribe to become more akin to web services. The concept of a feed is extended to support both a diverse range of data and content types, and feeds can contain rich “payloads.” Furthermore, feeds gain the ability to expose well-formed methods providing the intelligent “glue logic” for building loosely coupled applications. Backed by two application examples, this blog presents a thesis of the key moving parts integral to the RSS platform and how they come together. (Note: this is a continuation of an earlier O’Reilly blog that I wrote and postings on my digital media blog, The Network Garden.)
Beginning With the End in Mind
When talking about innovative technologies, it is sometimes tempting to get sucked into the “WHAT” (is it) and the “HOW” (do you use it) without satisfying the “WHY” (it is a big deal) side of the equation. Therefore, let’s begin by imagining a world where RSS is mature and ubiquitous to see if the sample applications that can be built around this platform are truly compelling.
Sample Application One: The Community Calendar
In this example, I use The Community Calendar to plan my Friday evening in terms of things to do in San Francisco. To do this, I define the activities of interest to me, flag the people in my social network whose recommendations I trust, and the people that I want to connect with while I am out on the town.
To figure out dinner plans for Friday night, I configure and subscribe to an RSS feed from OpenTable.com of Italian restaurants in SOMA (South of Market) that have 8PM reservations available for a party of two. These results are then automatically filtered according to restaurants most consistently tagged as “good restaurant” by food lovers that I trust. As reservations become available (or unavailable), they are updated in my calendar in the form of a list of verified restaurant reservation options. At any time, I can book a reservation with a click. Similarly, the calendar can be extended to include bars and nightclubs that people in my social circle are planning to be at within a given time range of the evening, including whether they will definitely or tentatively be there. I can even set a parameter so that as people in my social circle change their destination plans, I am automatically notified by SMS on my mobile phone.
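To make the plumbing a little more concrete, here is a minimal sketch of the subscription-and-filter step in Python using the feedparser library. The feed URL, the venue names, and the idea that OpenTable publishes such a feed are all hypothetical; this is just the shape of the code, not a real integration.

```python
import feedparser

FEED_URL = "http://example.com/opentable/soma-italian-8pm.rss"  # hypothetical feed
# venues my trusted friends have tagged "good restaurant" (gathered elsewhere)
TRUSTED_VENUES = {"Ristorante Milano", "Trattoria Contadina"}

def open_reservations(feed_url):
    """Return (title, link) pairs for open 8 PM tables at venues I trust."""
    feed = feedparser.parse(feed_url)
    return [(e.title, e.link) for e in feed.entries
            if any(venue in e.title for venue in TRUSTED_VENUES)]

for title, link in open_reservations(FEED_URL):
    print(title, link)
```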
There are interesting opportunities for advertisers to tap into this model. For example, once a restaurant, club or bar has been added to my calendar as a prospective destination, the retailer can selectively entice me to go to their venue with offers of free drinks or specials. Similarly, local content providers can automatically infill this information with reviews, which is of high value both to me as a consumer and to them as highly relevant real estate that they can sell advertising on.
Sample Application Two: Video Clip Channel
In the case of the Video Clip Channel (VCC), I go to a web site where I define the categories of video content that are interesting to me. To do this, I build a structured query of the specific content that I am interested in (e.g., Los Angeles Lakers basketball highlights), the content sources that I want to cull from (e.g., ESPN, LA Times, the video “clip miners” community site) and my content filtering parameters (e.g., aggregate storage limits, automatic removal of duplicate clips or content sources that I consistently fail to view content from, and auto-inclusion of all Lakers clips in my friend Leon’s video content store). VCC then dynamically builds a play list for me, and associates the content with my video player (or devices) of choice before downloading the applicable content locally or readying a streaming version of same. In turn, VCC exposes the play list for syndication to consumers that are interested in my Lakers clip feed channel.
This brings up an interesting point. As video iPods take hold, the presence of a service like VCC enables legions of micro broadcasters to propagate. Building upon The Long Tail model of market demand and demand fulfillment economics, as espoused by Chris Anderson, editor-in-chief of Wired Magazine, micro-broadcasters will fall into three different buckets: content producers (they actually create new content), content aggregators (they stock the virtual shelves with content from a multitude of content sources) and content filters (they categorize the surplus of content available and separate the wheat from the chaff).
Moving from “WHY” to “WHAT”
When considered as part of the same application platform, the above two examples suggest a few things. One is the premise that RSS is best thought of as an intelligent message exchange that can support not only the syndication and subscription of feeds (like headlines and blogs), but the actual delivery of rich content payloads as well.
Two is that payloads can consist of a broad range of data and content types, including blogs, photos, podcasts, videos, flash applications, news content, calendar items, TV programs, local movie listings, reviews, stuff for sale, special offers, product listings, and dynamic lists, like wish lists.
Three is that content items will evolve to expose “handles” such that the available actions associated with a given piece of content can become quite robust. For example, in The Community Calendar application referenced earlier, a calendar item can recognize the event “available restaurant reservation,” pipe the reservation data in from a remote provider based on well-formed parameters, filter that data according to social networking constructs, aggregate the data with third party content like a restaurant review and keep other members of the reservation planning process apprised as plans solidify and change.
Four is that to support these capabilities, RSS will become more like publish and subscribe middleware, gaining the ability to perform message queuing and routing functions, as well as handling data transformations within and between consumer applications and disparate device types.
Finally, in terms of deciding what gets pushed into or filtered out of feed channels and how such content items are categorized and ranked, it seems clear that there is the need for more systematic ways to define and negotiate sharing of information based on user profile data.
The ideal here is to distinguish between user-defined contexts, such as metadata “tags” (e.g., “good restaurant” and “bad restaurant” in The Community Calendar example), ratings, and notes, and autonomous, algorithmically-defined ones, such as newest, most viewed and most downloaded. Among the major portals, Yahoo is taking some innovative, albeit wobbly, steps with My Web 2.0, and del.icio.us has emerged as a surprisingly powerful social bookmarks manager in the startup realm.
A key part of this challenge, however, is working through a model for defining how content feeds are categorized on both the syndication side and the subscriber side of the information pipe. This is non-trivial because approaches that systematically watch “what I do” and intelligently optimize feedback accordingly (such as Amazon’s “People who bought this book also bought this one” feature) have the benefit of being incredibly easy to use. By contrast, ones that can be customized based on direct user feedback are very powerful but more complex to use and more subject to being “gamed” by spammers.
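As a toy illustration of the “watch what I do” style of filtering, here is a sketch of the co-occurrence counting that sits behind recommendations like Amazon’s. The session data is invented, and a real system would obviously do far more normalization and ranking than this.

```python
from collections import defaultdict
from itertools import combinations

# invented viewing sessions: each set holds the clips one person watched together
sessions = [
    {"lakers-clip", "nba-recap", "espn-top10"},
    {"lakers-clip", "espn-top10"},
    {"nba-recap", "trailblazers-clip"},
]

# count how often each pair of items shows up in the same session
co_counts = defaultdict(lambda: defaultdict(int))
for items in sessions:
    for a, b in combinations(sorted(items), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_viewed(item, n=3):
    """Items most often seen alongside `item`, best first."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [other for other, _ in ranked[:n]]

print(also_viewed("lakers-clip"))   # ['espn-top10', 'nba-recap']
```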
Netting it out, for developers embracing RSS as a platform, there is tremendous value to add in terms of flagging, aggregating and filtering the stuff that I care about and the people or sources that I respect. This is one part information management, one part markup, one part ranking and one part algorithmic search. Similarly, there is a lot of value to be realized in terms of the handling logic for autonomously organizing, archiving and deleting the stuff I don’t get around to clicking on, as well as dealing with duplicates.
And from “WHAT” to “HOW”
So where do we go from here? In thinking about how to approach RSS as a development platform, one has to consider Microsoft’s recent announcement that they “get” RSS and are embracing it in a big way within Longhorn/Windows Vista. Specifically, my knee-jerk reaction is that Microsoft is going to focus on getting the “message router” portion of feed management right and on ensuring that hand-offs between a central feed repository and the consumers of those feeds (people and applications) occur in a highest-common-denominator fashion.
They will also ensure that Visual Studio makes it easy to create custom applications around such a model. Microsoft understands that the best way to ensure broad adoption of Longhorn is to hook the developers first, as they drive the ecosystem that will compel consumers and enterprise to toggle over.
In parallel, one would expect that the open source community tackles many of the same issues. Towards that end, I am sure that multiple existing projects are logical candidates, depending on whether you view this as a middleware, content management, or application server problem.
Similarly, I would expect that fierce competition for developer mindshare among Microsoft, Google, Yahoo, Amazon and eBay will continue to push these folks to open up more and more of their APIs. For example, Google, Yahoo and Microsoft all have deep investments in algorithmic search technology and, in the case of Google and Yahoo today (and Microsoft soon), contextual advertising networks. They also all seem to get the premise that by exposing APIs to these functions, new and interesting applications can be born that extend their reach as a platform. My bet is that before too long, the filtration, personalization and ad-serving functions get reduced to an API that a developer can plug into their application.
Finally, it will be interesting to see how much Web 2.0 ends up looking like Web 1.0 in terms of remaining web-browser based (especially as Ajax-driven web application models evolve) versus being powered by a next-generation uber client application or specialized standalone applications.
As a blogger and an entrepreneur, as well as someone perennially on the prowl for better ways to manage online information in this age of digital media, I can say without hesitation that I believe that the emergence of RSS as a platform is one of the remaining gates to Web 2.0 taking flight. Or, as Yogi Berra might have said, the future of the Web is destined to be just like the past. Only different.
Do you buy the thesis of this blog?
ASK ME ANYTHING. OR TELL ME YOUR BEST STORY. Here is your chance as an O’Reilly reader to prove that you actually exist, and are not merely a programmatically generated increase in the site hits that I am supposedly getting. And here is my chance to be sure that my blog is addressing your needs and various perverse curiosities.
Suggested Topic: Where the Metal Hits the Meat
I am particularly eager to address issues in which technology affects our quality of life.
I am considered an expert at audio on computers. I’m not as technical as some, but through my Project BBQ conference (10th year coming up!) many of the best technicians in this topic have honored me with their friendship, so I can refer tough questions to them.
I am also an author and crackpot philosopher, with an emphasis on using the nightmarish true tales from the game industry as parables.
Let’s all learn from your torture!
Hit the Fat Man…let’s see what you’ve got.
First off, OSCON’s new digs at the Oregon Convention Center worked out really well. Aside from one false fire alarm on Tuesday evening, things seemed to go smoothly. Most sessions had enough space for everyone to sit and in general the conference didn’t feel as cramped as last year. The WiFi had a hard time coping with so many laptops during the keynotes, but really, if these were the biggest problems then it must’ve been a good conference.
Attendance at the conference was up — there were plenty of new faces and plenty of new exhibitors. Some cool expo schwag was to be had — the Tux beach towel will soon be a wall hanging in my office — it’s too nice to use as a towel.
I was also pleased with all the talks I attended — I noticed a trend that the people with the most slides and fastest-paced presentations kept my attention better. As I mentioned earlier, some presentations were like getting smacked with an O’Reilly book and ingesting the information via impact-osmosis. I’d walk out of the presentation a little dazed, but the info from those talks really stuck in my brain.
And once again, the first Southwest flight from Portland to San Jose after the conference could be dubbed the “OSCON San Jose Geek Express”. The density of geek discussions on the flight was amazing — I ended up chatting about Patents nearly all the way back to California.
I certainly had a blast at the conference. Meeting up with old friends at the conference and making new ones always gives me a nice warm and fuzzy feeling. Too bad ETech is so many months away. :-(
What were your thoughts on OSCON?
I got a funny call the other day. A guy who’d read my article about turning a Mac into a Dictaphone wondered if he could use a similar technique to make prank phone calls. He wanted to extract dialog from DVDs, trigger the sound bites over a phone to simulate one side of a conversation, and then record the whole interaction for posterity.
I appreciated the irony of being pitched on the idea over the phone. But having done many hours of phone recording and dialog editing for the DVD that accompanies my book, I was intrigued by the technical challenge as well. And of course I recalled the hilarious Flash video in which a sampled Arnold Schwarzenegger calls a hapless Gateway rep. (More celebrity prank calls here and here. You may want to listen on headphones; there’s a lot of cussin’.)
The prankster who called me said he has several Macs, but is not especially technical. So I suggested he simply organize the sound bites in folders on his Mac, switch to column view (by typing Command-3), and then trigger the sounds by clicking on the Play button in the window’s preview pane. He could label different types of sounds (questions, expletives, variations of “yes”) with different colors for quick selection during the heat of a phone call. Later it struck me that he could organize and trigger the sounds in iTunes instead, which would provide many more categorization options.
To grab the sounds from DVD, I suggested he use a streamripper program like Ambrosia Software’s WireTap. Because WireTap records everything playing through the Mac, he could use it to capture incoming telephone audio as well as the triggered sound bites.
I also recommended using an audio editor to trim out the dead air between the time he clicked Record on WireTap and Play on the DVD player. That would provide snappier response when he subsequently triggered the sound bite. GarageBand could work for that editing, and has the benefit of exporting directly to iTunes.
The physical connection between the phone and computer is trickier. To simultaneously get external audio into the phone line and record the conversation, you’ll probably need a hybrid phone tap such as the JK Audio AutoHybrid, plus some adapter cables. Injecting your voice as well may require a mixer or a more advanced tap such as the JK Inline Patch. (It wasn’t clear to me from the AutoHybrid documentation whether the telephone handset mic remains active during the call.)
Another option would be to record the output of an Internet phone service such as Skype. Doug Kaye at IT Conversations has an illustrated tutorial on doing that, and readers offer many additional suggestions below the article. Kaye also recommends using a second machine for recording, which I agree is a good idea, because it simplifies the signal flow and increases dependability. (I used a Korg PXR4 to record my phone interviews, but you could easily use a second computer.) Also note that calling someone who isn’t a Skype user costs money.
Hackers have used computers for tele-teasing at least since the days of the Falwell Game, but with advanced sampling and recording techniques, the creative possibilities are skyrocketing. I don’t know if the Gateway rep really believed the digitized Arnold was a live caller, but it’s clear that a pinch of technology and a dash of psychology can make robot voices seem human.
How are you using telephones and digital sound bites?
Jason Schultz’s “What every open source project should know about patents” was the last session I got to attend at this year’s OSCON, and I’m really glad I did. Even though I knew the most crucial things about patents, there were a number of things that Jason managed to clue me in on. Before I dive into Jason’s presentation, please be aware that these notes only apply to the US. The European and Japanese patent systems are different and many of the concepts and laws presented here may not apply.
Jason defined the purpose of patents as: “To promote the Progress of … useful arts”. A patent grants an innovator a limited-time monopoly (currently 20 years from when the patent was filed) on an innovation, in exchange for making the innovation public. The idea is to reward people for doing research and to further the public good by making the patents available to the public. A patent needs to be novel (never been done before), non-obvious (must be a hard problem to solve), enabled (teaches the public how to apply the patent), and described (the limits of the property right must be described). A patent is composed of claims, a specification, figures and prior art. The claims section is the most important part of a patent — each method in the claims section represents an independent right, and someone who infringes on a patent can be sued on each one of these methods.
Prior art consists of ideas that came before the patent in its field of art, and proves the idea was either:
In order for an idea to qualify as prior art, it must be documented in print or corroborated prior to the filing date of the patent. After introducing the basics of patents, Jason went on to talk about the US Patent and Trademark office that issues patents — and this is where the problems start piling on. The patent office examines about 300,000 patents each year and each patent examiner only gets to spend about 10-15 hours of time on each patent. With such a limited amount of time per patent and limited amount of resources for patent searches, all the examiners can do are quick and dirty reviews of patents. No wonder that most patents are granted!
To make matters worse, if someone knows that a patent exists and infringes on it anyway, they can be held liable for three times the damages they would owe had they not known about the patent (willful infringement). Given this twist, companies will send out letters to other companies in the same field when they are granted a patent. Using this sneaky scheme, the company that was granted the patent informs others of its patent; those who receive the letter then know of the patent’s existence and are thus automatically exposed to triple damages if they infringe. The overall effect is that companies try not to educate themselves about patents, for fear of being held liable for greater damages.
And as if things weren’t bad enough, Jason dove into the lexicographic games that patent applicants play in order to trick the patent office into issuing another patent. In US patent 5,728,005, a simple kids’ slide is described as a “Slide with lateral side channels”. In US patent 5,842,927 the same slide is described as a “Children’s slide with integral raceways”. Both patents were issued and are fully valid, even though the first should have served as prior art against the second. Intentional obfuscation on the part of patent applicants greatly complicates the patent examiner’s job, making it nearly impossible for an examiner to do a thorough job on any one patent.
Finally, Jason presented a few tips on what open source projects can do to protect themselves a little:
While Jason’s talk was a great intro to the perils of patents in the open source space, you should seriously consider seeking the help of a patent attorney if you think that you have some patent problems to worry about.
If you have other important insights to patents and how they affect Open Source, please speak up!
Michael Radwin’s “HTTP Caching & Cache-Busting for Content Publishers” talk was very much like being smacked with an O’Reilly book and absorbing its contents via impact-osmosis. A little intense, but very informative. This talk was very similar to Ask Bjørn Hansen’s talk — many slides at a lightning pace — one blink and you would’ve missed something important.
This talk was aimed at people who need to be aware of HTTP caching — both for creating web sites with more/better/secure functionality, as well as being aware of caching proxies. Cache proxies are frequently used to prevent duplicate fetching of the same content to reduce network load. For instance, AOL uses cache proxies frequently to reduce their overall bandwidth use by reducing the number of times that a given page gets fetched from the server. Proper cache handling is important to make sure that dynamic web pages aren’t tripped up by caches and that caches can be effective, which in turn will reduce the load on your own site.
Michael broke web content into three categories:
He suggested that each of these types of content should be treated differently when considering caching strategies. He suggested five strategies for dealing with these types of content:
This talk covered many more details than I can really convey here — if you’re interested in finding out all the details on how to deal with caches, Michael suggested picking up a book on the topic.
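For readers who want a starting point before reaching for that book, here is a rough sketch, not taken from Michael’s slides, of how the standard HTTP/1.1 Cache-Control header might be set differently for different classes of content, using Python’s built-in wsgiref server. The path conventions are invented.

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path.startswith("/static/"):
        # images, CSS, JS: let proxies and browsers cache for a day
        headers = [("Cache-Control", "public, max-age=86400")]
    elif path.startswith("/private/"):
        # per-user pages (shopping cart, account): keep out of shared caches
        headers = [("Cache-Control", "private, no-store")]
    else:
        # frequently changing pages: force revalidation on every request
        headers = [("Cache-Control", "no-cache, must-revalidate")]
    headers.append(("Content-Type", "text/plain"))
    start_response("200 OK", headers)
    return [b"hello\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```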
Have you found caches to be useful in your web work?
This year’s OSCON was packed with information, people and quite a bit of good times. As a result, I spent very little time looking at my own computer other than to take notes and chat on IRC. So I’m just going to take bits out of my notes and lump them all here.
“Free Software has no off-switch!” -Nat
“If i was going to name an evil programming language, I wouldn’t name it after a snake.” -Larry Wall
“All good Americans know that good plans come in 4-year versions, not 5.” -Larry Wall
“Someone who wants to run Windows on servers should first be made to show what they know about servers that Google, Yahoo and Amazon don’t know.” -Paul Graham
“Meetings are wonderfully relaxing, because they count as work. Just like programming, but much easier!” -Paul Graham
“Being a professional Lara Croft impersonator in Russia, you have access to better armaments.” -Damian Conway
“Pretty graphs are like ‘manager porn’. It makes them so hot!” -Anthony Baxter
“How do you own a small business? Start with a big business!” -Randal Schwartz
Interesting programs/technologies/web pages to check out:
CodeZoo now has reusable components for Ruby.
Shtoom is an open-source VoIP client written in Python, along with Doug, a SIP server application framework.
Bacula looks like something we could really replace our expensive and complicated backup software with.
Treemaker is a program to take basic drawings and turn them into foldable one-page printouts to make origami.
Threatnet is something I hadn’t heard of until Randal Schwartz’ talk on spam.
BSD is not something I’ve spent much time with (unless you count Mac OS X); however, I am seriously rethinking that, especially for security reasons. Jason Dixon gave a really cool presentation on failover firewalls using OpenBSD and CARP, the Common Address Redundancy Protocol. His slides aren’t available yet, but he has an article here in Sys Admin Magazine.
Jeff Waugh showed off just way too many cool new things coming in GNOME:
Possibly the coolest thing by far at the conference was HowToons, one-page PDF cartoons aimed at 5-15 year olds showing them how to build useful things, toys and even practical jokes (DIY whoopie cushion anyone?)
Much has been said about the new location this year for OSCON at the Oregon Convention Center. While I don’t think it’s the “new COMDEX” I did like the grown-up feeling the conference gave off this year.
It is true, though, that some of the smaller comfy community feel of the past years was not there. I think something that would go a long way towards fixing that would be to have a large social area where people can gather to talk, eat, sit, type, etc. Somewhere with lots of couches, chairs, tables, close to refreshments. There was a very tiny version of this by the Gibson guitar booth on the exhibition floor - several couches, some tables and chairs, and snacks - all in the same area.
Speaking of Gibson, I’m extremely upset that I didn’t win a guitar… but kudos to Nat for getting them to exhibit and give away guitars! The Gibson booth was very nice, with about 10 guitars you could sit down and play through effects into headphones (so no one else has to hear you attempt “Stairway To Heaven”). I spent a good bit of time there refreshing my bad chording skills.
The Exhibit Hall was big this year! Lots of vendors, large and small, and a big row of non-profits. Lots of schwag this year too! T-shirts, primarily, but also other cool items like USB hubs, keychain drives, pocketknives, etc.
Lots of parties and receptions this year. I made it to five of them - three in a row on Thursday evening - which was probably a mistake. Either that or I should have had less beer. But definitely thanks go out to Stonehenge for throwing yet another cool party, this time at an arcade with free classic videogames and pinball machines. Also good receptions from Apple and MySQL, and it wouldn’t be OSCON without Nat’s party. I just wish he would provide free tomato juice the next morning…
I just got out of Ask Bjørn Hansen’s “Real World Scalability” presentation and my head is still spinning. Ask’s lightning-fast slides zipped by in a blur of practical tips for speeding up your web site. This presentation, in contrast to Theo Schlossnagle’s scalability tutorial, was packed with real-life advice. Theo’s perspective focused more on the planning and general rules behind scalability, whereas Ask peppered us with details on what to consider and how to make changes to your site.
To set the stage for his talk, Ask talked about vertical scaling, which entails buying faster hardware to make your site run faster. Simple math shows that for the price of one really fast machine you can buy hundreds of commodity machines that collectively have much more power than the one really fast machine. The key to scalability lies in breaking your Internet site into small independent chunks that can be farmed out to many commodity hardware machines.
The first real-life bit of advice focused on caching — if you can stop regenerating often-requested pages and cache the output, you can make dramatic speed improvements to your site. First, check your database and web server logs to see which pages are requested most often, and attempt to cache those whole pages first. Some pages present a bigger challenge to cache than others — pages that contain per-user information (e.g. a shopping cart) can’t effectively be cached in one piece. Each cached copy is useful for only one person — it’s more effective if each copy can be utilized by as many requests as possible.
The most drastic way to do this would be to take all of your dynamically generated pages and write them to a cache, so that pages only get regenerated when the underlying data changes. If that is not possible, you can take the data chunks that make up the pages and cache those chunks. The next time a page is requested, it can be regenerated from the cached data chunks. This works really well for slow database queries — pulling the data from the cache can be many times faster than re-querying the database.
Caching your pages could be done in process memory, but that data isn’t shared with other processes. Shared memory works better, but it’s still not shared between machines. The best solution for caching is memcached, which was developed by Brad Fitzpatrick of LiveJournal. Memcached uses a server model to cache arbitrary bits of data — accessed over the network, a memcached server can be queried from many client machines. This makes memcached the most flexible web site caching solution around today.
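As a concrete example of the cache-aside flow Ask described, here is a minimal sketch using the python-memcached client (the memcache module). The query function and the key name are made up; the point is simply the get-or-compute-and-set pattern.

```python
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def run_slow_database_query():
    # stand-in for an expensive SQL query
    return ["story 1", "story 2"]

def get_front_page_stories():
    key = "front_page_stories"
    stories = mc.get(key)                  # cache hit: skip the slow query
    if stories is None:
        stories = run_slow_database_query()
        mc.set(key, stories, time=60)      # cache the result for 60 seconds
    return stories

print(get_front_page_stories())
```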
Ask’s next round of advice focused on scaling databases — his first point was to never rely on one server for all your database needs. If your web site does 99% reads and only a few writes, then this is not so critical. But for a lot of other sites that have more write needs, using one master database server for writing data and a network of replicated read-only database servers spreads the load onto multiple servers.
If your site still needs to have more write power for any one server, you should consider partitioning your data. Identify independent chunks of data and store these chunks on separate servers. Make sure to select data chunks so that you don’t need to do database joins between chunks. If you can do this with your data, you can have multiple write database servers and even more read only replicated servers. You see the pattern here — each of Ask’s pieces of advice aims to spread the work onto more machines, rather than forcing vertical scaling.
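Here is a rough sketch of the read/write split in Python. The host names are invented and the actual database connection is left as a comment; the point is simply that SELECTs can fan out across the replicas while everything else goes to the single master.

```python
import random

MASTER = "db-master.example.com"
REPLICAS = ["db-read1.example.com", "db-read2.example.com", "db-read3.example.com"]

def pick_host(sql):
    """Route a statement: SELECTs may use any replica, everything else hits the master."""
    if sql.lstrip().upper().startswith("SELECT"):
        return random.choice(REPLICAS)
    return MASTER

host = pick_host("SELECT * FROM articles WHERE id = 42")
# conn = MySQLdb.connect(host=host, ...)   # real code would connect here
print("routing query to", host)
```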
Next, Ask focused on storing your users’ sessions (e.g. user login information, shopping carts, etc.). The best method is not to store the entire user profile or a whole shopping cart in a session, but to store only its associated id — keep the actual data in the database, where it can be shared with other machines. Ask suggests that the golden session balance is to store important information in the database and not-so-important data in the session.
A good way to manage sessions is to keep the data in the session light and to use a cookie (or a few) to store it. Never keep state on the server — keep everything stateless. This allows any web server to handle a request for a user, rather than pinning the user to the one server that holds their session.
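A tiny sketch of the “only the id lives in the cookie” idea, with a plain dict standing in for the shared database or memcached store:

```python
import os, binascii

session_store = {}   # stand-in for a shared database or memcached

def new_session():
    sid = binascii.hexlify(os.urandom(16)).decode()
    session_store[sid] = {"cart": []}
    return sid                      # this id is all that goes into the cookie

def add_to_cart(sid, item):
    session_store[sid]["cart"].append(item)

sid = new_session()
add_to_cart(sid, "sku-1234")
print("Set-Cookie: session_id=%s" % sid)
```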
Ask’s last major point was using light processes for light tasks. You don’t want a heavy Apache process that contains a scripting interpreter (e.g. Perl, Python, etc.) serving out a 49-byte single-pixel GIF file. The best thing to do is to split your web servers into lightweight front-end processes that serve static files and images, and back-end servers that do the heavy lifting of accessing the database. This is one change that can be accomplished easily with Apache 2.0 (or mod_proxy) as your front end and Apache 1.x (or 2.0) as your back end.
Between Theo’s tutorial and Ask’s sessions, I feel that my scalability curiosity has been satisfied — well satisfied. Time to go and pay attention to some of the other cool presentations. And with that, I’m late for the next cool session…
Do you think these tips are useful?
Related link: http://randomshow.com/media/video/20050804a.mov
What other approaches could be taken to communicate this vision?
Robert ‘R0ml’ Lefkowitz’s “Semasiology of Open Source (part II)” just wrapped up, and upon reflecting on his presentation I’m both a lot clearer about what he presented last year and utterly confused about both of his presentations. Against my better judgment, I’ll try and summarize what he presented this year and last year.
I attended this talk last year and was fabulously entertained by his presentation, but in the end I was confused about where he was going with his talk. It turns out that he misjudged the length of his presentation and actually only managed to present half of it — hence my confusion. So this year he finished his presentation from last year and I’m… erm. confused. Still.
So, I managed to track down R0ml between sessions and asked him to elaborate about the overall point of his talk and how all his points tie together. His response: “I guess that’s really for the 3rd part — consider this as a journey with many interesting points”. Phew. I feel a little better now that I’m not trying to tie together all of his excellent, but disparate points.
R0ml first illustrated the concept that:
“Programs must be written for people to read and only incidentally for machines to execute.”
This certainly seems contrary to common practice, but 70% - 80% of development is maintenance, and 70% - 80% of maintenance is reading code. That makes reading code roughly half to two-thirds of software development — and thus it follows that code should be written to be readable by humans. Much along these lines, R0ml went on to point out that in 1984 Donald Knuth claimed that programs are works of literature:
“The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style.”
Good point. I think open source programmers should probably be more concerned with the readability of their code — if they care to have their project attract other hackers to contribute. Personally, I need to pay more attention to this valuable point.
Next was R0ml’s point on the evolution of reading and the public performance of … source code?? R0ml asserts that copyright protects against unauthorized copying, derivation, distribution, and public performance. But what does it mean to publicly perform source code? In R0ml’s eyes that means reading code out loud, and to check the accuracy of this he digs into the history of reading.
All early reading involved very simple code recognition, since early writing was primitive. The early function of reading was transmitting information not receiving it as we do today. To read meant to read out loud — the concept of silent reading for one’s own enjoyment didn’t come along until much later. So writing was an aid to the reader in the process of passing down information that would’ve normally been passed on orally.
Later on, spaces and punctuation were invented and words became recognizable units, as opposed to an undelimited sequence of characters that had to be read out loud. Silent reading became possible and was no longer a public performance. So, his thesis that reading source code out loud is a public performance doesn’t hold water.
In the end, R0ml stated:
programming == literature, and
reading != performance
This was in fact a journey with many interesting points — I’m looking forward to part III next year. After all, at some point I’d like to make sense of it all.
Did this talk make sense to you? Please do tell if it did!
Ever since Chris Anderson introduced the Long Tail last year, long tails have been popping up everywhere. Most recently, Kim Polese used the long tail in open source to illustrate the concept behind her new company Spike Source. Kim pointed out that open source software also has a long tail: The most prominent projects like Linux, FreeBSD, Apache, perl, python and Mozilla get a lot more mind-share and attention than the smaller projects in the long tail.
Projects in the long tail have fewer eyeballs to spot bugs in the software. This is where Spike Source comes in — they are working to create a collaborative test infrastructure that allows corporations to collaborate on testing open source projects. It turns out that a lot of large companies are using tons of smaller open source projects and therefore spend a lot of time testing and working out interdependencies between projects. Spike Source aims to reduce the amount of time that companies spend doing that and leverage the community to collaboratively tackle this problem.
Since I am a fan of collaborative projects, Spike Source sounds really cool and has the potential for bringing open source into more enterprises. However, my fascination focuses on the long tail meme. Since its introduction, it’s been used in many places (and perhaps overused) to analyze markets (books, music, open source, etc.). I think this simple meme allows people to change their perspective on well-established markets and opens people’s eyes to the fact that there is life in markets that were previously underserved.
Applying the long tail to open source yields a number of interesting observations. Long tail projects have fewer users (eyeballs) and therefore less exposure. And less exposure translates into fewer people reading/using the source code and therefore finding and fixing fewer bugs. Fewer people using a project also means that there are fewer people who are interested in hacking on the project. I find that people are more interested in hacking on a project when there are visible signs of other people hacking on the project. Activity begets more activity.
As projects become more popular, they move away from the tail and towards the head. And as they move, they pick up exposure and hackers willing to contribute to the project. The challenge is putting in effort to make progress so the project moves further towards the head.
Don’t get me wrong — the characteristics exemplified by the long tail have been there all along. The long tail merely presents a new way of looking at these markets and analyzing their behavior. Personally, this meme helps me understand a number of things that I didn’t grok previously.
What do you think about applying the long tail to open source?
In this morning’s tutorial Brian Fitzpatrick introduced the Subversion version control system and compared it at great length to the venerable CVS version control system. Brian stoked his presentation with a lot of history that explains how Subversion arrived at its impressive set of features. Subversion came into being when a number of software developers were fed up with CVS and wished to create a system that improved on it. The goal of Subversion is to support the same features as CVS initially and then improve on the system to eventually surpass and displace CVS.
Brian outlined the current problems with CVS since many small projects use CVS successfully and have never really run into problems with it. But, many corporations that have attempted to run CVS on a larger scale have run into serious problems and performance bottlenecks. CVS works only on a file-by-file basis and during a commit one file might commit OK whereas other files will fail — it’s not very transaction oriented. CVS has problems dealing with binary files, wasn’t designed for network use and cannot act directly on a repository — a local copy is always needed. If you look at the features and strong points of Subversion you can see the stark contrast to CVS.
The first and most drastic improvement in Subversion is the concept of a global revision number that gets updated each time any file is checked into the repository. CVS users may not be comfortable with a single revision number for a repository, but this is really Subversion’s strength. To underscore the importance of this revision number, Brian was wearing a shirt with r8810 emblazoned on it. When an audience member asked what it referred to, Brian jumped up with joy — he was waiting for someone to come and ask that question. It turns out that r8810 is the global revision number at which Subversion itself went to release 1.0. A good way to drive one of the most important points home — well done!
The global revision number comes from Subversion’s atomic commit feature that commits files in a single transaction: either all files are committed, or none are. This prevents collisions from happening if someone else has changed the repository at the same time that you are committing files.
The next improvement deals with the handling of binary files — CVS stores binary files whole, and for each new revision a new file is stored. If you make a 2-byte change to a 10MB file, CVS would gladly suck up another 10MB of disk space. Subversion only stores the difference between the two files, regardless of whether the file is text or binary. The size of the Subversion repository grows in proportion to the size of the changes — not to the size of the files contained in the repository.
The Subversion creators designed the system to work over the network from day one, and therefore whole files will only be sent across the network when a repository is initially checked out. After that, only diffs between files are ever sent across the network to reduce the overall network traffic. This extends even further to allow a lot of actions to occur without a network connection to the main repository. This idea really excites me — I can’t count how many times I’ve sat on a plane wanting a diff against the repository, but being left high and dry with no net connection.
Subversion will also never change the content of your files — not even to expand inline keywords such as $Id:$ or to change the line endings on your files. All of these things are done on the client side, and never inside the repository. Subversion can even allow you to move files inside the repository gracefully — it keeps track of the locations where a given file has been. This is a profound change when compared to CVS — with CVS once you checked a file in, it was there permanently. Moving files in CVS will cause all sorts of problems and is not recommended — I’m really glad that this is possible in Subversion.
To address CVS’ performance issues, Subversion introduces the concept of cheap copies. Cheap copies require very little space and can be made quickly, in constant time. Based on this concept, Subversion provides its branching and tagging features. A branch in Subversion is simply a copy of the trunk that can be modified, and a tag is a copy that is never modified. Applying a tag to a repository in CVS was a slow process, since it needed to touch each file in the repository. Since Subversion uses cheap copies for tags, tagging becomes a constant-time operation and a tag can be applied to a whole repository nearly instantly. Brian pointed out that tags may not really be necessary anymore, since the global revision number can be used instead — a tag is now just a more human-friendly way to express the global revision number.
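To make the cheap-copy point concrete, here is what branching and tagging look like in practice: the standard svn copy commands, wrapped in a small Python script only so the example stays in one language. The repository URL is made up.

```python
import subprocess

REPO = "http://svn.example.com/myproject"

# create a branch: a cheap, constant-time copy of trunk
subprocess.call(["svn", "copy", REPO + "/trunk", REPO + "/branches/feature-x",
                 "-m", "branch for feature X"])

# tag release 1.0 the same way: a copy that is simply never modified again
subprocess.call(["svn", "copy", REPO + "/trunk", REPO + "/tags/1.0",
                 "-m", "tag release 1.0"])
```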
There are many more facets to Subversion that Brian covered in the tutorial — if you’re interested in delving into Subversion for your own projects, you may want to check out Subversion’s metadata features that allow the user to associate key=value pairs with directories and files. You’ll also find that Subversion’s command line syntax is not completely unlike CVS’ syntax, but improved and simplified. (really, who would think to use the update -j command in CVS to merge a branch?)
However, when setting up Subversion you’ll need to consider if you want to run it as a standalone system or if you want to run it inside an Apache installation. Subversion has many more configuration options and network protocol options (e.g. WebDAV) which make it considerably more flexible than CVS. This also means that you’ll need to put more thought into how to set up your first Subversion repository.
It’s clear that the Subversion team carefully analyzed CVS and all its shortcomings and then set out to create a replacement that surpassed all aspects of CVS. Brian certainly did a good job of presenting Subversion and why someone should use it. I personally plan to investigate how much effort it will be to move MusicBrainz over to Subversion. I’ve been frustrated with CVS many a time.
Have you made the Subversion switch? If so, what do you think?
One of my favorite presentations from last year was Theo Schlossnagle’s presentation on Whack-a-mole, so when I saw him giving a full tutorial on scalability this year, I had to go and check it out. And this year I wasn’t disappointed either — Theo presented a solid tutorial that exuded his practical experience in this field. Of course it’s impossible to summarize four hours of a tutorial in a blog entry, so I’ll try to summarize Theo’s three simple rules that he applied repeatedly in his presentation:
Theo kept reaching back to these points throughout his talk. Another point he stressed repeatedly was the use of a clearly established release procedure: unless the team that rolls out new software onto production servers has documented procedures to follow, mistakes will be made. And as systems grow in size, the likelihood of fatal errors increases dramatically. To spare yourself from this fate, document your release procedure and use a version control system to keep track of everything you do. Again, it seems like common sense, but often it is not.
Aside from general rules, Theo covered a number of open source solutions that can eliminate the need for expensive dedicated hardware boxes like fail-over switches and load balancers. My favorite example was the use of the Whack-a-mole toolkit for when one machine fails. The toolkit allows an architecture to determine when a server fails and automatically reshuffle the work that the dead server covered. Using whack-a-mole allows people to save money by not buying expensive redundant/fail-over systems and only use commodity hardware.
Another great tool that Theo covered is the spread toolkit that allows multiple machines to easily communicate in a coherent manner. Spread allows machines to create a communication channel that is shared and sequenced between all the computers that have joined that channel. Each listener in the channel receives all of the messages posted to the channel in the same order as everyone else — this is an important feature that allows this toolkit to be used in mission critical high availability setups. My favorite application of this toolkit is to create a multi-server logging facility, where multiple machines write their log files to a spread channel and one machine writes a correct interleaved log file for all the machines.
Theo’s tutorial set the stage for people who are facing scalability issues — he presented a lot of thoughts and hard-earned experience from his extensive past. Scaling issues are generally very dependent on the system, and having a general set of rules to consider has given me a framework in which to think about scalability in my own projects.
Have you faced and overcome scalability issues? Speak up if you have other general rules that should be considered.
Once again, OSCON started in California for me — I ran into geeks headed to OSCON at the San Jose airport and started chatting about the conference. Right away we started speculating about its new location at the Convention Center, leaving behind the cramped Marriott Hotel.
Once I got to Portland it was clear the new location is going to be a winner — the Portland MAX stops right in front of the convention center, which makes OSCON by far the easiest conference to attend. The convention center looks brand new and is plenty spacious, which is a great improvement over last year.
A brief look at the map of the exhibit hall confirms that everything is going to be bigger this year. The show floor will be huge compared to previous years, and it looks like a lot more companies will be exhibiting. There also seemed to be a lot more pre-conference chatter and invitations to parties flying about — it all shapes up to be an exciting year.
When I attended the afternoon tutorial, I was really pleased with the size of the rooms — I think we’ll see fewer people standing in the back of the room for popular presentations. And given how filled the rooms were, I think attendance of OSCON may also be looking up.
The first day started strong — I’m looking forward to taking it all in. Stay tuned, I’ll be blogging here all week.
Made it to OSCON yet? What do you think of the new location?