When a local elementary school produced a play about the California Gold Rush this spring, I volunteered to help with the audio—and got surprising results.
My response to David Battino’s post on dealing with USB dongles:
Yeah! I suggest they stop using dongles!
Seriously, I don’t understand the recent resurgence of copy-protection dongles. While the nominal reason is to prevent copies, I feel that what dongles do is prevent many potential users from evaluating and purchasing the program.
Let’s face it: Not everyone can afford to plunk down $300, $500 or $999 for a piece of software, only to find that it’s not what they need.
As I receive more high-end audio software for review, the copy-protection dongles have been multiplying annoyingly. Swapping them in and out as I launched various programs and plug-ins was becoming a hassle. In one case, a program crashed the computer when I inadvertently quit it while its dongle was unplugged.
Thanks to a tiny USB hub I picked up at the Game Developers Conference last week (one of the better pieces of swag I’ve received), I can now plug once and get to work without doing the dongle shuffle.
It sure would be nice to have a universal dongle, though, or an even smaller hub. Suggestions, anyone?
I’ve been listening to Moments From This Theater: Live, a wonderful album by the great Memphis/Muscle Shoals songwriters Dan Penn and Spooner Oldham. It’s just the two of them, accompanying themselves on acoustic guitar and electric piano, recorded at 1998 shows in Ireland and England. They play classics including “Dark End Of The Street”, “Do Right Woman, Do Right Man”, and “I’m Your Puppet”, which have been recorded by, among many others, The Box Tops, Aretha Franklin and Percy Sledge.
I’m struck once again by how stripping away production often reveals the greatness of the song within. For example, “I’m Your Puppet”. I’d never really thought much about that song; for me it was just a piece of the oldies background—obviously catchy and well crafted, but fluffy.
GDC, formerly known as the Computer Game Developers Conference, starts in earnest today, and there are scads of audio presentations. Although I’m not a gamer, I always come away from the conference with fascinating insights on the differences between “linear” and interactive music. For instance, in a game, the composer is often more akin to a sculptor than a painter, because the listener’s actions affect the playback—sometimes on the note level. It’s like walking around a sculpture and seeing it from different angles; the sonic sculptor has to account for all the musical possibilities.
I said “starts in earnest” because GDC has actually been going on for two days already, with an initial focus on mobile games. It’s intriguing to think that the synthesizers in cell phones are now as powerful as commercial keyboards from a few years back. (See “Could Ringtones BE More Annoying?!”)
With the spring rush, I still haven’t had a chance to blog about my audio discoveries at NAMM or the Windows Vista Audio Summit, but now there are even more dots to connect, so this could get interesting. Let me know if there are specific GDC sessions you’d like to hear about.
MP3.com just published an interesting article on optimizing the battery life of digital audio players. The article concluded that copy-protected Windows Media Audio tracks knocked several hours off the playback time on each tested player.
Savvy readers then pointed out that because WMA is more highly compressed, it naturally requires more processor power to play back. A more conclusive experiment would have been to compare encrypted and non-encrypted WMA files, not WMAs and MP3s.
Still, given the dramatic difference in battery life with different playback formats, I think manufacturers and reviewers should test and publish a range of playback specs.
Contemplating dying batteries made me wonder how much of my life I’ve lost typing in software serial numbers and filling out nosy registration sites. Digital Rights Management (DRM) is a drain in more ways than one.
On the other hand, it makes possible some exciting scenarios like one I witnessed at NAMM, when I introduced Steve Turnidge of Weed to David Zicarelli of Cycling ’74. It turned out they were both fans of Roger Manning, Jr., so Turnidge whipped out his high-speed thumb drive and transferred several hundred megabytes of rare Manning tracks to Zicarelli’s computer. Because the tracks were “Weedified” WMA files, Zicarelli will be able to play them three times each before deciding if he wants to purchase them. If he does, Manning, Turnidge, and Weed will all get a cut.
Although Weed demands some form-filling itself, it essentially rewards people for sharing music, which strikes me as a better approach than punishing them.
For over 50 years, recording artists have been bound to one primary career path: getting signed to and distributed by a major record label. Since the Internet age kicked into high gear, the playing field has been leveled. With avenues like dedicated band web sites, iTunes, and MySpace, artists are finding more ways to connect directly with their fans — and keep more of their own proceeds. This means they can sell fewer CDs and make as much if not more money than they could have tied to a label deal, because they keep the majority of the proceeds (in some cases 90-100%) versus the 10-15% they might have received through a deal with a major record label. There is still room for a label to partner with an artist in terms of marketing and distribution, but the artist now has options and no longer needs to give up their masters or publishing for a label deal. I will explore more specific examples of emerging models that empower artists (and allow for a more direct connection with their fans) in future blog entries.
The QWERTY keyboard is a tricky interface for music-making, but many inventors have come up with equally tricked-out ways to overcome its limitations. Here are a few of my favorites.
I’ve long been a fan of Mixman, which turned typing into synchronized grooves. But simply triggering samples doesn’t allow much expressivity, so the company eventually designed its own input controller, the DM2:
You can trigger samples from the QWERTY keyboard, but the light-up buttons on the Mixman DM2 controller are much more fun. (Source: Create Digital Music.)
Down in the mad-scientist hall at NAMM a few years back, I spent 15 minutes talking with Leon Gruenbaum, inventor of the hippest musical computer keyboard ever, the Samchillian Tip Tip Tip Cheeepeeeee:
All together now: “Tip Tip Tip Cheeepeeeee!”
The keys on the Samchillian transmit relative MIDI notes rather than absolute ones, which means that every time you type, say, a K, the pitch on a connected synthesizer will go down by two semitones. A comma might shift the pitch up a perfect fifth. That makes it easy to play furious arpeggios, as Gruenbaum demonstrated.
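The relative-note idea is easy to sketch in a few lines of Python. In the sketch below, only the K and comma intervals come from the description above; the other key assignments (and the function name) are my own hypothetical additions for illustration, not Gruenbaum’s actual layout:

```python
# A sketch of the Samchillian's relative-pitch concept. Only the 'k' and ','
# intervals come from the description above; the rest of the mapping is
# hypothetical.
INTERVALS = {
    "k": -2,  # down two semitones, per the description
    ",": +7,  # up a perfect fifth
    "j": +1,  # hypothetical: up a semitone
    "m": -7,  # hypothetical: down a perfect fifth
}

def play(keystrokes, start_note=60):
    """Convert keystrokes to absolute MIDI notes, starting from middle C (60)."""
    note = start_note
    notes = []
    for key in keystrokes:
        note += INTERVALS.get(key, 0)  # unmapped keys repeat the current pitch
        notes.append(note)
    return notes

# Two whole steps down, then a leap up a fifth: a furious arpeggio in miniature.
print(play("kk,"))
```

Because each keystroke is an interval rather than a fixed pitch, the same typing pattern produces the same melodic shape from any starting note, which is exactly what makes those furious arpeggios so easy.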
But the standard QWERTY keyboard still has potential. How about velocity-sensitive keys on which typing harder would shift letters to bold or ALL CAPS? Creative Labs did something similar by bonding a piano-style keyboard to a QWERTY one in the Prodikeys. At the other extreme, there’s the astonishing Optimus, in which each key has its own OLED display.
All of which makes me wonder: Is the Dvorak keyboard layout inherently more musical?
Looking to do more with FireWire audio on Windows? CEntrance has released a free beta version of its universal audio driver. Among the features are device aggregation (which lets you use multiple FireWire audio interfaces with a single program) and multi-host capability (which lets multiple programs address a single interface). Those were among the top requests from developers at last week’s Windows Vista Audio Summit, which I’ll cover in an upcoming blog.
I hadn’t heard of CEntrance until recently, but it turns out that the company has written audio drivers for a number of manufacturers including Alesis, DigiTech, Lexicon, Mackie, and Shure. The commercial version of the universal driver ($79.95) should be out at the end of this month.
Alas, ETech is over. Fortunately I’ve had a chance to catch up on some much-needed sleep and a few moments to reflect on this year’s conference. Even during the first day I noted the improved gender balance — computer science and its related fields are typically male-dominated. Sadly, that’s also the case at ETech, but over the last few years the balance has improved significantly. I think more women attended ETech this year than ever before, and the gender balance is probably better at ETech than in the computer science field at large. But keep in mind that my observations are far from scientific — you’re only getting my off-the-cuff impressions of the general attendance.
Another impression concerns the conference theme and the delayed response to it. This year’s conference was titled the Attention Economy, and many of the presentations featured the word “attention” in their titles. I was genuinely curious about the attention economy and what it all amounts to, so I attended a fair number of the attention-themed presentations. However, many of the talks briefly tied themselves to the attention economy before moving on to a less-related main focus. Consequently, I still feel a little shaky on the basics of the attention economy.
While we’re on the topic of conference themes, I found it mildly amusing that a number of remix and mash-up themes were present this year. Remix was the theme for last year, yet this year Yahoo was passing out “Mash up or Shut-up” stickers and shirts, and a number of presenters talked about building mash-up applications just prior to ETech. Ray Ozzie from Microsoft started it when he talked about how his team had two weeks to mash up something cool with Windows Live. Other presenters followed suit and talked about how they mashed up or otherwise remixed their applications in the two hours prior to their talks. Then someone else said they did a mash-up in 20 minutes. In a sense, I think last year’s theme was better represented than this year’s. I guess I’ll have a number of cool attention-focused talks to look forward to next year. :-)
I also promised to touch on the attention economy in my wrap-up, but I don’t really have much more to present. I attended R0ml Lefkowitz’s talk on Root Markets to find out more about the company and to further my understanding of the attention economy buzz. R0ml managed to outline a few more details about how they are approaching this new field, but that didn’t bring many of the issues I’m concerned with into focus. He didn’t address privacy and only touched on data ownership — many of the people I spoke to about the attention economy expressed concerns about their data being used for negative purposes rather than positive ones. We already have enough unresolved privacy issues without the attention economy, and I fear that things will only get worse.
Now that Google has started censoring its search results in China and sits on the verge of becoming evil, who is left to trust? I personally do not feel comfortable with any corporation using my private data. We hear about data losses frequently, so why should I trust someone like Google, or even a small startup with my private data? I think there are a lot of people who share this view, so we’ll really have to see where the attention economy is headed. Regardless, this should continue to bring privacy issues to the forefront — I for one will appreciate more people talking about these issues.
Finally, I want to share one quote from the last session at ETech. Jason Schultz from the EFF was about to start talking about the lawsuits the EFF has cooking or anticipates in the next year. To introduce these, he said (roughly): “We’ll present these in no particular order, just shot-gunning them Dick Cheney style”. Giggle. Thanks Jason!
And thanks to O’Reilly for another great conference!
If you made it to ETech, what did you think of the conference?
I’ve been doing more and more work in politics since the ‘04 election (communications consulting and web production). I got involved at first out of a sense of civic obligation — and because my wife told me to “stop ranting and go volunteer”. I expected to dread a lot of what I’d encounter, figuring politics is where the worst people you knew in high school went. You know, the manipulative weasels fascinated by power. But I’ve been surprised by how deeply interesting and satisfying the experience has been so far, and by how many people I’ve met who are still motivated by service.
The intersection of technology and politics is one of the most fascinating areas of all, so I think I’ll be writing about it some.
Today is the last day of ETech and things are starting to wind down. The hotel broke the network and moved us to the third floor, so the smooth flow of geeks milling about is feeling a little disjointed today. Regardless, the sessions continue and maybe people are paying a little more attention since they are not being distracted by their laptops (as much).
The most interesting session I attended this morning was Meredith Patterson’s “One of these is not like the others” talk on her Query by Example extension to Postgres. Since I am a MySQL refugee who now uses Postgres every day, I was intrigued by this extension. Meredith also had an interesting observation on MySQL: “The Postgres back-end is really nice, unlike MySQL. I’m sorry all you MySQL users, but your code f*cking sucks.” Of course, that elicited a chuckle from the crowd.
Meredith started out by talking about some basic ideas and challenges of data mining. She outlined how patterns and trends emerge in large data sets, and how clustering techniques attempt to find data points with similar traits by identifying locations where the points form clusters. Classification is another data-mining tool: all the data points that lie on one side of a line (for 2D datasets) belong to one class, and the points on the other side belong to the other.
Her Postgres extension uses the classification technique as part of a support vector machine (SVM) — fortunately she didn’t go into much detail on how these work. They use far too much math to think about in the morning — if you want to delve deeper into SVMs, you might start with this Wikipedia entry.
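To make the classification idea concrete without the morning math, here is a toy Python sketch — entirely my own illustration, not Meredith’s code, and a plain perceptron rather than a true margin-maximizing SVM. It learns a line that splits 2D points into two classes, and the side a point falls on decides its label:

```python
# Toy linear classifier (perceptron) illustrating the idea above: a learned
# line splits the 2D plane, and a point's class is the side it falls on.
# A real SVM instead picks the separating line with the maximum margin.

def train_perceptron(points, labels, epochs=20):
    """Learn (w1, w2, bias) for a separating line; labels are +1 or -1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            if label * (w1 * x + w2 * y + b) <= 0:  # misclassified: nudge the line
                w1 += label * x
                w2 += label * y
                b += label
    return w1, w2, b

def classify(point, weights):
    """Return +1 or -1 depending on which side of the line the point lies."""
    w1, w2, b = weights
    x, y = point
    return 1 if w1 * x + w2 * y + b > 0 else -1

# Two easily separable clusters: class +1 near (5, 5), class -1 near (0, 0).
pts = [(5, 5), (6, 5), (5, 6), (0, 0), (1, 0), (0, 1)]
lbl = [1, 1, 1, -1, -1, -1]
w = train_perceptron(pts, lbl)
print(classify((6, 6), w), classify((0.5, 0.5), w))
```

The perceptron just finds *some* separating line; the appeal of SVMs is that they find the line with the widest buffer between the two classes, which tends to classify new points more reliably.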
Meredith did describe the characteristics of SVMs a little more:
So far, all of this has been a little bit abstract — let’s put this all into perspective by showing you what cool things you can do with this nifty extension:
SELECT title FROM songs WHERE EXAMPLE KEY title LIKE ("Canon in D", "Moonlight Sonata", "Air on the G String") NOT LIKE ("Closer", "Take On Me", "Sell Out")
This example query selects titles from the songs table that are similar to the three classical pieces but not like the three pop pieces. I must admit this example is a little confusing, since she mentioned that text isn’t supported yet — so far the extension only handles real numbers. Regardless, the example query shows the power of her extension. Postgres’ LIKE operator is pretty inflexible, quite slow and overall not very useful; Meredith’s extension makes it much more flexible and fuzzy. But the real power comes from the ability to specify items that should not be like the given examples — plain SQL can’t do that.
Now consider this ranking example:
SELECT title FROM songs ORDER BY EXAMPLE KEY int_id (((1, 2, 3) > (4, 5)), ((6,7) > (8, 9, 10)))
This standard SQL query with a funky ORDER BY clause uses the order-by-example concept. The two tuples, ((1, 2, 3) > (4, 5)) and ((6, 7) > (8, 9, 10)), indicate which items of data are more important than others: (1, 2, 3) is more important than (4, 5), and (6, 7) is more important than (8, 9, 10). The actual data values here aren’t significant, but you can see how ordering by example could be a really powerful concept.
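For a back-of-the-envelope feel for what a query-by-example engine has to do, here is a rough Python sketch. It is my own illustration, not the extension’s actual algorithm (which trains an SVM inside Postgres): rows are kept when they sit closer to the average of the positive examples than to the average of the negative ones, working on real-number feature vectors just as the extension currently does:

```python
# A crude stand-in for the EXAMPLE KEY idea: score each row by its distance
# to the centroid of the "LIKE" examples versus the "NOT LIKE" examples.
# (The real extension trains an SVM inside Postgres; this is only a sketch.)

def centroid(vectors):
    """Component-wise average of a list of equal-length numeric tuples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def distance(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def like_examples(rows, positives, negatives):
    """Keep rows closer to the positive examples than to the negative ones."""
    pos_c, neg_c = centroid(positives), centroid(negatives)
    return [r for r in rows if distance(r, pos_c) < distance(r, neg_c)]

# Made-up feature vectors standing in for songs (say, tempo and distortion).
classical = [(70, 1), (80, 2), (60, 1)]   # the "LIKE" examples
pop = [(130, 8), (120, 9), (140, 7)]      # the "NOT LIKE" examples
candidates = [(75, 2), (125, 8), (65, 1)]
print(like_examples(candidates, classical, pop))
```

Even this crude version shows why the negative examples matter: without them, everything is “somewhat like” the positives, and there is no boundary to filter against.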
Meredith says that she will continue to work on this project, and once she rounds out some rough edges she will release the extension under the GPL. I think it’s going to be a really useful addition to Postgres — the Postgres team has already said they are interested in including it in the contrib section. I will keep an eye on the project, and once she posts the source code I will post a link here.
Do you think query by example is going to be useful in Postgres?
Only at ETech can I walk into a session and have the speaker tell me tons of things about myself that I didn’t know before I walked in the door. The feeling is eerie, creepy and enlightening all at once, and only someone as intense as Danah Boyd can take me there.
Danah’s presentation on G/localization focused on the collision of global cultures with local cultures and the ugliness that ensues. She started out by defining culture and what it means in the online space. Culture represents a set of values, norms and artifacts (shared language and expressions) that demarcate boundaries both in the real world and in online spaces. However, these boundaries aren’t drawn only by languages and nation-states — there are many cultural divides on the net. For instance, a motorcycle-enthusiast web site will have a culture focused around motorcycles; the expressions and lingo there will be drastically different from those of an arts-and-crafts website.
She went on to talk about popular sites like Flickr, MySpace and Craigslist and what common traits these sites share. The designers of these sites are passionate about their ideas and fanatical about listening to users and engaging the users to participate. Instead of ruling a site with an iron fist, these designers merely set the tone for the site. An integrated feedback loop that drives a quick upgrade cycle of the site is one of the keys to creating and maintaining organic growth of the site. The public personalities of the people who design these sites are represented in the sites themselves. You can see this when Flickr has problems: A personal and apologetic message lets the users know that a problem exists. The people who run the site are sad and concerned that the site is down — the voice of a passionate human being posts these messages. Not some corporate drone who is merely doing their job.
Danah’s points about embedded observation resonated with me, since they express many loose and previously unexpressed thoughts I have about my own projects. According to her, embedded observation is the practice of site designers being part of the site they design. Being embedded in the culture they create gives them an unprecedented understanding of the people and semantic workings of the site. Craig, the creator of the popular site Craigslist, holds an official “Customer Support” title — this shows a dedicated focus on the customers of the site.
When sites embrace these concepts and empower their users to shape the community to their own tastes, a unique culture emerges. This emergent culture is beautiful, but it is also the cause of many conflicts when the global culture of these sites collides with local cultures. The problem arises from varying local cultures all over the world: if you have users from many cultures, whose morality are you working with? For instance, clothing acceptable for women in Western nations is drastically different from acceptable clothing in many Islamic nations. Yet these people with very different viewpoints must somehow co-exist on one site without (proverbially) killing each other.
So, what can you do to prevent this from happening, or lessen the impact when it does? Danah suggests diversifying your staff to understand the local cultures participating in the site. For instance, when Google’s Orkut was inundated with Brazilians speaking Portuguese, Google hired a Portuguese speaker who could help the company understand the emergent culture. Another key is to not control your users — Friendster is the perfect bad example of what happens to an emergent culture when site designers shut down people who don’t act according to the designers’ original intentions. Once Friendster started clamping down on fake accounts, discord rippled through the site and started tearing the community apart. It’s best to enable and empower your users and to nudge them in the right direction — never control them.
Danah further suggests that site designers need to let users personalize their online space and culturalize it, so they can craft a space that is comfortable for them. These features, combined with the power to manage private and public spaces and leaving the door open for synchronicity, reduce the chances of cultural collisions. The last idea that I think will really make a difference is letting users become cultural spokespersons for the site. A user who is part of the colliding culture and who represents the site can help settle cultural differences and defuse tense situations. Of course, this requires that sites give up strict control over what official statements are made about them. That’s quite a leap for many corporations, but it is likely to be one of the greatest tools for combating cultural collisions. Sites like Flickr and Craigslist are probably already quite familiar with this concept.
This presentation provides tons of food for thought, since the writable web opens up many cans of worms like these. Global-versus-local problems exist in many other aspects of the net, yet Danah suggests some amazingly simple solutions for complex problems. I only wish we could find some simple solutions for when various local laws start colliding on the net.
Do you have any other tips for how to avoid online cultural clashes?
Last October I attended the Web 2.0 conference and all during the conference I was trying to put my finger on what Web 2.0 really means. And even after the conference I still wasn’t quite certain what it meant. The lack of a clear definition for Web 1.0 only adds to the problem — if the first version isn’t well defined, how can you define the second version?
Here at ETech, I think I finally got an answer! Yesterday afternoon, during Tim Bray’s presentation on Atom, he mentioned that his boss, Hal Stern, observed that if you replace “2.0” with “writable,” the term makes a lot more sense. Web 2.0 is the writable web; Web 1.0 was the read-only web.
Compare the killer apps from the dot com boom — early search engines, online stores and tons of static web pages — with the killer apps of Web 2.0: Blogs, Wikis, Flickr. These sites all involve the user as an active participant, where the user can modify the web content.
And now the term Web 2.0 makes sense to me.
Do you have any other suggestions for how to define Web 2.0?
Sharing your images via iPhoto 6 and RSS has advantages over building web pages and clogging up email with bulky attachments. I’ve published an overview of this technique on The Digital Story titled Photocasting with iPhoto 6: RSS Made Easy and a drill down piece on Mac DevCenter, Photocasting: Serve the Right Picture Size. These posts will help you get started serving up your images via RSS right out of iPhoto 6.
The prevalent buzz here at ETech this year focuses on attention: attention economics, continuous partial attention and many other attention buzzwords. But what does it all mean? For the first half of today I walked around with the general impression that this whole attention concept is mostly hype and doesn’t have much real-world applicability yet. Even after hearing three separate talks on the topic, it was still unclear.
Then I had the chance to talk with Seth Goldstein, the CEO of Root Markets, and ask him a few pointed questions that set me onto the right track. Seth pointed out that our everyday actions on the net actually make us producers of information. For instance, the list of sites I visit in a day (my browser history, essentially) is information, and it could potentially be useful to other people and companies. My Amazon purchase history is another great example of information that I produce as a by-product of going about life. By making a purchase, I generate one more piece of information. We constantly generate these pieces of data: each phone call we make, each tank of gas we buy and each web site we visit adds another piece of data to an ever-growing stream of attention data.
Framing humans as generators of attention data, almost like incidental bloggers, starts bringing the attention concepts into focus for me. Now that we’ve established that humans generate data, we can start to explore how to take this raw data and turn it into useful information. By becoming aware of our own attention data, we start to use it to fine tune where we spend our attention and improve what we do with our time overall.
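As a concrete (and entirely hypothetical) illustration of turning raw attention data into useful information, here is a small Python sketch that boils a browser-history-style click stream down to a summary of where the attention actually went. The URLs and the function name are invented for the example:

```python
# Turn a raw click stream (a list of visited URLs) into a summary of which
# sites got the most attention. The history below is invented for the example.
from collections import Counter
from urllib.parse import urlparse

def attention_summary(click_stream, top=3):
    """Count visits per site and return the most-visited sites."""
    sites = Counter(urlparse(url).netloc for url in click_stream)
    return sites.most_common(top)

history = [
    "http://boingboing.net/post1",
    "http://www.oreillynet.com/etech",
    "http://boingboing.net/post2",
    "http://example.com/",
    "http://boingboing.net/post3",
]
print(attention_summary(history))
```

Even this trivial tally is more than the raw stream gives you; the interesting (and thorny) questions start when summaries like this get shared or sold.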
For instance, if Cory Doctorow, the prolific blogger behind BoingBoing, were to send his click stream (a list of web sites he reads) to a Root Vault at Root Markets, I could follow his trail and tune which sites I read in a day. I’m certain I would discover an array of new sites and resources I didn’t know existed, and broaden my horizons. I could dump my less useful sites and view the world more through the eyes of a hyperproductive person. (I see this as a live version of The 7 Habits of Highly Effective People.)
There are tons of obvious privacy issues associated with attention data. For instance, if I were surfing the net researching a blog post taking shape in my head, I might choose to share my click stream with others — they might find value in following my research and seeing how I arrived at my post. But if after my blog post I decide to surf the net for porn, I would probably not be interested in sharing that information with the world. What if I forget to turn my click stream off? That could get embarrassing quickly. The same applies to spammers and phishers getting hold of my click stream. If your attention data ends up in the wrong hands, you can quickly become a victim.
One core idea behind attention is that time is a scarce resource. No matter what you do, you can never have more than 24 hours in a day, and you spend nearly a third of that time sleeping, which leaves precious few hours in the day, so where you focus your attention in those hours matters all the more. With so many distractions in a day (phones, email, IM, social networks, etc.), many people are getting overloaded with information and may not spend their attention wisely.
The three presentations today all mentioned time as a scarce resource and that properly using your attention assets would allow you to make better use of your time. When I consider this concept, I start thinking about how neat it would be to follow other people’s attention streams and see the world more like them. But in the end, this doesn’t sound like it’s going to focus my attention more. It sounds like it could diffuse my attention and simply become one more stream of information that demands my attention, which is exactly the opposite of what advocates are telling us.
As you can see, I am still working my way through the concept. I haven’t yet had a chance to play with Root Markets, since it does appear to take some time to bootstrap your information into the system. Maybe playing with a concrete application will solidify these concepts for me. Or maybe I should listen to a few more attention talks now that I’ve grokked the basics. In either case, I’ll try to revisit this topic when I wrap up my coverage of ETech.
Do you think I’m on the right track? Tell me what you think!
Related link: http://mrl.nyu.edu/~jhan/ftirtouch/
ETech day two started off with a blast as we watched Jeff Han, a consulting research scientist, demonstrate his multi-touch display screen. Developed at NYU’s Department of Computer Science as a part-time project, this display lets users control the computer by touching the screen. Unlike the single-point touch screens we know today, the multi-touch interface lets the user touch the screen at multiple points at the same time. Multiple touch points open up a world of possibilities: the user can manipulate objects with multiple fingers, and these gestures can map to many more operations than a single-point interface can. This new interface will require rethinking many of the common user-interface concepts we all take for granted today.
Supporting multiple points at the same time lets developers break out of the current UI box and start thinking in new ways. For instance, using this screen with a new desktop system, a user can pan and zoom the desktop with simple hand motions. With beautiful, smooth graphics we watched as Jeff dragged and zoomed dozens of pictures on a desktop, as if we were watching Apple’s Exposé on steroids. If there isn’t enough space on the desktop, simply zoom out, find more space at the edge of the desktop, and move windows there. All done very fast and very smoothly. Wow! Mind-blowing!
Jeff went on to show a simulated lava lamp, complete with the customary red blobs. Using his fingers, Jeff injected heat into the lava lamp and the blobs started bouncing about the screen. The audience gasped a collective “Oohhh,” and dozens of cameras were whipped out to snap shots of the screen. (Not only was this the best lava-lamp app I’ve seen, but with the multi-touch interface it was truly amazing!) Another crowd-pleaser was Jeff’s demonstration of multiple cable-channel video feeds playing in dozens of windows on the desktop — still zooming and panning smoothly with dozens of videos playing.
Another interesting thing Jeff showed was an on-screen keyboard. The keyboard itself was scalable, which drew more oohhhs from the crowd, and Jeff quickly went on to pan it around the screen. As is, an on-screen keyboard is just a cheap copy of the trusted keyboard of normal meatspace; Jeff suggests that on-screen keyboards should be taken out of the box and radically innovated to take advantage of the new input medium.
This demonstration was simply amazing — all running real-time off one laptop and all designed by grad students. I suggest that you take a look at their video that demonstrates the multi-touch interface.
Normally I’m not so gung ho on new technologies and I tend to be a bit skeptical. But this demo didn’t send up any real warning flags for me, so I will continue to watch this space. I want one of these display screens in my office ASAP!
Is this hype or will it be real? What do you think?
Fill out a short survey (less than 10 minutes) and help us bring you the most accessible, best-written, cost-effective, and useful digital media resources on Earth. And don’t forget to give us your email address at the end of the survey for a chance to win either a full version copy of Adobe Photoshop CS2 (a $650 value) or a complete O’Reilly Photoshop library of books (a $500 value)!
It’s Emerging Technology time again! ETech is still my favorite conference, the one that spoils me for all others. In so many ways ETech is a reunion for me, a chance to see cool people I haven’t seen in months.
And this ETech is shaping up to be even better than previous years. The format change from last year has been held over: this year there are a number of High Order Bit sessions and other shorter presentations (like the conversations carried over from Web 2.0) that make the conference a little more fast-paced. That in itself is scary, given that ETech is already very fast-paced to begin with. O’Reilly is obviously doing something right — the conference sold out last week!
Today was the first day of ETech, the day of tutorials, where a few select topics are dissected in detail over the course of a morning or afternoon. Having bootstrapped my own high-tech non-profit in the last year, I felt compelled to attend Marc Hedlund’s “From Coder to Co-Founder: How to Move from Engineering to Entrepreneuring” tutorial.
And true to ETech fashion, Marc dumped a bucket of useful information on the audience, broken into three sections: proverbs, funding, and case studies. Covering all of them would take too long, so I’ll focus on the proverbs section, since it contains the most useful nuggets of knowledge. Overall, Marc’s tutorial covered the knowledge tech heads should arm themselves with if they’re considering moving from writing code to starting their own business. One quick side note before I dive in: he suggests that getting an MBA is usually not the best route for geeks to become entrepreneurs.
Marc’s proverbs were:
He also shared these proverbs focused on people:
Phew. A lot to digest in the space of a couple of hours. I’m sure the rest of ETech will bring tons more information overload. Stay tuned!
Do you have any nuggets of wisdom to add to this list?
Like microscopes, audio editing software can reveal amazing new worlds within everyday sounds. Check out “Winnoise,” for example. It’s a three-minute song made by manipulating Windows error sounds. I particularly like the way the artist looped portions of the Microsoft Sound to create a sustaining pad, and that he needed only basic commands from the lowly Windows Sound Recorder to work his remixing magic.
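That looping trick — repeating a short clip end-to-end until it reads as a sustained pad — is simple enough to sketch. Here’s a minimal Python example using only the standard-library `wave` module; the synthesized sine clip and filename are stand-ins, not anything from the actual “Winnoise” session:

```python
import math
import struct
import wave

RATE = 8000  # sample rate in Hz

# Synthesize a short 440 Hz clip as a stand-in for a system alert sound.
clip = b"".join(
    struct.pack("<h", int(12000 * math.sin(2 * math.pi * 440 * n / RATE)))
    for n in range(RATE // 4)  # 0.25 seconds of 16-bit mono audio
)

# Repeat the clip end-to-end to build a longer, pad-like loop.
with wave.open("pad.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(RATE)
    for _ in range(16):  # 16 repeats -> 4 seconds of sustained tone
        out.writeframes(clip)

with wave.open("pad.wav", "rb") as check:
    seconds = check.getnframes() / check.getframerate()
```

A real pad would also want crossfaded loop points so the repeats don’t click, but even this crude version shows how little machinery the trick needs.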
This Flash movie shows how you can make a complete song by stealthily manipulating wimpy alert sounds—but read the FAQ for the secret.
For the 2002 Desktop Music Production Guide, I asked composer Dan Phillips to come up with some tips for blasting yourself out of a creative rut. He turned in “Twelve Easy Pieces,” and one of its tips, “Be your own personal sample CD,” I use to this day. Dan recommended remixing yourself by extracting and manipulating sections from your old recordings. With audio browsers like the one in Ableton Live, trawling your hard drive for inspiration is easy, and the resulting music is far more personal.
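The extraction step itself is trivial to automate. Here’s a hedged sketch in Python, again using only the standard-library `wave` module; the filenames and the silent “old recording” are made up for illustration:

```python
import wave

RATE = 8000  # sample rate in Hz

# Build a 5-second stand-in "old recording" (silence, for the sketch).
with wave.open("old_take.wav", "wb") as src:
    src.setnchannels(1)
    src.setsampwidth(2)
    src.setframerate(RATE)
    src.writeframes(b"\x00\x00" * RATE * 5)

# Extract a 2-second slice starting at the 1-second mark.
with wave.open("old_take.wav", "rb") as src:
    src.setpos(1 * RATE)  # seek to the 1-second mark
    slice_frames = src.readframes(2 * RATE)
    params = src.getparams()

# Write the slice out as its own sample, ready for remixing.
with wave.open("sample.wav", "wb") as out:
    out.setparams(params)  # nframes is fixed up automatically on close
    out.writeframes(slice_frames)

with wave.open("sample.wav", "rb") as check:
    extracted = check.getnframes()
```

Point a loop like this at a folder of old mixes and you have, in effect, your own personal sample CD.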
For more on the inner art of sound, see Music Thing’s fascinating series called “Tiny Music Makers,” where Tom Whitwell uncovers the stories behind the Windows and Mac startup sounds, the THX sound, and more. The comments are also a rich source of links.
The Center for Social Media has released a small (8-page) booklet that provides guidelines for understanding fair use, especially in the context of documentaries. It was developed with the help of filmmakers, lawyers, and, I believe, the American University School of Law. With the exponential growth of user-created content, and videoblogs specifically, I think there’s something to glean from it. I also hope it helps answer a few of those nagging questions everyone seems to have about copyright and what they can and can’t use in a video.
Do you think this type of public dissemination of legal information places the University at risk? If so, do the potential rewards for society outweigh the risk?
I can’t believe nobody else has blogged about this yet ;)
So, Jonathan Schwartz’s blog (http://blogs.sun.com/jonathan) has quite a bit of information about this. Basically you can apply to receive a free trial T2000 server for 60 days worth of testing. At that time you’re expected to buy it, or return it. Schwartz recently said that he’d let *some* people keep the T2000 if they blogged about their experience.
Many people cited difficulties getting their hands on these elusive servers. I understand.
When the trial promo first started I filled out the form in an attempt to get one of these for work. I got a call back rather quickly, but they ended up saying “no” when I indicated we didn’t plan to buy a T2000. We wanted a T1000, but those aren’t shipping yet. So I figured I would test out a minimally configured T2000 instead. Sun wasn’t amused.
But now I want to write a review of the T2000 server. Not because of Jonathan’s promise (I don’t care if I can keep the server), but because I wrote a Solaris 10 review back when it first came out, and Sun apparently enjoyed it: they linked to it from their main S10 marketing website.
So I filled out the form again, but haven’t heard back at all this time. *sigh*
I’ve got some internal friends at Sun trying to make this happen... but the process is excruciating (nobody knows who to ask). If you want to test one of these T2000s for your business, make sure to sound interested in buying, and don’t forget: “yes, I have multithreaded apps with no floating-point-heavy requirements.”
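That last phrase is the key qualifier: the T2000’s chip trades single-thread speed for many hardware threads and has weak floating-point performance. A rough, hypothetical way to sanity-check whether your own workload fits that profile is an integer-only threaded stress sketch; this Python version is made up for illustration (the worker, buffer size, and thread counts are all assumptions, not a Sun benchmark):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BUF = bytes(range(256)) * 256  # 64 KB of integer-only data to hash

def integer_work(seed: int) -> str:
    # Repeated hashing: pure integer/byte work, no floating point --
    # the kind of throughput load these many-threaded chips are aimed at.
    # (CPython's hashlib releases the GIL for large buffers, so the
    # worker threads genuinely overlap.)
    h = hashlib.sha256(bytes([seed % 256]) + BUF)
    for _ in range(200):
        h = hashlib.sha256(h.digest() + BUF)
    return h.hexdigest()

# Scale max_workers up toward the box's hardware thread count and watch
# whether total throughput keeps climbing.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(integer_work, range(64)))
```

If throughput keeps scaling as you add workers, that’s the “yes, I have multithreaded apps” story Sun wants to hear; if it plateaus at one or two threads, the T2000 probably isn’t your machine.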
The Photo Marketing Association show (PMA) in Orlando, FL is beginning to wind down. I’ve been in town since Sunday and have gathered a few goodies that might interest you.
Adobe announced Photoshop Elements 4 for the Mac. This full-featured image editor, at $89, is all most amateur photographers would ever need. Nikon previewed its Capture NX software, which provides a unified environment for photo management and image editing. Clearly Nikon has put a lot of effort into this project, and it looks great.
I discovered a super-portable screen calibrator by GretagMacbeth called the huey. I really like its interface and intelligence, but I can’t tell yet if it will really help me make better prints. There’s also been quite a bit of hallway talk about archiving our images. Some folks are even considering using film to archive their digital pictures. Isn’t that ironic?
Overall, this year’s PMA show has been an exploration in refinement. No earth-shattering announcements, but lots of improvement with existing products. I’ve tried my hand at reporting on some of these items via photoblogging. You can see my pictures uploaded from the show floor via my WiFi LifeDrive using Splashblog software. It was a fun proof of concept.