This week on the Perl 6 mailing lists
“Q: Can February March? A: No, but April May…”
– Larry Wall’s r14313 log message for a date correction
At SD West last week, I sat in on the Agile — With or Despite of Global Development roundtable. While traditional agile development (as you might see in XP or Scrum, to some extent) recommends small teams that sit together, many organizations and projects are larger, with team members in multiple locations.
This separation makes producing software more difficult. (That’s one of the reasons agile development tries to keep team sizes small and everyone together.)
Microsoft Port 25’s Michael Francisco lists a bunch of new projects hosted on CodePlex in his blog…
Two projects in the alpha development stage caught my attention…
Windows Installer PowerShell Extensions: Exposes Windows Installer functionality to PowerShell. Head over to this project and read the descriptions of some of the proposed PowerShell Cmdlets. This project should be an interesting one to watch develop over time.
Crash a Party: This sample mashup uses the Windows Live Contacts Control and Virtual Earth to place your Windows Live Contacts on a map. This one sounded fun. I visited the mashup site linked on the project page and fired up the Live Contacts Control. However, instead of clicking through the contacts and sending them over to be visualized on the Virtual Earth map, I canceled the session. Why? I didn’t want to accidentally expose my friends to a web app that I hadn’t fully investigated. Identity and Trust are the cornerstones of the Web 2.0 world. How do you decide what and whom to trust? I don’t even provide my cell number for my Twitter account, fearing an SMS deluge caused either by a high volume of legitimate tweets from friends or by an accidental text flood.
Why does Microsoft talk about Total Cost of Ownership when you merely license their software?
Perhaps all of these TCO studies are true; it does cost Microsoft less to own Windows than it would to try to buy all of the copyrights for every competing project.
Recently there was a thread on the O’Reilly Radar started by Tim O’Reilly posting a chart that I put together from our book sales data. The chart showed comparative market share for most of the relevant programming languages. I have updated this in the chart below and have included 2005 data as well.
A little insight into the numbers behind this graph. The percentage shows that a language like Java represents ~23% of all book sales when looking at the language dimension. That means I compared all the languages, counting books like ‘XYZ using Java’ or ‘Embedded FOO on Java’, etc. So a title does not have to be a strict Java programming book, but rather a book that is Java-centric or whose examples contain mostly Java code. I compared aggregated sales units during Jan-Feb for 2005, 2006 and 2007.
This is not an exhaustive study, as I threw out languages that did not have a representative sample in one of the years. In other words, if a language showed up with 15 units in 2005 but not in 2006 [or 2007], it was dropped. These are the bottom-feeders. So if you use one of those languages (Squeak, for example), you will not find it in my chart.
Comparing January and February across the previous two years and this year, the biggest declines were Java (~5.5% down), C/C++ (~4.5% down), Visual Basic (~2% down) and Perl (~1.5% down). The reason I point these out is that this is market share for books; the unit sales numbers, which I will not supply, are a bit more alarming if you are on the declining list.
So when you look at the top of both lists, the totals are a bit different. There is a 3% difference on the winners’ side. What it says to me is that most of the growth was concentrated in the top four languages, while the decline was spread a bit wider.
Do you really care about languages and what book sales tell us about trends? Don’t think for a moment, as past posters have said, that some languages have better market share because another language has ’sucky’ books. I did a quick analysis of GPA ratings on Amazon by language, and there are no really significant wins for one language over another. One thing that does factor in, though, is being early to market. On average, when a language or technology topic is in its infancy, the market is more forgiving in its reviews: about 0.5 points higher for the first books to show up in a category. You could say that the first are usually the best, but that does not hold up either.
If there is enough interest, I will follow up with some efficiency and average title metrics.
Although the launches of Windows Vista and Office 2007 got all the attention during the Winter 2006-2007 period, I think that when people look back on this period in a few years, the standout Microsoft release will be PowerShell. I’ve mentioned a couple of Port 25 interviews and comments on PowerShell in the past. Here are some of the pertinent ones to refresh your memory.
PowerShell is currently a free download add-on product for Windows XP, Windows Vista, and Windows Server 2003 (I couldn’t get it to install under Longhorn Server Beta-2). This week, however, Microsoft’s Jeffrey Snover (PowerShell Architect) announced: PowerShell to Ship in Windows Server (Longhorn)! This is good news indeed. Having spent the last couple of decades working on UNIX/Linux servers (mostly from the command line), I find managing Windows Servers from the GUI often slows me down and makes remote management a chore. Having PowerShell built into the future version of Windows Server makes life easier for those of us with mixed server environments.
Here’s a PowerShell goodie I found on Microsoft’s CodePlex site that is an example of the power PowerShell brings to Windows.
This brings together the Windows data objects that PowerShell has access to with the statistical analysis power of the Open Source R statistical package. The oldsters among you (if you remember the late 20th century, you are old :-) will find that your old S statistical package books mostly apply, helping you use the powerful R statistical package.
TAP::Parser is the intended replacement for the venerable Test::Harness module. The intent is to clean up the code in such a way that writing custom test harnesses and supporting new TAP features is possible. (I’ve hacked on Test::Harness::Straps; it wasn’t the easiest programming task I’ve ever tackled.)
I added TODO tests to Parrot’s test tools a while ago, to make it easier to distinguish expected test failures from accidental failures.

Test::Harness displays very little information about TODO tests that passed. In TAP terms, these are bonus tests. The programmer expected them to fail, but they actually passed, so they need further investigation.

It would be nice to collect information on skipped, TODO, and bonus tests in the normal test run. Though I could write a harness via Test::Harness::Straps, I decided to try TAP::Parser instead. Here’s what I discovered.
This week’s recipe can be found here. It shows how to use closures to perform various sorting operations. Before you click on the link, let me point out the same thing as one of the commenters of this recipe: this recipe is pretty much obsoleted by mixing operator.itemgetter with list.sort(key=foo). I thought this recipe was interesting, though, because it showed a really good use for closures. Having a concrete idea of how a certain feature can be used sometimes helps to use the feature in other ways. Hopefully that will be the case here.
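To make the commenter’s point concrete, here is a minimal sketch (the record data and the `field_sorter` name are my own, not from the recipe) showing the closure approach side by side with the `operator.itemgetter` equivalent:

```python
from operator import itemgetter

records = [("alice", 3), ("bob", 1), ("carol", 2)]

# Closure approach: build a key function that remembers which field to sort on.
def field_sorter(index):
    def key(record):
        return record[index]
    return key

by_count = sorted(records, key=field_sorter(1))

# The modern equivalent the commenter mentions: operator.itemgetter.
also_by_count = sorted(records, key=itemgetter(1))

assert by_count == also_by_count == [("bob", 1), ("carol", 2), ("alice", 3)]
```

The closure version is more flexible (the inner function could transform or combine fields), which is why the pattern is still worth knowing even where `itemgetter` suffices.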
BusinessWeek has a great article on Microsoft’s recent stumbles in online search. It’s reflective of Microsoft’s - and, indeed, any successful company’s - attempts to cast itself in a new mold.
If Microsoft can’t keep pace, it risks seeing its Windows and Office software franchises erode as Google and others launch Web-based rivals. “It behooves Microsoft to be there,” says Charles Di Bona, an analyst with Sanford C. Bernstein & Co. (AB). “If they don’t get there, it gives others a platform from which to attack Microsoft’s core business.”
Just as troubling, Microsoft’s search problem reflects its approach to new markets in general. It spends little time focusing on tiny, emerging niches that generate little, if any, sales. But those are precisely the markets that can quickly blossom on the Net into meaningful businesses. “Bill [Gates] and Steve [Ballmer] and the leadership don’t understand the value of small things,” says Robert Scoble, a former Microsoftie whose blog recently took the company to task for its Web missteps. “That cripples their entire Internet strategy from the start.”
This is the same trouble the company has had with open source, though I believe it has generally been more successful with open source than with these other, product-related decisions. Once Microsoft figured out that open source is a development methodology, and not a traditional competitor, it has responded much more productively to the “threat” than it has to search, online applications, etc.
In open source, I believe Microsoft’s best strategy is to start creating entirely new products completely in the open. It doesn’t have to sacrifice its Windows or Office cash cows to open source. Rather, it can experiment in safer territory.
What Microsoft can’t afford to do is sit around and wait for open source to happen to the company. It won’t. Open source requires a complete restructuring of how one thinks and behaves as a company. It’s asking too much of Microsoft to make this shift (just as my old company, Novell, failed to make the corporate shift). But it’s not too much to ask of a division within Microsoft. Or a product. It needs to happen sooner, not later.
The Seneca Free Software and Open Source Symposium (FSOSS) is a great community-oriented event held in Toronto each October. I’m co-chairing the 6th edition this fall, which for the first time will include two days of presentations and workshops (October 25-26). If you’re interested in speaking or teaching, we’re looking for solid presentation and workshop proposals at http://fsoss.senecac.on.ca/2007/.
I just used GNU find to search a directory hierarchy for files matching a particular naming pattern. I’ve been programming for long enough that writing a tree-walking algorithm to search for appropriately-named leaves is almost trivial, but the point is that I don’t have to do that. Piping the output of find into my filter program to search within the files was sufficient.

I use the other findutils programs, especially locate, several times each week. Without them, I’d get lost in a sea of thousands of files. Thank you to all of the developers and contributors. You saved me a few minutes today, as you do almost every day.
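The pipeline described above looks something like this sketch (the directory layout, file names, and the grep filter are invented for illustration, not the actual search):

```shell
# Set up a tiny example tree so the pipeline has something to find.
mkdir -p demo/lib/deep
printf 'use strict;\n1;\n' > demo/lib/Foo.pm
printf '1;\n' > demo/lib/deep/Bar.pm

# Find files by name pattern, then search within the matches:
find demo/lib -name '*.pm' -print | xargs grep -l 'use strict'
```

No hand-rolled tree walker required: find handles the traversal, and the filter program (grep here) handles the contents.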
Open source is not as open as it claims, and Microsoft is not as closed as is claimed. Thus spake Brad Abrams, group program manager for ASP.NET AJAX at Microsoft.
Abrams argued that Microsoft is not the cathedral when it comes to ASP.NET AJAX but is quite transparent. Furthermore, he stated that most successful open source efforts are backed by a commercial vendor, making them less bazaar than they claim to be.
“I’m not sure the bazaar analogy works,” Abrams said. “Neither cathedral nor bazaar are the same in the AJAX Web space; rather there is a continuum that reaches across space.”
According to Abrams, ASP.NET AJAX offers the best of both the commercial and open worlds. On the commercial side Microsoft offers 24 x 7 support. “In the open source world you can talk to people and get answers,” Abrams said. “But we’re offering guaranteed support.”…
On the open side of things, Abrams claimed that Microsoft was providing ASP.NET AJAX components with 100 percent source code availability. The components are being licensed under Microsoft’s permissive license, which allows users to view, modify and redistribute source code for non-commercial and/or commercial purposes.
Fair enough, and no doubt true. It also points to an important point (though not stated): different groups within Microsoft are more open than others. I’m willing to bet that the emerging groups have more leeway to be open than the old cash cows within the company. That’s to be expected.
Fragment of my business card
Bryan poses three scenarios that encourage openness: Curiosity & Creativity, Economic Opportunity & Problem Solving, and Status & Recognition.
A few days earlier (March 21), internetnews.com reported on the keynote given by Brad Abrams, Microsoft Group Program Manager for ASP.NET AJAX at the AJAXWorld conference.
Abrams reflects on Eric S. Raymond’s oft-referenced The Cathedral & the Bazaar, saying that Microsoft is not the cathedral when it comes to ASP.NET AJAX but is quite transparent. “I’m not sure the bazaar analogy works,” Abrams said. “Neither cathedral nor bazaar are the same in the AJAX Web space; rather there is a continuum that reaches across space.”
I suggest that there is room for extrapolating a bit on both these points of view. To Bryan’s list I’ll add Do the right thing. Being open in terms of information in general and source code in particular often just feels like the right thing to do. Microsoft itself has recognized the value of sharing information by awarding its Most Valuable Professional (MVP) designation to those it describes as “a highly select group of experts that represents the technical community’s best and brightest, and they share a deep commitment to community and a willingness to help others.” And the creation of the Microsoft Open Source Lab seems to demonstrate that they are serious about understanding FOSS better.
My take on Abrams’ point that the Cathedral and Bazaar analogies may be less applicable these days is to add a less colorful but perhaps useful analogy: workshops in clear view of each other, where things simply need to be built or repaired. For years I’ve been trying to promote a pragmatic view of just getting work done in a heterogeneous environment. The section of my business card shown here pretty much says it all for me.
As Paul Kedrosky is reporting, Microsoft continues to struggle with its online strategy.
This is further evidence that Microsoft needs to look forward, rather than trying to tie everything into its history. The way forward is by burning the boats, not by continuously plugging the holes in those boats. Microsoft will never succeed in the online world until it competes as vigorously there as Google does, which will be difficult while its interest is in hoarding the riches it has made in the past with the offline world.
I was discussing object oriented (OO) programming with someone who was working on a horrible piece of software with class names like StartSession. I’ll call him “Alice”. Naturally, when wading through these classes, he finds 400 line “methods” in classes which are merely OO façades around procedural modules. This is disappointing, but it’s all too common. If you think that StartSession is a good name for a class, someone has done a poor job of teaching OO to you.
I think part of the problem is that while there are some excellent university professors who do a fantastic job of teaching OO, many professors I took classes from had little to no real experience outside of the classroom, or those with experience clearly went back to teaching due to the old adage “those who can’t, teach”. Surprisingly, two of the best instructors I had were teaching COBOL. Both of them clearly had decades of real-world experience under their belts and it showed in the classroom. They understood their material, they understood the pitfalls, and taught us how to work within the constraints of the language.
Getting back to Alice, he told me about an idea he once implemented to make it clear to other programmers that OO classes can be thought of as responsible agents. One of the first classes he wrote for his work was named the AuthenticationFairy.
Earlier in the week, I said that we wouldn’t be running an article this week because I was getting up to speed and didn’t have one ready yet. I was wrong…
It turns out that hidden away in the back, behind the camel chow and under the orbital death ray spare parts, we had an article that’s been waiting to run. Please enjoy the PHP Search Engine Showdown with our compliments.
It seems that Dell is scratching its head trying to figure out what it would take to get Linux on their desktop and laptop systems in order to meet customer demand (as hinted at on the DellIdeaStorm site). But I’m not convinced that preinstallation is what Linux customers really want from Dell or the other hardware vendors. Most experienced sysadmins have preferred distributions, application sets, and partitioning layouts, and it isn’t possible to provide a one-size-fits-all preinstall image. (This problem isn’t unique to Linux — most companies re-image their Windows systems to their liking). Furthermore, the rapid release rate of most distributions would make image preparation a continuous task for the hardware vendors.
What I think we really want is in-tree drivers. If a hardware vendor took pains to ensure that their product lines — or, perhaps, just their “Linux-ready” product line — incorporated only hardware for which there were drivers in the kernel tree (and/or drivers in the major hardware-dependent projects, such as X.org [video] or pam [biometrics & smart cards]), those systems would automatically be compatible with all of the major Linux distributions and would remain so for a reasonable length of time.
This would require the vendor’s systems to be built around established hardware for which drivers already exist, or drivers would need to be pushed into the kernel before the systems ship (which creates an interesting problem: how do you get many eyes looking at code for hardware that isn’t available? But if we wait until the hardware is widely available, then Linux will never support the latest hardware. We may need to rethink some of our procedures if we want to see broad support for new hardware in Linux). Of course, there is a third way: design new hardware to use existing protocols and interfaces, in the same way that HP SCSI scanners used a stable protocol for years, Postscript and HP PCL printers are (largely) backwards-compatible (for two decades!), and new USB 2.0 high speed flash drives can be successfully accessed by ancient USB 1.0 storage drivers. This requires good engineering (which is a good thing!).
If such systems were shipped with WhoCaresLinux X.Y.Z, we’d still be happy. We could easily install the latest Ubuntu/SUSE/Fedora/Debian/any distribution with confidence that it would run well.
What do you think: Would you be satisfied to know that a vendor’s system offerings were all covered by in-tree drivers, even if Linux was not preinstalled or the preinstalled distro was not the one you intended to use?
CodePlex is Microsoft’s open source project hosting site. The source code is managed using Microsoft Visual Studio Team Foundation Server. But what if you want to use the site from a non-Windows workstation running, for example, Linux or Mac OS X? The answer appeared earlier this week in a Port 25 blog item titled…
Teamprise is offering three tools to let you use CodePlex from a non-Windows platform. These tools are:
Will “Coke” Coleda released Parrot 0.4.10 on 20 March 2007. I’m particularly excited about this version because we finally have Parrot::Embed compiling and running (with the appropriate path setting for certain platforms) on multiple platforms.
Parrot::Embed allows you to use Parrot code in your Perl programs. Right now it supports basic subroutines (though multidispatch works on the Parrot side). Soon it will support Parrot objects.
Yes, there is a Ruby version in progress.
If there’s any interest, I’m happy to walk through the code or show examples of its use.
You may notice that no articles are going up on the ONLamp / Database / SysAdmin time-space continuum this week (at least I think there aren’t any; I’m still getting used to our content management system…). This is mostly due to my settling into my new job here, but rest assured there are plenty of articles in the pipeline, and you’ll be seeing two of them next week. Stephane Faroult will have the first half of an excellent piece on how to emulate Analytic Functions in MySQL, and Raju Varghese continues his look at how to visualize server log files using Gnuplot.
In addition, you may recall that I mentioned last week that there might be some new and fun things coming to ONLamp in the near future. In fact, the first of these has made significant progress over the last week and is now likely to appear in mid-April: a weekly ONLamp-themed comic strip. If things continue to go as planned, I’ll be writing the strips and a good friend and outstanding comic book artist, Randy Silverman, will be doing the art. Look for some sneak peeks in the near future!
Initial response to my call for articles has been amazing; I already have several writers committed to upcoming features as a result. I’m still looking for more interesting writing, especially on the topics of Perl/PHP/Python/Ruby and Databases. This is a great way to get your name out into the world, and earn some money besides!
I’ll have some more geeky things to talk about later in the week; I’ve just returned from a three-day corporate (day job, not O’Reilly) tech summit and am getting my bearings back.
Microsoft has taken an increasingly warm approach to open source. It’s not going to revolutionize the company tomorrow, but Bill Hilf and others are successfully nudging the company toward greater and greater experiments with open source.
Since the company will eventually get to an open source model, or die fighting it, I have some advice for Microsoft:
This will sound ridiculous to those who don’t appreciate the nuances of the GPL, but the GPL is capitalism, pure and simple. It is the best way to benefit customers while inhibiting competitors, as I’ve argued before, which lends itself perfectly to Microsoft’s business. From my interview with Charlie Babcock:
If a competitor takes your code, modifies it and redistributes it, then the giveback provision reasserts itself…. So your competitor will be required to give the originating company all the changes that it’s made.
And the community that’s formed around the original GPL code will probably not assist the competitor with further improvements. But it will quickly assimilate a competitor’s changes, test them, modify and expand them and in general make life miserable for the competitor.
“With the GPL, you get the value of the changes back. You don’t get that with other licenses,” Asay notes. And if the original code supplier is on the ball, it’s going to move faster than any competitor can keep up.
“It’s produced the best open source companies on the planet–Red Hat, MySQL and JBoss. The GPL is best suited for commercial companies….” he says.
But more profoundly, the GPL enables a fundamental change between a software company and its customers that in the long run is going to give GPL companies immense staying power.
“The GPL aligns the company’s interest with the customer’s. It forces me to stop thinking of the relationship as ending when I ship a set of bits. Instead, that’s the start,” and the nature of the ongoing relationship is determined by the caliber of upgrades to those bits, the quality of technical support, the strength of the programming community that forms around the bits.
Isn’t this precisely where Microsoft competes? On the value of its ecosystem and the ability to deliver updates to customers? Why couldn’t Microsoft have essentially the same model (for enterprises) with GPL’d code as it does with its proprietary license?
It could. It should. Hopefully, it will.
Some open source licenses don’t readily lend themselves to commercial open source. Apache/BSD licensing, for example, is hard to monetize (directly). But the GPL is very easy to monetize directly: customers get the value they want and competitors are scared to touch it. Everyone (that matters) wins.
Microsoft needs to ditch its weird view on the GPL. It used to call it anti-American. It’s actually the exact opposite. It is the most American of open source licenses. Microsoft could embrace it and continue to pull in its billions…and what could be more American than crass materialism? :-)
“It seems you are presuming a Waterfall model of development here. We’re not doing the Waterfall, we’re doing the Whirlpool, where the strange attractor whirls around with feedback at many levels but eventually converges on something in the middle. In other words, a whirlpool sucks, but the trick is to position your whirlpool over your intended destination, and you’ll eventually get there, though perhaps a bit dizzier than you’d like.”
– Larry Wall, in ‘What criteria mark the closure of perl6 specification’
MIT Press released the book…
…as a free PDF. I just took a brief look into it. But you gotta give credit to an academically oriented book (vs. a pop book) that uses phrases like nerdish stereotype (p. 32) :-). There’s a section starting on page 59 titled Comparison between Open Source and Closed Source Programming Incentives that I suspect will become required reading for the staff of Microsoft’s Open Source Lab.
I also hope that someone at the Lab takes a look at the document mentioned on page 66 (An internal Microsoft document on open source (Valloppillil 1998) describes a number of pressures that limit the implementation of features of open source development within Microsoft.) and reflects on this nearly decade-old point of view.
BTW, don’t assume that this book is some kind of FOSS cheerleader. Take a look at Chapter 4 written by Robert L. Glass who takes FOSS to task.
We had a situation come up at work the other day where we seemed to be receiving some spurious data from a data provider. The data providing process connects to one of our processes on some specific port and sends a relentless stream of data. Said spurious data uncovered a bug in our process which was causing it to die abnormally. My first action was to get tcpdump to show me what was going on. But the results were just wrong. I suspect the incorrect results I was seeing were caused by the antiquated version of tcpdump running on an antiquated FreeBSD machine and trying to view the results on Wireshark/Ethereal on a recent Ubuntu box.
So I figured a logging proxy would help, and I whipped one up using Twisted. It worked pretty well. I know I didn’t get everything right since I don’t regularly use Twisted. Basically, every connection that is made to the proxy from the data provider initiates a client connection to my server process. That connection also creates a log file on disk with a name that identifies where the connection came from. Each piece of data that is sent from the data provider is logged and forwarded on to my process. What it doesn’t handle properly is my process going down. I didn’t spend enough time to figure out exactly how to attach a reference to the server piece of the proxy onto the client piece.
Enter the recipe of the week. Just ten days ago, this excellent recipe was either submitted or updated (I can’t tell which). This recipe contains code for a proxy server which would fit my needs and log a hexdump of the received data. I haven’t tried it in the context of what I was trying to do, but given the testing I did with it, it looks like it would work quite nicely. The only thing I would change is the format of the logging. For my purposes, I’d still need to have a raw log of the transmitted data. But this is a great recipe that shows an example of a working proxy in Twisted.
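For readers who want the flavor of such a proxy without the Twisted dependency, here is a rough stdlib-only sketch of the same idea: a pass-through proxy that logs a hexdump of every chunk in both directions. This is my own simplification, not the recipe’s code; names like `proxy_once` are invented, and a production version would accept multiple connections and handle upstream failures:

```python
import socket
import threading

def hexdump(data):
    """Render bytes as space-separated hex pairs, 16 per line."""
    pairs = [f"{b:02x}" for b in data]
    return "\n".join(" ".join(pairs[i:i + 16]) for i in range(0, len(pairs), 16))

def pipe(src, dst, log, lock):
    """Forward bytes from src to dst, logging a hexdump of each chunk."""
    while True:
        try:
            data = src.recv(4096)
        except OSError:
            break
        if not data:
            break
        with lock:  # two directions share one log file
            log.write(hexdump(data) + "\n")
            log.flush()
        try:
            dst.sendall(data)
        except OSError:
            break
    try:
        dst.shutdown(socket.SHUT_WR)  # signal EOF downstream so replies can drain
    except OSError:
        pass

def proxy_once(listen_port, server_host, server_port, log_path):
    """Accept a single client and relay it to the real server, logging both ways."""
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", listen_port))
    lsock.listen(1)
    client, _addr = lsock.accept()
    upstream = socket.create_connection((server_host, server_port))
    with open(log_path, "w") as log:
        lock = threading.Lock()
        t = threading.Thread(target=pipe, args=(client, upstream, log, lock))
        t.start()
        pipe(upstream, client, log, lock)  # server -> client direction
        t.join()
    client.close()
    upstream.close()
    lsock.close()
```

Twisted’s event-driven protocols make the multi-connection bookkeeping cleaner than threads, which is why the recipe is worth studying even if a sketch like this is enough for one-off debugging.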
Besides e-mail, the only real office document I ever use is a spreadsheet. I’m a very happy Gnumeric user. My needs are reasonably simple; I share spreadsheets with Excel users and rely on some formulas.
It may not seem like an elaborate or flashy feature, but Gnumeric just works for what I need it to do. I’ve never had a problem reading or saving spreadsheets in formats that non-Gnumeric users can read. It works almost transparently, and the only reason I think about how often I use it is because I deliberately made a note of the applications I use the most.
There’s little better praise than “It works so well that I never think about it.” Thanks to all of the Gnumeric developers and contributors!
In what might be a minor move at any other company, Microsoft’s decision to move FoxPro to its CodePlex open source site is big news. And good news, too, in my opinion (one that I share with Jason Matusow, apparently).
Why is it big news? An increasing number of companies have started to treat open source as a dumping ground for old, unwanted code. FoxPro certainly seems to fill that description, though not for existing customers that use it and rely on it. But this move is bigger than just one piece of code. It reflects, I believe, a shifting mindset within Microsoft.
No, it won’t be licensed under an OSI-approved (read: open source) license. It will be under one of Microsoft’s Shared Source licenses, as Mary Jo Foley points out. That’s OK, because I think this decision is less about open source and more about collaborative community development. Very few get this aspect of open source right, and I’m hoping that Microsoft will do better than many of the rest of us.
Microsoft, for all its faults, has traditionally understood the importance of developers better than most companies. Steve Ballmer’s famous developer dance is just one indication of this. To the extent that Microsoft can figure out the open source development model, and marry it with the passive-aggressive open source licensing model, it will win big in this new world of software.
Chris Travers, who wrote the recent Port 25 tutorial for installing PostgreSQL on Windows, is back again with a tutorial describing how to install MySQL on Windows Vista. You can find the paper linked in a Port 25 blog entry from Jamie Cannon titled…
Chris’ paper focuses on installing MySQL on Windows Vista because its new security features require a few tweaks to allow MySQL to install properly. The general information provided in the paper could also be applied to Windows Server, however.
Since it isn’t discussed in the paper, I thought I’d mention that MySQL comes in two flavors now: the Enterprise Edition appeared late last year (2006) and has tiered pricing depending on the kind of support desired. The Community Edition is still a free download. However, the two editions have forked in a way that appears similar to the relationship between Red Hat Enterprise Linux (RHEL) and Fedora Core.
MySQL Enterprise Edition is the version MySQL recommends for use with mission critical applications. It is said to be more stable and will have minor point releases available as a binary download for the various supported platforms.
MySQL Community Edition does not have formal support options from MySQL. It will include new features before the Enterprise Edition and can, I guess, be considered the testing-ground version. Ready-to-run binary installation files will only be released twice a year for this version.
The two versions will converge about every 18 months and then fork again for the next round.
I used to install/upgrade MySQL using the RPM releases from MySQL. However, since the code was forked, I have been installing MySQL for Linux from source code which is released for every minor point version. One minor issue on the Linux side of the world is that the RPM installer assumes the socket file is located at /var/lib/mysql/mysql.sock while the source code version points to /tmp/mysql.sock. Tweaking the my.cnf file for MySQL and php.ini for PHP takes care of this from a LAMP point of view. I haven’t tried to build from source for Microsoft Windows.
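The socket-path fix described above amounts to a couple of config lines. As a sketch (your socket path and file locations may differ; these are the defaults mentioned in the paragraph above):

```ini
; my.cnf: point both the server and clients at the source-build socket path
[mysqld]
socket = /tmp/mysql.sock

[client]
socket = /tmp/mysql.sock

; php.ini: tell PHP's mysql extension where to find the same socket
mysql.default_socket = /tmp/mysql.sock
```

With both files agreeing on the socket location, local LAMP connections work the same whether MySQL came from an RPM or from source.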
Steve Loughran’s How to Own an OSS Project, part 1 and especially How to Own an OSS Project, part 2 bring up the always-relevant issues of security, transparency, and trust. In particular, the second entry asks a most insightful question: how can you trust the documentation?
It doesn’t help when the documentation suggests outdated practices which are, at best, dangerous and, at worst, completely wrong. (I’ve patched a few of these in Perl 5 myself.) Add to that active malice (such as a recent dangerous answer to a novice question in comp.lang.lisp) and the habit of running obfuscated code outside of a locked-down sandbox, and it’s almost a wonder there aren’t more security problems related to source code posted on the Internet.
Thinking of trying the Drupal open source content management system? It’s a powerful platform, but the learning curve can be steep, even if you’re already comfortable with its underlying technologies: PHP, MySQL and CSS. As the volunteer webmaster for the Monterey County (California) Democrats, I’ve gotten deeper into this stuff than I ever anticipated, and believe me, I know that learning curve well. Here’s a list of some of the top gotchas. Some of them are just plain good web development practice, but they become especially important with Drupal, and even more so if you’re using the CiviCRM contact relationship management module.
I spent the past week attending the semi-regular Microsoft MVP Global Summit in Seattle and Redmond, Washington. What’s an MVP? Microsoft describes MVPs like this: Microsoft Most Valuable Professionals (MVPs) are exceptional technical community leaders from around the world who are awarded for voluntarily sharing their high quality, real world expertise in offline and online technical communities. My particular community involvement focuses on the Windows Mobile Smartphone and Pocket PC products. The knowledge and passion of anyone deeply involved in a subject quickly become apparent, and appreciated, during a discussion of it. I have often said to my fellow MVPs that the most valuable takeaway for me from MVP Summits is speaking with and learning from the other MVPs.
We naturally associate many Open Source projects with passionate and knowledgeable communities. But there are many other kinds of communities, some more formalized than others. On Thursday (March 15), I had the opportunity to drop by the Microsoft Open Source Lab and meet some of the people blogging on the Microsoft Port 25 site for the first time: Anandeep, Jamie, and Sam (Ramji, director of the lab) were there and took a break from their busy schedules to speak with me about Microsoft and their work to interoperate with Open Source products and the people involved in those projects. I had the chance to have a long conversation with Kishi and Chris (Travers, an independent consultant and software developer who wrote the PostgreSQL on Windows how-to paper I blogged about recently) earlier on Monday. I also ran into Sara Ford (influencing the Microsoft culture one open source presentation at a time) long enough to say hello. A quick peek into one of the Lab’s server rooms, where I was greeted by a trio of Linux penguins, provided one of the more amusing moments. And, no, this is not some gigantic glass-walled server room with an unearthly glow. It probably looks like a lot of the smallish server rooms many of you have built and installed over the years.
My takeaway from the series of brief afternoon meetings at the Microsoft Open Source Lab is that these are people who are knowledgeable, engaged, and passionate about their work of bridging the worlds of Microsoft products and Open Source products to create interoperable, productive software ecosystems. And, of course, I am aware of the whole Microsoft-Novell/SUSE-Linux issue, what CEO Steve Ballmer said, and various other heated and confusing issues. But, quite frankly, I doubt a little ol’ nobody like me was going to resolve those issues in 90 minutes. I was, however, able to have a good old-fashioned handshake and conversation to learn more about the Lab group as thinkers and human beings. And that seemed like a good way to start things off for me. As with my interactions with other MVPs at the MVP Summit this week, I found a lot of value in my first-time meetings with the various people at the Microsoft Open Source Lab. As with nearly everything else in the world, it really is all about people.
You can find more detailed information about the Microsoft Open Source Lab in a two part blog written about a year ago found at…
Piers Cawley picked up on a rant I had elsewhere in DSLs, Fluent Interfaces, and how to tell the difference. An API that uses Ruby symbols isn’t automatically a DSL, nor did Ruby invent or even popularize the concept (though it seems to have taken the “How to Draw a Horse!” concept right out of matchbook covers); I credit lex and yacc as ancestors.
I once simplified and unified almost all of the language-specific test harnesses in Parrot to a single line per language implementation:
use Parrot::Test::Harness language => 'pheme', compiler => 'pheme.pbc';
API or DSL? Does it really matter? Maybe. It can be difficult to talk about closures when Luddites from the Java world claim that they’re the same thing as anonymous inner classes, and I still consider Python’s use of the word lambda misleading.
Then again, there’s no reason that code has to look anything like valid Perl. It only needs to be a list of strings, and Parrot::Test::Harness could do anything. Consider also P5NCI::Declare, which extends the concept:
use P5NCI::Declare library => 'shared_library';
sub perl_function :NCI( c_function => 'vii' );
perl_function( 101, 77 );
Maybe a language with richer syntactic elements, and the possibility of changing their behavior, allows greater potential to create a true DSL modeled after the domain.
Either way, the possibilities for API simplification are hard to ignore.
There are plenty of tasks for novices to Parrot, Perl, and C, as well as a few tasks for people with experience but who need some guidance to get started. I’d also love to find someone with hard-won experience compiling software on Windows (specifically creating a shared library that links against another shared library dynamically).
Parrot continues to make progress; this will be the best release yet. Come join us in #parrot on irc.perl.org all day, regardless of your timezone.
As chromatic alluded to in his newsletter on Monday, there’s been a changing of the guard here at ONLamp. After many years of faithful service to the LAMP community, he’s moving on to greener pastures inside the O’Reilly family. I’m honored to have been chosen to take over for him here, and I know I have some large shoes to fill.
A few brief words about myself: I’ve been a software engineer for almost 30 years now, and have worked all over the industry, from Artificial Intelligence to desktop publishing to e-Commerce. I’ve worked in huge (Xerox-sized huge) corporations and tiny little startups. In addition, I’ve been an active member of the open source community, including working as a committer on Apache Struts.
About a decade ago, while working as the web site manager of the Christian Science Monitor, I started to write professionally, first for the paper itself and then branching out. I’ve written for WIRED, Processor, Linux Journal, LinuxWorld Magazine (where I was Senior Editor), Linux Today (where I am still a Senior Contributing Editor), CPU, Developer.com, InfoWorld, CMP Tech Pipeline and many others. I’ve reviewed PDAs, and covered the 2000 NH Presidential Primary. I’ve also written two books (for, as they say, another publisher…) on Java Web Development.
When I’m not programming or writing or editing, I’m an avid science fiction fan, a private pilot, a scuba diver, a videographer and a cat herder. I believe in Heinlein’s motto that specialization is for insects.
Anyway, that’s who I am; now a little about what’s upcoming in ONLamp’s future. I intend to continue the tradition of in-depth technical articles that ONLamp and its sister sites are known for. But (you knew there was a but coming, didn’t you?) I also intend to bring some more introductory material to the sites. Not dumbed down, just more approachable for someone who wants an overview of a technology rather than a plunge into the deep end. You’ll see that in some of the articles I’ve got out with authors right now, which should start showing up in April.
We’re also planning some new features for the site. One thing we hope to roll out quickly is the ONLamp Ombudsman. This will be a once-a-month feature where we’ll take a particular user problem or complaint, one that’s languished in the support forums or mailing lists of a given technology, and chase down a solution or explanation. So get your most aggravating problems ready to submit; maybe yours will be the one we go after.
We’re also going to have a little fun on the site. Believe it or not, there’s potentially a comic strip in the works, one that would star characters that O’Reilly readers will be very familiar with. We’re also hoping to improve the layout and user experience, although this will have to wait on the next generation of our content management system to roll out to ONLamp, probably midyear.
I’d also like to hear from you. I’m a big believer in the open source methodology in all things. No one knows what you want to read more than you do. So drop me a line with a suggestion, a constructive criticism, or just a friendly hi. At the end of the day, I work for you as much as I do O’Reilly, so let me know what would make the site better for you. And of course, if you have an itch to write on any of the broad range of subjects that ONLamp and its sisters cover, definitely drop me a line. The beast always needs to be fed.
James Bennett summarized the XML/JSON debate late last year, but in passing he referred to a more interesting point that deserves more attention.
The first section of I can’t believe it’s not XML! shows a graph of complexity in web sites. The left represents static, HTML-only pages. The right represents the world’s largest and busiest sites: billions of hits per day with plenty of database traffic.
Most of the web is in the middle. (Most of the web is to the left of the middle.)
While it may interest some people to debate endlessly the tactics of scaling to Top 200 Internet Sites traffic with huge clusters of high-powered database servers, it’s far, far more interesting to me to discuss what the millions of sites the rest of us run can do to meet the needs of our visitors efficiently or effectively.
Put another way, I don’t particularly care about the issues that concern only the largest 1% of all potential readers. I prefer to reach the 80% in the big belly.
In a sign that Microsoft finally feels the faintest stirrings of an open source business model, Jeff Raikes has gone on the record as asking people to steal from Microsoft if they’re going to pirate/steal from anyone. Now, in the open source world we don’t call using software for free stealing. We call it seeding the market. Microsoft is starting to understand:
“If they’re going to pirate somebody, we want it to be us rather than somebody else,” Raikes said….
Raikes, speaking last week at the Morgan Stanley Technology conference in San Francisco, said a certain amount of software piracy actually helps Microsoft because it can lead to purchases by individuals who otherwise might never have been exposed to the company’s products.
“We understand that in the long run the fundamental asset is the installed base of people who are using our products,” Raikes said. “What you hope to do over time is convert them to licensing the software.”
You can almost see the lights coming on in Raikes’ mind. Now if he could just grasp that letting everyone on the planet have it for free, and finding other ways to charge them, is an excellent way to achieve World Domination Part II. I know at my company, nearly all of our customer leads come through free downloads of our product. They then return to pay us for additional services, certified binaries, etc.
It works. Even for Microsoft.
I’m a couple of months late for mandatory ISBN-13 support, but I’ve uploaded Business::ISBN-2.00_01 so people who depend on the module can look at the new interface and comment on it. It supports ISBN-13 seamlessly. Most of the internals are completely new: this was my second-ever module, and I did a lot of dumb things back then.
Interface changes (all open for discussion and improvement):
This isn’t ready for production. It’s pretty good, but don’t blindly replace what you have now. Give it a try, and if you run into a problem let me know.
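The module handles the ISBN-10 to ISBN-13 conversion for you, but for the curious, the mapping itself is simple: prepend the 978 prefix, keep the first nine digits of the ISBN-10, and recompute the check digit with the EAN-13 weighting. A minimal sketch in Python (the function name is mine for illustration, not the module’s Perl API):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 string (hyphens allowed) to its ISBN-13 form."""
    digits = [c for c in isbn10 if c.isdigit() or c in "Xx"]
    if len(digits) != 10:
        raise ValueError("not an ISBN-10")
    # Drop the old check digit, prepend the Bookland "978" prefix.
    body = "978" + "".join(digits[:9])
    # EAN-13 check digit: weight digits alternately 1, 3, 1, 3, ...
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(body))
    check = (10 - total % 10) % 10
    return body + str(check)

# "Programming Perl", 3rd edition: 0-596-00027-8 becomes 9780596000271
print(isbn10_to_isbn13("0-596-00027-8"))
```

The conversion is lossless for 978-prefixed ISBN-13s, which is why the module can move between the two forms seamlessly.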
MainSoft has an interesting approach with their…
Visual MainWin for J2EE Version 2.0 Technology Preview 2 (released March 1)
…that lets you port a Microsoft .NET application to Linux using a plug-in for Microsoft Visual Studio. It does this by transforming .NET CIL bytecode into JVM bytecode and then using Mono-based class libraries.
You can learn more about the Microsoft .NET and the Open Source Mono Project by viewing the Port 25 interview with Mono Project leader Miguel de Icaza at…
Redmond Magazine has a great article on Microsoft’s changing perspective on open source, featuring Bill Hilf as one of the key drivers of this change. I know and respect Bill, and agree heartily with the article’s conclusions:
When Bill Hilf came from IBM Corp. to join Microsoft three years ago, the company’s stance on open source vacillated wildly. It would swing from outright indifference to overt nastiness. Today, something else is unfolding: Microsoft is striking a surprising balance. It has stopped dismissing open source licensing and community development as dangerous folly or evil foe, and is looking for a way to both compete and co-exist.
Let’s start with Hilf. Under his direction as general manager of platform strategy, Microsoft is crafting a multifaceted plan to approach open source from a number of different levels: Linux as an operating system competitor; interoperability with Linux in mixed environments; partnering with open source ISVs; development of Shared Source Licensing; contributions to and support for community development sites….
Perhaps the biggest challenge that Hilf faces is changing the internal tone at Microsoft. One of the things he’s worked on is convincing developers that they need to play a role in the open source process and take part in projects on CodePlex to join the so-called community. The engineers caught on right away, he said, while the sales and marketing organizations were tougher to persuade.
Yesterday I heard one of the most prominent open source figures in the industry suggest that maybe, just maybe, Microsoft is changing its tune vis-a-vis open source. It has a long way to go, but the work that Bill, Sam Ramji, Jason Matusow, and others have done is truly changing the way Microsoft thinks about its ecosystem.
The big question, however, is how Microsoft views itself: platform company or applications company. To the extent it is the former, it has a big tent to share with the open source world. To the extent that it is the latter, it will try to quash any part of its ecosystem that aggressively competes with it.
But that’s not any different from how it deals with closed-source companies. So maybe it will beat up on open source just as much as it does closed source. Nice.
I have a love/hate relationship with GNU Make. Yes, it’s picky about syntax and it’s difficult to write cross-platform Makefiles (though that’s not really GNU Make’s fault), but a make utility of some sort is nearly ubiquitous across the free Unix-like platforms.
When I need to compile a project written in C or C++ (or when I want to automate certain system administration tasks, such as rebuilding my Postfix map files whenever I edit their sources), I use GNU Make. It does a difficult job without much thanks or thought. I suspect that its maintainers, like me, would like to see a cleaner and friendlier replacement sometime in the future, but for now, its ubiquity and its power are definite advantages.
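As a sketch of that kind of sysadmin automation (the map names here are illustrative, and the paths are common Postfix defaults), a pattern rule can rebuild Postfix hash maps whenever their plain-text sources change:

```make
# Rebuild Postfix hash maps when their source files change.
MAPS = /etc/postfix/virtual.db /etc/postfix/transport.db

all: $(MAPS)

# Any .db map depends on its plain-text source; postmap regenerates it.
%.db: %
	postmap $<
```

Run make after editing either source file and only the out-of-date maps get regenerated, which is exactly the sort of small, thankless job make excels at.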
Thanks to everyone who’s contributed to make and GNU Make!
The next issue of The Perl Review is out, and it’s a special edition for the Nordic Perl Workshop! Not only that, the PDF-only price is now only $7. Subscribe now to beat the price increase that comes with the US postage rate increase in May.
The Spring 2007 issue of The Perl Review is online and ready for download. Subscribers should have already received an email telling them all about it. In this issue (besides the cover showing Gary Blackburn’s “PERL GOD” license plate), there’s:
The Port 25 blog entry…
…points to a new Microsoft technical document titled: Linux VPN Technical Analysis and HOWTO.
As its title implies, this 33-page PDF document provides both a technical analysis of Linux VPNs and specific how-to information. The work is based on testing with the Red Hat and Fedora Core Linux distros.
If you are looking for some Windows VPN help, you might want to check out the OpenVPN GUI for Windows I mentioned in an item in my personal blog recently.
William Hurley first got in touch with me many years ago, not to promote one of his own projects, but to set me up with some of his colleagues to write about network security. Later, he got involved in several open source ventures in networking.
So William is both an innovator and a facilitator. He’s willing to lend his expertise wherever there’s a good chance someone can make a difference in an area he cares about. And he definitely cares about open source.
Now he’s chief architect for BMC’s open source strategy. As he points out in a podcast, this position is an important step for BMC, but only one of several.
In the free software world, we’re getting used to companies putting money and support behind the software. This doesn’t diminish the importance of the individual zealot. Individual zealots are the source of many new tools, and the genius of free software is the ease with which it allows someone to introduce a new idea and then let larger institutions amplify it.
With William at BMC, I can be confident that this large company will not only continue to contribute to open source, but will maintain strong community bonds so the route from individual innovation to large-scale adoption remains pothole-free.
I was talking tonight with a friend who manages the medical arm of a large humanitarian organization. We were talking about poverty, and he suggested that there are basically three things necessary to enable people to pull themselves out of poverty:
My friend focused on this third thing. He suggested that most people overlook it, but that it’s imperative to enable poverty-stricken people and societies to pull themselves out of poverty. Why? Because it’s useless to plant if there’s little chance of harvesting. There’s little reason to build a product or render a service if the government or a neighbor will likely rip it away tomorrow.
Security matters. This is why governments are set up: to remove us from our Hobbesian existence (“nasty, brutish, and short”) and give us the opportunity to reap what we sow. This is also why the US Constitution provides for intellectual property protection.
I’ve spoken against proprietary software in the past but, hearing my friend speak tonight, I think I should qualify my opposition to the current usage of copyrights and patents in software. My contention is not that these are unnecessary; they are necessary. I firmly believe that it’s important for software developers, just like farmers, land developers, and others, to be able to build something and be secure in their expectation of being able to monetize it. Microsoft, just like everyone else, needs to be able to invest in R&D with confidence that its money is not automatically wasted because the system fails to protect investment.
But what if this old version of intellectual property has been superseded? What if, in fact, one can get equal or possibly better protection by putting the same code under an open source license rather than under a closed-source license? I’m not talking about relinquishing ownership of one’s developments. On the contrary, copyright law is absolutely foundational to both traditional software licenses and open source software licenses: open source is meaningless without property. You must first own code in order to assign copyleft-style distribution requirements to it.
Rather, I’m suggesting that perhaps we’re entering a new phase of intellectual property (2.0), in which our basic needs don’t change (food, health, security), but the way we fulfill them does. I can still earn a living (to feed myself and my family) with open source, and I can provide equal (or superior) infringement protection (health) with open source. And, importantly, since open source depends on the same rule of law to guarantee security, I’m safe in my development, too.
All that changes is how I choose to monetize the software. Instead of charging for access to the software, I charge for access to a certified version of the software. Or to services around the software. Or the software as a service, itself (like Google, Salesforce.com, or a range of others). In other words, I make the software experience more about experience and less about software.
This sounds like progress to me. I know that there are a wide range of companies tied up in IP 1.0, which will find the transition to IP 2.0 difficult. Microsoft need not be one of these. The company has been aggressive in trying to figure out open source and this 2.0 world. It just needs to keep moving in this direction.
While the idea of circumventing the privacy offered by Tor via DNS, Flash, and Java (applets) is nothing new, HD Moore’s “Torment” Tor server hack has made news at Securityfocus and ZDNet. Although I’m not quite sure why this is big news now all of a sudden, it does have positive side effects for the Tor project (see my opinions below).
I just happened to catch this Port 25 blog item by Jamie Cannon about one of my favorite tools (MySQL)…
In it he gives a bit of history about Microsoft’s relationship with MySQL and pointers to using Visual Studio with MySQL.
But what really caught my interest was an unadorned link in a list of MySQL/Windows references at the end of the blog. It points to a blog focusing on using Visual Basic with MySQL that I had not known about (VBMySQL.com). There’s a short article there that asks a simple but interesting question…
Check out both Jamie’s blog item linked above and the VB for MySQL site. I wonder if MyODBC can be used with the free Visual Basic Express (vs. the full Visual Studio)?
I wrote on this topic earlier this week, but my post is lost in the ether(net) somewhere. I can’t help but notice that Microsoft is working hard of late to shroud itself in protectionist robes of the holiest color.
First it was a battle for the sanctity of its patents. Now it’s a self-righteous tirade against Google for not being protective enough of others’ intellectual property. “Let me lead you into the Promised Land of IP Safety!” seems to be Microsoft’s latest rallying cry.
Isn’t this the same company that needed masses of US federal judges to stop it from trampling on others’ rights? The same company that flouted antitrust laws to build and maintain monopoly power so that it could tax billions of dollars into its coffers? I’m as willing to forgive and forget as the next person, but it’s a bit galling to have Microsoft preaching morality and ethics to the world.
It would probably sound slightly more credible if its sanctimonious bile weren’t directed at its chief competitors, open source and the Internet (Google being the ’Net’s chief representative in this case). Even more so if the rocks weren’t being thrown in apparent desperation.
Some of us, perhaps best put forth by Tim O’Reilly, feel that the more Microsoft seeks to “protect,” the more Neanderthal it looks. It’s not that respecting others’ property is unimportant; it is important. It’s just that the more Microsoft and others cling to the old ways of protecting that property, the more they lock themselves out of future prosperity. I understand that this must be hard to see while the company continues to generate money like Niagara Falls gushes water, but this is precisely the time when the company needs to be most prudent about how it manages its future.
Microsoft’s way forward is to move forward, not to greedily hoard its past. It must do that to a certain extent to preserve shareholder value, but if it doesn’t change, that’s all it will own: shareholder interests of the past, which will drag it down and prevent it from embracing the future.
Microsoft’s anandeep posts a thoughtful Port 25 blog item titled…
…relating his recent visit to the Microsoft Cambridge Lab and a recent conference there focused on the topic of software reliability. He closes his blog by writing…
Open Source would have much the same issues but for the fact that there is not a central organization that collects all this failure data. The situation in Open Source may be the reverse of the situation for proprietary software makers in that the failure data is collected at the IT organization level and not centrally. How does this failure data really result in code defect corrections? I guess that it is either pre-analyzed and submitted as a bug or people patch their own instances of the source code. But my opinion is that eventually open source software systems will have to build central repositories of failure data in much the same way that commercial software vendors have built them.
Let me preface my comments on this by admitting that I have never made a significant contribution to a major Open Source project. But I am a long-time FOSS lurker (FOSSL, pronounced fossil? :-): I was a relatively early end-user of free software (before the term Open Source was coined) such as GNU Emacs, GNU C, and Perl starting in the mid-1980s, and I installed Linux from floppies downloaded in increments of uuencoded files from USENET newsgroups.
My gut instinct is that the large Open Source projects that I follow and use (Apache httpd, Firefox, Thunderbird, MySQL, PHP, Ruby, Python, Zope, etc.) as well as many of the smaller projects already do pretty well in the error reporting and correcting department. The various communities around healthy FOSS projects are, it seems to me, extremely knowledgeable about the products they use and proactive in terms of bubbling up issues to the FOSS project members. There are chat rooms, bulletin boards, user groups, and formal error reporting procedures.
The FOSS communities have, it seems to me, performed a remarkable job of identifying and responding to product reliability issues.
However, we have been watching the emergence of FOSS business models over the past few years that may change the complexion of these communities. For me, the first change came when Red Hat stopped providing free ISO distributions after Red Hat 9. More recently, MySQL forked their product into Community and Enterprise Editions and reduced Community RPM releases to twice a year. In their case, however, the MySQL Community Edition source code releases configure and compile easily on a Linux system, making it easy to stay up to date (I haven’t tried building from source under Windows). These (to me personally) notable changes are understandable from a revenue-driven point of view. But I wonder if they signal, perhaps, the need for these more commercial FOSS projects to focus more on centralized repositories of failure data, as anandeep suggests.
OK, I’m in the job market again. Andy Oram finally cut me some slack. Tuesday, I received my first legitimate call for a pure-blooded Linux administrator job. No Server 2003 R2 or mixed environments came with the offer. The company wanted someone to manage 30 Fedora boxes running Asterisk as the primary application. The position was in Dallas, close to the commuter train station. I fell out of my chair.
Back on October 15, 2006, I enrolled in my first MCSE 2003 server class at the community college in Dallas. Why? I surveyed the market and found some Linux positions. They read something like this: Wanted: Linux System Administrator. Experience required: Server 2000 and 2003, Active Directory guru, VB shell scripting, IPSec and Cisco VPN experience, A+ and N+ certifications, Microsoft Exchange experience desirable but not necessary. Some SuSE experience for a small Novell workgroup. We have 250 mobile users and require management of off-line data synchronization.
Let’s see. In my home town - the seventh largest city in the US - I don’t see or hear much about Linux administrators. So, re-certifying from NT 4.0 to Server 2003 looked like one of the only ways to land a job. And as you know: gotta eat.
I did get a call from another semi-Linux shop. They posted a job description for a desktop support technician with some Linux experience. Then I had an interview, and the requirements changed. The help desk part of the job involved managing a Microsoft Active Directory forest and supporting XP desktops. Then they disclosed the other two-thirds of the job, which involved heavy PHP development and 24/7 administration of a large farm of mission-critical VoIP servers. Oh, you probably guessed it: the servers all ran Asterisk. They made a generous compensation offer, though, which involved going temp-to-perm for six months at $25 an hour. You might not believe it, but they wanted someone to start in four days, and they actually found a candidate and hired him.
Oh, well. Don’t forget that you need DNS experience when you implement AD, so learn those SRV record types. They could come in handy.
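For reference, AD clients locate domain controllers through DNS SRV records, which look roughly like this in zone-file syntax (the domain and host names here are illustrative):

```dns
; _service._protocol.domain   TTL class SRV priority weight port target
_ldap._tcp.example.com.     600 IN SRV 0 100 389 dc1.example.com.
_kerberos._tcp.example.com. 600 IN SRV 0 100 88  dc1.example.com.
```

Domain controllers register these records dynamically, so AD deployments generally require a DNS server that accepts dynamic updates.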
In September 2006, I wrote a short blog post on Python’s readiness for the enterprise. That posting had a question mark in the title because someone asserted (jokingly) that Python is, in fact, not ready for the enterprise. (Remember, my post was a link to a tongue in cheek blog entry elsewhere. I’m not stating that Jeff Waugh was trying to discredit Python.)
I recently came across this eWeek article which outlines Python’s use in the airline industry where failure would be very, very bad. This eWeek article contains assertions of some really smart people that Python is up to the task. Could they be wrong? Certainly. But I don’t think they are. And I’m not going to defend my reasons for thinking so. That’s not the point of this post.
My whole point in posting this is perhaps more for my own benefit than anyone else’s. I see Python too often treated as a toy programming language. I see people who feel that the language has failed them somehow because they encountered problems at runtime which would have been caught by a compiler. I’m currently watching Python being replaced by another language. (No, this isn’t the wholesale state of Python in the world, just my little segment of it.) This article reassures me that I’m not totally off my rocker for thinking of Python as an excellent and capable language. And it gives me a warm fuzzy to think of Python being used in such a weighty manner.
By the way, I think ITA (the company around which the article was written) is hiring. If someone from ITA wants to shoot me an email with a link, I’ll gladly edit this post and put the link in. Good going, guys and gals of ITA!
In yesterday’s installment (Testing FizzBuzz in Parrot), I explained a test file for testing multiple Parrot implementations of the FizzBuzz problem. I also promised to show two different ways to solve the problem in Parrot. The test framework requires both approaches to take a single integer describing how many FizzBuzz elements to produce and to return an Array-like PMC containing the FizzBuzz strings.
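Before looking at the Parrot versions, the contract the test framework imposes can be sketched in plain Python: take a single integer saying how many elements to produce, and return a list of FizzBuzz strings (the list stands in for the Array-like PMC):

```python
def fizzbuzz_list(count: int) -> list:
    """Return the first `count` FizzBuzz elements as strings."""
    out = []
    for n in range(1, count + 1):
        # Multiples of 3 get "Fizz", of 5 get "Buzz", of both get "FizzBuzz".
        s = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
        out.append(s or str(n))  # fall back to the number itself
    return out

print(fizzbuzz_list(15))
```

Any Parrot implementation satisfying the same input/output shape can be run under the one shared test file.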
Every couple years it’s time to say, “Everything you thought you knew about MySQL has to change.” Well, not everything; it still is (and always will be) the lovable, streamlined, easy-to-administer, web-friendly database it was known as from the start. But MySQL AB is aggressively branching out into new markets and domains, so they can surprise you.
For instance, when I saw the roll-out of MySQL Cluster at their 2004 conference, I (and most other observers) predicted it would be adopted in very limited markets and pose little threat to other clustering solutions, because it required all databases to be stored in RAM. I also assumed that eliminating this restriction and allowing on-disk storage would be too big a job to be worthwhile. Well, on-disk storage is reportedly part of MySQL 5.1.
The old choices for storage engines (which, few as they were, represented a good selling point for MySQL, because each storage engine offered advantages for particular applications) have suddenly burgeoned. The primacy of InnoDB is fading, while much excited debate and obsessive benchmarking are taking place around two new entrants, Falcon (from MySQL AB) and PrimeBase XT (PBXT) (an independent open source project).
Anyway, there are plenty of reasons–whether you’re a current user of MySQL or just curious about what it could offer you–to register for the 2007 MySQL Conference & Expo, run in conjunction with O’Reilly. It’s bound to be both fun and informative–if not, everything you thought you knew about MySQL (and O’Reilly) has to change.
I just spent an interesting and productive hour with Matthew Aslett of the Computer Business Review. Matthew writes one of the most interesting blogs on open source, so we met to talk about the state of the open source market.
In the course of our conversation, we talked about Microsoft and its reactions to open source. In discussing this, I raised the issue of how hard it is sometimes to give credence to what Microsoft says. Not because of what it says, but because of all that it does not say.
I know that public companies are under amazing pressure to be as universally bland as possible, but Microsoft can dazzle at times with its willingness to attack others publicly (as it did yesterday with Google, on which Tim rightly calls “foul”). So, it doesn’t have a communication problem.
The problem, as I see it, is that Microsoft only appears to be willing to be public about negative moves, and only en masse as a company. It doesn’t allow its employees (or they don’t feel entitled to do so) to discuss the company’s actions publicly. (Apparently, it also muzzles those of us who simply occasionally blog on Microsoft-related sites.)
Letting a few small cracks show through would make Microsoft stronger, not weaker. It’s hard to trust a company or person that purports to be perfect, simply because we know that we, ourselves, are not perfect. If an employee gets out of line, it’s a sign either that a) the company has serious strategic problems it needs to fix, or b) the employee doesn’t believe in the company’s correct strategic decisions and needs to go. Either outcome is fine. But silence leads to neither, leaving the bad employee or the bad company intact.
Open the windows, Microsoft. Let some fresh air in. Let conversation flow, both out from the company and into the company. It will make you a better company. And a more credible vendor.
A recent discussion on interviewing programmers (in hopes of finding clueful ones) brought up the FizzBuzz challenge. Can you write a program to print the numbers from one to one hundred, also printing “Fizz” for multiples of three, “Buzz” for multiples of five, and “FizzBuzz” for multiples of both three and five?
This ought to take no more than a few minutes for a developer with any proficiency in a language. I decided it would be fun to write it in Parrot’s PIR. There’s the straightforward procedural way, the array overloading way, an object-oriented way, the coroutine approach, and the generator technique.
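As a reference point for those approaches, here is the generator technique sketched in Python rather than PIR (`fizzbuzz_gen` is my name, not from the Parrot code):

```python
from itertools import islice

def fizzbuzz_gen():
    """Yield FizzBuzz values one at a time, lazily and without bound --
    the 'generator technique' for the problem."""
    n = 1
    while True:
        if n % 15 == 0:
            yield "FizzBuzz"
        elif n % 3 == 0:
            yield "Fizz"
        elif n % 5 == 0:
            yield "Buzz"
        else:
            yield str(n)
        n += 1

# The caller decides how many elements to take:
print(list(islice(fizzbuzz_gen(), 5)))  # ['1', '2', 'Fizz', '4', 'Buzz']
```

The generator never prints on its own; producing values and consuming them stay separate, which is what makes the approach pleasant to test.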
I chose the first two, but I also decided to work entirely with test-driven development, even though this is normally the realm of a SpikeSolution–I thought that might be more interesting for everyone.
Komodo 4.0 now allows XPI extensions, the same extension format that Firefox uses. Create something cool for the Komodo 4.0 Extensibility Challenge and you could win some money or prizes. The contest is open until April 1 (no, really), and you can use the 21-day free preview of Komodo to do it.
ActiveState will go through the entries, put the best on its website, and let the community vote for their favorite.
So, who’s going to win the prize for extensions for perltidy or perlcritic?
It’s almost summer again, so it’s time for students to figure out what to do with all that free time. Want to work in open source but need some scratch to keep you in ramen and Mountain Dew? Let Google give you a big bag of money to do it by participating in their third Summer of Code! Last year, students got $4,500 for their work on open source projects. This year that could be you.
The application period begins on March 14 and ends on March 24. Coding runs from May 28 to August 20. Read more about it in the Google Summer of Code FAQ.
I revisited Michael Francisco’s CodePlex Update blog post on the Port 25 site this evening and downloaded the Cache My Work utility from CodePlex. It lets you select currently running applications to restart after a controlled reboot (on Patch Tuesday, for example). For some reason, it saw IE7, Firefox, and Thunderbird but did not see Windows Explorer. It would be more useful if it did, and if it also kept your place in the directory tree. Perhaps the next version will. In any case, it looks like a useful utility for the next Patch Tuesday (or some other controlled reboot). Check out Michael’s updated CodePlex project list for other projects that might interest you.
I’ve used revision control for almost all of my professional career, starting with CVS in 1999 as I joined the world of free software developers and migrating to Subversion in 2003 for personal projects. Now I rarely use Subversion directly; I use SVK for most of my work.
SVK accesses public Subversion repositories almost transparently, so on projects where I have commit access, the only real difference for me is typing svk instead of svn.
For projects where I don’t have commit access–or when I’m on a train or an airplane or in an airport, hotel room, or conference center without reliable Internet access–it’s amazingly convenient to be able to commit changes as I go and merge them later.
I enjoy using offline mirrors and being able to generate patches or push certain commits but not others. Even better, there are no .svn directories in my checkouts to deal with. Everything I remember from using plain Subversion carries over to creating, managing, and working with repositories, but I’ve gained the ability to manage my own branches even without commit access, whether temporarily or in general.
When I need those features, I really need them. Thank you to all SVK developers and contributors for making my life easier!
What is Colorado? That’s correct. I’ve been in the Winter Park area for about two months now, and I can only laugh at myself for thinking that my home, in the Piedmont of North Carolina, has a real, bone-freezing winter. Don’t get me wrong, our climate can stir up some pretty rough winters, but the wind chill here has gotten so low that you just laugh about the surrealism of it all. I’ve got my better Brazilian half to keep me warm, and some new cryptovirology research to keep my thought process in a relatively thawed-out state. So, let’s talk about fish.
Well, not a real fish, but the sea-faring, fin-bearing creature I’ve chosen to use as a naming convention for cryptoviral functions. Science tells me I have a whole load of names to choose from, so I’ll be fine for a while. Without further pointless ado, allow me to introduce Mackerel, a family of cryptoviral functions. Here’s the preliminary abstract for a paper that will be presented at Security Opus, an information security conference in San Francisco, with technical lectures being held from March 19th through the 21st:
“Mackerel is a family of symmetric cryptovirus constructions that allows up to IND-CCA2 and INT-CTXT security; they’re based around the AES in CTR mode (IND-CPA) for preserving confidentiality and CMAC-AES (SUF-CMA) for preserving integrity. The optimal configuration (IND-CCA2 and INT-CTXT), “King Mackerel,” employs two 256-bit symmetric keys, for encryption and authentication in the Encrypt-then-Authenticate (EtA) composition, and claims a 128-bit security level. All functions operate in the Troutman mode of information extortion (TIE), a slight variation of Young and Yung’s information extortion attack [1]. While Mackerel requires its own set of intrinsic analyses, it takes advantage of the analytical scrutiny of the AES; as such, the security of Mackerel reduces to that of the AES. Mackerel is based on original research conducted by Troutman in [2]. Mackerel is in the final stages of preliminary cryptanalysis, which will support Mackerel in a standalone paper, set to appear in Spring ’07, along with a complementary protocol for ensuring fairness via game theory.”
[1] A. Young, M. Yung, “Cryptovirology: Extortion-Based Security Threats and Countermeasures,” IEEE Symposium on Security & Privacy, pages 129-141, May 6-8, 1996.
[2] J. Troutman, “Examining Misimplemented RSA and Strengthened Authentication for Variations of the Cryptovirological Information Extortion Attack,” Duke University (TIP), July 24th, 2006.
So, as you can see, research has gotten as far as receiving a cool name - well, a name, at least. As of right now, Mackerel has taken a completely standardized approach, by using AES. However, Mackerel is merely a shell, of sorts; that is, the encryption and authentication functions, and their parameters, are largely arbitrary. As such, Mackerel can be configured for various trade-offs between efficiency and security. The paper will focus primarily on the most conservatively secure configuration, dubbed “King Mackerel,” which is IND-CCA2 and INT-CTXT secure. The algorithm specifications and design rationale paper will be available during, or shortly after, the conference. A complementary game-theoretic paper, outlining the Troutman information extortion mode of operation (TIE) for Mackerel, is set to be completed by Spring ’07; in late June, it will be presented in a guest lecture at Duke University.
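To make the Encrypt-then-Authenticate ordering concrete, here is a minimal Python sketch of the composition’s shape. This is emphatically not Mackerel: it substitutes a toy SHA-256 keystream for AES-CTR and HMAC-SHA256 for CMAC-AES (Python’s standard library has neither AES nor CMAC), and all names are mine. Only the structure matters: encrypt first, MAC the ciphertext, and on receipt verify the tag before ever touching the ciphertext.

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    # Toy counter-mode keystream built from SHA-256.
    # A stand-in for AES-CTR, NOT a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, plaintext):
    """EtA composition: encrypt, then authenticate the ciphertext.
    Two independent keys, as in the 'King Mackerel' configuration."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def verify_then_decrypt(enc_key, mac_key, blob):
    """Reject any blob whose tag fails BEFORE decrypting anything."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The verify-before-decrypt discipline is the point of EtA: a tampered ciphertext is rejected outright, which is what lifts an IND-CPA cipher plus a SUF-CMA MAC toward the IND-CCA2/INT-CTXT goals the abstract names.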
In the meantime, I’ll be investigating other niche environments for Mackerel, both software and hardware, as well as various other structural possibilities and applications for cryptovirus design. As always, I’m quite interested in any feedback - criticism included. Recognizing insecurity comes before understanding security, so the more folks looking at cryptovirology, the better. Until next time, I’ll be dreaming of warmer days, when I’m back in the South, away from 70mph wind gusts, incessant snow, and -30F wind chill. It’s all good, though.
Long live thermal underwear and down feathers (and a future excursion to Ipanema).
“‘Course, if someone goes ahead and adds the Y combinator, one must naturally begin to wonder what the YY combinator would be… :-) “
– Larry Wall
“Obviously it generates a function so anonymous that it can’t even refer to itself. I call it the depressed existentialist solipsist operator.”
– chromatic, in ‘Y not’
Microsoft continues to show foresight in some areas while distinctly lacking it in others (e.g., the Internet, Web 2.0, search). As an example of foresight, check out Mary Jo Foley’s coverage of Microsoft’s new Beginner Developer Learning Center.
What’s it for? The name says it all: help drive more would-be programmers to Microsoft by lowering the bar to writing good (or, at least, decent) code.
Via the new BDLC site, Microsoft is working to provide non-professional programmers with basic content.
Smart, smart move. I’m not sure it will be enough to stem the tide of new developers moving to open source, but that’s not really the point. Microsoft is expanding the universe of potential developers with this move, and not merely carving up an existing market of developers.
Good idea, Microsoft.
Hank Janssen may not be drinking the Microsoft Kool-Aid, but he’s certainly jazzed about his work at the company. And I was frankly a bit bowled over by something he said:
[W]e have been touching a lot of items people never thought a few years ago would be likely. Getting Mozilla people on site for one. Another one that would have been considered impossible is Microsoft writing plugins for Firefox. Here is a cool one for example Photosynth, and you can listen to my podcast in which I interview Ian Gilman one of the Photosynth developers….
Just think about that for a second, Microsoft writing Firefox plugins!!!
I am thinking about it, and I’m very impressed. Microsoft is either the shrewdest company on the planet or it’s actually feeling its way toward open source. I’m not suggesting that the company has a grand strategy to go open source, but I do believe that it’s hiring a generation of developers and business people that are intelligent about open source, rather than knee-jerk against it.
I grew up in open source. I never had a career outside open source. It’s therefore easy for me to accept it as the default business and development model. For those who grew up on proprietary software, the inverse is true. Not because they’re bad people, but because they were “raised” differently.
As Microsoft attracts good people from the outside (Bill, Sam, etc.), and as it hires from the universities that increasingly teach open source, Microsoft will “get” open source more and more. I don’t think this will fundamentally change the company from a products company to a services company, but I also believe that Microsoft’s vision of baking services into software meshes well with the underlying ethic of open source.
In short, I continue to believe that open source is a massive opportunity for Microsoft, and I continue to be impressed (not always, mind you - I still despise the Novell patent fiasco and am not a fan of the “Get the Facts” campaign) with steps the company is taking to figure out open source. Maybe it will never quite arrive, but at least it’s trying, which is much more than I’d say about some of its proprietary cousins.
Port 25’s Hank Janssen interviewed Ian Gilman from the Microsoft Live Labs Seadragon Project (a 12-minute interview). You might be more familiar with their PhotoSynth project. One outcome of this project relevant to the open source world is the first Firefox plugin developed at Microsoft.
Hank also looks back at his first 10 months at Microsoft as the resident skeptic at the Microsoft Open Source Software Lab. He lists his impressions and significant events during this period. You can find this blog item and the link to the interview with Ian Gilman in a blog entry titled…
Of course, what we all want to know is when the rest of us can upload our own photos to create our own PhotoSynth image collections to wander around in.