As code grows, grows more portable, and gains more dependencies (themselves portable), how can you validate it? What can a company or individual do to repay a community for millions of lines of useful code? How can developers and users give and receive feedback more smoothly?
Perl 5 enabled the growth of something amazing: the CPAN. In ten years, this distributed code repository has grown from a few dozen contributions to over ten thousand reusable components comprising millions of lines of code, written by several thousand contributors. Some people even claim that 90% of any Perl program already exists, prewritten, on the CPAN.
Without good tools, a project of this size would be unmanageable. Fortunately, CPAN search, user ratings, annotations, and user recommendations all help people find the right distributions for their uses. There are also recent-upload feeds to skim for updates and new releases.
Another useful development is an increased focus on quality among module authors and maintainers. The Perl testing tools have grown from their beginnings in 2000 and 2001 to rival the best tools available for other platforms. With new tools (and, admittedly, more than a little evangelism for good development practices), testing, quality practices, and even automated kwalitee metrics have spread to many developers, both on and off the CPAN.
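Those testing tools center on writing small, declarative test scripts. As a minimal sketch of what such a test looks like (Test::More is a real CPAN module; the `add()` function here is purely hypothetical, invented for illustration):

```perl
#!/usr/bin/perl
# A minimal test script in the style the Perl testing tools popularized.
# It declares a plan, exercises some code, and emits TAP output that
# harnesses and automated testers can collect and report.
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical function under test -- stands in for real module code.
sub add { my ( $x, $y ) = @_; return $x + $y }

is( add( 2, 3 ),  5, 'add() sums two positive numbers' );
is( add( -1, 1 ), 0, 'add() handles a negative operand' );
```

Running this script prints TAP ("ok 1 ...", "ok 2 ...") that tools like `prove` aggregate across an entire distribution.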
Of course, having automated tests available means nothing if you don’t use them, and so there are quite a few dedicated testers who faithfully download new modules and put them through their paces, reporting back their results. It’s not a perfect system, but automated tests run even by a few people other than the author and reported back faithfully can still give valuable feedback.
Not everything is perfect, though. What works for ten or a hundred or even a thousand distributions may not work well for ten thousand.
How many platforms does Perl support? Dozens. How many stable releases of Perl are available in the world now? Even counting just the stable, recommendable versions of Perl 5.6 and Perl 5.8, that’s eight or nine. How many dependencies are there to consider — not just other modules, but system libraries and configuration options?
Even retaining the same level of quality while the number of distributions continues to grow is a difficult task, to say nothing of improving the level of quality. (We’re not perfect. We can always improve.)
What’s the solution? Adam Kennedy has an idea: the Portable Image Testing Architecture. If anyone can make a ridiculously large and ambitious project work, I believe it’s Adam.
By creating and distributing preconfigured virtual machine images, all set and ready to go for testing, PITA hopes to make it so easy to test code on any platform and any circumstances that a company or individual can invest $500 in a little box in the corner and provide continual, useful feedback to code maintainers.
This project has my attention, not just because it is a frontal attack on a ridiculously large software development problem, but because I believe it may provide a valuable service to me. As code grows, grows more portable, and gains more dependencies (themselves portable), how can you validate it? What can a company or individual do to repay a community for millions of lines of useful code? How can developers and users give and receive feedback more smoothly?
I look forward to further information from PITA.