When you run the same process over a few years, its particular shortcomings emerge and can come to dominate. For example, Joel Spolsky claimed that Microsoft had an economic criterion for fixing bugs: they would only fix a bug if having it cost them more (e.g. in lost sales) than fixing it would. For a monopoly in a growing market there is no loss of sales from bugs, and for a near-monopoly with free alternatives, the revenue you lose on one application sale may to some extent be spent on another purchase from you anyway: we spend less on Office, but that gives us more to spend on Vista, for example.
I don’t know if Joel was correct in his assessment, or whether Microsoft has a different strategy now. But clearly the mid-term impact of such a strategy would be a buggy code base, with entrenched workarounds, combinatorial explosions of symptoms that prevent diagnosis, and an inadequate foundation for preventing major errors. Not to mention a sudden exposure to loss of market share when the market saturates and stops growing: when a sucker isn’t born every minute.
Sun’s Java effort has been suffering similarly of late: they have a nice-looking error process based on people voting for bugs they consider critical. Now, whether Sun actually uses this list to determine which bugs to fix first, or uses the vote to justify ignoring bugs they are not interested in, the result is probably the same: a system with lots of known bugs.
There are lots of other single-strategy methodologies: risk-based analysis, ISO 9126 software quality analysis, weighting bugs by their depth in the call stack so that library bugs are fixed at high priority, metrics, test-driven programming, and so on. I don’t know why we should have any confidence that any of them won’t, over time, systematically fail to address some kinds of errors. Which will bite us.
So is a better approach to just fix bugs randomly? Pick a bug from a hat? Well, maybe….
Perhaps we should say that each maintenance methodology, applied singly over time, will result in an accumulation of unaddressed errors in some aspect.
Part of the problem is human: people have interests and pressures and viewpoints. So democracies solve this by what Lee Teng-Hui (the Taiwanese president who secretly funded the opposition parties) called “the regular alternation of power”: term limits, shifting jobs, even sabbaticals.
Part of the problem, as I see it, is simple prioritization of bugs. Sometimes it is better to see each module as a whole, allocate quality requirements to that module, and then handle each bug according to its module’s priority. For example, Sun could say “we don’t treat text.html as a priority module, but we do treat 3D rendering as a priority”. Apply this to voting, and two votes for an HTML bug would be required to equal one vote for a 3D bug.
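A minimal sketch of that module-weighted voting, assuming illustrative module names and weights (these are not Sun’s actual priorities; the two-to-one HTML/3D ratio is just the example above):

```python
# Hypothetical module weights: a bug's votes are scaled by its module's
# priority, so one vote on a priority module outweighs votes elsewhere.
MODULE_WEIGHT = {
    "text.html": 1.0,   # not a priority module
    "3d": 2.0,          # priority module: one 3D vote = two HTML votes
}

def weighted_score(bug):
    """Raw votes scaled by the bug's module priority (default weight 1.0)."""
    return bug["votes"] * MODULE_WEIGHT.get(bug["module"], 1.0)

bugs = [
    {"id": 101, "module": "text.html", "votes": 10},
    {"id": 202, "module": "3d", "votes": 6},
]

# The 3D bug jumps the queue despite fewer raw votes: 6 * 2.0 = 12 > 10 * 1.0.
queue = sorted(bugs, key=weighted_score, reverse=True)
```

The point is that the fix queue is ordered by module-adjusted demand, not raw popularity.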
That is a more complex strategy, to be sure, but it is still a single strategy.
A better way of doing things may be to divide the debugging/maintenance/natural-enhancement effort into independent efforts. For example, have the mainstream process use criteria of immediate economic effect, risk, or deadlines. But also have a background effort that alternates between different strategies: systematic audits for internationalization, performance, standards compliance, transparency, integrity, resource utilization, and other quality concerns. And have another background effort that uses weighted voting with different criteria, and accepts minor Requests For Enhancement as well as bugs.
And even, for one in a hundred bug fixes, do pick a bug out of the hat, on the grounds that you don’t have 100% confidence that even multi-criteria maintenance will prevent the emergence of a nasty clump of errors in some aspect. Shake it up.
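The background rotation plus the one-in-a-hundred hat-pick can be sketched like this, with made-up strategy names and toy per-bug impact scores standing in for real audit criteria:

```python
import itertools
import random

def pick_next_bug(bugs, strategy_cycle, rng=random.random):
    """Pick the next bug to fix: roughly 1 in 100 picks comes out of the
    hat at random; otherwise the next strategy in the rotation chooses."""
    if rng() < 0.01:
        return random.choice(bugs)      # the one-in-a-hundred shake-up
    strategy = next(strategy_cycle)     # alternate background strategies
    return max(bugs, key=strategy)

# Illustrative rotation: i18n audit, then performance, then standards.
strategies = itertools.cycle([
    lambda b: b["i18n_impact"],
    lambda b: b["perf_impact"],
    lambda b: b["standards_impact"],
])

bugs = [
    {"id": 1, "i18n_impact": 5, "perf_impact": 1, "standards_impact": 2},
    {"id": 2, "i18n_impact": 1, "perf_impact": 9, "standards_impact": 0},
]

# Fixing rng at 0.5 skips the random branch, so the rotation is visible:
first = pick_next_bug(bugs, strategies, rng=lambda: 0.5)   # i18n pass
second = pick_next_bug(bugs, strategies, rng=lambda: 0.5)  # perf pass
```

Successive calls favour different bugs because the selection criterion itself rotates; no single criterion gets to starve an aspect of the code base indefinitely.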