When I first started using LVM I got bit by a few bugs. It’s all part of being an early adopter. As a result I never really used it on production hardware, and it wasn’t until about two years ago that I gave it another look. In a similar manner, I never thought of software RAID as much more than a novelty. Much of that has changed now, and I use both on a regular basis for a number of reasons.
I’m working with a product that includes this disclaimer in its support documentation:
“Virtual environments, such as VMWare (and others) are not recommended, and thus not supported.”
I can almost see their point. It’d be pretty daunting to make sense of a benchmark if a customer described the running host as “1/13th of two dual core processors, 3.1 gigs of memory, and a 27 gigabyte filesystem disk”. True, that’s a pretty extreme situation, but I wouldn’t be surprised if there were the occasional bad provisioning by virtual system installers.
Anyone who implements virtualization is implicitly trusting the VM solution to do the right thing, and when we see the operating system up and running, we just assume everything works perfectly. But let’s be honest: almost every VM solution creates some overhead, so you’re missing out on a few resources. That loss shouldn’t amount to much, but it could mean a lot to an application. And while CPU and memory can be partitioned cleanly, device I/O, such as hard disk access, is a little sketchier.
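That disk overhead isn’t hard to spot for yourself, either. Here’s a rough sketch, not any vendor’s benchmark, of the kind of quick probe I mean: time a run of sequential, fsync’d writes inside the guest and compare it against the same run on bare metal. The file name and sizes are arbitrary choices for illustration.

```python
import os
import time

BLOCK = b"\0" * (1024 * 1024)   # 1 MiB per write
COUNT = 256                      # 256 MiB total; adjust to taste
PATH = "io_probe.tmp"            # scratch file on the filesystem under test

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())         # force the data out of the page cache
elapsed = time.time() - start
os.remove(PATH)

print(f"wrote {COUNT} MiB in {elapsed:.2f}s ({COUNT / elapsed:.1f} MiB/s)")
```

It’s crude, and it says nothing about random I/O or contention from neighboring guests, but even a throwaway script like this will usually show the gap between a virtual disk and the real thing.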
To the developers of the above unnamed application, I know it’s going to be a big hassle, but five years from now, you’re not going to be able to avoid virtualization. Instead of the blanket disclaimers, increase your virtualization knowledge base, and create more test suites. Find out what works, what doesn’t, and why. It’s still okay to set guidelines on usage, but a wholesale avoidance of virtualization will hurt in the long run.
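If you’re going to set guidelines rather than blanket disclaimers, the first step is simply knowing when you’re on a hypervisor at all. As a hypothetical example of what a support script or test suite might record, here’s a small check, assuming a Linux guest that exposes DMI information under /sys; the vendor strings it looks for are my own guesses at common hypervisors, not an exhaustive list.

```python
from pathlib import Path

# Strings commonly found in the DMI vendor/product fields of popular hypervisors.
KNOWN_HYPERVISORS = ("vmware", "qemu", "virtualbox", "innotek", "xen")

def looks_virtualized() -> bool:
    """Return True if the DMI vendor strings suggest we're inside a VM."""
    for name in ("sys_vendor", "product_name"):
        path = Path("/sys/class/dmi/id") / name
        try:
            value = path.read_text().strip().lower()
        except OSError:
            continue  # field missing or unreadable; move on
        if any(hv in value for hv in KNOWN_HYPERVISORS):
            return True
    return False

if __name__ == "__main__":
    print("virtualized" if looks_virtualized() else "no hypervisor detected")
```

Log that alongside your benchmark numbers and, over time, you have actual data on how your application behaves under virtualization instead of a disclaimer.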