Update: via http://www.rpath.com/rbuilder/tryItNow?id=1:
This appliance can be run in the Amazon Elastic Compute Cloud (EC2), compliments of rPath. Click on the button below to launch the appliance. Once the boot process is complete, additional instructions will appear and you can complete the installation. Then, use the MediaWiki appliance in the cloud!
Just tried it. It seems you get about 15 minutes' worth of play time via the rPath Appliance Agent interface, which lets you change the password, create an admin account and password for the MediaWiki instance, add an email address (part of the MediaWiki setup process, though it seems any old email (read: fake**) will do), and then access the MediaWiki instance itself.
Nice touch, rPath!
** NOTE: Don’t use any periods in the admin name OR the email address you provide. Using m.david and m.david@fill_in_the_blank threw errors for both, which is why I decided to use a fake email address, as 95% of my email addresses have periods in the handle segment.
rPath - rPath Teams with Amazon Web Services
It will work like this: software developers use rBuilder to build an Amazon Machine Image (AMI) that is stored using the Amazon Simple Storage Service (Amazon S3). Then, with a single click, rBuilder and rBuilder Online users can boot their software appliances on Amazon EC2. No more waiting for downloads or fighting with complex installation procedures. Software appliances plus Amazon EC2 deliver software value without the hassles - on-demand. To learn more visit: www.rpath.com/amazon.
So firstly, this *ROCKS*!
I’ve been building out an official nuXleus AMI based on the core AMI image released by the good folks at rPath, and in fact have had several build processes running on top of this same image up on EC2 over the last couple of days. This, I am coming to discover, is quite nice: turn on a couple of nuXleus EC2 instances, start up a bunch of build processes, and when they’re complete, shut ’em down. At USD $0.10/hr it’s cheaper to run these processes on EC2 than locally: combine the cost of electricity with the lost productivity from having fewer resources available on my local machine while running several virtualized instances, and there is simply no comparison. By a *LONG* shot!
As mentioned, I’ve been building out a new instance of nuXleus based on the rPath AMI base (several applications can take full advantage of Xen, so flavoring the recipes and re-cooking with the Xen flavor is worth the effort), and while my time has been limited, I’ve been making good progress. With the added luxury of full access to my local machine’s resources, coupled with the ability to queue multiple build processes with rmake, I’ve been able to avoid allocating several days at a time to prepare a new distribution (which is basically what I have had to do in the past), and am making good progress towards the next release — something you’ll learn more about in a few moments as I finish up the Open Source XML Roundup post for weeks 10, 11, and 12 (nearly done).
In the meantime: I was hoping to put together a screencast showcasing just how wonderful it truly is to create and deploy rPath/rBuilder-based appliances on EC2 (think: from “we need an optimized appliance to run this process” to that same optimized appliance running on EC2 in less than an hour’s time), but I haven’t had the time. I did take some screen shots, though, which you can view via a quick and dirty S3 client-side XSLT bucket reporting tool. I originally wrote it for another purpose, but realized it would work quite nicely for this: drop a bunch of images into an S3 bucket, create a quick XML definition file with a PI pointed to
http://s3.amazonaws.com/xslt/list-bucket.xsl, and then point people to the resulting XML definition file uploaded to any given bucket on S3.
An example XML definition file looks like this:
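A minimal sketch of such a definition file — the element and attribute names below are my illustration of the idea, not necessarily the tool’s actual schema; the xml-stylesheet PI is the real mechanism that tells the browser to apply the XSLT:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="http://s3.amazonaws.com/xslt/list-bucket.xsl"?>
<!-- Hypothetical definition file: one entry per object dropped into the bucket -->
<bucket name="my-screenshots">
  <item key="ec2-appliance-launch-01.png" type="image/png"/>
  <item key="ec2-appliance-launch-02.png" type="image/png"/>
</bucket>
```

Upload the file to the same bucket, and any browser with client-side XSLT support will render the listing when pointed at it.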
Once I have a moment (read: once the list of higher-priority items is complete) I’ll get the code base checked into the XSLT project repository on Google Code, along with some basic documentation. That said, the above should be pretty self-explanatory, so please feel free to create a similar definition file, put it in a bucket somewhere on S3, and, using the
http://s3.amazonaws.com/<bucketname>/<keyname>** URL form, point people to the resulting XML file so they can quickly and easily browse the contents of a public folder. (I haven’t built any sort of signing mechanism into this, so it will only work for buckets, and bucket contents, tagged for public access.)
I’ll be posting the aforementioned XML OSS roundup here in a bit, so bye for now :)
** You can access the XSLT file directly, save it locally, and upload it to a virtual host bucket on S3 (a virtual host bucket being simply a bucket with the same name as the domain in the request header, e.g. foo.example.org), then use it to serve up a directory of that bucket’s contents. But for this to work as is (meaning with the stylesheet served from its location at http://s3.amazonaws.com/xslt/list-bucket.xsl), due to cross-domain limitations you need to use the http://s3.amazonaws.com/<bucketname>/<keyname> form of the URL, so that the XML file and the XSLT are served from the same domain.