A JVM that gradually tunes its garbage collection heuristics to adapt to the applications being run. Building JVMs in a modular way to support extension and modification. Finding faster implementations for Java bytecode operations. Analysis of where static compilation of the Java language to native code will break in the face of versioning, and solutions to the problem. And a keynote from the enemy.
Greetings, from the USENIX JVM Symposium, 2002!
This conference, perhaps a blip on the radar of most Java developers, deals with the element of the Java environment without which no Java code could ever execute: the Java Virtual Machine itself. Drawing attendees from all over the world, representing both corporate and academic environments, the JVM Symposium rattled off a rather interesting, intriguing, and sometimes intimidating array of topics for the Java developer to consider. Two days of intense, in-depth technical presentation, discussion, and peer-to-peer networking, the old-fashioned way: schmoozing.
If you threw a party, and only the geeks showed up, is it still a party?
This was, without a doubt, one of the most interesting conferences I’ve ever attended. Blatantly research-oriented and academic in nature, the technical sessions were fascinating to watch. Instead of the range of vendors springing to the stage with carefully-choreographed dog-and-pony shows demonstrating how their product does X better than the competitors, the Symposium was a presentation of research papers, where honest evaluation and feedback was desired, given, and appreciated. No better example of this took place than during the keynote Thursday morning, the first day of the conference.
The keynote, “Stop Thinking Outside the Box, There is No Box”, by Robert Berry of IBM, essentially described much of the work he and his compatriots in one of IBM’s research labs have been studying and exploiting in the areas of performance and optimization. His conclusion? “Don’t focus so much on performance–it’s probably not as important as you think.” As he said with a chuckle later, “Of course, don’t tell my boss that.”
Smaller and smaller, faster and faster
Much of the focus of the papers and research topics this year was on tiny devices–specifically, optimizing JVMs for use on memory-constrained devices like PDAs and cellphones. Several research topics focused on how the JVM must evolve to better work with handheld devices; in fact, one of the more interesting talks from this perspective was given by G. Chen, M. Kandemir, N. Vijaykrishnan, and M. J. Irwin, of Pennsylvania State, and M. Wolczko, of Sun Microsystems. In their paper, “Adaptive Garbage Collection for Battery-Operated Environments”, they discussed an approach whereby the garbage collector of a VM running on an embedded device could relocate objects in memory, so that particular banks of memory within the device could be shut down, thus drawing less power from the device’s batteries. As I listened, I thought back to the debate a few years ago between different garbage-collector approaches–in particular, the arena-based collector.
In an arena-based collector, objects are allocated out of one of many arenas–small contiguous “chunks” of memory allocated by the memory manager. As objects die, they are not reclaimed individually; instead, the collector waits until all of the objects allocated from within a particular arena are marked for collection, at which point it can release the entire arena and reallocate it. This makes compaction simpler, since arenas can be shuffled around instead of the actual objects, at the expense of carrying around dead weight–unreclaimed-yet-reclaimable objects–until the arena itself can finally be released.
The parallel here, of course, is the idea of either arenas themselves being the banks of memory, or else suballocating arenas within a given bank. When an arena was ready for collection, rather than going through the motions of releasing the memory and then shutting the bank down, the bank could simply be shut down–in essence, collecting memory back via powering down. It certainly would be fast, and preservation of battery life is, of course, all-important on an embedded device.
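To make the arena mechanics concrete, here is a minimal sketch of the bookkeeping involved; the Arena class and its counters are my own invention for illustration, not code from any real collector:

```java
// Toy model of arena-style reclamation: objects are never reclaimed
// individually; the whole arena is released at once when every object
// in it is dead. (Illustrative only -- not any real JVM's collector.)
public class ArenaDemo {
    static class Arena {
        private final int capacity;   // slots in this contiguous chunk
        private int allocated = 0;    // objects handed out so far
        private int dead = 0;         // objects marked collectable

        Arena(int capacity) { this.capacity = capacity; }

        // Hand out one slot; fails once the arena is full.
        boolean allocate() {
            if (allocated >= capacity) return false;
            allocated++;
            return true;
        }

        // Mark one previously allocated object as dead. Until ALL of
        // them are dead, this "dead weight" is carried along unreclaimed.
        void markDead() {
            if (dead < allocated) dead++;
        }

        // The collector releases the arena only as a single unit --
        // or, in the battery-saving scheme above, powers its bank down.
        boolean releasable() { return allocated > 0 && dead == allocated; }
    }

    public static void main(String[] args) {
        Arena arena = new Arena(4);
        for (int i = 0; i < 4; i++) arena.allocate();
        arena.markDead();                        // one dead object...
        System.out.println(arena.releasable());  // false: dead weight among live objects
        for (int i = 0; i < 3; i++) arena.markDead();
        System.out.println(arena.releasable());  // true: release the whole chunk at once
    }
}
```

The power-down idea then falls out naturally: once `releasable()` is true for every arena in a bank, the bank needs no explicit sweep at all.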
AIG: Artificially Intelligent Garbagemen?
One talk which blew me away with some of its implications was “To Collect or Not To Collect? Machine Learning for Memory Management”, presented by Eva Andreasson and Olof Lindholm of BEA/Virtual Machines and Frank Hoffmann of the Royal Institute of Technology. In it, they presented a garbage collection system which used machine-learning techniques to slowly, over time, tune its garbage-collection heuristics when deciding whether to collect or not. The part that truly sent a shot of pure adrenaline into my system was a chart presented towards the end of the talk, comparing their machine-learning GC against JRockit, a high-performance JVM marketed by Appeal Virtual Machines (now owned by BEA).
As would be expected, the JRockit GC performance numbers were pretty flatlined across time–JRockit achieves its garbage collection numbers (which are, in and of themselves, impressive) through an advanced, yet static, set of heuristics deciding whether objects are collectable or not, and which ones to collect first. And, as would also be expected, the machine learning GC curve started out far below that of JRockit. But what truly blew me away was that the curve not only approached the JRockit line over time, but eventually surpassed it. (All the way to the end of the chart, anyway, when it took a surprising dip back below the JRockit numbers. When asked why this was so, Ms. Andreasson, presenting, shrugged and said, “We don’t know. In fact, I was counseled to just drop that data off the end of the chart to make it look better.” At which point the room broke out in laughter for a good fifteen seconds.)
If this doesn’t send chills down your spine like it did mine, then the full ramifications of this kind of research haven’t hit you yet. Consider the purpose to which we are currently putting Java: enterprise systems, running 24×7 for as long as we can keep them up. Consider the curve presented: over time, the GC adaptations simply get better and better. Over time, which a 24×7 process has plenty of. In short, your servlet could achieve better performance just because the VM underneath it has learned how best to adapt its garbage-collection techniques to the code running on top of it. No need for complex pooling logic or activation/passivation schemes–the JVM learns as it goes, and it’s got plenty of time in which to do it.
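As a toy illustration of the idea–and only the idea; the paper’s actual approach uses genuine reinforcement-learning machinery over far richer state–imagine a single tunable “collect when the heap is this full” threshold, nudged by feedback after every GC cycle. All names and numbers below are invented for illustration:

```java
// Toy sketch of feedback-tuned GC heuristics. After each cycle we learn
// the "garbage ratio": what fraction of the collected region was actually
// garbage. Mostly-live collections were wasted work, so we wait longer
// before the next one; mostly-garbage collections mean we could have
// afforded to collect sooner. (Not the paper's algorithm -- a toy.)
public class AdaptiveGcSketch {
    static double tune(double threshold, double[] garbageRatios) {
        final double step = 0.05;   // learning rate, invented for the demo
        for (double ratio : garbageRatios) {
            if (ratio < 0.5) {
                // Wasted effort: raise the occupancy bar for collecting.
                threshold = Math.min(0.99, threshold + step);
            } else {
                // Productive cycle: collect a little earlier next time.
                threshold = Math.max(0.50, threshold - step);
            }
        }
        return threshold;
    }

    public static void main(String[] args) {
        // Early cycles find mostly live data; later ones find mostly garbage,
        // so the threshold drifts down and collections happen sooner.
        double[] observed = {0.2, 0.3, 0.8, 0.9, 0.85};
        double tuned = tune(0.90, observed);
        System.out.println(Math.round(tuned * 100) / 100.0);
    }
}
```

The point of the sketch is the shape of the loop, not the arithmetic: a long-running 24×7 process feeds this loop indefinitely, which is exactly why the curve in the talk kept climbing.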
No, Really, I’m the Same Guy You Always Knew
One performance technique popular in Java’s early days (and one that just doesn’t seem to want to go away) is the idea of taking a finite set of Java classes and, rather than running them under the dynamic managed environment of the Java virtual machine, compiling the code statically to create a natively-linked executable. Several vendors, including Microsoft, bundled Java-to-native compilers as part of their Java offerings, and the GNU crowd continues to pursue gcj as we speak. But there are issues involved with this approach, issues which hadn’t occurred to me until I heard Dachuan Yu, Zhong Shao and Valery Trifonov of Yale present their paper, “Supporting Binary Compatibility with Static Compilation”.
In essence, the problem is not a new one. Normally, under the dynamically-loaded JVM environment, when a Java class wants to call a method on another class, the bytecode “invokevirtual” is emitted with the name and signature of the target method. When the JVM sees this, either the interpreter or the JIT compiler translates the symbol into an instruction-pointer target within the JVM. When a Java class gets statically compiled, however, this symbolic lookup is done at compile-time for maximum runtime performance, and the symbolic method lookups turn into fixed offsets into virtual tables embedded inside the code. Now, when the called class changes, all callers must be recompiled, or else the slot/offset scheme will be wrong, and Bad Things will result. Their solution? Follow one of the oldest precepts in Computer Science: “There is no problem that cannot be solved with another layer of indirection.” In short, the static Java compiler doesn’t embed the offset directly, but a pointer to a table in which those offsets are maintained and updated as classes are resolved and loaded in.
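That extra layer of indirection can be sketched in a few lines. This toy model is my own, not code from the paper: strings stand in for compiled method bodies, and arrays stand in for the vtable and the loader-maintained offset table, to show why callers survive a layout change:

```java
// Toy model of offset-table indirection for statically compiled calls.
// Callers are "compiled" against a stable methodId; the loader rewrites
// offsetTable when a class's vtable layout changes, so callers need no
// recompilation. (Illustrative sketch only, not the paper's scheme.)
public class IndirectionDemo {
    // Stand-ins for compiled method bodies in the target class's vtable.
    static String[] vtable = {"toString-v1", "hashCode-v1"};

    // The layer of indirection: offsetTable[methodId] gives the CURRENT
    // vtable slot for that method, maintained as classes are loaded.
    static int[] offsetTable = {0, 1};

    // A statically compiled call site: it bakes in methodId, never a slot.
    static String dispatch(int methodId) {
        return vtable[offsetTable[methodId]];
    }

    public static void main(String[] args) {
        System.out.println(dispatch(0));   // resolves to toString-v1

        // Simulate recompiling the target class with a changed layout:
        // the methods swapped slots. The loader fixes the offset table;
        // the caller's dispatch(0) is untouched and still correct.
        vtable = new String[]{"hashCode-v2", "toString-v2"};
        offsetTable = new int[]{1, 0};
        System.out.println(dispatch(0));   // resolves to toString-v2
    }
}
```

With a baked-in slot number instead of `offsetTable[methodId]`, the second call would have silently dispatched to the wrong method–exactly the Bad Things the paper sets out to prevent.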
Ordinarily, I don’t deal much in statically-compiled Java; in fact, I see one of Java’s greatest strengths as its dynamically-loaded nature, which allows it to easily take the component-based container approach that’s been so popular three times now (applets, servlets, and EJB). But again, the paper made me start to think about object layouts, both in terms of binary compatibility and in terms of JVM implementation of method dispatch–combined, for example, with an adaptive JIT that tried to optimize and inline methods as aggressively as possible, and back out when changes in the environment necessitated a reJIT of a method against a similar-looking yet different class.
Capulets and Montagues, In The Same Room?
But far and away, what was most interesting to me, personally, was the keynote on the second day. David Tarditi presented “Research Opportunities and Research Challenges”, in which he described the Microsoft CLR architecture and talked about the Shared Source CLI implementation, code-named Rotor. Surprisingly, and pleasantly so, there was little in the way of poisonous comments directed at the speaker, a member of Microsoft Research. In fact, at the end of the conference, it was even suggested that the next iteration of this conference be renamed from the “JVM Symposium” to the “VM Symposium” or “JVM/EE (Execution Engine) Symposium”, to reflect a growing interest in virtual machine technology outside of Java, such as the CLI (or Lisp or other VMs that have come along the way).
Naturally, this peaceful interaction between Java and .NET is near and dear to my own heart: as the author of one Java book (Server-Based Java Programming, Manning Publications, 2000) and another on the way (Effective Enterprise Java, Addison-Wesley), as well as two .NET books (C# in a Nutshell and VB.NET Core Classes in a Nutshell, both O’Reilly 2002) and another on the way on Rotor (Shared Source CLI Essentials, O’Reilly), I see tremendous opportunity for geeks of both camps to learn from one another’s history, experience, and, yes, mistakes. But the only way to find out about them is to look, with clear and unpoliticized eyes, at what the other guys are doing, and see if it’s relevant. Using myself as an example, my interest in writing about Rotor in turn rekindles my desire to dive into the SCSL-licensed JDK source and figure out how Hotspot does JIT, GC, classloading, and so forth.
Same Time Next Year
All in all, the Symposium was a success, I believe; not only were interesting papers presented (admittedly, from a personal perspective, some more than others), but the opportunity to “work the room” and get a chance to talk with academics, researchers, and the odd corporate developer (like myself) was definitely good stuff. Although not necessarily a conference for the faint-of-heart Java programmer, this is definitely a conference to take a look at if you’re a team lead, tech lead, or architect who wants to keep an eye on what might be coming down the highway in a few years.
Now, if you’ll excuse me, I have to finish downloading the JDK source and start building…