Interesting. I have been experimenting with something similar for an upcoming article.
The stuff that's in quicktime.std.sg (sg == "SequenceGrabber") substantially works, in that you can get a SequenceGrabber, iterate over devices, look at the audio or video channels, get plausible dimensions for the GWorld, etc.
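For reference, the part that does work goes roughly like this. Treat it as a pseudocode sketch: these are the QTJ calls from memory, I can't swear to every signature, and none of it runs without the native QuickTime side anyway:

```
// pseudocode sketch of the working quicktime.std.sg path
QTSession.open();
SequenceGrabber sg = new SequenceGrabber();
SGVideoChannel vc = new SGVideoChannel(sg);
SGDeviceList devices = vc.getDeviceList(0);   // iterate capture devices here
QDRect size = vc.getSrcVideoBounds();         // plausible GWorld dimensions
QDGraphics gw = new QDGraphics(size);
sg.setGWorld(gw, null);
// ...and this is where Movie.fromSequenceGrabber(sg) falls over
```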
On the other hand, creating a Movie with Movie.fromSequenceGrabber(sg) gives you a Movie that instantly dies: try to use it, even for a println, and you get a "the native QT object is no longer valid" error.
It's interesting that you can get the pixels into an offscreen GWorld (I'm trying to dump them to disk as a Pict, and it's not working yet), and it makes sense that you then have to use them "outside of QuickTime", since you don't have a valid Movie or GraphicsImporter from which to fashion a QTComponent. So what do you figure this code is doing? Maybe taking the PixMap and making a BufferedImage out of it, since the PixMap gives you a RawEncodedImage for the Raster and a ColorTable? You mention performance is bad, and that makes sense: you're copying from the QT offscreen buffer to a Java offscreen buffer, and then blitting that to a (J)Component. Eep.
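For what it's worth, the PixMap-to-BufferedImage step we're both guessing at would look something like this in plain Java 2D. This is only a sketch: it assumes you've already copied the bytes out of the RawEncodedImage into a byte[], and it assumes packed 24-bit RGB with no row padding, whereas a real PixMap is often 32-bit ARGB and padded out to the GWorld's rowBytes, so you'd adjust the indexing accordingly:

```java
import java.awt.image.BufferedImage;

// Sketch: wrap raw pixel bytes (as you might copy out of a
// RawEncodedImage) in a BufferedImage. Assumes packed 24-bit RGB
// with no row padding.
public class PixelsToImage {
    public static BufferedImage rgbBytesToImage(byte[] rgb, int width, int height) {
        BufferedImage img =
            new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int i = (y * width + x) * 3;
                int pixel = ((rgb[i] & 0xFF) << 16)      // red
                          | ((rgb[i + 1] & 0xFF) << 8)   // green
                          |  (rgb[i + 2] & 0xFF);        // blue
                img.setRGB(x, y, pixel);
            }
        }
        return img;
    }
}
```

Per-pixel setRGB is about the slowest way to do this, which would square with the performance you're seeing; wrapping the byte[] directly in a Raster would at least avoid one copy.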
If you have a link to this code, please post it as a follow-up here - it'd be interesting.
That said, even if we can do this, we've still lost most of what makes capture worth having. QTJ should be able to save the captured movie to memory or disk (as a real QT Movie, so we can edit it, export it, etc.), get us a MediaHandler so we can look at audio levels on the mic ("Karaoke Revolution", anyone?), look for diffs between frames (turn your G5 into a motion detector), stream it if enough of the Presentation API still works (if it ever did), and so on. We still need capture to get fixed for real.
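On the motion-detector idea: once you do have consecutive frames as BufferedImages (however you got them out of QT), the frame-diffing part is plain Java 2D and nothing QT-specific. A sketch, with the threshold being an arbitrary assumption you'd tune for your camera's noise:

```java
import java.awt.image.BufferedImage;

// Sketch of naive frame differencing: report what fraction of pixels
// changed "enough" between two same-sized frames.
public class MotionDetector {
    // Returns the fraction of pixels whose summed per-channel
    // difference exceeds perPixelThreshold.
    public static double changedFraction(BufferedImage a, BufferedImage b,
                                         int perPixelThreshold) {
        int w = a.getWidth(), h = a.getHeight();
        long changed = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int diff = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF))
                         + Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF))
                         + Math.abs((p & 0xFF) - (q & 0xFF));
                if (diff > perPixelThreshold) changed++;
            }
        }
        return (double) changed / ((long) w * h);
    }
}
```

If changedFraction crosses some fraction you pick (say, a few percent of the frame), call it motion. Of course this only underlines the point: without working capture there are no frames to diff in the first place.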
Apple says they listen to bug reports, and duplicates tell them what's important, so go ahead and file a "we want QTJ capture back" bug. I have. http://bugreport.apple.com/