I think it would be easier if you separated defending your article (which rightfully addresses the needs of Windows developers) from trying to refute the statement "macos x already works like this".
The fact is that Mac OS X already *does* work mostly like this, except for retained vectors within the graphics API, and whether this is a good thing or not is at the very least debatable (I don't agree it is). I don't think you needed to dwell on Quartz, but a short reference to the existing state of the art is almost always appropriate, if only to provide context.
With that out of the way, I do believe you made a number of statements in your replies that need to be addressed.
1. High DPI displays
Quartz will be quite capable of adapting to high DPI displays, just not using the mechanism you envisage. In fact, Quartz uses the PDF/PostScript imaging model, which, as you may recall, is device independent and theoretically arbitrarily scalable; in practice it scales at least to imagesetters and film printers, at around 2500 dpi, and has done so for a couple of decades.
The mechanism is quite simple and has been in use for several decades: apply a device transform with a higher scale factor before any other rendering is done. Voilà, you get a bitmap rendered at a higher resolution.
Your proposed mechanism of applying a compositing transform to the completed window is unlikely to work well even in the Windows world, because it assumes that *all* drawing by *all* applications will use the retained vector API. This assumption seems unlikely unless non-retained APIs are removed, and completely hopeless once you consider legacy applications.
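To make the difference concrete, here is a toy sketch (hypothetical names, Python standing in for the actual graphics stack): scaling the device transform *before* rasterizing produces genuinely higher-resolution output, while scaling an already-rasterized window can only replicate existing pixels.

```python
# Toy illustration, NOT the Quartz API: a "drawing" is a list of vector
# line segments in the unit square; rasterization samples them onto a grid.

def rasterize(segments, scale=1):
    """Rasterize vector segments onto a (4*scale)^2 grid, applying the
    device scale *before* sampling (the Quartz-style path)."""
    dim = 4 * scale
    grid = [[0] * dim for _ in range(dim)]
    for (x0, y0), (x1, y1) in segments:
        steps = dim * 8
        for i in range(steps + 1):
            t = i / steps
            px = min(dim - 1, int((x0 + (x1 - x0) * t) * dim))
            py = min(dim - 1, int((y0 + (y1 - y0) * t) * dim))
            grid[py][px] = 1
    return grid

def upscale(grid, factor):
    """Scale an already-rasterized window bitmap (the compositing-transform
    path): pixels are merely replicated, no new detail can appear."""
    return [[px for px in row for _ in range(factor)]
            for row in grid for _ in range(factor)]

diagonal = [((0.0, 0.0), (1.0, 1.0))]
hi_res = rasterize(diagonal, scale=2)                      # thin 8x8 diagonal
lo_res_scaled = upscale(rasterize(diagonal, scale=1), 2)   # 8x8 of fat 2x2 blocks
```

Both results are 8x8, but only the first actually gained resolution; the second just has bigger pixels — which is the problem with compositing-level scaling for anything not drawn through the retained API.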
2. "You have to do your own redrawing"
This is false. Or more precisely, it really depends on who you mean by "you". If you mean "some entity outside the low-level graphics library", then it is technically true. However, if you mean "you, the developer", this is simply false (and since you put this under "it makes life easier for the developer:", that seems the only reasonable definition).
Most modern app development on Mac OS X is done using the Cocoa frameworks, and these will handle the refresh logic, calling your redraw methods when necessary (and appropriately clipped) without you being aware of it.
You might reasonably counter that this goes outside the scope of graphics APIs, and the answer is that yes it does. However, just because something isn't handled by a particular API doesn't mean it isn't handled, and just possibly a retained-mode graphics API isn't the best way of handling this situation.
[I would argue that it is not]
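Here is a toy sketch of that division of labor (hypothetical names; Python standing in for Cocoa's setNeedsDisplay:/drawRect: pattern): the framework tracks invalid regions and calls the developer's drawing method with the clip area — the developer never schedules redraws by hand.

```python
# Toy sketch, NOT the Cocoa API: the framework owns invalidation;
# the developer only supplies draw_rect.

class View:
    def __init__(self):
        self._dirty = []   # invalid rects, accumulated by the framework
        self.drawn = []    # record of draw_rect calls, for illustration

    def set_needs_display(self, rect):
        """Model changed; just note the invalid area (x, y, w, h)."""
        self._dirty.append(rect)

    def draw_rect(self, rect):
        """Developer-supplied drawing code; the caller clips to `rect`."""
        self.drawn.append(rect)

    def run_loop_tick(self):
        """Framework side: coalesce dirty rects, invoke draw_rect once."""
        if self._dirty:
            xs = [x for r in self._dirty for x in (r[0], r[0] + r[2])]
            ys = [y for r in self._dirty for y in (r[1], r[1] + r[3])]
            union = (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
            self._dirty.clear()
            self.draw_rect(union)

view = View()
view.set_needs_display((0, 0, 10, 10))
view.set_needs_display((20, 20, 5, 5))
view.run_loop_tick()   # framework coalesces and calls draw_rect once
```

The point being: refresh logic lives in the framework layer, not in a retained-mode graphics API, and the developer's code is purely "draw this area when asked".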
3. "big limitation: there is no vector-level retention"
You assert that this is a big limitation. Unlike the other two points, which were factual errors, this is more of a value judgement that I can't factually refute quite as easily. However, I do disagree, and point out that your primary arguments supporting your assertion [1+2 above] have been shown to be factually incorrect.
The retained vector model is not new in any sense of the word; it is actually one of the oldest in town, with GKS and similar systems going back to the '70s and before (anyone remember Tektronix storage tube displays? :-) )
Now that doesn't mean it is bad, just like claiming it is new doesn't make it good.
However, it should be noted that retained-mode graphics systems were the norm a while ago, and then were rejected in favor of immediate drawing, at least for 2D graphics. Attempts to reintroduce retained mode drawing such as QuickDraw GX did not meet with success.
3b. drawing using the GPU vs. unified imaging model
You seem to assume that drawing via the GPU is inherently "better" or "more advanced", without apparently even considering the possibility that *not* drawing via the GPU may be a conscious design decision.
However, this was precisely the decision taken for Display PostScript, and I am almost certain also for Quartz. The reason was/is that the PostScript/PDF imaging model is precisely defined (in a device-independent manner, see above), and most GPUs simply don't draw precisely that way; even if some do, there is no guarantee of it.
You don't need to be convinced that this is a good idea, but at the very least it is an alternative view and means that GPU-based drawing is not necessarily "better" or "more advanced", or "leap-frogging".
3c. (vector level) retention automatically fast(er)
Given that the retained vectors are not being drawn by the GPU, there is no particular reason why this should be faster than immediate-mode drawing, and experience with DPS, for example, showed that it almost never was.
The reason for this is that you are doing extra work: building up the retained graphics representation from the application-level representation, and then drawing that. Drawing directly from the application-level representation eliminates one step.
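The extra step is easy to see in a toy sketch (hypothetical names, Python for illustration): immediate mode goes straight from the application model to output, while retained mode first translates the application model into a display list and then replays it — same result, one more pass over the data.

```python
# Toy sketch: the application model is a list of words; "rendering" a word
# is recorded as a (word, x) tuple standing in for actual drawing.

def draw_word(output, word, x):
    output.append((word, x))          # stand-in for actual rendering

def immediate_draw(words):
    out = []
    x = 0
    for w in words:                   # one pass: app model -> output
        draw_word(out, w, x)
        x += len(w) + 1
    return out

def build_display_list(words):
    dlist = []
    x = 0
    for w in words:                   # pass 1: app model -> retained model
        dlist.append(("text", w, x))
        x += len(w) + 1
    return dlist

def replay(dlist):
    out = []
    for op, w, x in dlist:            # pass 2: retained model -> output
        draw_word(out, w, x)
    return out

words = ["hello", "world"]
assert immediate_draw(words) == replay(build_display_list(words))
```

Retention only pays off when the display list is replayed many times without changing; the moment the application model changes, the translation pass runs again.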
3d. (vector level) retention makes for easier APIs
As I already showed above, it is not true that vector-level retention automatically makes for less work by the programmer.
In fact, retained mode APIs tend to make things more complicated, not easier. The reason is that almost all applications will already *have* a retained-mode representation of whatever it is they want to draw, in application-level terms.
This will be true in every case where the application-level model is not exactly the same as the retained-mode graphics model that the API offers. I can't imagine an application where this will *not* be the case, except maybe for a trivial "Avalon" viewer.
So you end up simultaneously maintaining *two* models, instead of just one, and trying to propagate changes between them. Doing this via "diffs", which is the advantage you are proposing, is very complex and error prone. So complex that I am not going to go into too much detail here.
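A minimal sketch of that dual-model burden (hypothetical names; this deliberately ignores diffing and shows only the simplest synchronization): every mutation of the application model must be mirrored into the API's retained scene, and forgetting a single mirror call makes the two drift apart.

```python
# Toy sketch: the application already has its own model (AppModel);
# a retained-mode API forces a second, parallel model (RetainedScene)
# that must be updated in lockstep.

class AppModel:
    def __init__(self):
        self.paragraphs = []          # the application's own representation

class RetainedScene:
    def __init__(self):
        self.nodes = []               # the API's retained representation

    def add_text_node(self, text):
        self.nodes.append({"kind": "text", "text": text})

    def update_text_node(self, index, text):
        self.nodes[index]["text"] = text

def insert_paragraph(model, scene, text):
    model.paragraphs.append(text)
    scene.add_text_node(text)         # every mutation must be made twice

def edit_paragraph(model, scene, index, text):
    model.paragraphs[index] = text
    scene.update_text_node(index, text)   # omit this call and the models drift
```

And this is the trivial case: once nodes can be reordered, deleted, or restyled, keeping the two models consistent becomes real bookkeeping — which is the complexity I am objecting to.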
All this extra complexity for what? Supposedly faster 2D drawing. Hmm... I just profiled TextEdit during live resize, and actual drawing was a negligible percentage of total CPU usage. So what are we gaining here? Optimizing a small fraction of an operation that is already fast enough, with no guarantee that the "optimization" will actually be an improvement.
That isn't just premature optimization, that is completely superfluous optimization. Classical "Mount Everest Syndrome".
4. Quartz Extreme and Window Level compositing
Another factual error: window-level compositing is not a feature of Quartz Extreme. It was always in Quartz; Extreme just made things faster by putting the compositor on the graphics board.
I find it interesting that you think that hardware support and feature support are intrinsically linked, because that is exactly the way Avalon seems to be going...