"But this is not a bitmap that is drawn, it is a bitmap pattern that is used as a fill-color for a rectangle."
You're splitting hairs here. It's only "not a bitmap" if you use a narrow definition of "bitmap" that excludes "bitmap patterns". But this is a discussion about scalability, and bitmap patterns aren't any more scalable than non-pattern bitmaps, so there's no significant difference between the two - 'bitmaps' and 'bitmap patterns' are just two ways of painting with bitmap imagery.
In other words, it's still a bitmap, even if it's in pattern form. Indeed it's a 24x22 bitmap, according to your analysis, which is not scalable - the pixellation is quite obvious when you enlarge it, and that is my point. You previously claimed that it was perfectly smooth and scalable, but now you're agreeing that it is essentially bitmap data. (And yes, it's a bitmap pattern - it's still a bitmap-like image source; the fact that it's being used as a fill pattern rather than a standalone image doesn't change its intrinsic non-scalability.)
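The pixellation on enlargement is simply what bitmap data does - enlarging can only replicate the samples that are already there. Here's a toy sketch of nearest-neighbour enlargement (an illustration of the principle, not Quartz's actual resampling):

```python
def enlarge_nearest(row, factor):
    """Nearest-neighbour enlargement of one row of pixel samples:
    each source pixel simply becomes a factor-wide block."""
    return [v for v in row for _ in range(factor)]

row = [0, 128, 255]               # three source pixels
print(enlarge_nearest(row, 3))
# -> [0, 0, 0, 128, 128, 128, 255, 255, 255]
```

No amount of scaling invents detail that the 24x22 source never contained - the blocks just get bigger.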
So all I was saying is that it doesn't make your argument look very credible when you produce what you have now admitted is a 24x22 bitmap pattern, but claim that it is "smoothly scalable". This is emphatically not compelling evidence for the allegedly scalable nature you are claiming Quartz 2D has.
Bitmaps may be the standard way of hacking up graduated fills in PDF, but they're just that - a hack. If you use a 16 bit per channel colour space, then 256 rectangles is not a proper representation of a graduated fill. (Also, drawing 256 rectangles is emphatically *not* the same thing as a bitmap with 256 pixels - you have a widely-held but wrong-headed view of what a bitmap represents. This is, as you put it, a rookie mistake. I won't reproduce the argument here, because you can find an excellent explanation of why you're wrong if you google for "A pixel is not a little square") And in any case your argument doesn't even stand up on its own: you've not got 256 levels here - only 24. Which is not enough, and here's that word again, to make it *scalable* - it looks wrong when you enlarge it. The pixellation is clearly visible.
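To put a number on the banding: with only 24 discrete bands approximating a smooth ramp, the jump between adjacent bands is enormous compared to what a 16-bit channel can express. A quick illustrative calculation (a toy sketch, not Quartz or PDF code; the 24-band figure comes from the analysis above):

```python
def band_values(n_bands, channel_max):
    """Channel value of each band in an n-band approximation of a 0..max ramp."""
    return [round(i * channel_max / (n_bands - 1)) for i in range(n_bands)]

bands = band_values(24, 65535)       # 24 bands in a 16-bit channel
step = bands[1] - bands[0]           # jump between adjacent bands

print(f"step between adjacent bands: {step} of 65535")   # 2849 - far from smooth
print(f"levels used: {len(bands)} of 65536 possible")
```

Each band is nearly 3000 channel values away from its neighbour, which is exactly why the steps become visible the moment you enlarge the fill.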
"You have made a well-known beginner's mistake in using NSImage"
No doubt - I am after all not experienced with OS X. But why on earth would it behave this way? This just shows that by default, OS X applications are not necessarily resolution independent, even if they use resolution independent image sources.
If this is, as you say, a well-known mistake, doubtless that means loads of people have made it. (Particularly since this mistake causes absolutely no problems on screen that I can see - it behaves absolutely correctly when the application is running at the native screen resolution. In fact for on-screen use, the term 'mistake' doesn't seem applicable - there's nothing wrong with the application's behaviour. Problems only start to occur when we attempt to get a scalable version of the display.) Does that mean that every application that makes this mistake is going to look crummy if Apple ever get around to supporting high resolution displays?
This all adds weight to my argument that OS X is, by default, not scalable. If a perfectly straightforward way of putting vector imagery into a UI turns out not to be resolution independent, then OS X is not intrinsically resolution independent, end of story.
It remains *possible* to write resolution-independent apps. But you have to set out to do that. It won't happen by accident. The same has been true in Windows for a decade now.
"So to sum up: both of your "proofs" have evaporated,"
What proofs? I was just pointing out that the image you supplied was not what you claimed it to be. And apparently I was right - you have now admitted that the fill is in fact a bitmap pattern. (That may be the standard way of doing grad fills, but it doesn't change the fact that it is, according to your analysis, a 24x22 bitmap pattern, which is not a scalable way of creating an image.)
My assertion is, and has always been, that OS X is just like Windows today - it's a platform you can build resolution-independent UIs with if you make the effort, but in which UIs will typically not be resolution-independent by default, because it requires effort to achieve resolution independence.
I may have blundered into these proofs, but that doesn't change the basic facts. I, as a beginner, wrote an application that works absolutely fine at the native resolution, but turns out not to be resolution independent. This proves my point: resolution independence is not automatic in OS X.
That's the point I've been putting to you all along, and you've been denying all along. All it took was a rookie mistake to prove the point... :-)
The fact that I blundered into this existence proof doesn't reduce its worth. You seem to be attempting to dismiss me because I don't know a lot about OS X. That's a fallacious argument, and I think you know it is. I can tell that, say, the elements of the button in the PDF I produced don't line up properly at all resolutions without having to be an expert Mac developer. Remember, you started generating PDFs as a way of trying to prove your point - I'm just using my non-technology-specific ability to recognize cruddy imagery when I see it to point out that these PDFs seem to be demonstrating the opposite of what you claim.
(Unless of course creating PDFs like this was just the wrong way to go about things. Maybe that's a facility that doesn't really work right anyway. In which case, all these sample PDFs don't really prove anything at all.)
Meanwhile, I see you've quietly ignored the other problem I illustrated in my example - that the button looks terrible. In fact it looks wrong even at the natural resolution, and just looks worse as you scale it.
Actually I'll go into a little more detail on the button, because I didn't mention the most interesting thing about it. Try a few different magnifications. Notice how sometimes, the middle section lines up with the caps, and sometimes it doesn't. This is typical of the problems you will see if you try to scale the output of a program that has never been tested at any resolution other than its native resolution.
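The mechanism behind that is easy to demonstrate. If the button is drawn as three pieces - left cap, stretchable middle, right cap - and each piece's geometry is snapped to whole device pixels independently (an assumption on my part about how it's drawn, but a typical one), then whether the pieces meet depends on the scale factor. A sketch:

```python
def snap(x):
    """Round half up, as a pixel-snapping rasteriser might."""
    return int(x + 0.5)

def check_alignment(cap_w, middle_w, scale):
    """Does the independently-snapped middle meet the snapped right cap?"""
    middle_end      = snap(cap_w * scale) + snap(middle_w * scale)
    right_cap_start = snap((cap_w + middle_w) * scale)
    return middle_end == right_cap_start

for scale in (1.0, 1.25, 1.5, 1.75, 2.0):
    ok = check_alignment(cap_w=7, middle_w=7, scale=scale)
    print(f"scale {scale}: {'aligned' if ok else 'seam (gap or overlap)'}")
```

With 7-pixel caps, the pieces line up at 1.0x, 1.25x and 2.0x but produce one-pixel seams at 1.5x and 1.75x - exactly the "sometimes lines up, sometimes doesn't" behaviour visible in the PDF.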
This is one of the two main reasons why Apple can't simply apply a scale factor of 1.5 to applications running on a 150dpi display - lots of little glitches like that will come crawling out of the woodwork. In other words, it's pretty much exactly the same reason Microsoft don't do the very same thing on Windows. (The second reason is the widespread reliance on bitmaps in most UIs.)
In fact it wouldn't surprise me if this problem - the fact that a scalable drawing API turns out not to be as useful as you might think for making scalable UIs - is the whole reason they have decided to 'solve' the problem by arbitrarily declaring recently that 100dpi is the perfect resolution. This to me looks like a positive spin on an admission of defeat: they can't solve the scalability problem with the technology they have today. (Surely nobody actually *believes* that 100dpi is the ideal resolution. Are Apple going to recommend we throw out our 600dpi laser printers and dig out those old dot matrix machines?)
Or did I make a rookie mistake with the button too?