I have an nVidia card. I want (and am supposed) to write code compatible with any (or at least most) consumer cards, meaning what the average gamer owns. As far as I can see, besides nVidia these include at least ATi cards, and other companies seem to be rising from their slumber to compete with nVidia (e.g. SiS Xabre, Matrox Parhelia). I ain't John Carmack and I don't have a cart full of different ATi, nVidia and whoever-else cards to test my code on. In fact, with OpenGL it isn't even just testing: I have to write the same effect in several different ways, using different extensions for ATi, nVidia, etc. cards, and only then test each path. Well, how am I supposed to do that with only an nVidia card? No problem, I can buy a Radeon or something else, but it is obviously stupid to keep swapping them in one machine, so I must have a whole fleet of machines just for writing the code, never mind testing it?
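Just to be concrete about what I mean by "the same effect in different ways", here is a rough sketch of the kind of vendor branching I'm talking about. The extension names are real (GL_NV_vertex_program is nVidia-only, GL_EXT_vertex_shader is ATi's vertex shader extension), but SetupNVPath / SetupATIPath / SetupFixedFunction are hypothetical placeholders for whatever per-vendor setup a renderer would need:

    // Sketch only: pick a vendor-specific code path at runtime based on the
    // extension string. The Setup* functions below are placeholders, not a
    // real API.
    #include <GL/gl.h>
    #include <cstring>

    void SetupNVPath();         // hypothetical nVidia-specific setup
    void SetupATIPath();        // hypothetical ATi-specific setup
    void SetupFixedFunction();  // hypothetical fixed-function fallback

    static bool HasExtension(const char *name)
    {
        // Crude substring match on the extension string; enough to show the idea.
        const char *exts = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
        return exts != 0 && std::strstr(exts, name) != 0;
    }

    void ChooseShaderPath()
    {
        if (HasExtension("GL_NV_vertex_program"))
            SetupNVPath();            // nVidia vertex program path
        else if (HasExtension("GL_EXT_vertex_shader"))
            SetupATIPath();           // ATi vertex shader path
        else
            SetupFixedFunction();     // no programmable path at all
    }

So I'd have to write and maintain the ATi branch while having no ATi card to even run it on.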
My conclusion (if you can disprove it, I will be grateful):
OpenGL is a very loosely standardized API, and, though I like OpenGL's interface much more than D3D's, it seems that I must switch to D3D. In D3D, everything is standardized: if my code works on some card, it will work on any card with the same hardware capabilities, no matter who the vendor is. In OpenGL, anyone who designs a new card can think up a whole bunch of extensions (GL_VASYA_PUPKINS_COOL_FEATURE); do I really need them in real programs? nVidia dominated the market for some years, yes, but now that is no longer so obvious, and I can't assume that most of my end users run nVidia hardware.
Where are the standard extensions for shaders? It's been a long time since D3D8 appeared, and it has a standard interface for them...
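For comparison, this is roughly what the D3D8 side of the argument looks like: one capability check, the same on every vendor's card. It assumes you already have an IDirect3D8* from Direct3DCreate8 and just uses the usual default adapter and HAL device type:

    // Sketch of a D3D8-style capability check: one query, any vendor.
    // Assumes 'd3d' was obtained from Direct3DCreate8(D3D_SDK_VERSION).
    #include <d3d8.h>

    bool SupportsVS11(IDirect3D8 *d3d)
    {
        D3DCAPS8 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
            return false;
        // vs.1.1 is vs.1.1 whether the card is a GeForce, a Radeon or a Parhelia.
        return caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);
    }

No vendor-specific branches, just one code path gated by capabilities.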
What is the point of using OpenGL for _real_ game coding, not just tech demos?