I don't know how this pertains to workstations as I'm from a PC games background, but what seems to be lacking from OGL (IMHO) is some generic way of finding out what the hardware is capable of, i.e. how the blending pipeline can be set up.
Take the D3D TSS pipeline for example. With this mechanism it is possible to find out something about the hardware, e.g. the number of blend stages, the number of simultaneous textures, the available blend modes etc. Armed with this information it is possible to make some reasonable guesses as to how the pipeline should be set up for the effect you wish to achieve. There is still the need for a ValidateDevice call to ensure that the effect will actually work, but it's not too much hassle to step through a few candidate setups to work out which one will.
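To illustrate, the query-then-validate pattern looks roughly like this under the DirectX 7 interfaces (a sketch only, not a complete program; assumes an already-created IDirect3DDevice7 and two bound textures):

```c
/* Sketch: query blending capabilities, set up a two-stage modulate,
 * then ask the driver whether it can actually render it. */
D3DDEVICEDESC7 caps;
DWORD passes;

device->GetCaps(&caps);

/* How many blend stages and simultaneous textures the card claims. */
WORD stages   = caps.wMaxTextureBlendStages;
WORD textures = caps.wMaxSimultaneousTextures;

/* Does it advertise a particular blend op, e.g. Dot3? */
BOOL hasDot3 = (caps.dwTextureOpCaps & D3DTEXOPCAPS_DOTPRODUCT3) != 0;

if (stages >= 2 && textures >= 2) {
    /* Stage 0: diffuse * texture0; stage 1: result * texture1. */
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    /* The caps were only a hint; the driver gets the final word. */
    if (device->ValidateDevice(&passes) != D3D_OK) {
        /* Fall back to the next candidate setup, or multipass. */
    }
}
```

The point is that GetCaps, the stage states, and ValidateDevice are one generic mechanism: the same code path covers every card, and falling back is just a matter of trying the next setup in a list.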
Now, with OpenGL the developer is forced to go about things in a rather different way. Instead of having a single general mechanism for setting up effects, we have to know about numerous (sometimes proprietary) extensions in order to get the best from the hardware. This problem is only going to get worse with the new DX8-capable cards (assuming they even get OGL ICDs), which support even more texture stages, more blending modes (EMBM, Dot3 etc), and even fancier stuff like pixel shaders.
Forgive me if I've gotten any details about OGL wrong; it's been a while and I've only just returned to it. But having read through the 1.2 reference docs, it seems that I, and many other developers who want to use the latest features on the latest cards, will be forced to stick solely to D3D.
It would be nice to write something that could run on a number of systems from the start, but if the project would suffer by doing this (i.e. effects that should be available on a card are not because the graphics API is getting in the way) then I can imagine more and more developers sticking to Windows and D3D.
what a downer