An interesting issue came up with regard to GL 3.0 and its shading language, GLSL.
The OpenGL 3.0 API is designed to, among other things, keep the user on the fast, hardware-supported path. Image format objects were added to let the user know whether a particular kind of texture is supported by the hardware. Vertex array objects allow the implementation to tell the user that, say, unsigned shorts are not supported as vertex inputs. And so on.
It all sounds pretty solid. But there’s a gigantic hole in this fortress: GLSL.
GLSL was never really designed to adequately communicate to the user whether a given feature is actually supported by the hardware. Because VAOs and IFOs are fairly simple constructs, a pass/fail mechanism is all that is needed to test for support (though I expect an appropriate glError-like mechanism to be available too, for more in-depth testing). GLSL compilation, however, can fail for innumerable reasons.
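To see how little you get back, here is a minimal sketch of the pass/fail check as it exists today, assuming a GL 2.x context with the shader entry points already loaded (e.g. through an extension loader). On failure, the only diagnostic is the info log: free-form text aimed at a human, not anything a program can act on algorithmically.

    #include <stdio.h>
    #include <GL/gl.h>   /* assumes GL 2.0 prototypes are exposed */

    /* Compile a shader and report pass/fail. The info log is
       implementation-defined text; no two drivers say the same thing. */
    static GLuint compile_shader(GLenum type, const char *source)
    {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint status = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
        if (status != GL_TRUE) {
            char log[4096];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "compile failed:\n%s\n", log);
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }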
The forums are littered with posts where a shader worked in hardware, but after a driver revision no longer did. Some of these issues are related to driver bugs, but some aren’t. Some are simply the vagaries of compiler design showing themselves.
GLSL has very few mechanisms for telling whether you are approaching hardware-defined limits. You can query the maximum number of uniforms and varyings, but that's it. Instruction count, made all the more nebulous by the C-style nature of the language, is indeterminate. And something even fuzzier, the number of temporary registers available, cannot be queried either.
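For reference, this is roughly the entire set of relevant questions you can ask; a sketch against the GL 2.x query mechanism:

    /* The handful of hardware limits GLSL lets you query. Note what
       is missing: no maximum instruction count, no temporary count. */
    GLint max_vert_uniforms, max_frag_uniforms, max_varyings;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS,   &max_vert_uniforms);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &max_frag_uniforms);
    glGetIntegerv(GL_MAX_VARYING_FLOATS,              &max_varyings);

The old assembly-style ARB_fragment_program interface, by contrast, does expose native instruction and temporary limits (GL_MAX_PROGRAM_NATIVE_INSTRUCTIONS_ARB and the like), which only highlights what GLSL gave up.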
As such, it is impossible to know a priori whether a shader will compile. But that’s not the big problem. After all, VAOs and IFOs are the same way; you have to ask the implementation if it will work. The problem is that, with VAOs and IFOs, if they fail, you know why. More importantly, you know how to correct it.
The absolute worst case with a failing VAO is that you revert to floats for everything. The absolute worst case with a failing IFO is that you either can’t use the image at all or you revert to RGBA8, with power-of-two sizes.
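For what it's worth, today's closest analogue of the IFO pass/fail test is the proxy-texture mechanism. A sketch of probing a format and falling back; GL_RGBA32F_ARB assumes the ARB_texture_float extension:

    /* Probe whether a 2048x2048 float RGBA texture would succeed.
       On proxy failure, the queried width comes back as zero. */
    GLint fmt = GL_RGBA32F_ARB, probed_w = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, fmt, 2048, 2048, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &probed_w);
    if (probed_w == 0)
        fmt = GL_RGBA8;   /* the worst case: well-defined, always works */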
Shaders are a much murkier beast. If a shader doesn't compile to hardware, you can't really tell why. Not in any algorithmic way, at least. Even with a human reading the compiler's info log, there is no guarantee of being able to make the shader work.
Furthermore, you can't even guarantee that a worst-case fallback shader will compile. You can be reasonably confident that it will, but you cannot be certain.
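In practice, the best you can do is a chain of progressively simpler candidates, reusing something like the compile_shader helper sketched earlier. The names fancy_src, cheap_src, and trivial_src are hypothetical stand-ins for your own shader strings:

    /* Try the full shader first, then progressively simpler fallbacks.
       Nothing guarantees that even the last one compiles on a given driver. */
    const char *candidates[] = { fancy_src, cheap_src, trivial_src };
    GLuint frag = 0;
    for (int i = 0; i < 3 && frag == 0; ++i)
        frag = compile_shader(GL_FRAGMENT_SHADER, candidates[i]);
    /* if frag is still 0, there is nothing left to fall back to */

And even a successful compile promises nothing about hardware execution: some drivers will happily accept a shader and merely note, somewhere in the free-form info log, that it will run in software.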
Is there a solution to this? Ultimately, even a lower-level shading language wouldn't be a complete fix: since instructions can be expanded into multiple opcodes in an implementation-defined way, counting opcodes is unreliable. Is the ARB working on a way to alleviate this problem?