As Aleksandar said, there’s a debug context but we can only wait until it becomes usable in these cases.
And what if that extension had required more accurate error reporting? What if the ARB_debug_output extension had mandated specific, detailed messages for each kind of possible error? Do you think things would be better now?
Of course not. All that would happen is that nobody would implement it. At least, not yet. Implementing detailed error reporting is a pain, and it takes time.
If the ARB had forced their hand by somehow putting it into 4.1, then implementers would simply have pretended to support it: exposing the entrypoints, but giving generic error messages. You might say, “But that’s against the spec!” — but when has that ever stopped anyone from claiming to support a particular version of GL before?
I, like many, didn’t read the standard in depth first, absorbing every statement in it and building a complete mental model to guard me on my path. Instead, I got the general idea and moved on to building things with it, rather than trying to become an OpenGL guru.
I’m having some trouble imagining these circumstances. Could you describe a place in the spec where you could look at the API and get a really wrong idea about what is legal and what is not? And no, the example you gave doesn’t count, because:
For example, locations of vertex attributes are numbered in order of appearance. The standard says it’s undefined, and so ATI numbers them seemingly at random. A tiny detail, one might say, but IMO the standard should get rid of all unnecessary undefined behavior, because it makes life harder without serving any real purpose.
OK. So how do you define “order of appearance” in GLSL?
Remember how the GLSL compilation model works. You compile shader strings into object files, then link those into a program. So, how do you determine the order that attributes appear in if attributes are defined in different shader objects? Is it the order that the shader objects are attached to the program? Is that something you really want to enforce?
Furthermore, is that even a good idea? Do you really want to allow subtle breakages in code just because you rearranged the order of how you defined a couple of attributes? I’m not sure if I would consider that a good idea to begin with. At least with layout(location), there’s an explicit number in the shader; here, it’s based on something implicit.
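To make the contrast concrete, this is what the explicit form looks like in GLSL (the attribute names here are just illustrative):

```glsl
// Locations are pinned in the shader text itself, so reordering these
// declarations cannot silently change which index each attribute gets.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texcoord;
```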
Yes, you effectively have the same thing with uniform blocks: ordering in the shader can impact the results. But uniform blocks are at least cordoned off. They’re clearly separate from other global definitions; the ordering happens within a clearly-defined boundary.
Also, the spec doesn’t say that the attribute locations are “undefined.” It says that they are “implementation-dependent.” “Undefined” means you shouldn’t do it; “implementation-dependent” means you should expect it to vary from implementation to implementation.
Every example that uses shaders will either use glBindAttribLocation to set attribute locations before linking, layout(location) to set them in the shaders, or glGetAttribLocation to query the location after the fact. None of them rely on NVIDIA’s ordering. So I have no idea how you even discovered NVIDIA’s ordering, let alone came to believe that this was in-spec behavior and relied on it.
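For reference, the two API-side patterns look roughly like this (a sketch, not a complete program — it assumes a valid GL context and that `prog` is a program object with its shaders already attached; the attribute name is illustrative):

```c
/* Option 1: pin the location yourself, before linking. */
glBindAttribLocation(prog, 0, "position");
glLinkProgram(prog);

/* Option 2: let the implementation choose, then query after linking.
 * The returned value may differ between drivers; never assume a
 * particular number. */
glLinkProgram(prog);
GLint posLoc = glGetAttribLocation(prog, "position");
```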
I’m guessing that NVIDIA puts the attributes in an array somewhere, and simply assigns indices using those array indices. Whereas ATI probably sticks them in a std::map or similar sorted structure, and therefore assigns indices based on some kind of name ordering (not necessarily lexicographical).