That’s how it works on both drivers: the expected visual result, no error, no crash.
On AMD, if I put glBindSampler(GL_TEXTURE0, this->SamplerName);
it crashes. I think AMD is wrong on that one, but I’m not 100% sure. (At least a crash is what we expect!)
The beautiful thing about OpenGL is that it is defined by a specification that is (usually) quite anal about everything. So let’s see what the spec says:
GL_TEXTURE0 is not between 0 and GL_MAX_TEXTURE_IMAGE_UNITS-1 (unless the hardware allows a LOT of texture units), so it looks like ATI is right.
Except that nowhere in the spec does it say that OpenGL is allowed to crash if it is outside of that range. So they’re both wrong, but at least ATI gets the right behavior when getting the right parameters.
Usually these kinds of spec battles come out in NVIDIA’s favor. They’re slipping if they let something like this through.
with <unit> set to the texture unit to which to bind the sampler and
<sampler> set to the name of a sampler object returned from a
previous call to GenSampler.
<unit> must be between zero and the value of
MAX_TEXTURE_IMAGE_UNITS-1. <sampler> is the name of a sampler object
that has previously been reserved by a call to GenSamplers.
Which implies that glBindSampler(0, this->SamplerName) is correct.
However, the extension spec also says
void BindSampler(enum unit, uint sampler);
Which implies that glBindSampler(GL_TEXTURE0, this->SamplerName); is correct.
with unit set to the texture unit to which to bind the sampler and sampler set to the
name of a sampler object returned from a previous call to GenSampler.
Which implies that glBindSampler(0, this->SamplerName) is correct.
It doesn’t imply anything; it’s very clear, “<unit> must be between zero and the value of MAX_TEXTURE_IMAGE_UNITS-1.” There is no implication of anything. “must” is not a word that can be argued with or needs clarification.
GL_TEXTURE0 is 0x84C0, which is almost certainly not “between zero and the value of MAX_TEXTURE_IMAGE_UNITS-1.” So it is not a valid value.
It appears that this is my fault. I will be fixing this today and it should appear in NVIDIA’s next OpenGL driver release.
We accept GL_TEXTURE0 because of the error in the “New Procedures and Functions” section of the ARB_sampler_objects spec pointed out by Foobarbazqux. The prototype should have been “uint unit”, as is found in the body of both the ARB extension and OpenGL 3.3+ specs.
I’ll also make sure we fix glext.h published at opengl.org (which also has the incorrect prototype).