Sampler: nVidia vs AMD OpenGL 3.3 drivers

AMD:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->Image2DName);
glBindSampler(0, this->SamplerName);

nVidia:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->Image2DName);
glBindSampler(GL_TEXTURE0, this->SamplerName);

That’s how it works on both drivers: the expected visual result, no error, no crash.

On AMD, if I put glBindSampler(GL_TEXTURE0, this->SamplerName);
it crashes. I think AMD is wrong on that one, but I’m not 100% sure. (At least, a crash is not what we expect!)

The beautiful thing about OpenGL is that it is defined by a specification that is (usually) quite anal about everything. So let’s see what the spec says:

GL_TEXTURE0 is not between 0 and GL_MAX_TEXTURE_IMAGE_UNITS-1 (unless the hardware allows a LOT of texture units), so it looks like ATI is right.

Except that nowhere in the spec does it say that OpenGL is allowed to crash if it is outside of that range. So they’re both wrong, but at least ATI gets the right behavior when getting the right parameters.
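To make the range argument concrete, here is a minimal sketch of that check, assuming a current OpenGL 3.3 context and glext.h for the newer enums (the function name is mine):

#include <GL/gl.h>
#include <GL/glext.h>
#include <cassert>

// Sketch: show that GL_TEXTURE0 can never be a valid <unit> for glBindSampler.
// Assumes a current OpenGL 3.3 context is already bound.
void CheckSamplerUnitRange()
{
    GLint MaxUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxUnits); // typically 16-48

    // GL_TEXTURE0 is 0x84C0 (33984), far above any plausible MaxUnits - 1,
    // so passing it as <unit> is outside the range the spec allows.
    assert(static_cast<GLint>(GL_TEXTURE0) >= MaxUnits);
}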

Usually these kinds of spec battles come out in NVIDIA’s favor. They’re slipping if they let something like this through.

The extension spec says

void BindSampler( uint unit, uint sampler );

with <unit> set to the texture unit to which to bind the sampler and
<sampler> set to the name of a sampler object returned from a
previous call to GenSampler.
<unit> must be between zero and the value of
MAX_TEXTURE_IMAGE_UNITS-1. <sampler> is the name of a sampler object
that has previously been reserved by a call to GenSamplers.

Which implies that glBindSampler(0, this->SamplerName) is correct.

However, the extension spec also says

void BindSampler(enum unit, uint sampler);

Which implies that glBindSampler(GL_TEXTURE0, this->SamplerName); is correct.

glext.h says
GLAPI void APIENTRY glActiveTexture (GLenum texture);
and
GLAPI void APIENTRY glBindMultiTextureEXT (GLenum texunit, GLenum target, GLuint texture);

Both of which imply that GL_TEXTURE0 is an enum.

The 3.3 spec says

   void BindSampler( uint unit, uint sampler );

with unit set to the texture unit to which to bind the sampler and sampler set to the
name of a sampler object returned from a previous call to GenSampler.

Stating the unit is a uint, not an enum.

Which implies that glBindSampler(0, this->SamplerName) is correct.

It doesn’t imply anything; it’s very clear, “<unit> must be between zero and the value of MAX_TEXTURE_IMAGE_UNITS-1.” There is no implication of anything. “must” is not a word that can be argued with or needs clarification.

GL_TEXTURE0 is 0x84C0, which is almost certainly not “between zero and the value of MAX_TEXTURE_IMAGE_UNITS-1.” So it is not a valid value.
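To pin the numbers down, here is a compile-time sketch (assuming glext.h and C++11; nothing here is driver-specific):

#include <GL/gl.h>
#include <GL/glext.h>

// GL_TEXTURE0 is the enum 0x84C0 (33984 decimal); texture unit <i> maps to
// the enum GL_TEXTURE0 + i, not to the bare index i.
static_assert(GL_TEXTURE0 == 0x84C0, "value fixed by the GL headers");
static_assert(GL_TEXTURE0 + 1 == GL_TEXTURE1, "units are consecutive enums");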

Already reported as http://www.khronos.org/bugzilla/show_bug.cgi?id=274

That’s what I thought: there had to be a mistake somewhere in the spec.

I bet they first used GL_TEXTURE0 and then decided to use 0.
I prefer that choice. Actually, 0 works on both drivers.

Thanks
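For anyone reading along, a small helper that sticks to the index form, which works on both drivers, might look like this. It is a sketch only, assuming a current 3.3 context and a function loader; Unit, Texture, and Sampler are placeholder names:

#include <GL/gl.h>
#include <GL/glext.h>

// Sketch: bind a texture and a sampler to the same unit, using the enum form
// where ActiveTexture wants it and the bare index where BindSampler wants it.
void BindTextureAndSampler(GLuint Unit, GLuint Texture, GLuint Sampler)
{
    glActiveTexture(GL_TEXTURE0 + Unit); // enum form: GL_TEXTURE0 + index
    glBindTexture(GL_TEXTURE_2D, Texture);
    glBindSampler(Unit, Sampler);        // index form: 0, 1, 2, ...
}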

It appears that this is my fault. I will be fixing this today and it should appear in NVIDIA’s next OpenGL driver release.

We accept GL_TEXTURE0 because of the error in the “New Procedures and Functions” section of the ARB_sampler_objects spec pointed out by Foobarbazqux. The prototype should have been “uint unit”, as is found in the body of both the ARB extension and OpenGL 3.3+ specs.

I’ll also make sure we fix glext.h published at opengl.org (which also has the incorrect prototype).

Sorry for the inconvenience,
Pat

Not so fast… I wrote the extension spec, so the incorrect prototype in the ‘New Procedures’ section is my typo.

It seems this is a team FUBAR. Apologies.
