GL texture matrices not tracked by GLSL, NVIDIA 6800?

Hi all, just wondering if anyone else noticed this… I’m now multiplying my texture coordinates with the corresponding texture matrix:

gl_TexCoord[0].xy = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;

I’m binding my shader once at the start of a draw pass, then modifying the texture matrix per-object as I’m drawing, as needed.

This works fine with the modelview matrix (with Cg I always had to call setmatrixparam after changing the modelview matrix with multmatrix, for example)… but GLSL is supposed to track this as part of the GL state and make the current matrix available.
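In rough outline the pass looks something like this (prog, numObjects, drawObject() and the per-object u/v offsets are just stand-ins for my actual code):

glUseProgramObjectARB(prog);		// bind the shader once for the whole pass
for (int i = 0; i < numObjects; ++i)
{
	glMatrixMode(GL_TEXTURE);	// per-object texture matrix tweak
	glLoadIdentity();
	glTranslatef(u[i], v[i], 0.0f);
	glMatrixMode(GL_MODELVIEW);
	drawObject(i);			// shader stays bound the whole time
}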

It seems this functionality is broken for texture matrices: throughout a shader draw pass things only look right with an identity texture matrix, as if my per-object changes never reach the shader. A fixed-function draw pass afterwards does apply the texture matrix properly… Anyone, anyone? :)

Oh, hardware & stuff:
6800 GT, 78.01 drivers, Windows

Your drivers are a little old. Try the new 81.94’s.

-SirKnight

*sigh* Happens with 81.94 as well :(

Anyone else noticed this?

“I’m binding my shader once at the start of a draw pass, then modifying the texture matrix per-object as I’m drawing, as needed.”
Try rebinding the same shader just before you draw the object (even if it’s the current shader).

zed: yeah, I’m guessing that’d work, but isn’t that going to be a big performance hit? Especially with many objects?

Either way it looks like a bug, as far as I can tell from the docs. I haven’t been able to find anything that specifically mentions texture matrices, so I’m guessing they’re supposed to be automatically tracked like the modelview matrix. But please correct me if I’m wrong :D

Well, it’s not even working when I bind program 0 and then rebind the program that was previously bound…

All I’m trying to do in my shader is:

gl_TexCoord[0].xy = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;

And in the code:

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(u, v, 0.0f);
glMatrixMode(GL_MODELVIEW);

// HACK: rebind the current program so the driver re-reads the texture matrix
GLhandleARB handle = glGetHandleARB(GL_PROGRAM_OBJECT_ARB);
glUseProgramObjectARB(0);
glUseProgramObjectARB(handle);

I have used texture matrices in shadow mapping (using GLSL). It works on an FX 5700 Ultra, a 6800 GT (with all drivers since 77.xx) and a 9700 Pro.

Have you made sure the correct texture unit is current when calling glMatrixMode(GL_TEXTURE)?
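I.e. something along these lines before touching the matrix (GL_TEXTURE0 here assuming that’s the unit your gl_MultiTexCoord0 / gl_TextureMatrix[0] pair refers to):

glActiveTexture(GL_TEXTURE0);	// select the server-side texture unit
glMatrixMode(GL_TEXTURE);	// now this edits unit 0's texture matrix stack
glLoadIdentity();
glTranslatef(u, v, 0.0f);
glMatrixMode(GL_MODELVIEW);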

GRRR!

Relic: Yeah, I was thinking about that last night. I checked it out today, and it turns out I was calling glClientActiveTexture instead of glActiveTexture… That has been working fine for setting texture coordinate pointers for years, so clearly the texture matrix must be server state or something?

Ok, so it’s not an NVIDIA bug. After having had an ATI card, I can jump to driver-bug conclusions so easily! :D

Hehe, call it experience. ;)

“I was calling glClientActiveTexture instead of glActiveTexture… That has been working fine for setting texture coordinate pointers for years, so clearly the texture matrix must be server state or something?”
Yes, it’s just like that. From the spec:

“The command void ClientActiveTexture( enum texture ); is used to select the vertex array client state parameters to be modified by the TexCoordPointer command and the array affected by EnableClientState and DisableClientState with parameter TEXTURE_COORD_ARRAY. This command sets the client state variable CLIENT_ACTIVE_TEXTURE.”

“The command void ActiveTexture( enum texture ); specifies the active texture unit selector, ACTIVE_TEXTURE.”
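So the two selectors are completely independent; as a quick sketch (the texCoords pointer is just a placeholder):

// Client-side selector: affects glTexCoordPointer and the TEXTURE_COORD_ARRAY enables.
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

// Server-side selector: affects the texture matrix stack (and texture bindings, texenv, etc.).
glActiveTexture(GL_TEXTURE0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);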