Problem with samplers (NVIDIA 61.77)

I have this simple fragment shader:

uniform sampler2D u_texDiffuse[5];
uniform sampler2D u_texMask[2];

void main()
{
	vec3 diffuse = vec3( 0., 0., 0. );

	vec4 mask0 = texture2D( u_texMask[0], gl_TexCoord[0].xy );
	vec4 mask1 = texture2D( u_texMask[1], gl_TexCoord[0].xy );

	diffuse += mask0[0] * texture2D( u_texDiffuse[0], gl_TexCoord[0].xy*4. ).xyz;
	diffuse += mask0[1] * texture2D( u_texDiffuse[1], gl_TexCoord[0].xy*4. ).xyz;
	diffuse += mask0[2] * texture2D( u_texDiffuse[2], gl_TexCoord[0].xy*4. ).xyz;
	diffuse += mask1[0] * texture2D( u_texDiffuse[3], gl_TexCoord[0].xy*4. ).xyz;
	diffuse += mask1[1] * texture2D( u_texDiffuse[4], gl_TexCoord[0].xy*4. ).xyz;

	gl_FragColor.xyz = diffuse;
	gl_FragColor.w = 1.;
}

It stores the color from one of the u_texDiffuse textures in gl_FragColor, depending on the mask textures (the u_texMask textures contain 1.0 in one channel and 0.0 in all others). But the actual result is very strange: it looks like some of the diffuse and mask textures are swapped.
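For clarity, the blend the shader performs is just a mask-weighted sum: with one-hot masks, exactly one diffuse sample survives. Here is a minimal CPU-side sketch in C of the same arithmetic (the five weights flatten the used channels of the two mask textures; all names and values are made up for illustration):

```c
/* Mask-weighted blend, mirroring the shader's sum of mask[i] * diffuse[i].
   With one-hot mask weights this selects exactly one diffuse sample. */
typedef struct { float r, g, b; } vec3;

static vec3 blend(const float mask[5], const vec3 diffuse[5])
{
    vec3 out = { 0.f, 0.f, 0.f };
    for (int i = 0; i < 5; i++) {
        out.r += mask[i] * diffuse[i].r;
        out.g += mask[i] * diffuse[i].g;
        out.b += mask[i] * diffuse[i].b;
    }
    return out;
}
```

For example, a mask of {0, 0, 1, 0, 0} should return exactly the color of the third diffuse texture; if it returns some other texture's color, the samplers are bound to the wrong units.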

Here is a ShaderDesigner project that demonstrates the problem (you can easily see which textures are bound to which units):
ShaderDesigner project (22 KB)

Card: NVIDIA GeForce FX 5700, driver: 61.77

Download the latest drivers first:

ftp://download.nvidia.com/Windows/66.81/

Also, I think there are only 4 texture units on the GeForce FX (only ATI has 6 texture units, and even 8 on the FireGL series).

And it seems that you are trying to access 5 texture units, which is more than supported.

Originally posted by execom_rt:
[b]
Also, I think there are only 4 texture units on the GeForce FX (only ATI has 6 texture units, and even 8 on the FireGL series).

And it seems that you are trying to access 5 texture units, which is more than supported.[/b]
You are right, the FX has 4 texture units, but it has 8 texture image units, so from within a shader the developer can access up to 8 textures (but will only have 4 texture interpolators).

GeForce FX and GeForce 6 series GPUs have 8 texture coordinates and 16 image units. So you can fetch from up to 16 unique textures in a fragment shader/program.

grisha, are you setting the samplers to the correct texture units using glUniform1iARB?

Originally posted by jra101:

grisha, are you setting the samplers to the correct texture units using glUniform1iARB?

The sampler uniforms are in arrays, so I'm using glUniform1ivARB to set the entire array with one call:

GLint samplersDiffuse[5] = { 0, 1, 2, 3, 4 };
glUniform1ivARB( u_texDiffuseId, 5, samplersDiffuse );
GLint samplersMask[2] = { 5, 6 };
glUniform1ivARB( u_texMaskId, 2, samplersMask );

From the GLIntercept output I can see that ShaderDesigner also uses glUniform1ivARB in the same way.

Originally posted by jra101:
GeForce FX and GeForce 6 series GPUs have 8 texture coordinates and 16 image units. So you can fetch from up to 16 unique textures in a fragment shader/program.
From reading this, it seems like I can do this on my GFFX:
glCurrectActiveTexture( GL_TEXTURE6 );
glBindTexture( … )

i.e. I'm not too sure what "unique textures" are?

Originally posted by jra101:
[b]GeForce FX and GeForce 6 series GPUs have 8 texture coordinates and 16 image units. So you can fetch from up to 16 unique textures in a fragment shader/program.

grisha, are you setting the samplers to the correct texture units using glUniform1iARB?[/b]
I was almost right :P , well, I tried this shader on my machine (Wildcat Realizm 100) and I got a mesh with many numbers mapped (1,2,3,4,5,1,2,3,4,5, etc.). Here is a screenshot, to see if it is the desired effect: http://www.typhoonlabs.com/~ffelagund/2.png (I think so)

Originally posted by zed:
[b]From reading this, it seems like I can do this on my GFFX:
glCurrectActiveTexture( GL_TEXTURE6 );
glBindTexture( … )

i.e. I'm not too sure what "unique textures" are?[/b]
Yes, you can do that (the function is called glActiveTexture, though).

By unique I meant you could potentially bind up to 16 totally different textures if you wanted to.

Btw, Ffelagund I can’t run the latest version of Shader Designer on my system (with a GeForce 6800), I get this error:

"Shader Designer needs a hardware with multitexture support"

Obviously my card supports multitexturing, so I'm not sure why I'm getting this error.

It is strange; the only reasons for this message are that Shader Designer is getting the Microsoft OpenGL 1.1 implementation or is unable to create the OpenGL context. Perhaps there is something wrong with your drivers, but if you have MSN we can trace the error (if you want, send your MSN mail to jacobo.rodriguez ‘at’ typhoonlabs.com)

Originally posted by Ffelagund:
here is a screenshot to see if is the desired effect: http://www.typhoonlabs.com/~ffelagund/2.png (I think so)
Yes, this is the correct image.

here is mine:
http://home.ripway.com/2004-11/198558/1.png :(

u_texDiffuse[3] and u_texMask[1] are swapped.

And the 66.81 drivers didn’t help.

Originally posted by Ffelagund:
It is strange; the only reasons for this message are that Shader Designer is getting the Microsoft OpenGL 1.1 implementation or is unable to create the OpenGL context. Perhaps there is something wrong with your drivers, but if you have MSN we can trace the error (if you want, send your MSN mail to jacobo.rodriguez ‘at’ typhoonlabs.com)
Shader Designer doesn’t appear to be parsing the version string properly to handle version 2.0

When I’ve seen the problem before, the software was just checking the digit after the decimal and doing the wrong thing.
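For reference, a robust check parses both the major and minor numbers from the GL_VERSION string instead of comparing a single digit. Here is a minimal sketch in plain C; the helper names are made up, and the naive check described in the comment is only an assumption about what such software might be doing, not Shader Designer's actual code:

```c
#include <stdio.h>

/* Parse "major.minor" from the start of a GL_VERSION-style string.
   Returns 1 on success, 0 on parse failure. */
static int parse_gl_version(const char *version, int *major, int *minor)
{
    /* A naive check that only looks at the digit after the decimal
       point misclassifies "2.0", since 2.0 > 1.5 even though the
       minor digit is smaller. Parsing both numbers avoids that. */
    return sscanf(version, "%d.%d", major, minor) == 2;
}

/* Return 1 if the reported version is at least req_major.req_minor. */
static int at_least(const char *version, int req_major, int req_minor)
{
    int major, minor;
    if (!parse_gl_version(version, &major, &minor))
        return 0;
    return major > req_major ||
           (major == req_major && minor >= req_minor);
}
```

With this, "2.0" correctly passes a check for 1.3, and a vendor suffix like "1.5 NVIDIA ..." does not break the parse.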

Originally posted by esw:
[b] [quote]Originally posted by Ffelagund:
It is strange; the only reasons for this message are that Shader Designer is getting the Microsoft OpenGL 1.1 implementation or is unable to create the OpenGL context. Perhaps there is something wrong with your drivers, but if you have MSN we can trace the error (if you want, send your MSN mail to jacobo.rodriguez ‘at’ typhoonlabs.com)
Shader Designer doesn’t appear to be parsing the version string properly to handle version 2.0

When I’ve seen the problem before, the software was just checking the digit after the decimal and doing the wrong thing.[/b][/QUOTE]Shader Designer doesn’t check the OpenGL version at all; it checks the needed extensions individually. If it reports that multitexture is not supported, it is because it can’t find that extension in the extensions string. So either multitexture is not in the extensions string, or it is getting the MS implementation, or it is unable to create a context. In the last case, an error other than “no multitexture support” should be shown. With our GF6800, Shader Designer works correctly. In fact, Shader Designer uses GLEW to access extensions; maybe GLEW does not handle OpenGL 2.0 correctly. I’ll check that.

Hello,
This problem in Shader Designer is now fixed; this week we’ll upload a new release with some more fixes.

Originally posted by jra101:
Yes, you can do that
By unique I meant you could potentially bind up to 16 totally different textures if you wanted to.
Wow, that opens up a lot of interesting possibilities. I assumed since I've only got 4 texture units I could only bind 4 textures at a time. Hmm, I'm still not 100% sure how it works; so now I can bind 16 textures and draw them in one pass! Surely something else is going on behind the scenes.

[QUOTE](function is called glActiveTexture though).[/QUOTE]
I've been using my own GL wrapper for so long that I've forgotten a lot of the original GL syntax :)

I assumed since I've only got 4 texture units I could only bind 4 textures at a time
If you are referring to the value returned by querying GL_MAX_TEXTURE_UNITS on GeForce FX and GeForce 6 series GPUs, that only applies to fixed-function rendering.

When using fragment programs or fragment shaders, you can bind up to GL_MAX_TEXTURE_IMAGE_UNITS_ARB textures, which is 16 on GeForce FX and GeForce 6 GPUs.

Another strange bug with samplers:

uniform sampler2D u_texNormal;

uniform sampler2D u_texDiffuse[3];
uniform sampler2D u_texMask[1];

void main()
{
	vec3 diffuse = vec3( 0., 0., 0. );
	vec4 mask0 = texture2D( u_texMask[0], gl_TexCoord[0].xy );

	// Notice different scale factors on texcoords here:
	diffuse += mask0[0] * texture2D( u_texDiffuse[0], gl_TexCoord[0].xy * 3. ).xyz;
	diffuse += mask0[1] * texture2D( u_texDiffuse[1], gl_TexCoord[0].xy * 5. ).xyz;

	// this reads from u_texNormal instead of u_texDiffuse[2]:
	diffuse += mask0[2] * texture2D( u_texDiffuse[2], gl_TexCoord[0].xy * 7. ).xyz;

	// but if I try 5. instead of 7. shader works fine!!!

	//diffuse += mask0[2] * texture2D( u_texDiffuse[2], gl_TexCoord[0].xy * 5. ).xyz;

	vec3 normal = texture2D( u_texNormal, gl_TexCoord[0].xy ).xyz*2. - 1.;

	gl_FragColor = vec4( 0., 0., 0., 1. );
	gl_FragColor.xyz += diffuse * dot( normal, vec3( 0.8084521, 0.1616904, 0.5659165 ) );
}

Isn’t it strange that changing a single constant changes the way the textures are sampled?

ShaderDesigner project file: http://home.ripway.com/2004-11/198558/samplers2.zip
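As an aside, the normal decode and lighting term at the end of that shader is plain arithmetic and easy to check on the CPU, which helps separate sampler-binding bugs from math bugs. A minimal sketch in C (the light vector is the constant from the shader; the texel value would come from the texture, so it is made up here):

```c
/* A normal map stores xyz remapped into [0,1]; the shader expands it
   back to [-1,1] with n*2-1, then computes dot(normal, lightDir),
   which scales the blended diffuse color. */
static float decode_and_dot(const float texel[3], const float light[3])
{
    float n[3], d = 0.f;
    for (int i = 0; i < 3; i++)
        n[i] = texel[i] * 2.f - 1.f;   /* [0,1] -> [-1,1] */
    for (int i = 0; i < 3; i++)
        d += n[i] * light[i];          /* Lambertian term */
    return d;
}
```

A "flat" texel of (0.5, 0.5, 1.0) decodes to the normal (0, 0, 1), so the result is simply the z component of the light direction.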

Your two projects work fine on my system, so I can only think it is a problem with your driver.

Originally posted by grisha:
Isn’t it strange - changing single constant changes the way textures are sampled?

If you use 5.0 in both operands, the compiler optimizes the code, so it is not just a “single constant change”: the assembler code generated for the shaders is different too. But this still doesn’t explain why setting the uniform array values all at once doesn’t work…

This did turn out to be a bug in our drivers when using glUniform1ivARB to set an array of samplers. The bug has been fixed internally and the fix will be available in a future driver release.

The workaround is to do something like this:

GLint loc = glGetUniformLocationARB(programObject, "u_texMask");

glUniform1iARB(loc, 0);
glUniform1iARB(loc+1, 1);

loc = glGetUniformLocationARB(programObject, "u_texDiffuse");

for (int i = 0; i < 5; i++)   // u_texDiffuse has 5 elements
    glUniform1iARB(loc+i, i+2);