NVIDIA GeForce PCX 5750 & ARB_fragment_program

Hi,
I’ve been coding on ATI cards for a while and just got my hands on an NVIDIA one. I have a bunch of shaders that sample 8 textures and then do some simple calculations with them. There aren’t multiple texture indirections or anything, just the basic sample -> calculate -> output type of shader.

They all worked fine on ATI cards, but the exact same shader doesn’t work on this NVIDIA card: it doesn’t seem to sample the 6th, 7th, and 8th textures. I checked for errors using glGetProgramivARB and all that, and it comes back saying everything is fine. The glGet queries for how many texture coordinates and texture images can be used in the shader return 8 and 16 respectively, so it should work. Am I missing something, is this a driver bug, or am I better off coding second-generation shaders against the NV extension instead for NVIDIA cards?
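For reference, the error check in question looks roughly like this (a sketch; fpId and fpSource are placeholder names for the program object and the program text):

GLuint fpId;
glGenProgramsARB(1, &fpId);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, fpId);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fpSource), fpSource);

GLint errorPos = -1;
glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errorPos);
if (errorPos != -1)
{
    /* the driver rejected the program text; the error string says where and why */
    printf("fragment program error at %d: %s\n", errorPos,
           (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
}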

Thanks

Malcolm Bechard

Could you post the shader program? (or even better, an example app we can test?) I am not aware of any problems like this.

I had a quite similar problem…
Check whether all your texture units are enabled with glEnable(GL_TEXTURE_2D).
ATI cards seemed to sample from texture units even if they were only bound but not enabled (by the way, something I wish were standard, since the type of texture sampling is defined in the fragment program anyway).
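For illustration, binding and enabling every unit might look something like this (texIds is a placeholder array holding the eight texture object names):

/* bind and enable all eight texture units before drawing */
for (int i = 0; i < 8; i++)
{
    glActiveTextureARB(GL_TEXTURE0_ARB + i);
    glBindTexture(GL_TEXTURE_2D, texIds[i]);
    glEnable(GL_TEXTURE_2D);   /* some drivers seem to want this even with a fragment program bound */
}
glActiveTextureARB(GL_TEXTURE0_ARB);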

Here’s the code. It’s a pretty basic shader:

"!!ARBfp1.0
"
"ATTRIB texcoord0 = fragment.texcoord[0];
"
"ATTRIB texcoord1 = fragment.texcoord[1];
"
"ATTRIB texcoord2 = fragment.texcoord[2];
"
"ATTRIB texcoord3 = fragment.texcoord[3];
"
"ATTRIB texcoord4 = fragment.texcoord[4];
"
"ATTRIB texcoord5 = fragment.texcoord[5];
"
"ATTRIB texcoord6 = fragment.texcoord[6];
"
"ATTRIB texcoord7 = fragment.texcoord[7];
"
"PARAM w = program.env[0];
"
"PARAM v = program.env[1];
"

"OUTPUT oCol = result.color;
"

"TEMP w1;
"
"TEMP v1;
"
"TEMP texCol0;
"
"TEMP texCol1;
"
"TEMP texCol2;
"
"TEMP texCol3;
"
"TEMP texCol4;
"
"TEMP texCol5;
"
"TEMP texCol6;
"
"TEMP texCol7;
"

"TEX texCol0, texcoord0, texture[0], 2D;
"
"TEX texCol1, texcoord1, texture[1], 2D;
"
"TEX texCol2, texcoord2, texture[2], 2D;
"
"TEX texCol3, texcoord3, texture[3], 2D;
"
"TEX texCol4, texcoord4, texture[4], 2D;
"
"TEX texCol5, texcoord5, texture[5], 2D;
"
"TEX texCol6, texcoord6, texture[6], 2D;
"
"TEX texCol7, texcoord7, texture[7], 2D;
"

"MUL texCol0, w1.r, texCol0;
"
"MAD texCol0, texCol1, w1.g, texCol0;
"
"MAD texCol0, texCol2, w1.b, texCol0;
"
"MAD texCol0, texCol3, w1.a, texCol0;
"
"MAD texCol0, texCol4, v1.r, texCol0;
"
"MAD texCol0, texCol5, v1.g, texCol0;
"
"MAD texCol0, texCol6, v1.b, texCol0;
"
"MAD oCol, texCol7, v1.a, texCol0;
"
"END
";

Any ideas?

Do you have a vertex program that outputs texture coordinates 0-7? If not, you’ll only get texture coordinates 0-3 on NVIDIA cards, as they only support 4 fixed-function texture units.

Ya, I just noticed that while GL_MAX_TEXTURE_COORDS_ARB returns 8, GL_MAX_TEXTURE_UNITS_ARB returns 4…

I’m not using a vertex program; I’m just setting the coordinates with glMultiTexCoord for each unit.

So to get 8 sets of texture coordinates I need to calculate them in a vertex program?

You can just add a passthrough vp:

!!ARBvp1.0
OPTION ARB_position_invariant;
MOV result.color, vertex.color;
MOV result.texcoord[0], vertex.texcoord[0];
MOV result.texcoord[1], vertex.texcoord[1];
MOV result.texcoord[2], vertex.texcoord[2];
MOV result.texcoord[3], vertex.texcoord[3];
MOV result.texcoord[4], vertex.texcoord[4];
MOV result.texcoord[5], vertex.texcoord[5];
MOV result.texcoord[6], vertex.texcoord[6];
MOV result.texcoord[7], vertex.texcoord[7];
END
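Loading and enabling it follows the same pattern as the fragment program; a rough sketch, assuming vpSource holds the program text above and the ARB_vertex_program entry points are available:

GLuint vpId;
glGenProgramsARB(1, &vpId);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, vpId);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(vpSource), vpSource);

/* both program types have to be enabled before drawing */
glEnable(GL_VERTEX_PROGRAM_ARB);
glEnable(GL_FRAGMENT_PROGRAM_ARB);

The easy thing to miss is that GL_VERTEX_PROGRAM_ARB is enabled separately from GL_FRAGMENT_PROGRAM_ARB.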

Ok great, I bet that’s my problem. Thanks a lot.

Isn’t it kind of weird that we are able to use commands like glActiveTexture(GL_TEXTURE7_ARB) even when GL_MAX_TEXTURE_UNITS_ARB is 4?

Also, if I wanted to make this program dynamic so it increases/decreases the number of textures used depending on the card, should I do it according to the result of a glGet on GL_MAX_TEXTURE_COORDS_ARB?

Thanks

Also, if I wanted to make this program dynamic so it increases/decreases the number of textures used depending on the card, should I do it according to the result of a glGet on GL_MAX_TEXTURE_COORDS_ARB?

Yes. As long as you are using a vertex and fragment program.

MalcomB:

I don’t think it’s that weird: if you read the ARB_fragment_program spec (issue 16), there is a note stating that the old method is legacy. With ARB_fragment_program (and ARB_fragment_shader) the two checks are now:
GL_MAX_TEXTURE_COORDS_ARB - the number of texture coordinate sets that can be passed in (from a vertex program)
GL_MAX_TEXTURE_IMAGE_UNITS_ARB - the number of textures that can be bound
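For example, sizing the shader from those limits could look like this (a sketch; numTextures is a placeholder for however many textures the app wants to use):

GLint maxCoords = 0, maxImages = 0;
glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &maxCoords);      /* passable texture coordinate sets */
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &maxImages); /* bindable texture images */

/* never reference more coordinate sets or texture images than the card reports */
int limit = (maxCoords < maxImages) ? maxCoords : maxImages;
if (numTextures > limit)
    numTextures = limit;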

Hmm, using that vertex shader doesn’t seem to fix it; it stops any textures from being sampled. I’m sure jra101 is right, so the question is: what am I doing wrong?

I’ve never used vertex shaders before, just pixel shaders. Is there anything special I need to do to use this one? I also tried rewriting it to use the more explicit ATTRIB and OUTPUT statements, same result (as expected, I guess).

Maybe I should be moving these questions to the beginners forum now. :)