Noob sampler2D question

I’m attempting to move to GLSL where I previously had a Cg program running.

So, let’s say I want to access a texture in my GLSL program, e.g.

uniform sampler2D NormalMap;

Before, in my Cg shader I’d have something like:

uniform sampler2D NormalMap : TEXUNIT1;

Is there a similar way in GLSL?

When I look at the GLSL showcase examples in the OS X developer tools, I see an analogous snippet:

glUniform1iARB(glGetUniformLocationARB(program_object, "NormalMap"), 1);

Yet I cannot do this because I have display lists. The display list knows nothing about program_object, since there is no program_object when I build the display list. But in the display list I set:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, bump_id);

Or am I supposed to cycle through all my GLSL programs and set an arbitrary variable "NormalMap"? Then those names end up hard-coded in my program.

Maybe I’m just confused. Just a GLSL noob and it’s day 2.

Create your display list as before with Cg.

After you have compiled and linked your GLSL program object, call glUniform1iARB(glGetUniformLocationARB(program_object, "NormalMap"), 1);
to tell the program that the uniform named NormalMap uses texture unit 1.
(You only need to call this once, but you will have to do it for every GLSL shader that has a NormalMap variable.)

Be careful: all sampler uniforms default to zero, so if you don’t set one, it will access texture unit 0.
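For example, a minimal sketch of that one-time setup, reusing program_object from the snippet above:

glUseProgramObjectARB(program_object); // program must be current for glUniform*
GLint loc = glGetUniformLocationARB(program_object, "NormalMap");
if (loc != -1)
    glUniform1iARB(loc, 1); // the NormalMap sampler reads texture unit 1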

I think I got it to work. Looking back, Cg does something similar, though it passes in the texture name… cgGLSetupSampler(param, name);

Before, with Cg, I made an uber one-size-fits-all shader. Not going to make that mistake again. Too much branching.

I think, in my little world, I’ll just standardize on uniform names like "texunit0", "texunit1", etc. Then I’ll check whether glGetUniformLocationARB returns -1 in each program, as you suggested. If it returns -1, I won’t set the uniform.
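Something like this rough sketch is what I have in mind (programs and num_programs are just placeholder names):

const char *names[4] = { "texunit0", "texunit1", "texunit2", "texunit3" };
for (int p = 0; p < num_programs; p++) {
    glUseProgramObjectARB(programs[p]);
    for (int u = 0; u < 4; u++) {
        GLint loc = glGetUniformLocationARB(programs[p], names[u]);
        if (loc != -1)              // -1: absent, or stripped by the compiler
            glUniform1iARB(loc, u); // sampler "texunitN" reads unit N
    }
}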

I noticed on Mac OS X that I have to call glUseProgramObjectARB first, or glGetUniformLocationARB returns -1. Also, I notice that if a uniform variable is not used in the GLSL program, the compiler appears to strip it out and I get -1 for the uniform location.

One more question: if I use glMultiTexCoord3f in the display list, the data should appear in gl_MultiTexCoordX in the GLSL program? I shouldn’t have to call anything else to enable it in the GLSL program?

I’m trying to avoid all the uniform-setting calls. When I ran my Cg shaders before, I suspect they slowed things down a bunch. I wonder if this is the case with GLSL? I’ll have to see what the Shark performance tool says.

Though, when I run GLSL and Cg I can still hear my G5 bus humming. Must be traffic from the CPU to the card. Sounds crazy, but I know how my machine sounds. No humming with the fixed pipeline. :slight_smile:

Originally posted by nib:

I noticed on Mac OS X that I have to call glUseProgramObjectARB first, or glGetUniformLocationARB returns -1.

This should not happen. Probably a bug.

Originally posted by nib:

One more question: if I use glMultiTexCoord3f in the display list, the data should appear in gl_MultiTexCoordX in the GLSL program? I shouldn’t have to call anything else to enable it in the GLSL program?

Yeah that should be fine.
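For example, a vertex shader along these lines will see the data with no extra setup (just a sketch):

// glMultiTexCoord3f(GL_TEXTURE1, ...) shows up in gl_MultiTexCoord1
varying vec3 bump_coord;
void main()
{
    bump_coord = gl_MultiTexCoord1.xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}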

Also note that GLSL is now core in version 2.0, so you can stop using the "ARB" extensions if you want. (Some of the entry points changed names slightly, so you can’t just drop the "ARB" part in a few cases.)

Got bump mapping to work with my new shader code. Pleased so far. The performance appears to be much higher on my ATI card than with Cg.

Onto shadows. So, I have a depth buffer in an FBO. I assume it’s the exact same procedure as previously mentioned, using glGetUniformLocation to set the uniform used with sampler2DShadow? Same kind of deal … pass a texture matrix to the vertex shader, then I’d assume I’ll use shadow2DProj in the fragment shader? It would be nice if I could take a set of programs and set the uniforms with the same name all at once, e.g. when passing the texture matrix.

Funny: when I set a breakpoint with the OS X OpenGL Profiler and go to Resources, the profiler hangs. I have about 10 GLSL programs in all. Maybe it cannot handle it. My OpenGL program appears to work fine, though. I have to force-quit the profiler.

Originally posted by nib:
Onto shadows. So, I have a depth buffer in an FBO. I assume it’s the exact same procedure as previously mentioned, using glGetUniformLocation to set the uniform used with sampler2DShadow? Same kind of deal … pass a texture matrix to the vertex shader, then I’d assume I’ll use shadow2DProj in the fragment shader? It would be nice if I could take a set of programs and set the uniforms with the same name all at once, e.g. when passing the texture matrix.
Yes, that should work.
You can use the built-in OpenGL texture matrix and access it from the GLSL code if you don’t want to set it in a lot of programs.

I followed your suggestion. It works and seems faster now … I ripped out all the old Cg code. What a week.

glActiveTexture(GL_TEXTURE5);  // select texture unit 5
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(temp[0]);        // load the shadow texture matrix
glMatrixMode(GL_MODELVIEW);
glActiveTexture(GL_TEXTURE0);


// meanwhile, in the shader…
shadow_coord = gl_TextureMatrix[5] * ecPosition;
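The fragment side is then roughly this sketch (ShadowMap is a hypothetical sampler2DShadow uniform; the depth texture needs GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE for the compare to happen):

uniform sampler2DShadow ShadowMap; // bound to the depth texture’s unit
varying vec4 shadow_coord;         // from the vertex shader line above

void main()
{
    // shadow2DProj divides by .w and compares depth; .r is 0.0 or 1.0
    float lit = shadow2DProj(ShadowMap, shadow_coord).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}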

I made a program cache, so if some of my models use the same shader they hit the cache and I don’t have duplicates. Actually, Apple’s OpenGL Profiler worked; the problem was that I had about 50 programs, so it took a minute to load. Now it’s reduced. Funny thing is, OpenGL Profiler handles many textures without a problem. :slight_smile:

Be careful when using the legacy built-in state to set variables.

Code like

glActiveTexture(GL_TEXTURE5);
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(temp[0]);

may fail if the card only supports 4 legacy texture units. NVIDIA, for example, typically supports only 4 legacy texture units. Check the value returned by glGetIntegerv with GL_MAX_TEXTURE_UNITS.

(All the new limits for textures and coordinates are exposed through separate queries like GL_MAX_TEXTURE_IMAGE_UNITS and GL_MAX_TEXTURE_COORDS.)
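A quick startup check might look like this (a sketch; the variable names are arbitrary):

GLint legacyUnits, imageUnits, coordUnits;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &legacyUnits);      // fixed-function (legacy) units
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &imageUnits); // samplers a fragment shader can use
glGetIntegerv(GL_MAX_TEXTURE_COORDS, &coordUnits);      // texture coordinate sets
// only touch glMatrixMode(GL_TEXTURE) on units below legacyUnits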

Hmm, that is going to be a problem. I’ve got a Radeon 9800.

If I open Apple’s OpenGL Profiler, I can choose a card profile to emulate that card’s OpenGL state. Looking at the profiles, I see:

GeForce2 MX (10.3): Max Texture Units 2
GeForce3 (10.3): Max Texture Units 4
GeForce4 (10.3): Max Texture Units 4

Radeon 9000 (10.3): Max Texture Units 6
Radeon 9700 (10.3): Max Texture Units 8
Radeon v1.3: Max Texture Units 3
ATI Rage Pro: Max Texture Units 2

Hmm, I’m using the following:

RGBA texture (diffuse)
RGBA shine map (only need one byte, though)
RGBA normal map (has an extra byte I don’t use)
RGBA color texture for the FBO (need this for a valid FBO?)
depth shadow map FBO
cube map texture

That would be six units, so I have to get rid of two. I suppose I can combine the shine and normal images using that spare alpha byte (ouch … lots of GraphicConverter work). I don’t use the color texture on the FBO, but I have to attach it for the FBO to validate.(?) Then I’d be down to four units.
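If I do combine them, the fragment shader would only need one fetch for both, something like this (a sketch, reusing the NormalMap sampler name from earlier):

// one fetch gets both maps: rgb = tangent-space normal, a = shininess
vec4 ns = texture2D(NormalMap, gl_TexCoord[0].st);
vec3 normal = normalize(ns.rgb * 2.0 - 1.0); // unpack [0,1] -> [-1,1]
float shine = ns.a;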

My program has two modes: it either falls back to the default pipeline or runs the fragment/vertex shader path.

NVIDIA only has 4 units? That’s just sucking eggs. Who would have thought, on the high end? No wonder most of the new Macs have ATI cards.

I stripped the varying variables down to 8 too. I hear having too many of those is a problem.

I think you misunderstood me.

NVIDIA has chosen to support only 4 LEGACY texture units. All their modern shader-based cards (FX and above) have far more (16 textures + 8 texture coordinate sets).

This is only a problem if you want to use the legacy-interface texture matrices with GLSL (or combiners, etc.).

Also, I doubt GeForce2-4 or Radeon 9000 have anything other than vertex shader support in GLSL.

I think I understand.(?) You’re saying that if I use glMatrixMode(GL_TEXTURE), I had better use one of the first four units. That’s the legacy way.

So, I’d assume the new way of doing it would be …

_loc = glGetUniformLocationARB(_prog, "gl_TextureMatrix[5]");
if (_loc != -1) {
    glUniformMatrix4fv(_loc, 16, false, temp[0]);
}

but I can’t do that, because the uniform name starts with "gl_" and I get -1.

So, I have to use that bugbear of a call, strings and all, glGetActiveUniformARB? This looks like some kind of database-query situation. Are locations unique across all programs, so I only have to look each one up once?

glMatrixMode sure looks easy for the noob. :slight_smile: Life would be easy if GLSL had a static variable shared across all programs. So: uniform, varying, attribute, and static. :slight_smile:

The locations are not unique across all programs, so just look it up when you load the program.

You should not need to call glGetUniformLocation at runtime; just get the locations of the variables when you load the program and store them with the program ID.

eg.

struct ShaderData
{
    GLuint shaderID;        // linked program object
    GLint  shadowmatrixLoc; // cached uniform location
    ...                     // other locations
};
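Usage might look something like this (a sketch; LoadProgram, "shadow_matrix", and texmat are placeholder names):

// at load time: one string lookup per uniform, cached with the program
struct ShaderData sd;
sd.shaderID = LoadProgram("bump.vert", "bump.frag"); // placeholder helper
sd.shadowmatrixLoc = glGetUniformLocationARB(sd.shaderID, "shadow_matrix");

// per frame: no string lookups, just the cached location
glUseProgramObjectARB(sd.shaderID);
if (sd.shadowmatrixLoc != -1)
    glUniformMatrix4fvARB(sd.shadowmatrixLoc, 1, GL_FALSE, texmat); // texmat: 16 floats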

Also, this code

glUniformMatrix4fv(_loc, 16, false, temp[0]);

looks wrong. Unless you are supplying an array of 16 matrices, you are supposed to use 1 for the matrix count value.

Let’s see, it’s a

GLfloat temp[4][4];
glGetFloatv(GL_MODELVIEW_MATRIX, temp[0]);

// vvector.h stuff: transpose, multiply, inverse matrix, etc.

The program uses GLUT/vvector.h matrix calls to calculate the texture matrix.

This way I can just use the built-in matrix operations and not have to write my own.
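A variation on the same idea is to let OpenGL’s own matrix stack do the composition (a sketch; light_proj and light_view are placeholder 4x4 light matrices):

GLfloat texmat[16];
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.5f); // bias: clip space [-1,1] -> [0,1]
glScalef(0.5f, 0.5f, 0.5f);
glMultMatrixf(light_proj);      // placeholder light projection matrix
glMultMatrixf(light_view);      // placeholder light view matrix
glGetFloatv(GL_MODELVIEW_MATRIX, texmat);
glPopMatrix();
// texmat can then go to glLoadMatrixf on the texture stack, or to a mat4 uniform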

Caching the locations appears to help. :slight_smile:
