GLSL multitexturing problem

I have an ATI Radeon 9600 Pro graphics card with 256 MB of VRAM; the driver version is Catalyst 6.5. My OpenGL-based application uses the following code to set up two texture units and the uniform variables used by the fragment shader:


//texture setup in the application
glGenTextures(2,textureIds);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D,textureIds[0]);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
glTexImage3D(GL_TEXTURE_3D,0,GL_RGBA,res,res,res,0,GL_RGBA,GL_FLOAT,texture_image0);
glEnable(GL_TEXTURE_3D);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D,textureIds[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,resx,resy,0,GL_RGB,GL_FLOAT,texture_image1);
glEnable(GL_TEXTURE_2D);

//some intermediate shader program code

GLuint texLoc = glGetUniformLocation(shader_program, "Texture0");
glUniform1i(texLoc, 0);
texLoc = glGetUniformLocation(shader_program, "Texture1");
glUniform1i(texLoc, 1);


The (incomplete) fragment shader looks roughly like this (only the important part is included):

//fragment shader
uniform sampler2D Texture1;
uniform sampler3D Texture0;

void main() {
    vec4 c = texture2D(Texture1, address1);
    gl_FragColor = texture3D(Texture0, address0);
}

If I draw a sphere with this fragment shader bound, nothing shows up. If I comment out the 2D texture access, it miraculously starts working. If I comment out the 3D texture access and use the 2D texture to set the fragment color, it also works. But as soon as I turn the 3D texture access back on (for any purpose), the geometry vanishes again. In short, it never works if I use both textures, and it doesn't work if I use only the 3D texture but attach it to texture unit 1 (instead of 0 as here).
Does anyone have the slightest clue what could cause such behaviour? Is it some sort of hardware limitation? Are 3D textures that poorly supported on the older Radeon series? The same thing happens on my Mobility Radeon 9700 (laptop) card. I checked the driver caps, but no limit indicates a problem with using two textures in the fragment shader.
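For reference, the relevant caps can be queried at runtime rather than read from driver tables. A minimal sketch, assuming a current GL context and GL 2.0 headers (on 2006-era drivers the same enums exist with an `_ARB` suffix via ARB_fragment_shader):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Print the texture limits relevant to fragment shaders.
   Must be called with a current GL context. */
void print_texture_limits(void)
{
    GLint imageUnits = 0, coordSets = 0;

    /* Number of sampler uniforms a fragment shader may access. */
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &imageUnits);

    /* Number of texture coordinate sets (a separate, smaller limit). */
    glGetIntegerv(GL_MAX_TEXTURE_COORDS, &coordSets);

    printf("texture image units: %d, texcoord sets: %d\n",
           imageUnits, coordSets);
}
```

A Radeon 9600 reports well more than two image units, so two samplers in one shader is not a hardware limit.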

Was the compilation and linking of the shader successful? Are there any warnings in shader compilation or linking logs?
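For completeness, a minimal sketch of retrieving those logs, assuming a current GL context and GL 2.0 entry points (older drivers expose the same calls as `glGetObjectParameterivARB`/`glGetInfoLogARB`):

```c
#include <stdio.h>
#include <GL/gl.h>

/* Print compile status and info log for a shader object. */
void check_shader(GLuint shader)
{
    GLint ok = 0, len = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        printf("compile %s:\n%s\n", ok ? "ok" : "FAILED", log);
    }
}

/* Print link status and info log for a program object. */
void check_program(GLuint program)
{
    GLint ok = 0, len = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        char log[4096];
        glGetProgramInfoLog(program, sizeof log, NULL, log);
        printf("link %s:\n%s\n", ok ? "ok" : "FAILED", log);
    }
}
```

Note that some drivers emit warnings in the log even when the status is GL_TRUE, so it is worth printing the log in both cases.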

Yes, the compilation and linking were successful. I also tried out TyphoonLabs' Shader Designer, and the 2D/3D texture combination doesn't work there either. If I use it, the rendered geometry disappears altogether, so I am almost certain it is some sort of hardware problem or limitation.
Could someone with an Nvidia card or a newer ATI try to do it in the Shader Designer (it has 2D image textures and 3D procedural noise textures) and report whether it works for him?

The (incomplete) fragment shader is roughly such (only the important part is included):
Why not go ahead and post the entire shader, and let those that would be the judge of what is important… be the judge of what is important.

Could someone with an Nvidia card or a newer ATI try to do it in the Shader Designer (it has 2D image textures and 3D procedural noise textures) and report whether it works for him?
Like, what are we supposed to try, exactly?

It seems you want to sample several textures in a single shader and write to gl_FragColor multiple times via flow control.

The easiest way to try out what I wanted to explain is to use the Shader Designer (a very simple free GLSL shader editor). First use two texture units (Ctrl-T), a 2D image (e.g. fire.jpg) for t.u. 0, and a procedural 3D plugin for t.u. 1 (both come with the program and are in program subdirs). After the textures appear in the texture preview (so one knows they are loaded ok), use the following fragment shader (no vertex shader needed):

uniform sampler2D TextureUnit0;
uniform sampler3D TextureUnit1;

void main() {
    vec4 c = texture2D(TextureUnit0, vec2(0.5));
    gl_FragColor = texture3D(TextureUnit1, vec3(0.5));
}

That's all there is to it; it only tests the fragment shader. The first line does nothing important, yet the shader doesn't work (the geometry disappears). It starts working if you comment out the first line (the 2D texture access): what shows up is a sphere (or whatever geometry the user selects) textured with the 3D texture (a single color, because the coordinates are constant). It also works if you comment out the 3D texture access and assign the 2D texture data to the fragment color (by the way, multiple assignments to gl_FragColor are allowed; the last one holds). It just won't work with the 2D/3D combination on a Radeon 9600, 9700, or 9800.

I know a real application would make for a more elaborate test, but this is as simple a test as possible. If anyone is kind enough to spare 5 minutes and report on this, I thank them in advance.

One more thing, if you add a line:

gl_FragColor = vec4(1.0);

to the shader, nothing shows up either, so the shader apparently discards (or crashes, I don't know which) upon sequential 2D/3D texture access.

Hello, I've tested your shader with the Shader Designer. Here is what I did:
[Vertex shader]
void main()
{
    gl_Position = ftransform();
}

[Fragment shader]
uniform sampler2D TextureUnit0;
uniform sampler3D TextureUnit1;

void main()
{
    vec4 c = texture2D(TextureUnit0, vec2(0.5));
    gl_FragColor = texture3D(TextureUnit1, vec3(0.5));
}

Added a 2D texture and a procedural 3D texture in texture units 0 and 1.
Added two uniform variables, TextureUnit0 and TextureUnit1, with values 0 and 1 and type int.

The result: a mesh with the color of the 3D texture (just as I expected), no matter whether I comment out the first texture access or not.
Perhaps you forgot to update the TextureUnitX uniform variables?

P.S.: I've tested this with a Radeon 9550.

Stupid me, I thought the uniforms were given values by default (the Shader Designer docs even say so: "Shader Designer automatically creates uniform samplers for the textures"). Setting them makes things work. Thanks a lot.
It is interesting that it works with one unit though, even unit 1. Moreover, I realized that on NVIDIA it works without setting the uniforms. Go figure.

OK, I finally found out what I was doing wrong in my application, which used 2D/3D texture access and failed in the same way as in Shader Designer. I feel obliged to reveal my error to the world :) .

What I did was set the uniforms before calling glUseProgram. I knew I had to call glGetUniformLocation after a successful link, but I didn't know the glUniform calls had to come after glUseProgram as well.

Anyway, ATI isn't as bad as I had thought for the last couple of days. And thanks to all who were willing to help, especially Ffelagund, who made the breakthrough.

Originally posted by dobradusa:
What I did was set the uniforms before calling glUseProgram. I knew I have to call glGetUniformLocation after the successful linking, but I didn’t know it had to be after glUseProgram as well.

glGetUniformLocation takes a program object as a parameter, so it can be called any time after a successful link.
glUniform, though, doesn't have that parameter and therefore only works on the currently active program object, which is set with glUseProgram.

If your implementation requires glGetUniformLocation to be called after glUseProgram to return correct locations, that would be a driver bug.
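To spell out that ordering in code, a minimal sketch, assuming GL 2.0 entry points and a current context (the uniform names match the application code from the first post):

```c
#include <GL/gl.h>

/* shader_program is assumed to be a successfully linked program object. */
void bind_sampler_uniforms(GLuint shader_program)
{
    /* glGetUniformLocation takes the program object explicitly,
       so it works any time after a successful link. */
    GLint texLoc0 = glGetUniformLocation(shader_program, "Texture0");
    GLint texLoc1 = glGetUniformLocation(shader_program, "Texture1");

    /* glUniform* has no program parameter; it affects only the
       currently bound program, so glUseProgram must come first. */
    glUseProgram(shader_program);
    glUniform1i(texLoc0, 0);   /* sampler Texture0 -> texture unit 0 */
    glUniform1i(texLoc1, 1);   /* sampler Texture1 -> texture unit 1 */
}
```

Calling glUniform with no program in use generates GL_INVALID_OPERATION and the call is silently ignored unless you check glGetError, which is why the bug was so quiet.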

Originally posted by Relic:
glGetUniformLocation takes a program object as a parameter, so it can be called any time after a successful link.
glUniform, though, doesn't have that parameter and therefore only works on the currently active program object, which is set with glUseProgram.

If your implementation requires glGetUniformLocation to be called after glUseProgram to return correct locations, that would be a driver bug.

You are right, of course.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.