
How to use multiple textures in fragment shader?



RobertPub
09-22-2006, 05:00 AM
Hi all,

I'm working on a fragment shader in GLSL using both a regular 2D texture (to be mapped on a quad), and a GL_TEXTURE_RECTANGLE_ARB texture of type GL_RGBA32F.

I can use at most one of them in my shader, but when using them both I get 'invalid operation' GL runtime errors when drawing the quad.

The problem is - I think - the part where I set the sampler2D and samplerRect uniform variables to the 'texture image unit' ID in my C++ program using glUniformiARB(). Both are set to 0 now, and that's probably the conflict. I simply don't understand that part. What's that image ID?

k_szczech
09-22-2006, 05:46 AM
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);

Since this attaches texture1 to texture unit #0 and texture2 to texture unit #1, you must set the first sampler uniform to 0 and the second one to 1. Sampler uniforms simply indicate which texture unit your texture2D calls should read from.

dimensionX
09-22-2006, 05:47 AM
loc0 = glGetUniformLocation(program, "tex0");
glUniform1i(loc0, 0); /* texture image unit 0 */

loc1 = glGetUniformLocation(program, "tex1");
glUniform1i(loc1, 1); /* texture image unit 1 */

It is not glUniformiARB but glUniform1i/glUniform1iARB; notice the 1i, not i.

While using the shader:

BindShader(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex1);

/* draw calls */

BindShader(0);
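For reference, the matching fragment shader might look like this sketch. The uniform names tex0/tex1 mirror the client code above; texSize is an assumed uniform holding the rectangle texture's dimensions, since rectangle textures take unnormalized coordinates:

```glsl
#extension GL_ARB_texture_rectangle : enable

uniform sampler2D tex0;      // image texture, on texture unit 0
uniform sampler2DRect tex1;  // parameter texture, on texture unit 1
uniform vec2 texSize;        // rect texture dimensions (assumed uniform)

void main()
{
    // sampler2D lookups use normalized [0,1] coordinates
    vec4 color  = texture2D(tex0, gl_TexCoord[0].st);
    // sampler2DRect lookups use unnormalized [0,w] x [0,h] coordinates
    vec4 params = texture2DRect(tex1, gl_TexCoord[0].st * texSize);
    gl_FragColor = color * params;
}
```

The combination of the two lookups here (a multiply) is just a placeholder for whatever your shader algorithm actually does with the parameters.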

RobertPub
09-22-2006, 07:06 AM
Thanks for the help, guys. I'll give it a try again.

I actually tried working with glActiveTexture(). This is the multi-texture feature, right? The shader slowed down dramatically then, but I wasn't using the RECT texture at that moment.

Anyway, I read in one of my GL bibles that the multi-texture feature is used to mix all the active textures as they are mapped onto the quad. So the result of texture unit #1 is combined with the result of texture unit #2. It's a linked chain.

In my case however, I'm using one texture as the picture source (to be mapped on the quad), and the other one is just a bunch of parameters to be used by the shader algo.

How do I prevent the chaining then? By simply supplying one set of tex coords when mapping the quad?

Thanks,
Robert

k_szczech
09-22-2006, 08:00 AM
How do I prevent the chaining then?

That chaining is fixed functionality. If you use a fragment shader, that functionality is replaced by your shader.

And as for texcoords: in fixed functionality each texture unit has its own texture coordinates, but in a shader you can do whatever you want. You can use one texcoord for many samplers, use many texcoords with one sampler (access the same texture many times with different texcoords), or even use a normal or color as a texcoord.
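A small sketch of that flexibility: one coordinate set driving several lookups into the same sampler (the sampler name and the offset are arbitrary, just for illustration):

```glsl
uniform sampler2D image;

void main()
{
    vec2 tc = gl_TexCoord[0].st;                      // one coordinate set...
    vec4 a  = texture2D(image, tc);                   // ...sampled as-is
    vec4 b  = texture2D(image, tc + vec2(0.01, 0.0)); // ...and at an offset
    gl_FragColor = mix(a, b, 0.5);                    // blend the two taps
}
```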

RobertPub
09-23-2006, 12:36 AM
Originally posted by k_szczech:
That chaining is fixed functionality. If you use a fragment shader, that functionality is replaced by your shader.

Ah! So simple. :)

Thanks again.
Robert

RobertPub
09-25-2006, 12:35 AM
Okay guys, it works now. But only after I moved away from using framebuffer objects and went back to straightforward color buffer usage.

It's probably some mistake in my code, but does anyone know if there's some conflict between FBOs and glActiveTexture()?

- Robert

Overmind
09-25-2006, 02:37 AM
You can't have a texture bound to the current FBO and a texture unit at the same time.

Korval
09-25-2006, 09:47 AM
Originally posted by Overmind:
You can't have a texture bound to the current FBO and a texture unit at the same time.

You can, actually; it doesn't make the FBO incomplete.

The results are undefined, however, if you read and write to the same mipmap or 3D texture slice.

RobertPub
09-26-2006, 03:57 AM
I've got it working now, even with FBO. The problem was GL_TEXTURE_RECTANGLE_ARB (I had forgotten to disable it at some point, and it messed up the GL_TEXTURE_2D state).

Is there a way to use RGBA32F floating point textures with the regular GL_TEXTURE_2D target, btw?

Thanks for the help guys.

Overmind
09-26-2006, 04:39 AM
It should just work, but only with the ARB floating-point formats, not the NV ones. I think the ARB formats are supported on NVIDIA GeForce 6xxx cards and above.

I don't know about the ATI cards.
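A minimal allocation for that case might look like the following sketch, assuming a current GL context, the ARB_texture_float extension, and placeholder width/height/data arguments (it cannot run headless, so treat it as illustrative):

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_RGBA32F_ARB */

void create_float_texture(GLuint tex, int width, int height, const float *data)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Older hardware cannot filter float textures; use nearest. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* GL_RGBA32F_ARB is the internal format; the client data is plain RGBA floats. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
                 width, height, 0, GL_RGBA, GL_FLOAT, data);
}
```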