Multitexturing

Hi,

I have a set of four textures. I need to render a QUAD of fullscreen size, combining R, G, and B pixel components from the four textures. I want to combine them as shown below:

DEST: R G B R G B R G B R G B…
SOURCE: T1R T2G T3B T4R T1G T2B T3R T4G T1B T2R T3G T4B …

I.e. pixel 1 in the rendered image is formed by combining R from texture1, G from texture2, and B from texture3.
Similarly, pixel 2 in the rendered image is formed by combining R from texture4, G from texture1, and B from texture2, and so on.

Is this possible using OpenGL multitexturing?
Any help?

Well, if you haven’t learned GLSL, now is the time.


//Fragment shader
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform sampler2D Texture2;
uniform sampler2D Texture3;
uniform sampler2D Logic;
varying vec2 Texcoord;
void main()
{
    vec4 texel0;
    vec4 texel1;
    vec4 texel2;
    float logic = texture2D(Logic, Texcoord).x;
    if(logic == 0.0)
    {
        texel0 = texture2D(Texture3, Texcoord) * vec4(1.0, 0.0, 0.0, 0.0);
        texel1 = texture2D(Texture1, Texcoord) * vec4(0.0, 1.0, 0.0, 0.0);
        texel2 = texture2D(Texture2, Texcoord) * vec4(0.0, 0.0, 1.0, 0.0);
    }
    else
    {
        texel0 = texture2D(Texture0, Texcoord) * vec4(1.0, 0.0, 0.0, 0.0);
        texel1 = texture2D(Texture1, Texcoord) * vec4(0.0, 1.0, 0.0, 0.0);
        texel2 = texture2D(Texture2, Texcoord) * vec4(0.0, 0.0, 1.0, 0.0);
    }

    gl_FragColor = texel0 + texel1 + texel2;
}

Thank you so much V-man. Will study GLSL and try your code.
@ceres

V-man, any reason you create vec4 variables but only use one component of each?
It should be more efficient with something like this:


float texel0, texel1, texel2;
...
texel0 = texture2D(Texture3, Texcoord).r;
...
texel1 = texture2D(Texture1, Texcoord).g;
...
texel2 = texture2D(Texture2, Texcoord).b;
...
gl_FragColor = vec4(texel0, texel1, texel2, 0.0);

Not really. Use Cg if you want to compare which generates fewer instructions.

I would have said the same as ZbuffeR.

Does it come from the fact that, under Nvidia at least, GLSL is translated into Cg?

Well, I trust Nvidia to have a highly optimized GLSL compiler.
That does not mean that all GL vendors are on the same level.
Better safe than sorry, as they say.

Well, if you haven’t learned GLSL, now is the time.

There is a potential problem with that code: the majority of the texturing happens in non-uniform control flow, which means the gradients aren’t available, so your texture functions may have problems.

The better way to arrange this code is as follows:


//Fragment shader
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform sampler2D Texture2;
uniform sampler2D Texture3;
uniform sampler2D Logic;
varying vec2 Texcoord;
void main()
{
    vec4 texel0 = texture2D(Texture0, Texcoord);
    vec4 texel1 = texture2D(Texture1, Texcoord);
    vec4 texel2 = texture2D(Texture2, Texcoord);
    vec4 texel3 = texture2D(Texture3, Texcoord);
    float logic = texture2D(Logic, Texcoord).x;
    if(logic == 0.0)
    {
        gl_FragColor = vec4(texel3.x, texel1.y, texel2.z, 1.0);
    }
    else
    {
        gl_FragColor = vec4(texel0.x, texel1.y, texel2.z, 1.0);
    }
}

Hello,

After studying GLSL basics, I created and compiled the above fragment shader. I set my RGB textures to the samplers in the fragment shader using a for loop, as shown below:

for(int i = 0; i < 4; i++)
{
    char name[256];
    sprintf_s(name, 256, "Texture%d", i);
    int my_sampler_uniform_location = glGetUniformLocation(program, name);
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, g_videoTextures[i]);
    glUniform1i(my_sampler_uniform_location, i);
}

Then I draw a QUAD of fullscreen size. But the QUAD is completely white.

Am I missing any step? Do I need to create vertex shader as well?
Please help.

Yes, you should also provide a vertex shader; it should be easy.


//vertex shader
void main() {
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

One more thing: have you checked the error bit in the render routine? Add this call to your render function; it will give you an assertion in case the error bit is set.


// #include <cassert> at the beginning
assert(glGetError() == GL_NO_ERROR);
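
Also worth double-checking: glUniform1i only affects the program that is currently in use, so glUseProgram must be called before the sampler loop. A minimal sketch, reusing the names from your snippet:


glUseProgram(program);               // must be bound before glUniform1i
for(int i = 0; i < 4; i++)
{
    char name[256];
    sprintf_s(name, 256, "Texture%d", i);
    int location = glGetUniformLocation(program, name);
    // location == -1 means the uniform was not found (or was optimized away)
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, g_videoTextures[i]);
    glUniform1i(location, i);        // sampler i reads texture unit i
}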

Am I missing any step? Do I need to create vertex shader as well?

Well, how else do you expect “Texcoord” to get filled in? I’m surprised it linked without a vertex shader.

Hi,

Could anyone please help me understand a few things about the given fragment shader code:

  1. What is the purpose of the “Logic” sampler used in this shader?
    What value should it be initialized to?
  2. Will “Texcoord” be filled automatically, or does it need to be filled explicitly in the vertex shader?
  3. What exactly will the following statement do:
    float logic = texture2D(Logic, Texcoord).x;
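
For what it’s worth, the posted shader itself answers most of this: the Logic texture is a per-pixel selector; where its red channel is 0.0 the shader takes R from Texture3, otherwise from Texture0, and the statement in question simply reads that red channel at the current texture coordinate. A minimal sketch of how such a selector texture might be uploaded (logicTex, width, and height are illustrative names, not from the thread):


// Build a one-byte-per-pixel selector map: 0 -> use Texture3 for red,
// 255 -> use Texture0 for red (255 reads back as 1.0 in the shader).
std::vector<unsigned char> selector(width * height);
// ... fill selector[] with 0 or 255 per pixel ...
glBindTexture(GL_TEXTURE_2D, logicTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // in case width isn't a multiple of 4
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, &selector[0]);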

Hi Alfonse,

I created this vertex shader:

//vertex shader
varying vec2 Texcoord;
void main(void)
{
    vec4 a = gl_Vertex;
    Texcoord.x = a.x;
    Texcoord.y = a.y;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Am I right?

Thanks Mobeen.

If you are pushing the texture coordinates in using the glTexCoord* functions, these coordinates arrive in the gl_MultiTexCoord* built-in attributes in the vertex shader, so you can copy the value into the built-in varying gl_TexCoord[0] as follows:


void main(void)
{
   gl_TexCoord[0] = gl_MultiTexCoord0;
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Then you may access gl_TexCoord[0] in the fragment shader as follows:


uniform sampler2D textureMap;
void main() {
   gl_FragColor = texture2D(textureMap, gl_TexCoord[0].st);
}

Note this is all pre-OpenGL-3 shader handling. In OpenGL 3 and above you need to handle the per-vertex attributes and matrices yourself, and there are no built-in uniforms. Just a reminder that while this works in earlier OpenGL versions, it won’t work in modern OpenGL (the core profile, so to speak).
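
For comparison, a minimal core-profile pair might look something like the following sketch; the attribute and uniform names here are made up, and the application has to supply the matrix itself:


// Core-profile vertex shader (sketch; names are illustrative)
#version 150
uniform mat4 ModelViewProjection; // application-supplied, no built-ins in core
in vec4 Position;
in vec2 TexCoord0;
out vec2 Texcoord;
void main()
{
    Texcoord = TexCoord0;
    gl_Position = ModelViewProjection * Position;
}

and the matching fragment shader:


// Core-profile fragment shader (sketch)
#version 150
uniform sampler2D textureMap;
in vec2 Texcoord;
out vec4 FragColor;
void main()
{
    FragColor = texture(textureMap, Texcoord);
}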

The vertex shader would actually be


varying vec2 Texcoord;

void main(void)
{
   Texcoord = gl_MultiTexCoord0.xy;
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

so it sends the xy of the texcoord to the fragment shader and also transforms the vertex.
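
If you draw the fullscreen quad in immediate mode like the rest of this thread, the matching host side might look like this (a sketch; it assumes the projection maps (0,0)-(1,1) to the full viewport, e.g. via gluOrtho2D(0, 1, 0, 1)):


// Fullscreen quad with texcoords feeding gl_MultiTexCoord0 (sketch)
glBegin(GL_QUADS);
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();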

Hi,

Basic things seem to be working for me, thanks for the help. I am now stuck at one point; let me explain the situation.
I have four RGB textures bound to four video input streams, and a rendering pattern map of size = screensize * 3. The pattern is like this:
DEST: pixel#1 pixel#2 pixel#3
MAP: 3 0 1 3 2 1 0 2 1 3 1 2…
For example:

  1. To create pixel#1 in the output, take R from texture3, G from texture0 and B from texture1.
  2. For pixel#2, take R from texture3, G from texture2 and B from texture1.

I could bind the RGB video textures to the samplers in the fragment shader. But the problem is how to read the pattern map in the shader. I have already tried the option below:

  1. Create an RGB/RGBA texture from the pattern-map values, bind it to a 2D sampler, and use texture2D() to read values from the map. But it doesn’t show the correct values in the shader.

I got it fixed: I had forgotten to put the glTexParameterf calls in my code before copying the pattern map into the texture.
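
For anyone hitting the same thing: the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture uploaded without mipmaps is incomplete and won’t sample correctly until the filter is changed. For a selector map you also want GL_NEAREST so the discrete values are never interpolated. A minimal setup sketch (patternTex and patternMapData are illustrative names):


// Pattern-map texture setup sketch
glBindTexture(GL_TEXTURE_2D, patternTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps needed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // never interpolate selectors
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, patternMapData);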

BTW, is there any way to resize an OpenGL texture? I need to resize my RGB texture from 840x512 to 1400x1050.

There are different ways to resize.
You can render to a texture of size 1400x1050, drawing a fullscreen quad with the first texture; this will at least use the GPU.
You can use GLU to rescale an image: gluScaleImage.
There is my own lib, glhlib: glhScaleImage_asm386. See my signature.
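
The GLU route is the simplest to drop in. A minimal sketch, assuming srcPixels points at the original 840x512 tightly packed RGB data (the variable names are illustrative):


// CPU-side rescale with GLU
#include <GL/glu.h>
#include <vector>

std::vector<unsigned char> dstPixels(1400 * 1050 * 3);
GLint err = gluScaleImage(GL_RGB,
                          840, 512, GL_UNSIGNED_BYTE, srcPixels,
                          1400, 1050, GL_UNSIGNED_BYTE, &dstPixels[0]);
// err is 0 on success; upload dstPixels with glTexImage2D as usual.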