ARB_texture_rectangle multitexturing with different sizes

Hi,

I have 3 textures with different sizes that are sent to the shaders. Two of them are smaller than the third, but all three must end up using the same coordinates… Could anyone tell me how I can transform the texture coordinates per vertex (or, better, without changing the normal pipeline function), or just point me to an example?

Thanks

If I understood you correctly, you have three texture rectangles with potentially different sizes, and you want to use only one set of texture coordinates to access all of them?

You could do this by using normalized texture coordinates (i.e. what you’d usually use) and simply multiplying each coordinate by the width/height of the corresponding texture. An easy way of doing this would be to modify the texture matrix associated with each texture.

I.e. something like:

glActiveTexture(GL_TEXTURE0);
glBindTexture(…); // bind first texture
glMatrixMode(GL_TEXTURE); // set up the texture matrix
glLoadIdentity();
glScalef(tex0_width, tex0_height, 1.f); // scale normalized coords up to this texture's texel range

glActiveTexture(GL_TEXTURE1);
… same, but for the second and third textures with their own sizes …

Then in the vertex shader, you use the texture matrices to transform the first set of texture coordinates:

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord0;
gl_TexCoord[2] = gl_TextureMatrix[2] * gl_MultiTexCoord0;

Then you simply texture as normal.
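For completeness, sampling in the fragment shader could then look roughly like this sketch (the sampler uniform names and the way the samples are combined are just placeholders, assuming all three are texture rectangles):

#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect tex0; // full-size texture (names are placeholders)
uniform sampler2DRect tex1; // smaller textures
uniform sampler2DRect tex2;

void main()
{
    // The texture matrices have already scaled the coordinates to texel space
    vec4 c0 = texture2DRect(tex0, gl_TexCoord[0].xy);
    vec4 c1 = texture2DRect(tex1, gl_TexCoord[1].xy);
    vec4 c2 = texture2DRect(tex2, gl_TexCoord[2].xy);
    gl_FragColor = vec4(c0.rgb + c1.rgb + c2.rgb, 1.0); // combine however you need
}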

There’s probably a smarter way, and I might have made a mistake or two; it’s been a while since I’ve coded OpenGL.

I think I did not explain it correctly. What I want to do with shaders is the same as what this code does:

<code>
gl.glBegin(GL.GL_QUADS);
// Screen draw
gl.glMultiTexCoord2f(GL.GL_TEXTURE0, 0, texHeight);
gl.glMultiTexCoord2f(GL.GL_TEXTURE1, 0, texHeight / 2);
gl.glMultiTexCoord2f(GL.GL_TEXTURE2, 0, texHeight / 2);
gl.glVertex2f(-2, -1.5f);

gl.glMultiTexCoord2f(GL.GL_TEXTURE0, texWidth, texHeight);
gl.glMultiTexCoord2f(GL.GL_TEXTURE1, texWidth / 2, texHeight / 2);
gl.glMultiTexCoord2f(GL.GL_TEXTURE2, texWidth / 2, texHeight / 2);
gl.glVertex2f(2, -1.5f);

gl.glMultiTexCoord2f(GL.GL_TEXTURE0, texWidth, 0);
gl.glMultiTexCoord2f(GL.GL_TEXTURE1, texWidth / 2, 0);
gl.glMultiTexCoord2f(GL.GL_TEXTURE2, texWidth / 2, 0);
gl.glVertex2f(2, 1.5f);

gl.glMultiTexCoord2f(GL.GL_TEXTURE0, 0, 0);
gl.glMultiTexCoord2f(GL.GL_TEXTURE1, 0, 0);
gl.glMultiTexCoord2f(GL.GL_TEXTURE2, 0, 0);
gl.glVertex2f(-2, 1.5f);

gl.glEnd();
</code>

Because then, in the fragment shader, a YUV to RGB conversion is done pixel by pixel.
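Roughly along these lines (this is only a sketch: the sampler names and the BT.601-style coefficients are illustrative, with Y at full size and U/V at half size):

<code>
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect texY; // full-size luma plane (names are illustrative)
uniform sampler2DRect texU; // half-size chroma planes
uniform sampler2DRect texV;

void main()
{
    float y = texture2DRect(texY, gl_TexCoord[0].xy).r;
    float u = texture2DRect(texU, gl_TexCoord[1].xy).r - 0.5;
    float v = texture2DRect(texV, gl_TexCoord[2].xy).r - 0.5;

    // BT.601-style conversion; the exact coefficients depend on the video format
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}
</code>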

Yes, that is how I understood you. You can either use the texture matrix like I showed above, or, assuming the second set of texture coordinates is always half of the first, you can simply do

gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord0 / 2.0;
gl_TexCoord[2] = gl_MultiTexCoord0 / 2.0;

in the vertex shader (the last one isn’t strictly necessary).

You then simply have to pass the first set of texture coordinates:

gl.glBegin(GL.GL_QUADS);
// Screen draw
gl.glTexCoord2f(0, texHeight);
gl.glVertex2f(-2, -1.5f);

gl.glTexCoord2f(texWidth, texHeight);
gl.glVertex2f(2, -1.5f);

gl.glTexCoord2f(texWidth, 0);
gl.glVertex2f(2, 1.5f);

gl.glTexCoord2f(0, 0);
gl.glVertex2f(-2, 1.5f);

gl.glEnd();

If you use a fragment shader and not the fixed-function pipeline, you don’t even need to pass two identical gl_TexCoord sets down. Simply use

gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord0 * vec4(0.5); // avoid divisions!

and in the fragment shader replace every gl_TexCoord[2] with gl_TexCoord[1].
If you’re really picky, you actually don’t need more than the .xy components either.
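In that case a minimal vertex shader could look something like this (the varying names are just placeholders):

// Vertex shader: pass only the two 2D coordinate sets that are actually needed
varying vec2 coordFull; // full-resolution coordinates (hypothetical names)
varying vec2 coordHalf; // half-resolution coordinates for the smaller textures

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    coordFull = gl_MultiTexCoord0.xy;
    coordHalf = gl_MultiTexCoord0.xy * 0.5;
}

// In the fragment shader, declare the same two varyings and sample with
// texture2DRect(texY, coordFull), texture2DRect(texU, coordHalf), etc.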

Thanks a lot, now I know that my problem does not come from that part :frowning: It seems that I have a problem with the multitexturing, and it is mapping each texture over different points.

Perhaps some screenshots and some bits of code could help explain what’s going on?
