Scaling multiple textures in one quad when rendering with a shader.

Hi there,

I am trying to render two 2D textures onto a GL_QUADS primitive and do alpha blending in the fragment shader; texture0 is the background texture, and texture1 is the one on top. Since we want to create the textures with power-of-two (2^n) dimensions, for a video size of 1280x720 we create texture0 at 2048x1024 with glTexImage2D and then update it with glTexSubImage2D at 1280x720, meaning we only use part of the texture. For texture1, assuming a frame size of 1920x1080, we create it at 2048x2048 and then update it with glTexSubImage2D at 1920x1080.
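For reference, the textures are allocated and updated roughly like this (a minimal sketch; the pixel formats here are placeholders, since the real code uploads raw video data):

GLuint tex0;
glGenTextures( 1, &tex0 );
glBindTexture( GL_TEXTURE_2D, tex0 );
/* allocate the full 2048x1024 power-of-two surface once, with no initial data */
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1024, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL );
/* each frame, upload only the 1280x720 video sub-region */
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 1280, 720,
                 GL_RGBA, GL_UNSIGNED_BYTE, frame_pixels );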

So right now, the problem is how to adapt those sub-textures’ coordinates so that texture0 lines up with texture1. I am using the following pseudo code in the render function and it does not display the textures properly. Does anyone know how to scale one texture to fit the other? Thank you so much.

Kevin.


/* quad corners in clip space (full screen) */
static const GLfloat v3[ 3 ] = { -1.00, -1.00, 0.0 };
static const GLfloat v2[ 3 ] = {  1.00, -1.00, 0.0 };
static const GLfloat v1[ 3 ] = {  1.00,  1.00, 0.0 };
static const GLfloat v0[ 3 ] = { -1.00,  1.00, 0.0 };
...
glPushMatrix();
glBegin(GL_QUADS);

GLfloat tmp[ 2 ];

/* top-left corner: both sub-images start at (0,0) */
tmp[ 0 ] = 0.0; tmp[ 1 ] = 0.0;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
glMultiTexCoord2fv( GL_TEXTURE1, tmp );
glVertex3fv(v0);

/* top-right corner: s = sub_width / allocated_width for each texture */
tmp[ 0 ] = (GLfloat)1280/(GLfloat)2048; tmp[ 1 ] = 0.0;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
tmp[ 0 ] = (GLfloat)1920/(GLfloat)2048; tmp[ 1 ] = 0.0;
glMultiTexCoord2fv( GL_TEXTURE1, tmp );
glVertex3fv(v1);

/* bottom-right corner: t = sub_height / allocated_height for each texture */
tmp[ 0 ] = (GLfloat)1280/(GLfloat)2048; tmp[ 1 ] = (GLfloat)720/(GLfloat)1024;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
tmp[ 0 ] = (GLfloat)1920/(GLfloat)2048; tmp[ 1 ] = (GLfloat)1080/(GLfloat)2048;
glMultiTexCoord2fv( GL_TEXTURE1, tmp );
glVertex3fv(v2);

/* bottom-left corner */
tmp[ 0 ] = 0.0; tmp[ 1 ] = (GLfloat)720/(GLfloat)1024;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
tmp[ 0 ] = 0.0; tmp[ 1 ] = (GLfloat)1080/(GLfloat)2048;
glMultiTexCoord2fv( GL_TEXTURE1, tmp );
glVertex3fv(v3);

glEnd();
glPopMatrix();

Are power-of-two textures a requirement? Or is it simply that on your system you might be stuck with power-of-two textures? You should give these details so that people can help you best.

If you have OpenGL 2.0 or above, consider non-power-of-two textures; they have been core since 2.0.

If you don’t need any special filtering when applying the textures, consider texture rectangles (GL_TEXTURE_RECTANGLE). You may have support for them even without OpenGL 3.x; check your extensions. Texture rectangles could be nice here, as their coordinates work in image pixels instead of being normalized between 0 and 1.
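For example, with texture rectangles the coordinates for your two sub-images are simply their pixel sizes. A minimal sketch, assuming GL_ARB_texture_rectangle (or OpenGL 3.1+) is available:

/* allocate at the exact video size; no power-of-two padding needed */
glBindTexture( GL_TEXTURE_RECTANGLE, tex0 );
glTexImage2D( GL_TEXTURE_RECTANGLE, 0, GL_RGBA, 1280, 720, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL );
...
/* texture coordinates are in pixels, not normalized to 0..1 */
glMultiTexCoord2f( GL_TEXTURE0, 1280.0f, 720.0f );

In the shader, such a texture is sampled through a sampler2DRect instead of a sampler2D.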

There’s nothing wrong with the scaling calculation, so presumably the problem lies elsewhere.

If you have a vertex shader, you need to copy gl_MultiTexCoord0 and gl_MultiTexCoord1 to gl_TexCoord[0] and gl_TexCoord[1] or to user-defined variables. gl_TexCoord[] will not be set automatically if a vertex shader is in use.
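For example, a minimal pass-through vertex shader would be:

void main()
{
    /* forward both sets of texture coordinates to the fragment shader */
    gl_TexCoord[ 0 ] = gl_MultiTexCoord0;
    gl_TexCoord[ 1 ] = gl_MultiTexCoord1;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}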

Hi GClements,

I am not using a vertex shader. Yes, the scaling calculation works fine in our software. The problem is that the ratios of the two textures differ once the coordinates are normalized to 0..1, so if texture0 is displayed full screen on the monitor, the content of texture1 does not line up with the content of texture0. I am trying to find a way to match the content of these two differently sized textures in the same quad. Do you have any idea?

Kevin

[QUOTE=GClements;1284308]There’s nothing wrong with the scaling calculation, so presumably the problem lies elsewhere.

If you have a vertex shader, you need to copy gl_MultiTexCoord0 and gl_MultiTexCoord1 to gl_TexCoord[0] and gl_TexCoord[1] or to user-defined variables. gl_TexCoord[] will not be set automatically if a vertex shader is in use.[/QUOTE]

Hi Silence,

No, power-of-two textures are not a requirement; we are using OpenGL 4.5. We hit this problem when blending two textures in the fragment shader. When texture1 (RGBA raw video data) is at 4K (3840x2160), everything is fine no matter what size texture0 (YUYV raw video data) is, such as 1920x1080 or 1280x720; we do the YUV-to-RGB conversion for texture0 and alpha blend it with texture1 in the shader. The problem appears when we change texture1 to 1920x1080 (texture1’s size always matches the screen’s output resolution): with texture0 at 3840x2160 or 1280x720, the YUV-to-RGB conversion no longer works and we see random color bars for texture0 on the final screen. Moving texture0’s size to a power of two solves the problem, and we also have a workaround for the non-power-of-two case: when we call glMultiTexCoord2fv, we shift tmp[ 0 ] in by one pixel first, like this (the fragment shader we use for the YUV-to-RGB conversion follows below):


...
/* shift the right edge in by one texel (the non-power-of-two workaround),
   so sampling stays strictly inside the sub-image */
tmp[ 0 ] = (GLfloat)(1280-1)/(GLfloat)1280; tmp[ 1 ] = 0.0;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
glVertex3fv(v1);

tmp[ 0 ] = (GLfloat)(1280-1)/(GLfloat)1280; tmp[ 1 ] = 1.0;
glMultiTexCoord2fv( GL_TEXTURE0, tmp );
glVertex3fv(v2);
...


uniform sampler2D texture0;
uniform float texel_width;   // 1.0 / texture0's width
uniform float texture_width; // texture0's width in texels

vec4 luma_chroma;
float luma, chroma_u, chroma_v;

void main(){
    float red, green, blue;
    float pixel_x, pixel_y;
    float x_coord;

    pixel_x = gl_TexCoord[ 0 ].x;
    pixel_y = gl_TexCoord[ 0 ].y;

    // packed YUYV data: luma arrives in the alpha channel, chroma in red
    luma_chroma = texture2D( texture0, vec2( pixel_x, pixel_y ) );

    // scale video-range luma to full range
    luma = ( luma_chroma.a - 0.0625 ) * 1.1643;

    // chroma alternates U/V between horizontally adjacent texels,
    // so fetch the missing component from the neighbouring texel
    x_coord = floor( pixel_x * texture_width );
    if( mod( x_coord, 2.0 ) == 0.0 ) {
            chroma_v = luma_chroma.r;
            chroma_u = texture2D( texture0, vec2( pixel_x + texel_width, pixel_y ) ).r;
    } else {
            chroma_u = luma_chroma.r;
            chroma_v = texture2D( texture0, vec2( pixel_x - texel_width, pixel_y ) ).r;
    }

    // BT.601 YUV -> RGB conversion
    red = luma + 1.5958 * chroma_v;
    green = luma - 0.39173 * chroma_u - 0.81290 * chroma_v;
    blue = luma + 2.017 * chroma_u;

    gl_FragColor = vec4( red, green, blue, 1.0 );
}
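For completeness, we set the two uniforms from the application side along these lines (a sketch; the program handle name is made up, and the values are for the power-of-two case where texture0 is allocated 2048 texels wide):

GLint loc_texel = glGetUniformLocation( program, "texel_width" );
GLint loc_width = glGetUniformLocation( program, "texture_width" );
glUseProgram( program );
glUniform1f( loc_texel, 1.0f / 2048.0f ); /* one texel step in s */
glUniform1f( loc_width, 2048.0f );        /* width used to recover the pixel column */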

[QUOTE=Silence;1284307]Are power-of-two textures a requirement? Or is it simply that on your system you might be stuck with power-of-two textures? You should give these details so that people can help you best.

If you have OpenGL 2.0 or above, consider non-power-of-two textures; they have been core since 2.0.

If you don’t need any special filtering when applying the textures, consider texture rectangles (GL_TEXTURE_RECTANGLE). You may have support for them even without OpenGL 3.x; check your extensions. Texture rectangles could be nice here, as their coordinates work in image pixels instead of being normalized between 0 and 1.[/QUOTE]

My guess is that if you don’t want bleeding, you should use the same size for both textures. If they don’t have the same size, a pixel from one texture will not map to exactly one pixel of the other; it may correspond to several of them, and some averaging will be done over that region when the textures are sampled. So in your fragment shader you can end up with a mixture of pixels where you expected exactly one. In the fragment shader you have textures, not images any more: the original has been scaled (up or down), and each time GL runs your fragment shader you get a fragment as input, not a pixel. That fragment can represent several pixels, or just a portion of one.
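If that averaging is the cause, one quick test (assuming your textures currently use linear filtering; tex0 here stands for whatever handle you use for texture0) is to force nearest filtering so each fetch returns exactly one texel:

glBindTexture( GL_TEXTURE_2D, tex0 );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );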

Hi Silence,

You are right, there was something wrong when the fragment shader converted from YUV to RGB, and using a power-of-two texture is a workaround for it. So right now my solution is to scale up texture0’s texture coordinates when passing them in the main code, and scale down texture1’s coordinates to match texture0 in the fragment shader. That seems to work fine as a temporary solution. Thank you.
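In case it helps anyone, the fragment-shader side of my solution amounts to something like this sketch (the uv_ratio uniform name is made up; it holds texture1’s sub-image extents divided by texture0’s, e.g. s: (1920.0/2048.0)/(1280.0/2048.0) and t: (1080.0/2048.0)/(720.0/1024.0)):

uniform sampler2D texture1;
uniform vec2 uv_ratio; // extent ratio between the two sub-images
...
// remap texture0's coordinates into texture1's sub-image
vec4 overlay = texture2D( texture1, gl_TexCoord[ 0 ].st * uv_ratio );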

Kevin