Perfect screen-aligned quad and GL_NEAREST

Hi,

I am trying to improve my YUV->RGB conversion … Currently I just draw a quad (scaled, rotated, …) in my scene, with a texture set to GL_NEAREST and GL_CLAMP_TO_EDGE and a shader that does the YUV->RGB conversion and the bilinear filtering,
but the result doesn't seem quite right.

It's perhaps better to render a screen-aligned quad into an FBO with GL_NEAREST and a basic YUV->RGB shader, then render using the FBO's texture in RGB space.

What is the exact, hardware-independent definition of a screen-aligned quad? Do I need to use GL_CLAMP_TO_EDGE (it seems that the texture coordinates don't run exactly from 0.0 to 1.0)?

Assuming you're using a reasonably recent version of GL, just use texelFetch and forget about the sampling and filtering details. Pass in the integer coordinates of your current fragment and be done with it:


vec4 texel = texelFetch( tex, ivec2( gl_FragCoord.xy ), 0 );

If this is an older OpenGL (pre-texelFetch), then you can still do it just fine with GL_NEAREST and a 0…1 quad (CLAMP_TO_EDGE is fine there, but not needed). Just remember that for texcoords, 0 and 1 are the coordinates of “texel edges”; lookups should use the texcoords of texel “centers” (cell-centered data). So, for instance, if your texture is NxN and you want the texcoord of the center of the Ith texel (I in 0…N-1), it is (I + 0.5)/N.

Another option is to use textureGather() to obtain one component from each of the 4 texels surrounding the specified texture coordinates, convert the data to RGB, then interpolate. E.g. (untested):


vec3 textureYUV(sampler2D sampler, vec2 p)
{
    // Fetch one component of each of the four texels surrounding p.
    vec4 y = textureGather(sampler, p, 0);
    vec4 u = textureGather(sampler, p, 1);
    vec4 v = textureGather(sampler, p, 2);
    // Convert all four texels to RGB at once (BT.601 coefficients).
    vec4 r = y + 1.14 * v;
    vec4 g = y - 0.39 * u - 0.58 * v;
    vec4 b = y + 2.03 * u;
    // textureGather returns the four texels in the order
    // (i0,j1), (i1,j1), (i1,j0), (i0,j0).
    vec3 c10 = vec3(r[0], g[0], b[0]);
    vec3 c11 = vec3(r[1], g[1], b[1]);
    vec3 c01 = vec3(r[2], g[2], b[2]);
    vec3 c00 = vec3(r[3], g[3], b[3]);
    // Bilinear weights are the fractional position of p within the
    // texel grid, not the raw texture coordinates.
    vec2 f = fract(p * vec2(textureSize(sampler, 0)) - 0.5);
    vec3 c0 = mix(c00, c01, f.s);
    vec3 c1 = mix(c10, c11, f.s);
    return mix(c0, c1, f.t);
}

If I understand correctly: texelFetch bypasses any filtering and works in non-normalized texture coordinates? In that case, I need to render an aligned quad without bilinear filtering?
So I need to do the following?
create an FBO the size of the yuv420 texture
glViewport (texture size)
glOrtho(0, widthTexture, 0, heightTexture, -1, 1)
draw a quad (the size of the texture)
and use texelFetch with a basic yuv->rgb shader and no bilinear filtering

Is it correct ?

For what you want, texelFetches are perfect. Otherwise use Dark Photon's approach. I can't speak to GClements' suggestions, however, without more background on the conversion you're doing.

texelFetch() doesn’t perform any filtering, interpolation or mipmap selection. You pass in integer texture coordinates and a mipmap level, and it returns the requested texel.

How you use that is up to you. One option is to render a same-size, aligned rectangle to produce a 1:1 copy, but in RGB rather than YUV. Another option is to perform the conversion as you render the final polygons, implementing the texture filtering logic in the shader. The latter option requires more processing but less memory bandwidth.

… except in the case of buffer textures, rectangle textures, or multisample samplers. These have no mipmaps.