Hello,
I’m trying to implement bilinear texture filtering as it’s done with GL_LINEAR. I fetch the needed texels with textureGather, compute the blend factors from the texture coordinate, and blend the texels according to the formulas in the GL spec (at least, that was my intention).
The problem is that I get strange artifacts for blend values near 1.0. Playing around with this, I found out that I have to shift the texture coordinates by a small offset to exactly mimic the hardware filtering of my GeForce GTX 580. However, I can’t figure out why this magic offset of 1/512 has to be used (it is the same for all power-of-two texture sizes).
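To show concretely what that shift does, here is the weight computation in isolation (a throwaway CPU-side Python sketch, not my shader; `blend_weight` is just a name I made up for this post). Near a texel boundary, the tiny offset decides whether the fractional weight lands just below 1.0 or wraps around to just above 0.0:

```python
import math

def blend_weight(coord, size, offset=0.0):
    """Fractional bilinear weight for one axis, mirroring
    fract(coord*size - 0.5 + offset) from the shader."""
    u = coord * size - 0.5 + offset
    return u - math.floor(u)  # fract()

# A coordinate just below the boundary between two texel pairs (size 256):
coord = 64.4999 / 256.0
print(blend_weight(coord, 256.0))               # weight just below 1.0
print(blend_weight(coord, 256.0, 1.0 / 512.0))  # weight wraps to near 0.0
```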
This is my function:
vec4 textureBilinear( in sampler2D tex, in vec2 coord, in float useOffset )
{
// get texture size in pixels:
vec2 colorTextureSize = vec2( textureSize(tex, 0) );
// gather from all surrounding texels:
vec4 red = textureGather( tex, coord, 0 );
vec4 green = textureGather( tex, coord, 1 );
vec4 blue = textureGather( tex, coord, 2 );
vec4 alpha = textureGather( tex, coord, 3 );
// mix the colours:
vec4 c01 = vec4( red.x, green.x, blue.x, alpha.x );
vec4 c11 = vec4( red.y, green.y, blue.y, alpha.y );
vec4 c10 = vec4( red.z, green.z, blue.z, alpha.z );
vec4 c00 = vec4( red.w, green.w, blue.w, alpha.w );
// calculate the sub-texel blend weights:
float strangeOffset = useOffset * 1.0/512.0; // = 0.00195313;
vec2 filterWeight = fract( coord*colorTextureSize - 0.5 + strangeOffset );
// bi-linear mixing:
vec4 temp0 = mix( c01, c11, filterWeight.x );
vec4 temp1 = mix( c00, c10, filterWeight.x );
return mix( temp1, temp0, filterWeight.y );
}
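For anyone who wants to play with the math outside the shader, the same spec-style blend can be sketched CPU-side (hypothetical Python, single channel, plain 2-D list with clamp-to-edge lookups standing in for textureGather). Note one difference: here the offset shifts texel selection and weights together, whereas in the shader textureGather picks its 2x2 quad from the unshifted coordinate:

```python
import math

def fract(x):
    return x - math.floor(x)

def bilinear(tex, u, v, offset=0.0):
    """Spec-style bilinear filter over a 2-D list `tex` (single channel)."""
    h, w = len(tex), len(tex[0])
    # Texel-space sample position; the -0.5 puts integer coordinates
    # on texel centers, matching the shader.
    x = u * w - 0.5 + offset
    y = v * h - 0.5 + offset
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = fract(x), fract(y)
    def texel(i, j):  # clamp-to-edge addressing
        return tex[min(max(j, 0), h - 1)][min(max(i, 0), w - 1)]
    c00 = texel(x0,     y0)
    c10 = texel(x0 + 1, y0)
    c01 = texel(x0,     y0 + 1)
    c11 = texel(x0 + 1, y0 + 1)
    temp0 = c01 + (c11 - c01) * fx  # mix(c01, c11, fx)
    temp1 = c00 + (c10 - c00) * fx  # mix(c00, c10, fx)
    return temp1 + (temp0 - temp1) * fy
```

Sampling exactly at a texel center returns that texel, and sampling at the midpoint of a 2x2 quad returns the average of its four texels, as expected from the spec formula.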
Attached is an image: upper left — the filtering when I call textureBilinear(mySampler, coord, 0.0) (no magic offset); lower left — the same with the offset; upper right — the difference between the no-offset version and the hardware texture lookup (values amplified to make the artifacts visible); lower right — the same difference with the offset.
Everything looks fine with the offset, but I can’t figure out why I need it (did I overlook some part of the spec?). I can’t just hack a magic number into my code: if it’s an NVidia quirk, I would get artifacts on proper implementations, and if the offset in fact depends on some texture property, the code would only work with my set of test textures…
In the long run I want to implement a special variant of texture filtering where this offset problem also applies, so ‘just use the hardware filter’ is sadly not an option.
Any ideas? Thanks.