Texel location

Hi all,

This might seem like an odd request, and I also have concerns about whether it can be reliably implemented.
Here goes: I want to derive the “texel-relative location” from a texture coordinate, by which I mean how far
across the nearest texel a given texture coordinate lies.

Naively one could try this:


texel_location = fract(vec2(textureSize(my_sampler, 0)) * normalized_texture_coordinate);

to get values in the range [0,1). However, the above is horribly numerically unstable
near the boundaries of the texels.

Replacing the fract() above with a texture lookup to remove the integer part:


/*
   pos_sampler is a texture with all filtering set to GL_NEAREST,
   whose image data at texel p stores the value p
*/
texel_location = vec2(textureSize(my_sampler, 0)) * normalized_texture_coordinate
               - texture(pos_sampler, normalized_texture_coordinate).xy;

However, this is moderately unstable too: when one scales, one can see “shimmers” (though far fewer than with fract()).
The ugly cases are naturally near 0 and 1. Indeed, if the non-normalized value is near an integer, a tiny difference can dramatically change the result of either approach.

The goal here is to get an interpolation value consistent with the texel from which the GL implementation samples. It is fine if different implementations choose different texels in the boundary case, but I want a way to get that interpolate reliably across GL implementations.

I just wanted to add a quick note: if the “position texture” has the exact same resolution, then there are no issues (on desktop) [and on embedded typically no issues up to a resolution of 1024, with some adjustments]. But nevertheless this is silly: requiring an additional texture lookup just to recover the interpolate value that filtering already uses.

A Question, kRogue: How are you establishing that this is numerically unstable near the boundaries of the texels?

One thought I had: are you using texel_location (or a value derived from it) to look up into a texture? You may know this already, but in areas where the texture coordinates are discontinuous (e.g. that fract() around the edges of the texels), the default texture function sees a big change in the derivatives and thus flips way up and selects really coarse MIP levels. The workaround is to use textureGrad and feed in smooth gradients for the 3rd and 4th arguments so the MIP level selection doesn’t go ballistic.

I guess my choice of the phrase “texel location” was really poor. This is what I am after:

Let’s say you have a texture whose magnification filter is GL_LINEAR, and in a fragment shader
one does:


value = textureLod(filteredTexture, tex_coord, 0.0);

Internally, the GL implementation produces, for each dimension, three numbers:

  - the coordinate “before” tex_coord, call it A
  - the coordinate “after” tex_coord, call it B
  - the interpolate, call it t

so for one-dimensional textures, value is (1-t)A + tB.
The value of t is, naturally, discontinuous, since when tex_coord is
on the edge of a texel, t jumps between 0.0 and 1.0.

I want that t, in a way that is consistent with whatever the GL implementation
would make it.

  1. Doing fract(texture_size * tex_coord) is flaky.
  2. Using another texture of the same dimensions, filtered with GL_NEAREST,
     whose image data at texel p is the value p (normalized, etc.), and subtracting
     that lookup from texture_size * tex_coord, seems usually reliable.

Doing (2), though, is absolutely silly: another texture lookup to get a value that
the GL implementation might already have on hand.

My use case is that I want the “coordinates” of the fragment within
the texel to do some interesting shading, where the texture I look up
from is set to GL_NEAREST and represents data localized to
the texel.

Are you trying to implement bilinear filtering in a shader?

No, I am not implementing a filtering algorithm at all.

Hi, sorry to disturb this topic again, but I was wondering if you found a solution? I’m trying to do a very similar thing.

I’m blending two textures together, but my blending function is kind of awkward. In particular, if the aforementioned “t” is within the [0.25, 0.75] interval, I use the first texture fully, but when it’s outside of that interval, I do a linear blend from the first texture to the second. It’s something along the lines of the following:


4*t*first_texture_sample + 4*(0.25 - t)*second_texture_sample,          if t in [0.00, 0.25]
first_texture_sample,                                                   if t in [0.25, 0.75]
4*(1.0 - t)*first_texture_sample + 4*(t - 0.75)*second_texture_sample,  if t in [0.75, 1.00]

The problem is finding t in an efficient and stable way…

Thanks in advance