Edge texture filtering

Hi All,

I’ve recently encountered a very peculiar problem. Edges of the triangles where the texture changes (i.e. where the level of the texture array changes) show noticeable artifacts. The artifacts are probably due to texture filtering. I have to emphasize that the texels on those edges are far from the border of the texture (level), and that I also blend two adjacent levels. With or without blending, the artifacts are the same. What could be the solution to this problem?

The images at the following links illustrate the problem (the same zoomed scene, filled and in wireframe).
Artifacts
Wireframe

Thank you in advance for any suggestion!

Aleksandar

Are you sending the exact same vertex coordinates down for that edge in both triangles? Are those stray pixels being filled? I would clear the screen to 100% blue beforehand and then render the scene. Do you see blue holes? Try another clear color – same result?

Also, are you rasterizing with MSAA?

Are your texture coordinates for that edge the same in both triangles? Do your texture coordinates have continuous derivatives at the edges of triangles? Are you using MIPmap filtering and/or anisotropic texture filtering? What happens if you revert to non-MIPmap filtering (NEAREST or LINEAR)?

Thank you, Dark! Your questions helped me locate the problem, but it cannot be eliminated easily.
Well, let me first answer all your questions. Maybe the answers will reveal a deeper cause of the problem.

[ul]
[li]The edges in question are boundary edges of two adjacent triangles of the same triangle strip, so the coordinates should be the same. In fact, the problem repeats on the same triangle strip whenever the texture level (of the same texture array) changes.[/li]
[li]Those pixels are filled (there are no holes, and the color is somehow related to the texture level used).[/li]
[li]Yes, I’m using multisampling.[/li]
[li]The texture coordinates are not the same on both sides of the edge (different levels).[/li]
[li]Mipmapping is not used (not applicable in this case). GL_LINEAR is used for the MIN/MAG filtering, and GL_REPEAT for wrap in both directions.[/li]
[li]Anisotropic filtering IS used, and THAT is causing the problem![/li]
[/ul]

If anisotropic filtering is disabled, the problem disappears. Unfortunately, I cannot permanently disable it: this is a terrain rendering application, and the visual impact would be unacceptable.

Do you have any suggestions on how to solve it?

I guess I’m not understanding something. You say MIPmapping is not used, but then you say that the texture “level” varies across the edge. This doesn’t make sense to me: levels are MIPmap levels. If the level varied, I could easily see how you might get filtering discontinuities.

In any case, what about the X and Y texture derivatives on either side? First, are they always valid? And are they always the same? Is the content of the texture level(s) they look up into the same?

Where you sometimes run into problems is when the texture coordinates aren’t defined outside of the triangle: you end up with invalid (huge) texture derivatives for pixels (or samples) on the edge, which causes MIPmap filtering and/or anisotropic texture filtering to try to sample and filter over a large area of your texture. This can generate seam artifacts like these.

OK, I’m sorry I was not clear. I’ll try to do better now.

When I say “level” I mean “texture array layer”. I’m sorry for misusing the word level (in my context those are levels of detail, hence the term).

I need a bit of coding to confirm the values of the derivatives, but they are probably not the same, since those are different textures (different layers of the same texture array).

The texture coordinates are defined outside of the triangle (GL_REPEAT is used, and I always have a “buffer” of at least 10% of the texture size that is not used in the drawing process). The artifacts stay even if less than half of the layer is used for drawing.

Here are some other examples of the artifacts:

  • With anisotropic filtering (img1)
  • With anisotropic filtering - artifacts marked (img1a)
  • Without anisotropic filtering (img2)
  • With anisotropic filtering - another example (img4)

I assume this is some kind of virtual texturing?
It very much looks like you’re using the wrong derivatives for fetching the texels. Mind you, the anisotropic filter uses the length of the derivatives to determine its footprint.
I guess the edges we’re seeing are discontinuities in the texture coordinates (jumping from one tile to another?), which cause a sudden increase in the gradient length and therefore different filtering than for the surrounding texels. Also, you should provide a large enough border around your tiles to prevent sampling outside the tile when the filter gets close to the tile border.
More on VT stuff: https://mollyrocket.com/forums/viewforum.php?f=21&sid=fa694cbe619a365b6359fc0160cb180b

Thank you for the reply, skynet!

What I’m trying is some kind of clipmapping. Instead of using derivatives, I thought it was easier to use a simple distance (in texture coordinate space) for choosing the right layer. This is a fragment of my FS code (without blending, additional coloring, access to the static texture, etc.):


float mL = max(abs(ex_TexCoord.x), abs(ex_TexCoord.y)); // ex_TexCoord – texture coord in physical space
int layer;
float tSize;             // size of a layer
float Dopt = Dmin * 0.9; // Dmin is the minimal size of a layer in physical coordinates
if (mL < (Dopt / 2.0))
{
    layer = MaxLevel;
    tSize = Dmin;
}
else
{
    float fLayer = float(MaxLevel) + 1.0 + log2(mL / Dopt);
    layer = int(ceil(fLayer));
    tSize = Dmin * exp2(float(layer - MaxLevel));
}
//...
if (layer < (levelCount - 1))
{
    vec3 texCoor = vec3(startTex[layer].x + 0.5 + (ex_TexCoord.x + centTexOffset[layer].x) / tSize,
                        startTex[layer].y + 0.5 + (ex_TexCoord.y + centTexOffset[layer].y) / tSize,
                        layer);
    //...
}

At first glance this is not classical virtual texturing, but I have to take a deeper look at the material you’ve recommended. Thank you very much for the link!

I’d be grateful for any comment on the code. startTex[] and centTexOffset[] are offsets within a layer, accounting for the toroidal update and the texture–mesh mismatch.

Could be that the texture coordinates are not continuous when you switch from one layer to another. When actually sampling the texture, try using explicit gradients (textureGrad()), based on the unmodified ex_TexCoord. You may need to scale the computed gradient to match the ‘real’ size each layer corresponds to.

textureGrad(clipmap, texCoor, dFdx(ex_TexCoord) * s, dFdy(ex_TexCoord) * s)

where ‘s’ is some scale factor, probably something like 1.0 / 2^layer (exp2(float(layer)) in GLSL)
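A rough sanity check of that scale factor, derived from the texCoor formula quoted above (a Python sketch under the code’s own definitions; the Dmin, MaxLevel and gradient values are made up for illustration). Since ex_TexCoord is continuous across layer switches, its screen-space gradient is well defined, and the layer-space gradient is simply that gradient divided by tSize:

```python
# Rough check of the gradient scale, derived from the texCoor formula:
#   texCoor.xy = startTex + 0.5 + (ex_TexCoord + centTexOffset) / tSize
# ex_TexCoord is continuous across layer switches, so its screen-space
# gradient g is well defined; the layer-space gradient is simply g / tSize.
# All concrete values below are made up for illustration.

Dmin = 1.0          # assumed minimal layer size (physical units)
MaxLevel = 8        # assumed

def tSize(layer: int) -> float:
    # Mirrors the shader: tSize = Dmin * exp2(layer - MaxLevel)
    return Dmin * 2.0 ** (layer - MaxLevel)

g = 0.004           # some continuous screen-space gradient of ex_TexCoord

# Per-layer gradients differ exactly by the ratio of layer scales:
g9  = g / tSize(9)
g10 = g / tSize(10)
print(g9 / g10)     # 2.0 – a factor of 2 between adjacent layers
```

So the scale ‘s’ passed to textureGrad() would just be 1/tSize for the chosen layer, which is of the form 1/2^layer up to the constant Dmin.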

I note that both layer and tSize are assigned within non-uniform control flow, and derivatives are undefined in this situation. IIUC, enabling anisotropic filtering can cause derivatives to be used even when mipmapping is disabled.

Multisampling presents another complication, as it can cause interpolation to occur somewhere other than at the fragment’s centroid. Without multisampling, the fragment shader won’t be run if the centroid is outside the primitive; with multisampling, the sample location is forced to lie inside the primitive. If the fragment shader is executed per sample, the accuracy of derivatives may be impaired.

Thank you again, skynet! And also thanks to Dark Photon (for the first assumption about derivative discontinuity)!

The assumptions were correct. The problem lies in wrong derivatives!

In this picture, coloring is done according to dFdy(ex_TexCoord). A discontinuity is easily seen.
Let’s elaborate on the problem a bit more. The main problem is in the way graphics cards process fragments.
Usually, fragments are processed in small groups (2x2 blocks or so). This way, the GPU easily gets “approximated derivatives” by subtracting the values of adjacent fragments. On the edges between different texture layers (in my example), the texture coordinates are totally different. That causes extreme derivatives when fragments from the same block belong to different textures. Artifacts are inevitable.
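To illustrate, here is a simplified Python sketch of the per-quad finite differencing (the coordinate values are made up; a real GPU does this in hardware per 2x2 quad):

```python
# Sketch (made-up coordinates): how per-quad finite differences blow up
# when fragments of one 2x2 block sample different texture-array layers.

def dfdx(left: float, right: float) -> float:
    # The GPU approximates dFdx by subtracting the values of horizontally
    # adjacent fragments within a 2x2 quad.
    return right - left

# Same layer: neighboring fragments differ by roughly one texel step,
# so the gradient is tiny.
small = dfdx(0.500, 0.502)

# Layer boundary: the right fragment uses the next layer's
# parameterization, so the coordinate jumps and the "derivative" is
# extreme and meaningless. The anisotropic filter then averages over a
# huge footprint: seam artifacts.
huge = dfdx(0.998, 0.251)

print(abs(small), abs(huge))
```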

Using the textureGrad() GLSL function to access the texture with explicit partial derivatives really helps. Thank you again, skynet! But the effect of the anisotropic filtering is spoiled. Take a look at the following pictures.

Obviously, anisotropic filtering without textureGrad() fetching achieves the best results. Calculating derivatives from ex_TexCoord gives worse results (though still better than without anisotropic filtering); aliasing effects are quite apparent in this case. Multiplying the derivatives by the layer ID (a higher number means a wider spatial extent) gives much better results: textures are much smoother and aliasing appears only at much smaller viewing angles, but the scene looks more like mipmapping than anisotropic filtering (too blurry). In any case, standard anisotropic filtering with a factor of only 2 gives much better results than anything I’ve tried so far with textureGrad(). The partial-derivative dependency is probably more complex than we assumed at the start. I’ll try playing with different coefficients, but any further suggestion would be more than welcome. :slight_smile:

Also be aware that the layer selection (which would be the mipmap selection for a real texture), the way you do it, is only suited for the bilinear filter. If you want ‘real’ anisotropic filtering, you have to use the right math as well. Consult the EXT_texture_filter_anisotropic spec for that.

You have noticed correctly; my filtering is confined to a single layer. But I have a reason for that. In this application, the layers do represent the same spatial area, but the imagery can be totally different, ranging from satellite imagery to aerial photos, from near infrared (false coloring) to true color. There are gaps in some layers. Also, they were made at different dates/times, so some features do not exist in some layers, or are differently shaped. Using a trilinear/anisotropic method to choose texels from different layers for the same spatial area makes “islands” of “false texturing”.

Nevertheless, some kind of spatial coherency is achieved by blending. What I mean will (probably) be clearer after seeing the following images: EC-1, EC-2, EC-3, EC-4, EC-5, EC-6, EC-7 and EC-8.

For example, take a look at image EC-6. The airport runway is from the year 2000. The texture is a near-infrared satellite image that I’ve recolored. The runway is shorter and has bomb craters. There is also some ghosting (a ghost runway parallel to the real one). On the aerial photo, the runway is repaired, longer, and has wider facilities around it. Blending according to spatial distance from the viewer is the only solution that gives a more or less acceptable transition (figs. EC-7 and EC-8).

I did, and I made a short document for myself about anisotropic filtering. :smiley:
There is one question left. Maybe this is the right place to discuss it.

In the spec, Px denotes the distance in texture space along the screen x-direction, and Py the distance in texture space along the screen y-direction. The anisotropy factor is calculated as:

AF = N = min( ceil( max(Px,Py) / min(Px,Py) ), maxAniso)

and Pmin is (effectively) used for the layer selection (hence, a more detailed layer is used for the texel fetches).

On the other hand, in an NV example, the anisotropy factor is calculated as:

AF = max(Px,Py) * max(Px,Py) / det

where det is the determinant ( det = abs(dx.x * dy.y - dx.y * dy.x) ) of the partial derivatives, i.e. the area of the parallelogram they span. There are also differences in the layer selection.
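To make the comparison concrete, here is a Python sketch of both formulas (the derivative vectors dx, dy are made-up example values). The two agree for an axis-aligned footprint but diverge when the footprint is sheared:

```python
import math

def af_ext(dx, dy, maxAniso=16):
    # EXT_texture_filter_anisotropic: N = min(ceil(Pmax/Pmin), maxAniso),
    # with Px, Py the footprint extents along screen x and y.
    Px = math.hypot(dx[0], dx[1])
    Py = math.hypot(dy[0], dy[1])
    Pmax, Pmin = max(Px, Py), min(Px, Py)
    return min(math.ceil(Pmax / Pmin), maxAniso)

def af_nv(dx, dy):
    # NV sample: AF = Pmax^2 / |det|, where det is the (absolute)
    # determinant of the two derivative vectors.
    Px = math.hypot(dx[0], dx[1])
    Py = math.hypot(dy[0], dy[1])
    Pmax = max(Px, Py)
    det = abs(dx[0] * dy[1] - dx[1] * dy[0])
    return Pmax * Pmax / det

# Axis-aligned footprint, 4:1 anisotropy: both give 4.
print(af_ext((4.0, 0.0), (0.0, 1.0)), af_nv((4.0, 0.0), (0.0, 1.0)))  # 4 4.0

# Sheared footprint: the determinant-based form accounts for the skew.
print(af_ext((4.0, 0.0), (2.0, 1.0)), af_nv((4.0, 0.0), (2.0, 1.0)))  # 2 4.0
```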

Both approaches come from NV, but the second is about 8 years “newer”. Is there any official definition of the anisotropy factor, or can it be loosely interpreted?

P.S. The EXT_texture_filter_anisotropic spec contains an incorrect pronunciation of the Greek letter lambda. Just a remark for younger readers. :wink:

I don’t think there’s a definitive spec on how anisotropic filtering has to be performed. EXT_texture_filter_anisotropic was the only document I was aware of. Where did you find yours?
Do the Direct3D specs say anything about it? The way I interpret the EXT_tfa math is: choose the level which spaces the maxAniso taps about 1 texel apart along the longest extent of the texel’s footprint in texture space.

Yes, there is no public documentation of the HW implementation of anisotropic filtering, or at least I haven’t found any.
My conclusions about how it might be implemented come from sample code. Take a look at the NV Clipmaps sample (more precisely, at Clipmaps.fx, function PS_Anisotropic). Although it is D3D10 code, the shaders can easily be translated to GLSL. In the accompanying document (PDF) the calculation is somewhat different, but the code is probably the better source.

Generally that is a correct interpretation, but it doesn’t have to be precise to 1 texel; it depends on the extent of the anisotropy and on maxAniso.
According to EXT_texture_filter_anisotropic, anisotropic filtering takes N samples ( N = min(ceil(Pmax/Pmin), maxAniso) ) along the greater of the two screen-space directions (not in texture space). The U and V coordinates of the samples are calculated from the screen-space derivatives. Correct me if I’m wrong.
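My reading of the spec math, as a Python sketch (the footprint extents are made-up example values, and the spec leaves the exact tap placement implementation-dependent):

```python
import math

def ext_aniso_params(Px: float, Py: float, maxAniso: int):
    # Per my reading of EXT_texture_filter_anisotropic: take N probes
    # along the longer screen-space extent of the footprint, and select
    # the LOD from Pmax reduced by N (lambda' = log2(Pmax / N)).
    Pmax, Pmin = max(Px, Py), min(Px, Py)
    N = min(math.ceil(Pmax / Pmin), maxAniso)
    lam = math.log2(Pmax / N)   # LOD used for each of the N probes
    return N, lam

# Example: a footprint 8 texels long and 1 texel wide, maxAniso = 16:
N, lam = ext_aniso_params(8.0, 1.0, 16)
print(N, lam)   # 8 probes, each sampled at LOD 0.0

# With maxAniso clamped to 4, fewer probes are taken and the LOD rises
# to compensate (each probe covers a coarser level):
print(ext_aniso_params(8.0, 1.0, 4))   # (4, 1.0)
```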

Since the selected LOD (lambda) ends up as an integer level, both approaches probably choose the same level for the sampling; but if the math in the NV code example is a better approximation (the determinant is actually the area of the anisotropic footprint), the trilinear filtering could give better output.