Texture filter and atlas image distortion problem

Hello,

I have a question about premultiplied-alpha images, texture atlases and texture filtering (minification with a bilinear filter).

(I use the correct blending function for PMA, and all my images are in premultiplied-alpha format.)

Suppose that there are 3 different versions of a plain square image:

  1. In a separate file, by itself. No padding (padding means alpha-0 pixels in my case).
  2. In an atlas, with the subtexture’s top-left corner (0, 0) positioned at the atlas’s top-left corner (0, 0). There is padding on the right and bottom but not on the left and top.
  3. In an atlas, with padding on all 4 sides of the subtexture.

Do these 3 give the same result when the texture is minified using a bilinear filter? If not, I assume this means premultiplied alpha distorts images, since alpha-0 pixels are included in the interpolation. If so, why do we use something that distorts our images?

And even if we don’t use premultiplied alpha and use “normal” blending, alpha-0 pixels are still used in the interpolation. We add bleeding to overcome the problems this causes (I don’t know if it yields exact, true-to-pixel rendering though). So the question is: does every texture atlas cause distortion in the contained images, even with bilinear filtering (without mipmaps)? If so, why does everyone use atlases if they’re so bad?

I tried to test this with a simple scenario, but I don’t know if my test method is right.

I made a yellow square image with a red border. The entire square (border included) is 116x116 px. The entire image is 128x128.

I made two versions of this image. Both images are in premultiplied alpha format.

1st version: the square starts at (0, 0) and there are 12 pixels of padding on the bottom and right.
2nd version: the square is centered both horizontally and vertically, so there is 6 px of padding on the top, left, bottom and right.

I scaled them to 32x32 (scaling the entire image, without removing the padding) using a bilinear filter. When rendered, they give very different results: one is exact while the other is blurry. I need to know whether this is caused by the problem I mentioned in the question.

Here are the images I used in this test:

Image (top-left version):

Image (centered version):

Render result:

I use x, y, width and height values for the sprite which are integers, so subpixel rendering is not intended.

Yes, provided that all 3 cases are using the same texels. If the texture coordinates form a rectangle which is aligned to texel edges, minification will never use the values from texels outside of that rectangle (but magnification can).
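For illustration, here’s a minimal sketch of computing such a texel-aligned rectangle for an atlas region (the function name and layout are just for this example):

[CODE]
#include <stdio.h>

/* Sketch: convert a sub-image's pixel rectangle inside an atlas into
   normalized texture coordinates aligned to texel edges. */
typedef struct { float u0, v0, u1, v1; } uv_rect;

static uv_rect region_uvs(int x, int y, int w, int h, int atlas_w, int atlas_h)
{
    uv_rect r;
    r.u0 = (float)x / atlas_w;
    r.v0 = (float)y / atlas_h;
    r.u1 = (float)(x + w) / atlas_w;
    r.v1 = (float)(y + h) / atlas_h;
    return r; /* minification never samples texels outside this rectangle */
}

int main(void)
{
    uv_rect r = region_uvs(0, 0, 116, 116, 128, 128);
    printf("(%f, %f) - (%f, %f)\n", r.u0, r.v0, r.u1, r.v1);
    return 0;
}
[/CODE]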

One of the reasons for using pre-multiplied alpha is that it correctly handles blending with partial-alpha texels, whereas unmultiplied alpha doesn’t (you get bleeding). Consider a 50-50 blend between (1, 0, 0, 0) (transparent red) and (0, 1, 0, 1) (opaque green). With unmultiplied alpha you get (0.5, 0.5, 0, 0.5) (translucent yellow, i.e. the red has bled through); with pre-multiplied alpha you get (0, 0.5, 0, 0.5) (translucent green).
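To see that arithmetic concretely, here’s a small standalone sketch (plain C, purely illustrative):

[CODE]
#include <stdio.h>

typedef struct { float r, g, b, a; } rgba;

/* 50-50 blend of two texels, as bilinear filtering would do. */
static rgba blend(rgba x, rgba y)
{
    rgba out = { (x.r + y.r) / 2, (x.g + y.g) / 2,
                 (x.b + y.b) / 2, (x.a + y.a) / 2 };
    return out;
}

int main(void)
{
    rgba red_straight = {1, 0, 0, 0}; /* transparent red, unmultiplied */
    rgba red_premul   = {0, 0, 0, 0}; /* transparent red, pre-multiplied */
    rgba green        = {0, 1, 0, 1}; /* opaque green (same in both formats) */

    rgba s = blend(red_straight, green);
    rgba p = blend(red_premul, green);

    printf("unmultiplied:   (%.1f, %.1f, %.1f, %.1f)\n", s.r, s.g, s.b, s.a);
    /* (0.5, 0.5, 0.0, 0.5): the red has bled through */
    printf("pre-multiplied: (%.1f, %.1f, %.1f, %.1f)\n", p.r, p.g, p.b, p.a);
    /* (0.0, 0.5, 0.0, 0.5): translucent green, no bleeding */
    return 0;
}
[/CODE]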

Whether an image is its own texture or a region of a texture atlas doesn’t matter unless you sample beyond the bounds of the image. This can be an issue for magnification, as sampling near the edge will interpolate between edge pixels and the border (or the opposite edge if the wrap mode is repeat); with an atlas, it can cause bleeding with adjacent regions (but only with the immediately-adjacent rows/columns of texels, so this is a non-issue if all images have a transparent border of at least one texel). It can also be an issue if you use mipmaps and forget to clamp the level of detail (so you end up sampling mipmaps whose texels span multiple images). But for the most part, atlases work fine if used correctly.

It’s probably caused by texture coordinates not being aligned to the texel grid, or by using a non-unity scale factor, or by using mipmaps. It’s entirely possible to get exact (1-to-1) reproduction when using bilinear filtering with a texture atlas.

Thanks for the answer.

How can I force the three example squares to use the same texels?

In my two-square example (the images I attached to the first post), I scale the 128x128 images to 32x32. When I think about it, if the interpolation algorithm starts at the top left, it would sample different pixels because of the different padding in those two images. What if I set the texture’s u, v coordinates to include only the 116x116 square region, not the transparent pixels, and scale the images to 29x29? Does OpenGL consider the passed u, v coordinates after minifying the entire atlas page, or does it only minify the region described by the u, v coords and not the entire image?

It happens in the example for which I posted the images and the render result. I don’t use mipmaps at all. I tried to use integer values when setting the coords. The scale factor is 1/4 in both dimensions.

By offsetting the texture coordinates. In the case where the images are offset by 6 pixels within a 128x128 texture, the texture coordinates should be offset by 6.0/128.

[QUOTE=hellgasm;1291482]
In my two-square example (the images I attached to the first post), I scale the 128x128 images to 32x32. When I think about it, if the interpolation algorithm starts at the top left, it would sample different pixels because of the different padding in those two images. What if I set the texture’s u, v coordinates to include only the 116x116 square region, not the transparent pixels, and scale the images to 29x29?[/QUOTE]

It’s not strictly necessary to change the size of the target rectangle. You can just offset the texture coordinate for all 4 corners by the same amount, so the bottom and left edges have texture coordinates of 6/128 and the top and right edges have texture coordinates of 1+6/128. You’ll need to set the appropriate wrap modes (GL_REPEAT is simplest). But that won’t work for an atlas.
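For the standalone-texture case, that might look something like this (a sketch; the function name and layout are mine):

[CODE]
#include <GL/gl.h>

/* Sketch: shift every texture coordinate by the 6-texel padding and let
   GL_REPEAT wrap the coordinates that exceed 1. Assumes a bound 128x128
   texture with the image offset by 6 pixels. */
static void offset_quad_uvs(float uv[4][2])
{
    const float offset = 6.0f / 128.0f;

    /* corners (0,0), (1,0), (1,1), (0,1), each shifted by the offset */
    uv[0][0] = 0.0f + offset; uv[0][1] = 0.0f + offset;
    uv[1][0] = 1.0f + offset; uv[1][1] = 0.0f + offset;
    uv[2][0] = 1.0f + offset; uv[2][1] = 1.0f + offset;
    uv[3][0] = 0.0f + offset; uv[3][1] = 1.0f + offset;

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
[/CODE]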

Does OpenGL process the passed texture (u, v) coords after minifying the entire atlas page, or does it process that region separately and only minify that region, not the entire image? Should the scale factor be a multiple of this region or of the entire atlas page?

I know the basics of wrap modes, but I couldn’t understand what this has to do with repeating.

OpenGL doesn’t minify textures. Texture coordinates are calculated for each fragment, then the texture is sampled at those coordinates. The sampling process is affected by either the minification filter or magnification filter depending upon whether the spacing between sample points is less or greater than the spacing between texels in the texture’s base level. If the spacing between sample points is less than a texel, the magnification filter is used; if it’s greater than a texel, the minification filter is used.
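Roughly, the decision looks like this (a sketch of the spec’s ideal scale-factor computation, not actual driver code):

[CODE]
#include <math.h>
#include <stdio.h>

/* Sketch: per-fragment scale factor rho, from the derivatives of the
   texture coordinates (in texel units) with respect to window x and y. */
static float scale_factor(float du_dx, float dv_dx, float du_dy, float dv_dy)
{
    float rho_x = sqrtf(du_dx * du_dx + dv_dx * dv_dx);
    float rho_y = sqrtf(du_dy * du_dy + dv_dy * dv_dy);
    return rho_x > rho_y ? rho_x : rho_y;
}

int main(void)
{
    /* 128 texels drawn across 32 pixels: 4 texels per pixel */
    float rho = scale_factor(4.0f, 0.0f, 0.0f, 4.0f);
    /* rho > 1 => minification filter; rho <= 1 => magnification filter.
       With mipmaps, the level of detail is log2(rho). */
    printf("rho = %g, lod = %g\n", rho, log2f(rho)); /* rho = 4, lod = 2 */
    return 0;
}
[/CODE]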

I reread your posts and tested again. I can get the same renders for subtextures in an atlas as for textures in separate image files. But I got very bad results with bilinear interpolation when scaling below 1/2, so I now want to use mipmaps (trilinear filtering). Is it possible to do the same for the MipMapLinearLinear filter? I basically want to get the same render for my atlas subtexture as for a separate image texture (both using MipMapLinearLinear).

When using mipmaps with a texture atlas, you need to set GL_TEXTURE_MAX_LEVEL or GL_TEXTURE_MAX_LOD so that texels in the lower-resolution levels don’t straddle the boundaries between individual images within the atlas.

So if the individual images are 32x32, then the maximum level or LoD should be 5. At the maximum LoD, each image will be reduced to a single texel.
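In code, that would be something along these lines (a sketch assuming the atlas texture is already created and bound; these parameters are desktop GL and aren’t available in unextended OpenGL ES 2.0):

[CODE]
#include <GL/gl.h>

/* Sketch: for an atlas of 32x32 images, stop at level 5, where each
   image is reduced to a single texel (32 / 2^5 = 1). */
static void clamp_atlas_levels(void)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 5);
    /* or clamp sampling rather than storage: */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 5.0f);
}
[/CODE]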

Thank you for your interest in this topic. Your answers are very helpful.

[QUOTE=GClements;1291539]When using mipmaps with a texture atlas, you need to set GL_TEXTURE_MAX_LEVEL or GL_TEXTURE_MAX_LOD so that texels in the lower-resolution levels don’t straddle the boundaries between individual images within the atlas.

So if the individual images are 32x32, then the maximum level or LoD should be 5. At the maximum LoD, each image will be reduced to a single texel.[/QUOTE]
I don’t think I need that, because the scale factor is 0.125 to 0.25 in my programs. For example, I have a 200x200 subtexture in a 1024x1024 atlas and minify that subtexture to 40x40 using the MipmapLinearLinear filter. As far as I know, trilinear filtering (MipmapLinearLinear) applies linear interpolation between the two nearest mipmaps (so in this example I assume they are 256x256 and 128x128). In this case neither the subtexture nor the entire atlas seems to have anything to do with subpixels (why should it use mipmaps smaller than 128x128 in this example?). But my problem persists: I can’t get the same render as the separate-file texture by using the atlas. What am I doing wrong?

(Also is there a way to use those LoD options with OpenGL ES 2.0?)

[QUOTE=hellgasm;1291541]
I don’t think I need that, because the scale factor is 0.125 to 0.25 in my programs. For example, I have a 200x200 subtexture in a 1024x1024 atlas and minify that subtexture to 40x40 using the MipmapLinearLinear filter.[/QUOTE]
So the scale factor is 1/5, which means that it will use the 1/4 and 1/8 mipmap levels. 200 is a multiple of 8, but is the alignment of the subtexture within the atlas a multiple of 8 pixels? If it isn’t, then the 1/8 mipmap level will have bleeding with the adjacent subtextures.
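A quick sketch of that alignment check (the positions used are just examples):

[CODE]
#include <stdio.h>

/* Sketch: a subtexture stays texel-aligned in a mipmap level whose texels
   span `span` base-level pixels only if its position and size are
   multiples of that span. */
static int is_aligned(int x, int y, int w, int h, int span)
{
    return x % span == 0 && y % span == 0 &&
           w % span == 0 && h % span == 0;
}

int main(void)
{
    /* 200x200 subtexture, 1/8 mipmap level => span of 8 */
    printf("%d\n", is_aligned(200, 0, 200, 200, 8)); /* 1: aligned */
    printf("%d\n", is_aligned(100, 0, 200, 200, 8)); /* 0: level texels straddle neighbours */
    return 0;
}
[/CODE]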

No. But if you’re only doing 2D, it should be easy enough to ensure that the scale factor never gets low enough for it to be an issue.

So the start coordinates, width and height of a subtexture in the atlas must all be multiples of the reciprocal of the scale factor with bilinear filtering, and also of the scale factors of the mipmaps used with trilinear. This seems impossible to achieve; is avoiding that bleeding (and subpixels) achievable and worth the effort?

(I also try to scale all graphics according to the mobile device’s resolution, which may require non-integer scale ratios.)

The borders of each subtexture must be aligned to texel boundaries for all of the mipmap levels you will actually use. So the dimensions of each subtexture need to be a multiple of the texel size of the lowest-resolution mipmap level which will be used. This tends to be trivially true if all subtextures have power-of-two sizes, but that isn’t a requirement.

Yes.

In which case, you have to choose between the jaggedness of “nearest” filtering modes and the blur of linear filtering. For icons, alpha testing may be an option, but even that doesn’t work particularly well with features not much larger than a pixel.

Lowest resolution? Say I use various scale factors between 0.5x and 0.125x, and suppose I have a 125x125 image in a 2048x2048 atlas, where the lowest-resolution mipmap used is 64x64. If I make my image 128x128, its dimensions are multiples of 64x64 and 128x128 but not of 256x256? Can you give me an example?

So I should forget about mipmapping if there are many different scale ratios, which can even be non-integer. I think Linear has problems with fractional pixels, too.

When I use MipMapLinearLinear with fractional scale ratios, my textures get clipped at the edges, probably because fully transparent (0, 0, 0, 0) pixels bleed in.

Is 125 a typo?

2048/64=32. I.e. an aligned 32x32 block in the base texture will be a single texel in the 64x64 mipmap level. So all of your subtextures must have dimensions which are multiples of 32.

You should definitely be using mipmaps if you’re planning on minifying textures. But you need to ensure that the subtextures within the atlas have suitable dimensions; you can’t just use arbitrary dimensions without taking into account the resolution of the mipmap levels.

Also: you need to ensure that the type used for texture coordinates has adequate precision. Normalised types are problematic because the denominator is 2^n - 1, not 2^n. Bytes aren’t sufficient, shorts are borderline.
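To put numbers on that, here’s a quick sketch expressing the quantisation error in texels of a 2048-wide atlas:

[CODE]
#include <stdio.h>
#include <math.h>

/* Sketch: error when storing u = 6/128 in normalised types, whose
   representable values are k / (2^n - 1). */
int main(void)
{
    double u = 6.0 / 128.0;
    double as_byte  = round(u * 255.0)   / 255.0;   /* normalised 8-bit  */
    double as_short = round(u * 65535.0) / 65535.0; /* normalised 16-bit */

    printf("byte:  off by %g texels\n", fabs(as_byte  - u) * 2048.0); /* ~0.38   */
    printf("short: off by %g texels\n", fabs(as_short - u) * 2048.0); /* ~0.0015 */
    return 0;
}
[/CODE]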

No. Why should it be? I meant 0.125x = 1/8, and the image in that example was 125x125 before modifying it for mipmaps and changing its dimensions to a power of two. If it’s 125 being an odd number you’re talking about, then let it be 124x124; I just gave an arbitrary number as an example.

Also, I can adjust subtexture sizes, that’s OK, but the texture atlas is generated by texture-packer programs, and they don’t seem to care about starting positions. I could make my own texture packer or modify an existing one, but I still wonder whether that’s a “wise” choice.

My last question is:

I have textures at the maximum resolution I support (for example 8K or 1080p) in an atlas. I never magnify and only need to minify. My application will always be fullscreen, and since I target mobile, device resolutions are non-standard. The scale factor may be different on each device, and I can’t know it when preparing the atlas. Maybe relying on OpenGL texture filters is a bad idea for this scenario after all, and I should use a better resizing algorithm at program startup?

Also:

I don’t have problems with normalised types or variable types. It’s about subpixels, I think. For some reason, when I position my textures at fractional coords or the scale factor is something like 3.2, the texture is clipped at the edges.

Well, if you want to use a 1/8-scale mipmap, the size needs to be a multiple of 8. So 124x124 won’t work either.

You can probably assume that an atlas generator isn’t going to introduce arbitrary gaps. So if all of the subtexture sizes are multiples of 8 (or whatever), then the subtextures should be aligned to multiples of 8 regardless of exactly how the code chooses to pack them.

Well, you can’t guarantee integer scale factors if you’re scaling fixed-size images to an arbitrary display resolution. Rescaling images prior to texture creation allows for more control over the rescaling algorithm. Depending upon the nature of the images, you might be better off using a vector format (e.g. SVG) and rasterising at the target resolution.

[QUOTE=hellgasm;1291554]
I don’t have problems with normalised types or variable types. It’s about subpixels, I think. For some reason, when I position my textures at fractional coords or the scale factor is something like 3.2, the texture is clipped at the edges.[/QUOTE]
I’m not sure what you’re saying. You’ll need to provide a concrete example.

I have a 128x128 yellow rectangle with black borders. When I put it at an integer coordinate like (100, 100) (pixel coords) there is no problem; it is rendered exactly as it should be. But when I put it at (100.5, 100.5) it is rendered like this:

Original rectangle image:
[ATTACH=CONFIG]1785[/ATTACH]

Rectangle at decimal position (with MipMapLinearLinear filter):
[ATTACH=CONFIG]1786[/ATTACH]

A similar thing happens when the width and height are fractional, too: the edges of the texture are clipped.

Is this because of how OpenGL renders pixels smaller than 1? Should I expect the same thing to happen when minifying with a scale factor that has a non-integer denominator?

Your reference to MipMapLinearLinear (presumably GL_LINEAR_MIPMAP_LINEAR under the hood) suggests that you are sampling your texture with trilinear filtering.

However, the fact that your 2nd image shows no pixels which are partial blends between 100% black and your yellow color (0xFFF200) seems to contradict that.

Perhaps you should show the code for how you’re creating and sampling your texture.