sub-ranged texture() in GLSL



kRogue
09-18-2012, 02:12 AM
The basic idea of this suggestion is to make texture atlasing less awful to do: create (another!) texture() function of the form:



texture(sampler2D T, vec2 p, vec4 rect)


which does the sampling restricted to rect (encoded as .xy = position within the texture and .zw = size within the texture), i.e. the texture coordinate p is relative to rect. Without filtering, it is equivalent to



texture(T, p, rect) == texture(T, rect.xy + rect.zw*p)


Filtering is where it becomes useful: texels on the boundary of rect get filtered with the texel on the opposite side for repeat, and with themselves for mirror repeat, mirror and clamp. Extending, I want this:



enum RepeatMode
{
mirror,
repeat,
clamp,
mirror_repeat
}


texture(sampler2D T, vec2 p, vec4 rect, enum RepeatMode mode);


this does fit in with mipmapping and gives what you want IF both:

the texel-size of rect is a power of 2 AND
the rect starts at a multiple of its own size



Naturally, the above can be logically extended to textureLod, textureGrad, textureProj, textureProjGrad, texture arrays, 1D textures, 3D textures, and texture rectangles. [3D textures will need the rect to be a cube.]

As a side note, it is debatable whether it is also worth adding another enumeration argument stating if the filtering is linear or nearest. [Mipmap filtering is essentially handled by LOD, though one can argue the filtering between mipmap levels also needs to be specified.]

One can naturally emulate this by doing the filtering by hand in the shader using texelFetch, but that seems like an unnecessary burden on a developer and a potential optimization lost on the hardware.
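To make that concrete, here is a minimal sketch of what the hand-rolled emulation might look like for the repeat mode at LOD 0, doing bilinear filtering with texelFetch. The function name textureSubRect and the texSize parameter are illustrative, not part of any proposed API, and this assumes the rect's origin and size land on integer texel boundaries:

```glsl
// Hypothetical emulation of texture(T, p, rect, repeat) at LOD 0,
// filtering by hand with texelFetch. 'texSize' is the size of T in
// texels; all names here are made up for illustration.
vec4 textureSubRect(sampler2D T, vec2 p, vec4 rect, vec2 texSize)
{
    vec2 rectTexels = rect.zw * texSize;      // size of rect in texels
    vec2 t = fract(p) * rectTexels - 0.5;     // texel-space coordinate
    vec2 base = floor(t);
    vec2 f = t - base;                        // bilinear weights

    // wrap each of the 4 taps back into the rect (repeat mode);
    // mirror/clamp would swap the mod() for a different wrap function
    vec2 origin = rect.xy * texSize;
    vec2 t00 = origin + mod(base,              rectTexels);
    vec2 t10 = origin + mod(base + vec2(1, 0), rectTexels);
    vec2 t01 = origin + mod(base + vec2(0, 1), rectTexels);
    vec2 t11 = origin + mod(base + vec2(1, 1), rectTexels);

    vec4 c00 = texelFetch(T, ivec2(t00), 0);
    vec4 c10 = texelFetch(T, ivec2(t10), 0);
    vec4 c01 = texelFetch(T, ivec2(t01), 0);
    vec4 c11 = texelFetch(T, ivec2(t11), 0);

    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
```

Four texelFetch calls plus the wrap arithmetic per sample, versus one filtered fetch if the hardware did it: that is the burden and the lost optimization in question.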

thokra
09-18-2012, 03:43 AM
Kind of cool. Except for the restriction on POT rects. This partly cripples the only real advantage texture atlases still have over texture arrays, i.e. textures of arbitrary sizes packed into a single NPOT texture. Am I missing something?

kRogue
09-18-2012, 06:56 AM
The thing would still just work even if the rect is not a power of 2 in size, but things get ugly when higher LODs are accessed. Take the following example:

Say the real texture is 4 texels long, and the contents of the entire texture are:

| A | B | C | D |

then a typical box filter (in 1D) would give this:

LOD1: | (A+B)/2 | (C+D)/2 |
LOD2: | (A+B+C+D)/4 |

Now let's say the sub-rectangle is of length 2 and starts at texel 1 instead of texel 0; then we would want:

LOD0: | B | C |
LOD1: | (B+C)/2 |

For LOD1 that texel value does not even exist. Going further, what happens if we feed a texture coordinate on the boundary of B and C under GL_LINEAR_MIPMAP_NEAREST for minification and GL_LINEAR for magnification:

textureLod(, 0) --> (B+C)/2 which is what we want
textureLod(, 1) --> (A+B+C+D)/4, the average of (A+B)/2 and (C+D)/2

i.e. the mips leak. If we do not have mipmap filtering, then no worries; and if the sub-rectangle is a power of 2 in size and "starts" at a multiple of its size, then the mipmaps won't leak either.

I am not advocating that the rect argument of the new function needs the power-of-2 jazz, just saying that otherwise mipmaps will bleed in from outside the rectangle.

Along these lines, I guess another function worth considering is one where texture() also gets passed the maximum LOD allowed to be fetched, so that, for example, if a sub-rect is say 13 texels long and starts at say texel 3, the mip does not leak until LOD=2....
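A sketch of how that max-LOD idea could be approximated from the shader side today, assuming the application precomputes the safe limit per sub-rect and passes it in. textureSubRectLod and maxSafeLod are made-up names, and textureQueryLod needs GLSL 4.00+ in a fragment shader:

```glsl
// Hypothetical: clamp the LOD so filtering never reaches mip levels
// where the sub-rect's texels get averaged with texels outside it.
// 'maxSafeLod' would be computed on the CPU from the rect's size and
// starting offset.
vec4 textureSubRectLod(sampler2D T, vec2 p, vec4 rect, float maxSafeLod)
{
    vec2 coord = rect.xy + rect.zw * fract(p);  // repeat within the rect
    float lod  = textureQueryLod(T, coord).y;   // LOD the hardware would pick
    return textureLod(T, coord, min(lod, maxSafeLod));
}
```

This only limits how far the leak goes; the boundary texels at the allowed LODs still filter against their neighbours outside the rect.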

aqnuep
09-18-2012, 08:23 AM
One can naturally emulate this by doing the filtering by hand in the shader using texelFetch, but that seems like an unnecessary burden on a developer and a potential optimization lost on the hardware.

Well, the only problem is that current hardware wouldn't be able to use hardware filtering of textures within an atlas; otherwise this would have been there for a long time. The only benefit you would get is that you don't have to write the filtering code yourself, which doesn't justify adding this feature to GLSL.

What would be a better and cleaner method is if we had texture arrays whose layers can have arbitrary sizes. However, I don't think that there's hardware support for that either. The closest functionality may be NVIDIA's bindless textures, which you can use to emulate such a texture array by storing an array of samplers in a buffer.

kRogue
09-18-2012, 12:34 PM
Well, the only problem is that current hardware wouldn't be able to use hardware filtering of textures within an atlas; otherwise this would have been there for a long time. The only benefit you would get is that you don't have to write the filtering code yourself, which doesn't justify adding this feature to GLSL.


I don't know if that is true. Regardless, within the rectangle there is nothing to do; the only issue is on the rectangle boundaries, and then it comes down to what is doing the filtering and how flexible it is. The textureFooOffset functions can also be emulated in GLSL, and they were added anyway, likely because the GL3-generation hardware could do them faster thanks to some extra magic.