Aliasing one texture with another

Does anyone know if it’s possible to alias a sub-region of one texture or framebuffer as another texture of the same pixel format?

From my point of view, this would be really nice, since you could then write shader code that doesn’t have to worry about keeping texture sampling within a particular region of a larger texture, and clamping/tiling/mirroring addressing modes would work transparently for the region-aliasing textures bound to the samplers. I’m talking about 1D and 2D textures here, where an aliasing texture would only have the first mip level; supporting fully mipmapped sub-region aliasing textures sounds tricky. I can imagine it also being useful for aliasing a region of a volume/3D texture… I can’t quite see how it’d be useful to alias a rectangular region of a cube face, though.

As another example, this would also provide a handy mechanism for dynamically down-rez’ing any rendering and thereby dynamically balancing the GPU load to some extent, again without having to muck about with clamping texture addressing regions. For example, if you time the GPU rendering for each frame and you run over a maximum permitted GPU render time (and you decide a small loss of visual fidelity is acceptable… e.g. if the camera is panning or moving fast), you smoothly scale your main scene viewport size down a little. Just reducing the viewport size onto the framebuffer isn’t usually enough if you still have to do some post-processing on your scene or ultimately copy it to the main back-buffer, where you need it to fill the whole buffer. Your post-effects or copying then need to sample from the main scene framebuffer within this shrunken region (0,0 to 0.9,0.9, say)… More correctly, they need to stay half a texel’s width and height within this region. Instead of having to burden the shader code with ensuring it samples half a texel within this down-rez’d region, a smaller texture aliasing this texture region simplifies everything; the shader just samples from 0,0 to 1,1 as usual.
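To make the burden concrete, here’s a minimal fragment-shader sketch of what the copy/post-effect pass ends up doing without sub-region aliasing (the uniform names u_regionScale, u_halfTexel, etc. are made up purely for illustration) -

uniform sampler2D u_sceneTex;   // full-size framebuffer texture
uniform vec2 u_regionScale;     // e.g. vec2(0.9, 0.9) when rendering at 90% size
uniform vec2 u_halfTexel;       // 0.5 / vec2(textureWidth, textureHeight)

varying vec2 v_uv;              // 0..1 across the full-screen quad

void main()
{
    // Remap 0..1 into the rendered sub-region, then pull the sample
    // half a texel inside it so bilinear filtering never touches
    // texels outside the region.
    vec2 uv = v_uv * u_regionScale;
    uv = clamp(uv, u_halfTexel, u_regionScale - u_halfTexel);
    gl_FragColor = texture2D(u_sceneTex, uv);
}

With a sub-region aliasing texture, all of that remapping and clamping would disappear and the shader would go back to a plain texture2D(u_sceneTex, v_uv).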

Apologies for the excessive examples.
Anyway, if this kind of thing is possible, could you point me to the relevant extensions or examples? If not, would others find this kind of thing a useful feature… or are there very good reasons for not allowing this kind of stuff?

Cheers

No, it’s not possible unless you code it yourself using shaders.
Adding such functionality is beyond the philosophy of a graphics API.

As for performance - everything is now done using shaders, so there’s no difference between having it in the API and doing it yourself.

One more reason why this should not be added is that, for the sake of performance, GPUs are optimized for common cases, not for specific cases.

Originally posted by k_szczech:
Adding such functionality is beyond the philosophy of graphics API.
Could you elaborate? Specifically: What is it about this feature that means it doesn’t sit well with OpenGL?

Originally posted by k_szczech:
As for performance - everything is now done using shaders so there’s no difference if you have it in API or if you do it yourself.
Except the difference is much cleaner shader code and user-level rendering API, which is something worth striving for, surely.

Originally posted by k_szczech:
One more reason why this should not be added is that for the sake of performance GPU’s are optimized for common cases, not for specific cases.
Sorry to push you further, but that statement is too vague for me to accept as a reason why this cannot be supported. Could you elaborate, please?

I use this very feature extensively on another platform: it significantly cleans up the rendering code and the fragment shaders, and it’s a very nice feature I miss on the PC.

May I ask what your “another platform” is?

I think the PS2 was capable of tiling or clamping sub-regions of a texture, so it’s possible that the PS3 has this feature too.

But doesn’t PS3 have a GeForce 7800?

Ok, to answer some questions:

What is it about this feature that means it doesn’t sit well with OpenGL?
Just ask yourself: “Why should this feature be added and not thousands of other similar features?”
That, in my opinion, is the whole idea of a clean API - not adding functionality that will be used by 1% of applications.

Except the difference is much cleaner shader code and user-level rendering API
Just implement a function that replaces texture2D() in the shader - put it in a separate file with the uniform variables necessary to re-calculate the coordinates, and simply make a shader object out of it - then let your application link this shader object into every single shader you have. Then just use my_texture2D() instead of texture2D().
On the application side - instead of calling some kind of OpenGL function to set up that sub-texture area, you just call your own function that does the same.
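Something along these lines, for example (just a sketch - all the names here are arbitrary) -

// my_texture2D.glsl - compiled as a separate shader object and linked
// into every program that samples a sub-texture.
uniform vec2 u_subOffset;   // lower-left corner of the sub-region (in 0..1)
uniform vec2 u_subSize;     // size of the sub-region (in 0..1)
uniform vec2 u_halfTexel;   // half a texel of the full texture

vec4 my_texture2D(sampler2D tex, vec2 uv)
{
    // Re-map 0..1 coordinates into the sub-region and keep the sample
    // half a texel inside its edges (emulates clamp-to-edge there).
    vec2 st = u_subOffset + clamp(uv, 0.0, 1.0) * u_subSize;
    st = clamp(st, u_subOffset + u_halfTexel,
                   u_subOffset + u_subSize - u_halfTexel);
    return texture2D(tex, st);
}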

As you can see, from then on the only difference in source code between the self-implemented and the native functionality is… the function name.

Sorry to push you further
That’s what this forum is for, so bring it on! :slight_smile:

Could you elaborate, please?
Perhaps I’m wrong. Perhaps such functionality could be added to hardware without a performance impact on standard texturing.

But if there were a performance impact, then consider this:

Assume that:
    - an application can emulate sub-textures at 50% of the speed
    - adding hardware support for sub-textures has the side effect of a 10% performance loss on standard textures

Case 1:
The application uses sub-textures for 20% of what it draws, so that 20% is now 2x faster, but the other 80% is 10% slower.
That means roughly a 10% gain (the 20% now takes only 10% of the original frame time) and an 8% loss (the 80% grows to 88%).

Case 2:
The application does not use sub-textures - it’s simply 10% slower.

Now assume that:
Case 1 - 1% of applications
Case 2 - 99% of applications

How much did that 1% of applications gain?
How much did those 99% of applications lose?

That’s why I’m saying that optimizing the common case at the cost of the rare case is typical, but optimizing the rare case at the cost of the common case is rather unacceptable.

But as I said - it could be that such functionality would not introduce any side effects in normal texturing.

Originally posted by k_szczech:
Just ask yourself a question: “Why this feature should be added and not 1000’s of other similar features?”
To which I’ve answered that it nicely cleans up user-level render and shader code. Yes, it’s something that can be done through a bit of extra work and faffing about, setting additional region and half-texel-width shader parameters each time a different texture is bound to a particular sampler… something which, again, could be wrapped up a little more nicely in the user render code. But that’s what any library/API is for: to provide useful functionality that unburdens the user-level software. With your reasoning, wouldn’t we all have to write our own versions of Mesa, our own ray-tracers, or our own graphics card drivers for every app we write, simply because you can always say, ‘Well, if you can always achieve what you want by writing it yourself, then I’ve no reason to add that feature to this particular API’?

So, perhaps a better philosophy would be to ask all of the following -

  • Why should this feature be added to this library/API? … for which I’ve suggested a couple of reasons.
  • Are there enough existing users who want this feature to make it a worthwhile addition to the library? … Well, there’s me and a couple of my colleagues, for a start. Anyone else? :slight_smile:
  • Is this feature something that would give OpenGL some small advantage (whether simply usability, ease of learning for newcomers, performance, or exposing some fancy new hardware feature) over competing APIs?
  • Are there good reasons why an efficient implementation would be very difficult, or why supporting this feature would be detrimental to other aspects of the API? … Well, if we don’t know, rather than speculate, can we get some thoughts on this particular aspect from any driver writers out there?

Originally posted by DrGoatcabin:
As another example, this would also provide a handy mechanism for dynamically down-rez’ing any rendering and thereby dynamically balancing the GPU load to some extent, again without having to muck about with clamping texture addressing regions. For example, if you time the GPU rendering for each frame and you run over a maximum permitted GPU render time (and you decide a small loss of visual fidelity is acceptable… e.g. if the camera is panning or moving fast), you smoothly scale your main scene viewport size down a little. Just reducing the viewport size onto the framebuffer isn’t usually enough if you still have to do some post-processing on your scene or ultimately copy it to the main back-buffer, where you need it to fill the whole buffer. Your post-effects or copying then need to sample from the main scene framebuffer within this shrunken region (0,0 to 0.9,0.9, say)… More correctly, they need to stay half a texel’s width and height within this region. Instead of having to burden the shader code with ensuring it samples half a texel within this down-rez’d region, a smaller texture aliasing this texture region simplifies everything; the shader just samples from 0,0 to 1,1 as usual.

You want to sample from region 0.0 to 0.9?
What’s stopping you from giving these coordinates to your polygon? I’m assuming you want to render a fullscreen quad with that texture.

From what I understood, you have this texture that you want to use in two ways:
one method wants to stay within 0.0 to 0.9,
and the other within 0.0 to 1.0.
The 0.0-to-0.9 case ends up sampling texels slightly above 0.9 (whatever that might be).

Wasn’t there some integer texture coord extension to help with this kind of thing?

Here’s something I hope will illustrate the point… so long as my diagrams come out formatted correctly -

Original off-screen framebuffer texture (4x1 pixels for example) -

                          0.75,1       1,1
+---------+---------+---------+---------+
|         |         |         |         |
|         |         |         |         |
|         |         |         |         |
|         |         |         |         |
|         |         |         |         |
+---------+---------+---------+---------+
0,0

Right, so now I want to do this viewport down-rez’ing thing, because I’m rendering quite a lot into the scene and the last frame time was over the maximum permitted render time
… and the camera’s panning/moving fast, so no one will notice, and we should be able to keep up a decent render speed.
I set the viewport to render to a region of 3x1 pixels now.
So, there’s good stuff in the first 3 pixels and crap/nothing in the last one.

Now I want to do some post-processing effects or just a plain copy as a textured quad to the main display back-buffer, which is a 4x1 target.
Drawing to the main display back-buffer, I use the full 4x1 viewport, and render a full-screen textured quad, using my post-effects shader or whatever, but with texture coordinates now from 0,0 to 0.75,1.

Below is an illustration of the interpolated texture coordinates for the pixel centres.
These are where the copy or post-effects shaders will sample the original texture.

                                    0.75,1
+---------+---------+---------+---------+
|         |         |         |         |
|         |         |         |         |
|    X    |    X    |    X    |    X    |
|.094,.5  |.281,.5  |.469,.5  |.656,.5  |
|         |         |         |         |
+---------+---------+---------+---------+
0,0
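
For reference, those u values just come from the usual pixel-centre interpolation; here’s a purely illustrative helper showing where they come from -

// Pixel-centre texture coordinate for destination pixel i of a
// dstWidth-pixel-wide quad textured from u = 0 to uMax.
float pixelCentreU(float i, float dstWidth, float uMax)
{
    return ((i + 0.5) / dstWidth) * uMax;
}
// pixelCentreU(0.0, 4.0, 0.75) = 0.09375   (the .094 above)
// pixelCentreU(3.0, 4.0, 0.75) = 0.65625   (the .656 above)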

You should be able to see that the last pixel along is going to look up the original texture with a U coordinate of 0.656, which is shown on the original texture below -

                          0.75,1       1,1
+---------+---------+---------+---------+
|         |         |  0.625  |         |
|         |         |    |    |         |
|         |         |    X  X |         |
|         |         |       | |         |
|         |         |    0.656|         |
+---------+---------+---------+---------+
0,0

Sampling here will bilinearly interpolate and pick up a small contribution from the 4th texel, which contains invalid information.
This is why we also need to ensure that all samples of this 3x1 texture region stay half a texel inside the region we’re interested in.
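To put numbers on that half-texel inset for this 3-of-4 texel case (illustrative constants only) -

const float halfTexel  = 0.5 / 4.0;               // half a texel of the 4-texel-wide texture = 0.125
const float regionMaxU = 3.0 / 4.0;               // right edge of the good region = 0.75
const float safeMinU   = 0.0 + halfTexel;         // 0.125
const float safeMaxU   = regionMaxU - halfTexel;  // 0.625

so every sample has to land inside [0.125, 0.625] horizontally.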

You might then think, “Well, just set your quad’s texture coordinates from 0,0 to 0.625,1”.
Imagine this scenario: we now want to draw our full-screen quad to a 3x1 viewport.
We’d expect to see an exact pixel copy of the original. These are the pixels we’d render (showing interpolated u-coords only) -

                          .625,1
+---------+---------+---------+
|         |         |         |
|         |         |         |
|    X    |    X    |    X    |
|  .1042  |  .3125  |  .5208  |
|         |         |         |
+---------+---------+---------+
0,0

which, on our original texture are here -

                          0.75,1       1,1
+---------+---------+---------+---------+
|         |         |         |         |
|         |         |         |         |
|  X      |   X     |    X    |         |
|         |         |         |         |
|         |         |         |         |
+---------+---------+---------+---------+
0,0

So you can see that, in this case, the first two rendered pixels don’t sample at the centres of the desired texels; the second will actually be polluted by a contribution from the first.
Again, for the simplicity of this example, I’ve kept the sub-texture region to one side. If the required sub-texture region were entirely within the interior of a texture, we’d have to worry about clamping or repeating at the top, bottom, left, and right edges.

As another example -
I want to draw a large quad with UVs from 0,0 to 10,10, where the sampler addressing mode is something like tiled/repeat; as the fragments interpolate from 0 to 10, the sampling needs to tile within the aliased sub-region of the original texture.
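Without native aliasing, emulating that in the shader would look something like this (a sketch only - the uniform names are made up, it assumes no mipmaps, and even then it can’t filter across the wrap seam the way real repeat addressing does) -

uniform sampler2D u_tex;      // the big texture containing the sub-region
uniform vec2 u_subOffset;     // sub-region origin, in 0..1 of the big texture
uniform vec2 u_subSize;       // sub-region size, in 0..1 of the big texture
uniform vec2 u_halfTexel;     // half a texel of the big texture

vec4 sampleSubRegionRepeat(vec2 uv)   // uv may run 0..10, etc.
{
    // Emulate repeat addressing within the sub-region, then keep the
    // sample half a texel inside it so filtering can't bleed in
    // neighbouring texels from outside the region.
    vec2 st = u_subOffset + fract(uv) * u_subSize;
    st = clamp(st, u_subOffset + u_halfTexel,
                   u_subOffset + u_subSize - u_halfTexel);
    return texture2D(u_tex, st);
}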

I hope this explains the need to keep samples half a texel inside the region, on top of simply using primitive texture coordinates of 0,0 to 0.9,0.9 or 0.1,0.2 to 0.5,0.5 or whatever.

Texture aliasing is simply a fairly clean way (at least from a user’s point of view) of not having to care about the kinds of issues shown above.
If this can be handled totally transparently to the user with the existing functionality of the samplers, through simple non-mipped aliasing textures - which I suspect it can, but I’d like confirmation - then all the user needs to do is set up a simple 3x1-pixel aliasing texture or whatever they like, with samplers using whatever addressing mode they like, and everything else is handled efficiently and painlessly by the hardware :slight_smile:

(Edit: Bugger! My ascii art didn’t come out right. Fixed, hopefully.)