PDA

View Full Version : texture filtering precision



michael.bauer
04-14-2005, 12:13 PM
Hi,

I would like to know if there is a way to choose which precision of texture filtering is applied by the hardware. AFAIK current NVIDIA hardware can do 16-bit fixed-point (or even 16-bit floating-point) bilinear (or trilinear for 3D textures) filtering. There are cases where I want 16-bit-precise filtering for 8-bit textures, e.g. when the result of the first texture lookup feeds dependent lookups. I have thought of using the glHint mechanism, but I don't know if there is any hint for that.

Any help is highly appreciated.
Michael

dorbie
04-15-2005, 12:54 AM
There is no explicit control over this. Texture units aren't exposed in this detail and the internal operation of a texture fetch is pretty much a black box. An implementation may well interpolate with more precision or it may choose not to. You could promote your texture to the desired internal format where it would then probably be interpolated with the desired precision but that may not be what you want.

michael.bauer
04-15-2005, 12:02 PM
thanks dorbie,

What about workarounds? I could upload 8-bit textures and then copy them to 16-bit filtered ones (fast, since it stays on the GPU). If they are small enough they would then sit in some kind of cache and could be used without hurting performance too much, since the following reads would come out of the cache. It doesn't sound very elegant, but it could work. Has anybody tried this kind of workaround yet?

gybe
04-15-2005, 04:40 PM
Did you try the 10_10_10_2 texture format? Since it has more than 8 bits of precision, maybe the hardware will use the 16-bit path.

michael.bauer
04-16-2005, 06:16 AM
gybe,

I'm using LUMINANCE8. Btw: 10_10_10_2 is substituted with RGBA8888 precision (see NVIDIA's website).

michael.bauer
04-16-2005, 06:18 AM
What about defining an extension for filtering precision? It would be great to get access to these parameters, and the hardware probably already supports them, right?

gybe
04-16-2005, 07:01 AM
Oh sorry, on ATI 10_10_10_2 isn't clamped to 8 bits; I thought it was the same on NVIDIA.

If you don't want to copy your 8-bit texture to a 16-bit texture, maybe you can do the filtering in the pixel shader (sample the texture four times with nearest filtering and blend the four samples together).

michael.bauer
04-16-2005, 07:20 AM
gybe,

I also thought about filtering in the pixel shader, but that would hurt performance too much, especially for 3D textures (8 samples).

What's the fastest way to copy from an 8-bit texture object to a 16-bit one? Rendering it to a 16-bit fixed-point texture isn't possible. Rendering to a 16-bit float buffer would leave us with a float texture and could also hurt performance.