texture precision
I seem to get only 8 bits of precision from my textures even when I define them with 16-bit components. I believe everything is handled as floating point in the shader, and I suspect the texture engine performs a 16-bit filter operation but passes only 8 bits of the result on to the shader.
This behavior is on my nVidia GeForce 6800 Ultra. I got better than 8-bit results from my old ATI Radeon 9800 card.
One way to visualize this problem is to multiply your texture values by a number greater than 1.0. This amplifies the least-significant step size. If the entire hardware pipeline has sufficient resolution, the steps should not become apparent, but on nVidia they become apparent very quickly.
Does anyone know if I am doing something wrong (wrong data types or something)? Or is this really the way the nVidia hardware is?
12-07-2004, 01:32 PM
What texture format are you using?
internal format is GL_LUMINANCE16_ALPHA16
type is GL_FLOAT
I see the same problem on 8-bit LINEAR-filtered textures (i.e. GL_RGB, GL_UNSIGNED_BYTE). If my texture values range from 0 to 255, the bilinear (or trilinear) blend should introduce fractional bits after filtering. For example, if I zoom in on two texels with values 24 and 25, the resulting pixels should ramp between 24.0 and 25.0 across that texel (i.e. 24.0, 24.1, 24.2, 24.3, and so on). But all that reaches the shader is 24.0 or 25.0. The shader then converts that to a float, but only 8 bits of it are useful.
12-07-2004, 04:32 PM
On my GeForce 5900, a format of GL_FLOAT_R32_NV and a type of GL_FLOAT work fine. I don't know why yours isn't working, though. Does it help to request 32 bits instead of 16?
Edit: I misread your question. I thought you were just trying to use one channel, so this probably isn't relevant.
12-08-2004, 03:52 AM
You can't count on getting more than 8 bits of precision out of an 8-bit texture after filtering. You might even get less than 8 bits on certain hardware.
12-10-2004, 01:20 AM
I have another question concerning the texture precision:
how can I access a grayscale value in my shader? I haven't done that yet...
I should define my texture's internal format as GL_LUMINANCE8 and the type as GL_FLOAT if I want to access a grayscale image with 256 values - right?
But how can I get these values in my shader? The function texture2D returns a vector with components between 0 and 1, or am I wrong?
I don't know how to get the float value (for example 150) from the texture instead of a vec3...
Can anyone explain that to me, please?
12-10-2004, 02:31 AM
Check the OpenGL specs for conversions of LUMINANCE. It's mentioned in multiple places.
(Table 3.21: Correspondence of filtered texture components to texture source components.)
For a texture download the luminance is stored in the R channel.
A lookup will return the luminance in RGB again. Same for DrawPixels operations.
Values are 0.0 to 1.0 if the texture is not a float texture. Just multiply by 255.0 if you want byte values.