I am having a problem with resolution (bit depth) when indexing into a 3D texture map. The problem is apparent on all nVidia cards I have tested, but not ATI based cards.
The problem is that values returned from the texture3D() call in my fragment shader appear to be quantized to 8-bit precision, so subsequent operations on the data (such as applying a gamma correction) yield unacceptable banding.
I can test this by multiplying the result of a texture sample by some large factor (10, say) and letting the texture coordinates vary between one texel and the adjacent texel (which differ by one 8-bit “grey level”). Instead of seeing a nice gradient ramp from the value in bin 1 to the value in bin 2, I see a hard transition (a jump of 10 intensity levels between adjacent output pixels).
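For reference, the test can be sketched roughly as follows. This is a minimal GLSL 1.x-style fragment shader; the uniform and varying names (`volume`, `texCoord`) are mine, not from any particular codebase, and I’m assuming a single-channel 8-bit texture:

```glsl
uniform sampler3D volume;   // assumed: 8-bit single-channel 3D texture
varying vec3 texCoord;      // assumed: varies between two adjacent texels

void main()
{
    // Adjacent texels differ by 1/255. If the hardware interpolates at
    // full precision, scaling by 10 should show a smooth ramp; if the
    // interpolated value is quantized to 8 bits first, it shows a hard
    // step of 10 output levels instead.
    float v = texture3D(volume, texCoord).r;
    gl_FragColor = vec4(v * 10.0);
}
```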
The same test on ATI cards yields the desired linear transition. My conclusion is that on the nVidia cards the interpolated sample is quantized to my output precision before it is returned to me in the fragment shader, though this seems odd/unlikely.
If I do my own interpolation (complete with 8 texture fetches, and GL_NEAREST sampling) I (of course) get the results I want.
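For anyone curious, the workaround looks roughly like this. It is only a sketch: `volume` and `volumeSize` are assumed uniforms (the texture must be bound with GL_NEAREST filtering), and the coordinate-to-texel math assumes texel centers at half-integer offsets:

```glsl
uniform sampler3D volume;    // assumed: bound with GL_NEAREST filtering
uniform vec3 volumeSize;     // assumed: texture dimensions in texels

// Fetch a single texel by integer texel index (GL_NEAREST at texel centers).
float fetch(vec3 texel)
{
    return texture3D(volume, (texel + 0.5) / volumeSize).r;
}

// Manual trilinear interpolation: 8 nearest-neighbour fetches blended
// at full float precision in the shader.
float trilinear(vec3 coord)
{
    vec3 p = coord * volumeSize - 0.5;   // position in texel space
    vec3 b = floor(p);                   // base texel index
    vec3 f = p - b;                      // fractional blend weights

    float c00 = mix(fetch(b + vec3(0.0,0.0,0.0)), fetch(b + vec3(1.0,0.0,0.0)), f.x);
    float c10 = mix(fetch(b + vec3(0.0,1.0,0.0)), fetch(b + vec3(1.0,1.0,0.0)), f.x);
    float c01 = mix(fetch(b + vec3(0.0,0.0,1.0)), fetch(b + vec3(1.0,0.0,1.0)), f.x);
    float c11 = mix(fetch(b + vec3(0.0,1.0,1.0)), fetch(b + vec3(1.0,1.0,1.0)), f.x);
    return mix(mix(c00, c10, f.y), mix(c01, c11, f.y), f.z);
}
```

Since the blending happens in shader floats rather than in the fixed-function filter, the result never passes through an 8-bit stage, which is presumably why it avoids the banding.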
I would expect this to be either a usage problem on my part (though I can’t figure out what it is) or a common issue among developers (though my searching hasn’t turned anything up). Has anyone seen this, or does anyone understand it well enough to make a suggestion?
Cheers
-Steve
For clarity: This occurs in 1D textures as well – it’s not restricted to 3D textures.