This is the same issue I've brought up in the GLSL forum, but since the error doesn't have anything to do with shaders, I'll be annoying and post the same question here…
I've got a really simple GLUT application in which I create a 2x4-texel 2D texture (so the dimensions are powers of two) and apply that texture to a quad.
I set the texture to hold floating-point GL_LUMINANCE values and define them as follows:
0.1 0.1
0.1 0.1
0.5 0.5
1.0 1.0
If I sample this texture with a nearest-neighbour lookup, I would expect the returned value to switch from 0.5 to 1.0 exactly at texcoord.y = 0.75, the boundary between the third and fourth texel rows. Here is the behaviour I actually get:
Nvidia gf8800 with latest drivers:
texcoord.y 0.749999 returns 0.5
texcoord.y 0.75 is right on the boundary (some precision errors here: some fragments are 0.5, some are 1.0)
texcoord.y 0.750001 returns 1.0
I would say that Nvidia follows my expectations, ATI on the other hand…
ATI hd2900 with latest drivers:
texcoord.y 0.7480 returns 0.5
texcoord.y 0.7481 and everything above returns 1.0
If I haven't done something REALLY stupid in my application, I'd say this is an ATI bug/feature. Those precision errors are not really acceptable… Can someone confirm my findings? (And maybe suggest how to proceed with the issue…)
I have tried an ordinary RGB texture format, with no luck. I have tried GLSL shaders for the texturing, with no luck. I have tried an NPOT texture, with no luck. I have tried a 3D texture, with no luck… maybe I should try a 1D texture.
If you run my app, you can switch between different y-texcoords with keys 1 through 5.
Did you try hardcoding the coordinates in the texture2D call, just to make sure? How about the settings in your ATI control panel…are they set to their default values?
If I remember correctly, texture coordinates on the Radeon X800 are represented by 24-bit fixed-point registers. You could try outputting gl_FragCoord.xy onto a high-res screen (or a large FBO) and see what you get.
Older Radeons don't have hardware support for gl_FragCoord and use a varying variable for it instead. I'm not sure about the latest Radeons.
He means you can determine where exactly the hardware samples, and might be able to deduce the precision of the rasterizer's interpolators from that.
(All considering that the maximum value precision you can write is to a FP32 buffer.)
In the ideal world the gl_FragCoord should be spot on the center of each pixel, so at 0.5, 1.5, 2.5, etc.
Same goes for the texture interpolators. If you set up the projection and modelview matrices for a 1-to-1 texel-to-pixel mapping and dump the coordinates the rasterizer uses for sampling, you can see how exact those are as well.
Won’t help you much if it’s too shabby.
>>(and maybe suggest how to proceed with the issue…)<<
Why do you need this amount of subtexel precision?
If my math is right you're getting 7 bits of subtexel precision. That's about what you can expect. Earlier hardware may give you less; I don't have any exact numbers. Have you tried this on any older Nvidia hardware? I'd be surprised if anything but the G80 passes your test. Most hardware since the beginning of time converts texture coordinates to a fixed-point representation before sampling the texture. If you need more bits of subtexel precision, you can increase the size of the texture: each doubling in size gives you another bit. Alternatively, you can snap your texture coordinates to texel centers in the shader.
I'm doing medical viz, so the precision is quite important: typically, when evaluating cancer radiation treatment plans, some thresholds need to be spot on.
I've tested on various Nvidia hardware, and actually all of them seem to pass. I don't have any model numbers in my head, but only one was from the G80 series.
However, I didn't know that larger textures made the lookup precision better. The textures we use are quite big, but in our testing environment we only use small ones. I tried using a 64x64 texture, and the precision was better:
0.7498 gave me 0.5, and 0.7499 gave me 1.0. I guess I'll settle for that, since the textures we use are bigger than 64x64. Just a thought: if I have a 1024x2 texture, would the coordinate precision be different in the x and y components, or is it the same?
It would be different. Basically the hardware multiplies the floating-point coordinate by the texture dimension and then rounds it to a fixed-point value, likely with 6 fractional bits, so in normalized-coordinate terms the 1024-texel x axis is quantized far more finely than the 2-texel y axis.