ATI texture coordinate precision

Hi,

This is the same issue I’ve talked about in the GLSL forum, but since the error didn’t have anything to do with shaders, I’ll be annoying and post the same question here…

I’ve got a really simple GLUT application in which I create a 2x4-texel 2D texture (which is power-of-two) and apply that texture to a quad.
I set the texture to hold floating-point GL_LUMINANCE values and define them as follows:

0.1 0.1
0.1 0.1
0.5 0.5
1.0 1.0
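
For reference, the setup is roughly like this (a minimal sketch, not the exact code from my app; the internal format GL_LUMINANCE32F_ARB from ARB_texture_float and the wrap mode are assumptions):

```c
/* Minimal sketch of the texture setup described above; the internal format
 * (GL_LUMINANCE32F_ARB, from ARB_texture_float) and the wrap mode are
 * assumptions, not necessarily what the real app uses. */
GLuint tex;
const GLfloat data[2 * 4] = {
    0.1f, 0.1f,   /* row 0, t in [0.00, 0.25) */
    0.1f, 0.1f,   /* row 1, t in [0.25, 0.50) */
    0.5f, 0.5f,   /* row 2, t in [0.50, 0.75) */
    1.0f, 1.0f    /* row 3, t in [0.75, 1.00] */
};

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 2, 4, 0,
             GL_LUMINANCE, GL_FLOAT, data);
```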

If I apply this texture using a nearest-neighbour lookup, I would expect to get the following behaviour:

texcoord.y 0.749999 returns 0.5
texcoord.y 0.75 returns 0.5
texcoord.y 0.750001 returns 1.0

This is what I actually get:

Nvidia gf8800 with latest drivers:
texcoord.y 0.749999 returns 0.5
texcoord.y 0.75 returns 0.5 (some precision errors here: some fragments are 0.5, some are 1.0)
texcoord.y 0.750001 returns 1.0

I would say that Nvidia follows my expectations, ATI on the other hand…

ATI hd2900 with latest drivers:
texcoord.y 0.7480 returns 0.5
texcoord.y 0.7481 and everything above returns 1.0

If I haven’t done something REALLY stupid in my application, I’d say that this is an ATI bug/feature here. Those precision errors are not really acceptable… Can someone confirm my findings? (and maybe suggest how to proceed with the issue…)

I have tried to use an ordinary RGB texture format, with no luck. I have tried to use GLSL shaders for texturing, with no luck. I have tried to use an NPOT texture, with no luck. I have tried to use a 3D texture, with no luck… maybe I should try a 1D texture :wink:

If you run my app, you can switch between different y-texcoords with keys 1 through 5.

In your situation I would give shaders a try and output the intermediate texture coordinates to a 32bit floating point FBO for debugging…
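
Something like this pair of shaders would do it (just a sketch; it assumes the bound FBO has a 32-bit float color attachment so the coordinates come back unquantized):

```glsl
// Vertex shader: pass the texture coordinate straight through.
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}

// --- Fragment shader (separate file): write the interpolated texcoord
// --- into the FP32 color attachment so it can be read back exactly.
void main()
{
    gl_FragColor = vec4(gl_TexCoord[0].xy, 0.0, 1.0);
}
```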

N.

Well… I’ve done that, and I get exactly the same values as the ones I specified in my glTexCoord2d call; no precision error at all.

So the shader retrieves the correct texture coordinates. IMHO the problem seems to lie in the actual texture lookup…

Did you try hardcoding the coordinates in the texture2D call, just to make sure? How about the settings in your ATI control panel… are they set to their default values?
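
For example something like this (the sampler name is just for illustration):

```glsl
uniform sampler2D tex;  // your sampler; the name here is an assumption

void main()
{
    // Hardcode the suspect coordinate so interpolation is out of the picture.
    gl_FragColor = texture2D(tex, vec2(0.5, 0.75));
}
```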

N.

Yeah, the values are hardcoded and pretty much everything is set to default…

I’ve tried my application on two different ATI cards and on two different NVidia cards. The NVidia ones work, ATI doesn’t…

If I remember correctly, texture coordinates on the Radeon X800 are represented by 24-bit fixed-point registers. You could try outputting gl_FragCoord.xy onto a high-res screen (or a large FBO) and see what you get.
Older Radeons don’t have hardware support for gl_FragCoord and use a varying variable for it. I’m not sure about the latest Radeons.
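
Roughly like this (a sketch; again assuming an FP32 render target so the values survive the write):

```glsl
// Dump the rasterizer's sample positions; ideally every pixel comes out
// exactly on its center, i.e. (x + 0.5, y + 0.5).
void main()
{
    gl_FragColor = vec4(gl_FragCoord.xy, 0.0, 1.0);
}
```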

gl_FragCoord? How do those values affect texturing?

He means you can determine where exactly the hardware samples, and might be able to deduce the precision of the rasterizer’s interpolators from it.
(Bearing in mind that the best precision you can write out is to an FP32 buffer.)

In an ideal world, gl_FragCoord should be spot on the center of each pixel, i.e. at 0.5, 1.5, 2.5, etc.

The same goes for the texture interpolators. If you dump the coordinates the rasterizer uses for sampling, with the projection and modelview matrices set up for a 1-to-1 texel-to-pixel mapping, you can see how exact those are as well.
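
A 1-to-1 setup could look roughly like this (sketch; winWidth/winHeight are placeholder names for your viewport size):

```c
/* Sketch of a 1:1 texel-to-pixel mapping: ortho projection in pixel units,
 * identity modelview. winWidth/winHeight are assumed to match both the
 * viewport and the texture size, one texel per pixel. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (double)winWidth, 0.0, (double)winHeight, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* Then draw a winWidth x winHeight quad with texcoords running 0..1. */
```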

Won’t help you much if it’s too shabby.

>>(and maybe suggest how to proceed with the issue…)<<

e-bay? :wink:

Have you tried updating your driver? What card do you have and is it a desktop?
It could be a sampling issue, with the texcoord precision itself being fine.

Why do you need this amount of subtexel precision?

If my math is right you’re getting 7 bits of subtexel precision. That’s about what you can expect. Earlier hardware may give you less; I don’t have any exact numbers. Have you tried this on any older Nvidia hardware? I’d be surprised if anything but the G80 passes your test. Most hardware since the beginning of time converts texture coordinates to a fixed-point representation before sampling the texture. If you need more subtexel precision bits, you can increase the size of the texture. Each doubling in size will give you another bit of precision. Alternatively, you can snap your texture coordinates to texel centers in the shader, as sketched below.
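
The snapping could look like this (texSize is a uniform I’m assuming here, holding the texture dimensions in texels):

```glsl
uniform sampler2D tex;   // sampler name is illustrative
uniform vec2 texSize;    // texture size in texels, e.g. vec2(2.0, 4.0) for the 2x4 test texture

void main()
{
    // Snap the incoming coordinate to the nearest texel center in full FP32
    // before sampling, so the sampler's fixed-point rounding can no longer
    // push it across a texel boundary.
    vec2 snapped = (floor(gl_TexCoord[0].xy * texSize) + 0.5) / texSize;
    gl_FragColor = texture2D(tex, snapped);
}
```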

I’m doing medical viz, so precision is quite important; typically, when evaluating cancer radiation treatment plans, some thresholds need to be spot on.

I’ve tested on various Nvidia hardware, and actually all of it seems to pass. I don’t have any model numbers in my head, but only one card was from the G80 series.

However, I didn’t know that larger textures made the lookup precision better. The textures we use are quite big, but in our testing environment we only use small ones. I tried using a 64x64 texture, and the precision was better.

0.7498 gave me 0.5, and 0.7499 gave me 1.0. I guess I’ll settle for that, since the textures we use are bigger than 64x64. Just a thought: if I have a 1024x2 texture, would the coordinate precision be different in the x and y components, or is it the same?

But I guess I don’t have an issue any more :slight_smile:

thanks.

It would be different. Basically, you multiply the floating-point coordinates by the texture dimensions and then round them to a fixed-point value, likely with 6 fractional bits:

0.7480 * 4 = 2.9920 = 191.488 / 64 ~> 191 / 64 = 2.984375
0.7481 * 4 = 2.9924 = 191.5136 / 64 ~> 192 / 64 = 3

s * 1024, rounded to 1/64 steps -> precision is 1/(1024 * 64) = 1/65536
t * 2, rounded to 1/64 steps -> precision is 1/(2 * 64) = 1/128
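
A little model of that in C, assuming round-to-nearest and 6 fractional bits (both are assumptions about the hardware):

```c
#include <math.h>
#include <stdio.h>

/* Model of the assumed sampler behaviour: scale the coordinate by the
 * texture size, quantize to 6 fractional fixed-point bits (1/64 steps,
 * round-to-nearest), then take the integer part as the nearest texel. */
static int nearest_texel(double coord, int size)
{
    double scaled    = coord * size;                      /* e.g. 0.7481 * 4 */
    double quantized = floor(scaled * 64.0 + 0.5) / 64.0; /* snap to 1/64    */
    return (int)floor(quantized);                         /* texel index     */
}

int main(void)
{
    printf("0.7480 -> texel %d\n", nearest_texel(0.7480, 4)); /* 2: the 0.5 row */
    printf("0.7481 -> texel %d\n", nearest_texel(0.7481, 4)); /* 3: the 1.0 row */
    return 0;
}
```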

Ah, I didn’t know texture coordinates were handled that way…

You live and you learn :slight_smile:

So for the record, NVidia cards probably use more fractional bits than ATI, right?

Yes, although the difference you see could also stem from different rounding modes.