
ATI_texture_float problems on 6800gt



ar2k
08-13-2004, 11:47 PM
I was wondering if anyone else has tried the ATI_texture_float extension with a GeForce 6800.

My performance is terrible. Previously I ran my application, which uses a 512x512x6 float32 cubemap, on a Radeon 9700 and it flew (80+ fps). On my GeForce 6800 GT 256MB with the latest NVIDIA drivers I get about 1 fps.

What's even stranger: when I tried setting texture filtering to GL_NEAREST, I got corrupt non-floating-point textures but good performance, so it seems it's GL_LINEAR or nothing.

I don't have the greatest AGP port (I think it's only 4x), but I don't think that's the problem.

My program is not using any float_buffer extensions or rectangle textures or anything, just straight-up ATI_texture_float. I also use ARB vertex programs and no fragment programs.
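
For reference, the cubemap setup is basically this, trimmed down (cubemapTex and loadFace() are just stand-ins for my real texture object and loading code):

GLenum faces[6] = {
    GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
    GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};

glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTex);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

for (int i = 0; i < 6; ++i)
{
    /* each face is 512*512 RGB, 32-bit floats, uploaded with the
       ATI_texture_float internal format */
    const float *data = loadFace(i);
    glTexImage2D(faces[i], 0, GL_RGB_FLOAT32_ATI,
                 512, 512, 0, GL_RGB, GL_FLOAT, data);
}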

Appreciate any help.

OneSadCookie
08-14-2004, 12:29 AM
We also tried out this path recently, and found abysmal performance with 32-bit float textures. 16-bit float textures performed very well.

-NiCo-
08-14-2004, 02:53 AM
Take a look at page 36 of the NVIDIA GPU Programming Guide:
http://download.nvidia.com/developer/GPU_Programming_Guide/GPU_Programming_Guide.pdf

Apparently filtering is only fully supported for 16-bit fp textures.
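
So if you need GL_LINEAR right now, you're probably stuck dropping to the 16-bit formats, something along these lines (just a rough sketch; needLinearFiltering, target and floatData are whatever your app has):

/* fp16 internal format when linear filtering is needed,
   fp32 only if nearest filtering is acceptable */
GLenum internalFormat = needLinearFiltering ? GL_RGB_FLOAT16_ATI : GL_RGB_FLOAT32_ATI;
GLenum filter         = needLinearFiltering ? GL_LINEAR          : GL_NEAREST;

glTexParameteri(target, GL_TEXTURE_MIN_FILTER, filter);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, filter);
glTexImage2D(target, 0, internalFormat, 512, 512, 0, GL_RGB, GL_FLOAT, floatData);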

Greetz,

Nico

Korval
08-14-2004, 05:23 PM
Originally posted by ar2k:
I tried setting texture filtering to GL_NEAREST and as a result I got corrupt non-floating-point textures and good performance, so it seems it's GL_LINEAR or nothing.

Sounds like you should submit a bug report to nVidia rather than simply accepting it.

ar2k
08-15-2004, 12:36 AM
Originally posted by OneSadCookie:
We also tried out this path recently, and found abysmal performance with 32-bit float textures. 16-bit float textures performed very well.

Pity about float32, my Radeon 9700 hauled ass in 32-bit :rolleyes:

I tried out 16-bit float textures and that fixed the performance problem. However, I'm trying to use a high-dynamic-range reflection cubemap, and the same code gives high dynamic range with the 32-bit textures but low dynamic range with the 16-bit ones. When I color my HDR skybox with (0.5, 0.5, 0.5), all colors fade evenly, including the bright white sun flare (with true HDR values above 1.0, the sun should stay saturated).

Do 16-bit floats get clamped earlier in the pipeline somehow? Do I need to enable something? Right now I'm simply using

glTexImage2D( target, 0, GL_RGB_FLOAT16_ATI, 512, 512, 0, GL_RGB, GL_HALF_FLOAT_NV, data );

without enabling anything.

I could also just be screwing up my conversion from 32-bit to 16-bit floats. Would anyone care to post an algorithm or some code to do it correctly?
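
In case it helps spot the problem, this is more or less what my conversion looks like, cut down to the essentials (it truncates the mantissa instead of rounding and flushes denormals to zero, if that matters):

#include <string.h>   /* memcpy */

/* convert one IEEE 754 single-precision float to a 16-bit half float */
unsigned short float_to_half(float f)
{
    unsigned int x;
    memcpy(&x, &f, sizeof(x));                       /* reinterpret the float's bits */

    unsigned int sign = (x >> 16) & 0x8000;          /* sign bit moves to bit 15 */
    int          exp  = (int)((x >> 23) & 0xFF) - 127 + 15;  /* rebias 8-bit exponent to 5-bit */
    unsigned int mant = x & 0x007FFFFF;              /* 23-bit mantissa */

    if (exp <= 0)                                    /* too small for a normalized half */
        return (unsigned short)sign;                 /* flush to (signed) zero */
    if (exp >= 31)                                   /* overflow, Inf or NaN */
        return (unsigned short)(sign | 0x7C00);      /* becomes Inf */

    /* pack sign, 5-bit exponent, top 10 mantissa bits */
    return (unsigned short)(sign | ((unsigned int)exp << 10) | (mant >> 13));
}

Then each face gets its 512*512*3 floats run through that before the glTexImage2D call above.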

Appreciate all the help!

bionicman
08-15-2004, 05:18 AM
Your Radeon 9700 actually hauled ass in 24-bit.

Korval
08-15-2004, 10:21 AM
Originally posted by bionicman:
Your Radeon 9700 actually hauled ass in 24-bit.

The texture itself was still 32-bit. It may have been downgraded to 24-bit when sampled and loaded into the fragment program, but the data being sampled was still 32-bit.

dorbie
08-16-2004, 09:14 AM
Originally posted by bionicman:
Your Radeon 9700 actually hauled ass in 24-bit.

ROFL