New 16 and 32 bit float textures and speed

Hi,

I’m sure I’ve read the answer to this somewhere before, but I can’t find it, so I’m going to (maybe) repost.

When using GL_RGBA_FLOAT16_ATI I get a huge performance hit on 3D texturing, in both GL_LINEAR and GL_NEAREST modes. However, NVIDIA claim that there is no performance hit using these new formats. I am using a blending operation.

I understand that filtering and blending for 32-bit floats currently do not work, so I have tried 16-bit floats instead, and I get around half the frame rate of GL_RGBA16. Both trials were uploaded from floats originally (i.e. GL_FLOAT).
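In case it helps, the relevant part of my setup looks roughly like this (just a sketch; the data pointer and dimensions are placeholders for my actual volume, and the FLOAT16 enum comes from GL_ATI_texture_float):

```c
/* Sketch of the 3D texture setup; width/height/depth and volumeData
   are placeholders for my real volume data. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* GL_RGBA_FLOAT16_ATI is the internal format; the source data is GL_FLOAT. */
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA_FLOAT16_ATI,
             width, height, depth, 0,
             GL_RGBA, GL_FLOAT, volumeData);

/* Blending is plain back-to-front compositing. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```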

Can anybody offer me an explanation? Or maybe I could be setting something up wrong that would affect this.

I’m using an NVIDIA 6800.

Thanks in advance

Also, having thought of it, is there any additional setup to make the frame buffer accept 16-bit floats? I.e. if I just set the texture’s internal format, does everything then get clamped back to 8-bit integer for blending and writing to the frame buffer?

There seem to be two things confused here.
You’re saying you use a FLOAT16 pixelformat to render into, because you want to have float buffer blending?
And you’re using a FLOAT16 3D texture, too? (If yes, is it necessary?)
The texture format and filtering should have nothing to do with the float buffer blending.
How big are both?
Maybe you simply ran out of memory and the texture doesn’t fit into the video memory anymore?
Make sure you allocate only the buffers you need, e.g. no back and no depth pbuffers.

Hi. Basically I have used float16 texturing and haven’t set up any buffers to render to (other than the standard pipeline). As an afterthought I realised I might have to do this (and I don’t know how).

I do need to use the highest precision possible, so with blending that’ll be 16-bit floats. This goes for the texture particularly, and the texture fits with ease in the card’s memory.

So I guess there are two separate steps: getting the texture’s format to 16-bit float (done with GL_RGBA_FLOAT16_ATI), and then the part I don’t know about, setting up a frame buffer that supports 16-bit floats to render to. I’m guessing that if this buffer is set up in that manner, blending will happen at 16-bit float automatically?

Please could you suggest an extension to me for setting up the frame buffer (or auxiliary buffer), and also tell me whether there is any need to change the blending to suit.

Thanks very much.

Here’s a link to the fp16 blending demo from the nvidia SDK.

http://download.nvidia.com/developer/SDK/Individual_Samples/DEMOS/OpenGL/simple_fp16_blend.zip

I believe the demo uses NV texture rectangles, but it also works fine for ATI float16 textures using the “ati_float=16” parameter for the pbuffer setup.
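If you’re not using the SDK’s pbuffer helper class, my understanding is that “ati_float=16” boils down to roughly the following WGL attributes (just a sketch; the entry points come from wglGetProcAddress, the enums from wglext.h, and error checking is omitted):

```c
/* Rough equivalent of "ati_float=16" at the raw WGL level
   (WGL_ARB_pixel_format + WGL_ARB_pbuffer + WGL_ATI_pixel_format_float).
   hdc is the window's device context; width/height are the pbuffer size. */
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_FLOAT_ATI,
    WGL_RED_BITS_ARB,   16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB,  16,
    WGL_ALPHA_BITS_ARB, 16,
    0
};

int  format;
UINT numFormats;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &numFormats);

int pbAttribs[] = { 0 };
HPBUFFERARB pbuffer   = wglCreatePbufferARB(hdc, format, width, height, pbAttribs);
HDC         pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC       pbufferRC = wglCreateContext(pbufferDC);
wglMakeCurrent(pbufferDC, pbufferRC);
```

If you want to use your existing textures from the pbuffer context, remember to share objects between the two contexts (e.g. with wglShareLists).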

Nico

NVIDIA doesn’t support 16-bit RGBA textures in hardware; the driver is silently converting to 8-bit. So it’s no surprise that you see a roughly 2x speedup over 16-bit FP RGBA. For an apples-to-apples comparison, try a 1- or 2-channel 16-bit format (LUMINANCE16, ALPHA16, or LUMINANCE_ALPHA16).
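One quick way to check what the driver actually gave you (a small sketch, assuming the texture is still bound):

```c
/* Ask the driver what it really allocated for level 0 of the bound 3D texture. */
GLint internalFormat, redBits;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_SIZE, &redBits);
printf("internal format 0x%x, red bits %d\n", internalFormat, redBits);
```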

We certainly do support 16-bit floating point RGBA textures (64-bits total) in hardware on the GeForce FX and GeForce 6 series.

You must be thinking of previous generations, which only supported 2-component 16-bit integer formats (GL_HILO etc).

16-bit per component formats will certainly be slower than 8-bit formats, if only because of the extra memory bandwidth required.

Originally posted by Steve Demlow:
NVIDIA doesn’t support 16-bit RGBA textures in hardware; the driver is silently converting to 8-bit. So it’s no surprise that you see a roughly 2x speedup over 16-bit FP RGBA. For an apples-to-apples comparison, try a 1- or 2-channel 16-bit format (LUMINANCE16, ALPHA16, or LUMINANCE_ALPHA16).

I should have been more clear. I know 16-bit floating point is supported. But as far as I (and my NVIDIA support contact) know, plain fixed-point RGBA16 is still not supported in hardware. So the original poster’s comparison of 16-bit FP and “16-bit” fixed-point performance is not apples-to-apples.

Steve

Thanks for your responses.

OK, that sounds good to me. From what I was hearing I thought FP 16-bit texturing was fudged in the drivers (which goes against the grain somewhat, according to NVIDIA documentation). I’m not too bothered about an int16 model.

So, if I use a float16 texture and do 16-bit floating-point blending in my scene to a pbuffer (16-bit float), do I have to declare a 16-bits-per-colour frame buffer to display it correctly?

Or can I use a normal frame buffer, and the precision from the previous calculations will be enough (i.e. it gets truncated back to 8-bit precision)? Or does it automatically allocate the right frame buffer when working with pbuffers?

I’m using GLUT currently, and as far as I’m aware you can’t change the bit depth of the frame buffer.

I guess I need to know: if working with float16 textures and pbuffers, do I need to somehow create a float16 frame buffer?

Hi,

If you already have your 16-bit textures and pixel buffer set up, you do not need to do anything else. If you do all the blending in the pixel buffer, then you will not lose any precision. Bear in mind, though, that the values will be clamped to the [0, 1] interval when you render your pixel buffer as a texture into your “normal” frame buffer.
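By “render your pixel buffer as a texture” I mean something along these lines (just a sketch using WGL_ARB_render_texture; I am not sure of the exact bind-to-texture attributes needed for float formats on your card, and the NVIDIA demo uses texture rectangles rather than GL_TEXTURE_2D):

```c
/* Sketch: display the pbuffer contents in the normal window.
   Assumes the pbuffer was created with the WGL_ARB_render_texture
   bind-to-texture attributes; values are clamped to [0,1] on the way out. */
wglMakeCurrent(windowDC, windowRC);            /* back to the window context */
glBindTexture(GL_TEXTURE_2D, pbufferTex);
wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                             /* full-screen quad */
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
```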

I hope it helps.

Hi,

it took some time to figure this one out myself:

There is no way, with hardware available today, to output to a 16-bit framebuffer (to display). pBuffers can be 16 bits per channel and can be read out and saved to file, though.

If I am wrong on this I would gladly hear otherwise! :)
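For the read-out part, something as simple as this works (a sketch, with the pbuffer context current and width/height matching the pbuffer):

```c
/* Sketch: read the float pbuffer back at full precision. */
float *pixels = (float *)malloc(width * height * 4 * sizeof(float));
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
/* ... write 'pixels' to disk however you like ... */
free(pixels);
```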

Hi,

As far as I know, you are totally right. At least, that is the case for the currently available hardware. It is the reason why I had to use pixel buffers: they allow 16-bit and 32-bit precision per channel.

Does anyone know if 128-bit framebuffers will be supported in the near future?

Cheers!
