10 bits per channel

Hi there!

I’m having problems figuring out how to set up OpenGL (with and without SDL) to use a 10-bit resolution per RGB channel, something like D3D’s D3DFMT_A2R10G10B10. This might be obvious to anyone but myself…

BTW, I’m using a Parhelia, but will port my volume renderer to ATI/NVidia as well.

Does anyone know if there are RGBA textures with 10 bits per channel, including alpha?

Oh, this would be good to know!

Thanks a lot!

Look in your GL.h and see what you find. Took me about 5 seconds to find it there. You could also check any documentation on glTexImage2D and see what parameters it accepts.

Hi Bob,

I think you’re referring to my second question. For some reason I assumed that GL_RGBA12_EXT and GL_RGBA16_EXT mean 12/16 bits for the complete pixel, not 12/16 bits per channel, which results in 48/64 bits per pixel.

Thanks a lot for your hint here.

Is the allocation of a 10-bit/channel drawable so obvious (again) that I can’t see it?

Sorry, I didn’t realize there were two questions; I only answered the one regarding texture formats.

Setting the pixel format is something you do with your windowing API (Win32, GLUT, SDL, or whatever you use), so you should look into your windowing API to see how to set the bit depth. In Win32, you fill in the PIXELFORMATDESCRIPTOR structure. I have never used SDL myself, only seen some code, but I remember seeing a function to set bit depth. Should be easy to find.
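For the SDL route mentioned above, the request would look something like the sketch below (written against SDL 1.2, which this thread predates SDL 2; treat the exact window size and flags as placeholders). These attributes are requests, not guarantees, so it checks what was actually granted.

```c
#include "SDL.h"

/* Sketch: ask SDL for a 10-10-10-2 framebuffer before creating the GL window.
 * The driver may silently grant 8 bits per channel instead. */
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE,   10);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 10);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,  10);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE,  2);
SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);

int red_bits = 0;
SDL_GL_GetAttribute(SDL_GL_RED_SIZE, &red_bits);  /* bits actually granted */
```

On raw Win32 the analogous step is filling in the cRedBits/cGreenBits/cBlueBits fields of the PIXELFORMATDESCRIPTOR, though plain ChoosePixelFormat largely keys off cColorBits, which is one reason wglChoosePixelFormatARB exists.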

As for the texture format, I was talking about GL_RGB10_A2: 10 bits per color channel and 2 bits of alpha, 32 bits in total.
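To make that layout concrete, here is a small, hypothetical helper (not part of OpenGL) that packs one pixel the way the GL_UNSIGNED_INT_10_10_10_2 type from EXT_packed_pixels expects, with red in the top 10 bits and alpha in the low 2:

```c
#include <stdint.h>

/* Packs r, g, b in 0..1023 and a in 0..3 into one 32-bit value laid out as
 * RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB AA, from most to least significant bit. */
static uint32_t pack_rgb10_a2(uint32_t r, uint32_t g, uint32_t b, uint32_t a)
{
    return ((r & 0x3FFu) << 22) | ((g & 0x3FFu) << 12)
         | ((b & 0x3FFu) << 2)  | (a & 0x3u);
}
```

An array of such values could then be uploaded with something like glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, w, h, 0, GL_RGBA, GL_UNSIGNED_INT_10_10_10_2, data), assuming the driver exposes EXT_packed_pixels (or OpenGL 1.2).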

[This message has been edited by Bob (edited 04-15-2003).]

Thanks Bob!

Actually, I am aware that this is not a pure OpenGL topic, but I did hope there is some experience with this issue out there that would make my life easier.

I did contact Matrox to shed some light on this issue, which, I have to admit, does not receive a lot of coverage at all. Although ATI and GeForce FX users will have the same problem with 10 bits, won’t they?

Thanks again!

From what I recall, Windows doesn’t let you create an RGB10_A2 desktop, so right now I don’t think you can get at that with OpenGL on most systems. Matrox may have done something special in HW or otherwise to trick this into working. You can get this in D3D because you can go into exclusive mode, replacing the desktop.

One thing you can do is allocate a higher-depth pbuffer, render to that, then copy it to the main window of your app. ATI HW allows you to allocate 64-bit integer, 64-bit floating-point, and 128-bit floating-point targets for this. The only issue is that blending causes a SW fallback with these formats, which I would expect is bad for your app. NVIDIA has similar capabilities, but their extensions explicitly disable blending.
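On the WGL side, the pbuffer allocation described above would look roughly like the sketch below. This is an untested fragment assuming WGL_ARB_pixel_format, WGL_ARB_pbuffer, and ATI’s float-format extension; check the token names and attribute list against the actual extension specs, and remember the wgl*ARB entry points must be fetched with wglGetProcAddress.

```c
/* Sketch: choose a 64-bit floating-point pixel format for a pbuffer
 * (16 bits per channel, WGL_ATI_pixel_format_float path). */
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_FLOAT_ATI,
    WGL_RED_BITS_ARB,   16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB,  16,
    WGL_ALPHA_BITS_ARB, 16,
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);
HPBUFFERARB pbuf = wglCreatePbufferARB(hdc, format, width, height, NULL);
```

After rendering into the pbuffer, the result can be copied to the visible window, e.g. via glReadPixels/glDrawPixels or by binding it as a texture with WGL_ARB_render_texture.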

-Evan

I am wondering about current hardware… is it able to handle floating-point framebuffers? When could we render to a 64-bit RGBA framebuffer, or even a 128-bit one? Is there a link with superbuffers?

Regards,

nystep,

Yes you can render to floating point targets with the latest video cards from ATI and NVidia.