10 bits dpx texture



qnext
07-28-2013, 02:23 AM
Hi,

I need to display a 10-bit texture coming from a 10-bit DPX file.

1)- I have a FirePro, so I can enable 10-bit OpenGL output (that's what I want to do in the end). For now I have disabled it, and I hoped to see the 10-bit texture in an 8-bit-per-channel OpenGL view.
I can see something, but the colors are strange, and it's not a simple swap of channels (for example, parts of a grayscale show color!). I use:

pixel type GL_UNSIGNED_INT_10_10_10_2 (I have also tried GL_UNSIGNED_INT_2_10_10_10_REV), pixel format GL_RGBA, and internal format GL_RGB10_A2.
Do I need a special shader to display the 10-bit texture on an 8-bit-per-channel FBO?


2)- I need to use the alpha channel for compositing, through an intermediate FBO. What format should I use to preserve the 10-bit texture while keeping a full-range alpha for compositing? Should I use, for example, RGBA12 or RGBA16?

Bruce Wheaton
07-28-2013, 12:51 PM

If your texture format isn't natively supported on a platform you're targeting (as your weird colors indicate), then yes, you'd need to write a shader. Are you sure it isn't just a colorspace problem? For instance, could the DPX be in Cinema XYZ (?) format, not RGB?

You should also look to do your uploads in a format that the driver won't 'swizzle' if you're hoping for high speed.

I recommend 'HALF' as an interim FBO/texture format. It's 16-bit float RGBA. Good mix of size and fidelity, and supported pretty well across the board these days.

Bruce

Alfonse Reinheart
07-28-2013, 01:03 PM
Um, GL_RGB10_A2 and GL_RGBA16 are required image formats for OpenGL 3.0+. So they exist and are supported on anything released since 2008.

malexander
07-28-2013, 01:21 PM
I recommend 'HALF' as an interim FBO/texture format. It's 16-bit float RGBA. Good mix of size and fidelity, and supported pretty well across the board these days.

I agree. We use FLOAT16 for representing 10-bit log Cineon images; it's a good format for playback speed and memory use, and it provides a good base for further color correction if needed.

qnext
07-30-2013, 01:30 PM
Thanks for all your answers. I have tried an FBO with FLOAT16, but I still get these strange colors.
A- Do I need a 30-bit display to get true color, or can I simulate this on a normal screen, even if I don't see the full range?
B- Is GL_RGB10_A2 a 10-bit log format, like the one I seem to have in my DPX?
C- Do I need a 30-bit framebuffer and a 16-bit FBO to display correct colors from a texture in GL_RGB10_A2, or should I see a normal image (with only a loss of precision) with a standard OpenGL format?

Bruce Wheaton
07-30-2013, 02:58 PM
No; if your texture is being read properly, then your target format should be irrelevant.

Run a color bars still through your pipeline and post a picture.

GClements
07-30-2013, 05:56 PM
A- Do I need a 30-bit display to get true color, or can I simulate this on a normal screen, even if I don't see the full range?

OpenGL won't care if the physical framebuffer is only 24-bpp, and will care even less about what the monitor supports.


B- Is GL_RGB10_A2 a 10-bit log format, like the one I seem to have in my DPX?
No. It's just 10 bits for each of R,G,B and 2 bits for A, packed into a 32-bit word. It's not even sRGB, let alone logarithmic. OpenGL doesn't directly support logarithmic textures, but you can convert the values in a shader. Note that you'll also have to ensure that the result is correctly converted to sRGB (or whatever the monitor uses).


C- Do I need a 30-bit framebuffer and a 16-bit FBO to display correct colors from a texture in GL_RGB10_A2, or should I see a normal image (with only a loss of precision) with a standard OpenGL format?
Rendering and copying don't care about the number of bits. For unsigned normalised formats, all components are treated as real numbers in the range 0..1. The number of bits just determines the precision.