How to deal with 10-bit images in OpenGL

My program needs to upload a 10-bit "v210" format image into a texture. In v210 every component (Y, U, V) is 10 bits, not the usual 8 bits. So how can I upload it to a texture, and how can I download a 10-bit image from a framebuffer (FBO) without any loss of color precision?

If the hardware doesn't support 10-bit textures, you can upload the image as a regular RGBA texture and write wrapper fragment shader code which repacks RGBA -> YUV (10-bit).
RGBA takes 32 bits.
YUV takes 30 bits + 2 bits unused.
According to http://www.fourcc.org, v210 is a 10-bit YCrCb 4:2:2 format in which samples for 5 pixels are packed into four 4-byte little-endian words.
See http://developer.apple.com/quicktime/icefloe/dispatch019.html#v210

Anyway… on newer hardware which supports integer textures, you can write shader code which fetches several pixels, applies some bitwise operations, and converts such data to and from v210.

On hardware which doesn't support integer textures, you can extract bits by multiplying or dividing by constants.
Something like…
Let's say a Y U V pixel is: 750 922 444 (10 bits allow the range 0-1023).
The float values are: 0.73242 0.90039 0.43359 (rounded to 5 digits).
binary: XX 1011101110 1110011010 0110111100 (XX = unused)
Reordered into 8-bit bytes: XX101110 11101110 01101001 10111100
Let's say XX is 00.


XXYYYYYY YYYYUUUU UUUUUUVV VVVVVVVV
00101110 11101110 01101001 10111100
46       238      105      188        in RGBA
The float values are: 0.17968 0.92968 0.41015 0.734375

So… to get Y we need the whole of R and the upper 4 bits of G:
(R * 16 + G / 16) / 4 = (2.87488 + 0.058105) / 4 = 0.73324625; 0.73324625 * 1024 = 750.844

To get U we need lower 4 bits of G and upper 6 bits of B:
(mod(G, 16/256) * 64 + B / 4) / 4 = (3.49952 + 0.1025375) / 4 = 3.6020575 / 4 = 0.900514375; 0.900514375 * 1024 = 922.12672

To get V we need lower two bits of B and whole A:
(mod(B, 4/256) * 256 + A) / 4 = (0.9984 + 0.734375) / 4 = 1.732775 / 4 = 0.43319375; 0.43319375 * 1024 = 443.5904
In the above example there are numerical errors because of the rounding to 5 digits.
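
For concreteness, here is a minimal Cg sketch of that extraction, working on integer-valued 0..255 floats rather than the normalized form above (note that GL actually normalizes 8-bit channels by 255, not 256). The sampler and coordinate names are illustrative, and GL_NEAREST filtering is assumed so the packed bytes come back unfiltered:

// Hedged sketch: unpack one XXYYYYYY YYYYUUUU UUUUUUVV VVVVVVVV texel.
float4 main(float2 texCoord : TEXCOORD0,
            uniform sampler2D srcTex) : COLOR
{
    float4 p = tex2D(srcTex, texCoord) * 255.0;  // back to 0..255 integers

    float Y = fmod(p.r, 64.0) * 16.0 + floor(p.g / 16.0); // low 6 bits of R + top 4 of G
    float U = fmod(p.g, 16.0) * 64.0 + floor(p.b / 4.0);  // low 4 bits of G + top 6 of B
    float V = fmod(p.b, 4.0) * 256.0 + p.a;               // low 2 bits of B + all of A

    return float4(Y, U, V, 1023.0) / 1023.0;  // 10-bit values, normalized
}

With the example texel (46, 238, 105, 188) this yields exactly 750, 922, 444 before the final normalization.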

To download a v210 image, use the inverse math to convert the v210 10-bit layout to an 8-bit RGBA layout, and then download the image as RGBA. I'm using a similar trick with classic 8-bit YUV 4:2:2. In your case, the macro-pixel layout is Cb0-Y0-Cr0-Y1-Cb1-Y2-Cr1-Y3-Cb2-Y4-Cr2-Y5. So…
Cb0-Y0-Cr0 goes to first RGBA pixel
Y1-Cb1-Y2 goes to second RGBA pixel
Cr1-Y3-Cb2 goes to third RGBA pixel
Y4-Cr2-Y5 goes to 4th RGBA pixel
(check this layout order in the documentation… maybe there is a typo)

If your original image is W x H pixels, you need an RGBA image buffer W * 4 / 5 pixels wide for the conversion. For example, HD 1920x1080 v210 can fit in a 1536x1080 RGBA buffer.
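
Under that layout (and with the same caveat about double-checking it), here is a minimal C sketch of unpacking one group of four little-endian v210 words on the CPU, assuming each word carries three 10-bit components in its low 30 bits; the function name is illustrative:

#include <stdint.h>

/* Unpack four 32-bit little-endian v210 words into twelve 10-bit
   components, in the order listed above:
   Cb0 Y0 Cr0 Y1 Cb1 Y2 Cr1 Y3 Cb2 Y4 Cr2 Y5 */
void unpack_v210_group(const uint32_t w[4], uint16_t out[12])
{
    int i, j = 0;
    for (i = 0; i < 4; ++i) {
        out[j++] = (uint16_t)( w[i]        & 0x3FF);  /* bits 0-9   */
        out[j++] = (uint16_t)((w[i] >> 10) & 0x3FF);  /* bits 10-19 */
        out[j++] = (uint16_t)((w[i] >> 20) & 0x3FF);  /* bits 20-29 */
    }
}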

Which OpenGL extension indicates that my hardware supports 10-bit textures and integer textures?

Can anyone tell me how to access an integer texture's integer pixel values in a Cg pixel shader?

GL_EXT_packed_pixels
http://www.opengl.org/registry/specs/EXT/packed_pixels.txt
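
For a quick runtime check, a hedged C sketch (has_extension is an illustrative helper; a robust version should match whole tokens in the extension string):

#include <string.h>
#include <GL/gl.h>

/* Crude substring check against GL_EXTENSIONS; needs a current GL context. */
int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    return all != NULL && strstr(all, name) != NULL;
}

/* has_extension("GL_EXT_packed_pixels")   -> packed 10/10/10/2 pixel formats
   has_extension("GL_EXT_texture_integer") -> integer textures              */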

Hi, Mr. yooyo. I found that the upload performance of integer textures is poor (see my thread "Performance of integer texture upload"), so I had to choose a 10-bit texture instead. I decided to use the command below to update the v210 image into the texture:
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,TWidth,THeight,GL_BGRA_EXT,GL_UNSIGNED_INT_2_10_10_10_REV,NULL);

Then, does OpenGL normalize the first 10 bits as the blue component of the first pixel, the second 10 bits as green, the third 10 bits as red, and the fourth field of 2 bits as alpha, and then normalize the fifth field of 10 bits as the blue of the second pixel, the sixth as green, …? Is my guess right?
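
For reference, a minimal sketch of the allocation that pairs with that update; createPackedTexture, tWidth, tHeight and pixels are illustrative names, not from the post:

#include <GL/gl.h>
#include <GL/glext.h>  /* GL 1.2+ tokens on older headers */

GLuint createPackedTexture(GLsizei tWidth, GLsizei tHeight, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Allocate 10/10/10/2 storage once... */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, tWidth, tHeight, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_INT_2_10_10_10_REV, NULL);

    /* ...then stream updates into it; with a bound PBO the last
       argument becomes a byte offset, which is why NULL appears above. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tWidth, tHeight,
                    GL_BGRA_EXT, GL_UNSIGNED_INT_2_10_10_10_REV, pixels);
    return tex;
}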

I didn't play with integer textures… Give me some time, and I'll do some benchmarks and tests…

Mr. yooyo, I have written a Cg shader to decode v210, but I found a very strange problem. I must use the code below to calculate the "Word0" pixel's coordinate:

int rectX = (int)(texCoord.x * tWidth);
int rectGroupX = rectX / 6 * 4;
float2 uv = float2((float)rectGroupX / tWidth, texCoord.y) + float2(1.0f / tWidth / 2.0f, 0);

In my code, "tWidth" is the texture width, "texCoord" is the current texture coordinate, and "uv" is the "Word0" pixel coordinate. I found that I must add half a pixel width to "uv", otherwise the decoded image is incorrect. I can't understand why half a pixel width must be added to "uv". Do you know the reason? The shader language I use is NVIDIA's Cg.

It could be texel to pixel mapping issue. Read this:
http://www.inalogic.com/post/1-to-1-texel-to-pixel-alignment/
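
In short: texture coordinates address texel edges unless you offset them to the texel center. A tiny illustrative Cg helper (the names are mine), equivalent to the rectGroupX / tWidth plus half-texel term above:

// Address the center of texel "rectGroupX" instead of its left edge.
float2 texelCenterUV(int rectGroupX, float tWidth, float y)
{
    return float2(((float)rectGroupX + 0.5f) / tWidth, y);
}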

BTW… take a look at my post about integer uploading speed.

Mr. yooyo, I'll do some image processing on the unpacked 10-bit image. If I want to keep the color precision, should I set the texture internal format to float (such as GL_RGBA16F_ARB or GL_RGBA32F_ARB)? Right now I set every texture's internal format to GL_RGBA (except the one storing the v210 image); in that format GL only allocates 8 bits per component, so a loss of color precision is unavoidable. I'd prefer the GL_RGBA16F_ARB format, so can it preserve the 10-bit color precision?

Hi yooyo,

The info you provided here and in the other thread has been very useful, thanks. I have another question for you though: Rather than using an RGBA_INTEGER_EXT format and then doing the conversion internally with a shader, can we not just use an RGB10_A2 internal texture format with UNSIGNED_INT_10_10_10_2 data type? Are RGB10_A2 textures not supported by any hardware yet?

Thanks,
frkelly

The whole story is how to handle RGB10_A2 if it's not supported by the hardware. On the newest hardware it seems to be supported, so you can play with it.
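
One hedged way to probe that at runtime is a proxy texture; this is only a sketch, and some drivers may still convert the format internally even when the proxy check passes:

#include <GL/gl.h>

int rgb10_a2_supported(GLsizei w, GLsizei h)
{
    GLint gotWidth = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGB10_A2, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_INT_10_10_10_2, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &gotWidth);
    return gotWidth != 0;  /* 0 means the driver rejected the combination */
}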