Hi everyone,
I’m quite new to OpenGL. I’m working on a video player that needs to handle 10-bit frames.
Currently my app can convert a YUV frame to RGB, but only for 8-bit frames.
I would like to extend it to 10-bit frames (stored in 16-bit little-endian words).
The idea is to have the conversion in the shader:
16-bit YUV -> 8-bit RGB.
It is on this point that I’m blocked…
Here is how it works today for one component:
glActiveTexture(GL_TEXTURE0); //select active texture unit
glBindTexture(GL_TEXTURE_2D, id_y); //bind a named texture to a texturing target
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, size_plane[0][0], size_plane[0][1], 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, plane2Display[0]); //specify a two-dimensional texture image
glUniform1i(textureUniformY, 0);//Specify the value of a uniform variable for the current program object
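For 16-bit input, the upload call needs a 16-bit normalised internal format and `GL_UNSIGNED_SHORT` data. A sketch, assuming the same planar layout and the same legacy luminance path as above (on a core profile you would use `GL_R16` with `GL_RED` instead):

```c
/* Sketch: upload one plane of 10-bit samples held in 16-bit
 * little-endian words. On a little-endian host no byte swapping
 * is needed (GL_UNPACK_SWAP_BYTES defaults to GL_FALSE). */
glPixelStorei(GL_UNPACK_ALIGNMENT, 2); /* rows of 16-bit texels */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16,
             size_plane[0][0], size_plane[0][1], 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, plane2Display[0]);
```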
For an 8-bit unsigned normalised texture (GL_R8), a byte with a value of 255 corresponds to a result of 1.0 from texture(). For a 16-bit unsigned normalised texture (GL_R16), a word with a value of 65535 corresponds to a result of 1.0 from texture(), while a word with a value of 1023 corresponds to a result of 1023/65535 ~= 0.0156 from texture(). So scale the results from texture() by 65535/1023 if the texture contains 10-bit values.
So I just replace the “gl_FragColor = vec4(rgb, 1);” line with something like “gl_FragColor = vec4(rgb, 1) * 256;”, for example?
(or fold the scale factor into the 3x3 conversion matrix used to compute the rgb variable)
Conceptually, applying the scale directly to the value returned from texture() is easiest. If you don’t do that, you’ll also need to scale the -0.5 offset applied to U and V.
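Putting that together, the fragment shader could look like the sketch below. The sampler uniform names, the varying, and the BT.601 full-range matrix coefficients are assumptions standing in for whatever your existing 8-bit shader uses:

```glsl
uniform sampler2D tex_y;   /* assumed names for the three planes */
uniform sampler2D tex_u;
uniform sampler2D tex_v;
varying vec2 v_texcoord;

void main() {
    /* texture2D() normalises the 16-bit words against 65535, but
     * 10-bit samples only reach 1023, so re-scale first. */
    float scale = 65535.0 / 1023.0;
    float y = texture2D(tex_y, v_texcoord).r * scale;
    float u = texture2D(tex_u, v_texcoord).r * scale - 0.5;
    float v = texture2D(tex_v, v_texcoord).r * scale - 0.5;

    /* BT.601 full-range YUV -> RGB (use your existing matrix here) */
    vec3 rgb = vec3(y + 1.402 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.772 * u);
    gl_FragColor = vec4(rgb, 1.0);
}
```

Because the scale is applied to each texture2D() result before the -0.5 offset, the rest of the conversion is unchanged from the 8-bit path.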
What is the exact format of your initial data? Is it a packed format (all colour components of a pixel stored in the same contiguous block of data) or a planar format (each colour component stored in a separate plane)?