Color space conversion

Hello,

My application needs to blend several synchronised video streams into one output stream in YUV.
I have multiple cameras delivering raw RGB (Bayer pattern) streams.
I also have a working fragment shader that converts the raw Bayer data to regular RGB.
I know the formulas to do RGB->YUV conversion.
Where I need your help is on how to write YUYV 4:2:2 interleaved data.
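
For reference, the formulas I have in mind are the usual BT.601 ones, roughly like this (everything in [0,1], chroma offset by 0.5; an encoder may want the limited-range variant instead):

vec3 rgb2yuv(vec3 c) {
    // Full-range BT.601-style conversion
    return vec3(dot(c, vec3( 0.299,  0.587,  0.114)),
                dot(c, vec3(-0.169, -0.331,  0.500)) + 0.5,
                dot(c, vec3( 0.500, -0.419, -0.081)) + 0.5);
}
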
Basically my workflow is:
1/ read the Bayer pattern textures
2/ for each texture
draw it using alpha blending (a single shader doing color conversion and alpha blending for each stream)
3/ glReadPixels to write the resulting frame buffer to disk

The YUYV 4:2:2 format imposes that each even pixel carries a Y and a U component and each odd pixel a Y and a V. Thus I would use:
gl_FragColor = vec4(vec3(Y/3), U); or gl_FragColor = vec4(vec3(Y/3), V); depending on the pixel position (Y/3 in each channel because a GL_LUMINANCE readback sums R, G and B), together with
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, w, h, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, &data);
to write only two bytes of data for each pixel.

I think this idea would work if I only had one stream to write.

But if I use the alpha channel to carry the U or V component, I can no longer do alpha blending between the video streams: each one is simply overwritten by the next, with meaningless alpha values.

So do you know an efficient way to write only two components per pixel without using the alpha channel?
Or do you have any other workflow ideas to reach my goal?

Thanks in advance for any tips.

Adrien

Your code snippet is odd - it seems to be a mix of GLSL and OpenGL code, so it’s not clear, for instance, what frame buffer you’re writing to, and what you’re doing with your finished pixels.

It would be more natural to write two YUV pixels with each fragment shader invocation than to try to write twice for each pixel: if you use an 8-bit BGRA texture as the target, you can sample the two source pixels you’re interested in, write the two Y components, then average and write the shared U and V. That one BGRA pixel will actually hold your two YUV pixels.
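
Something along these lines, as a sketch (half-width render target, each fragment covering two source pixels; yuvTex and srcSize are made-up names, and the source is assumed to already hold 4:4:4 YUV in its RGB channels):

uniform sampler2D yuvTex;   // blended 4:4:4 YUV stored in the RGB channels
uniform vec2 srcSize;       // source texture size in pixels

void main() {
    // Half-width target: fragment column x covers source columns 2x and 2x+1.
    float x = floor(gl_FragCoord.x);
    vec2 t0 = vec2(2.0 * x + 0.5, gl_FragCoord.y) / srcSize;
    vec2 t1 = vec2(2.0 * x + 1.5, gl_FragCoord.y) / srcSize;
    vec3 p0 = texture2D(yuvTex, t0).rgb;   // Y, U, V of the even source pixel
    vec3 p1 = texture2D(yuvTex, t1).rgb;   // Y, U, V of the odd source pixel
    // Keep both Y samples, average the shared chroma.
    gl_FragColor = vec4(p0.x,
                        0.5 * (p0.y + p1.y),
                        p1.x,
                        0.5 * (p0.z + p1.z));   // Y0, U, Y1, V
    // Reorder the channels to match whatever format/type pair you read back with.
}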

Since you’re not in a format OpenGL understands, you’ll have to do your own blending, which means that you need to blend in your GLSL shader and write the exact values to your frame buffer (GLSL has no control over blending).
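
“Do your own blending” boils down to something like this in the shader, assuming you ping-pong between two FBO textures (bgTex holds whatever has been composited so far, texCoord comes from your vertex shader; all names are made up):

uniform sampler2D bgTex;      // result of the previous compositing pass
uniform sampler2D streamTex;  // the stream being added, already converted
varying vec2 texCoord;

void main() {
    vec4 dst = texture2D(bgTex, texCoord);
    vec4 src = texture2D(streamTex, texCoord);
    // Classic "over" blend done by hand, since fixed-function blending
    // would operate on the packed values and give nonsense.
    gl_FragColor = vec4(mix(dst.rgb, src.rgb, src.a), 1.0);
}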

Bruce

Hello Bruce,

I mixed shader code snippets with GL code snippets to shorten the message, hoping it would be obvious.

Basically, I was hoping to write directly into the main frame buffer with shaders and then copy the result with glReadPixels into an output buffer, which would later be passed to a video encoder as a frame. But I agree that I will need an extra step, since this is not a standard OpenGL procedure.

I am hesitating between two approaches and don’t know which one will be more efficient.

Idea 1:
1/ for each video stream
draw into the same output RGB texture, using alpha blending and raw RGB to YUV 4:4:4 conversion
2/ render the previously filled texture to a frame buffer with a simple quad, each pixel being YYYU or YYYV depending on its position (see the shader sketch below)
3/ copy the frame buffer to the output buffer as GL_LUMINANCE_ALPHA, resulting in YUV 4:2:2 interleaved
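
For step 2 the packing shader would be something like this (yuvTex and texSize are placeholder names; the Y/3 split is again there because a GL_LUMINANCE readback sums R, G and B):

uniform sampler2D yuvTex;   // 4:4:4 YUV from step 1, stored in the RGB channels
uniform vec2 texSize;       // texture size in pixels

void main() {
    vec2 pix  = floor(gl_FragCoord.xy);
    vec3 self = texture2D(yuvTex, gl_FragCoord.xy / texSize).rgb;
    // Chroma is shared by each horizontal pair, so average it with the neighbour.
    bool evenCol = mod(pix.x, 2.0) < 0.5;
    float nx  = evenCol ? pix.x + 1.0 : pix.x - 1.0;
    vec3 other = texture2D(yuvTex, (vec2(nx, pix.y) + 0.5) / texSize).rgb;
    float chroma = evenCol ? 0.5 * (self.y + other.y)    // even column -> U
                           : 0.5 * (self.z + other.z);   // odd column  -> V
    gl_FragColor = vec4(vec3(self.x / 3.0), chroma);
}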

Idea 2:
1/ for each video stream
draw into a frame buffer, using alpha blending and raw RGB to YUV 4:4:4 conversion
2/ copy the frame buffer to a temp buffer as GL_RGB, resulting in YUV 4:4:4 interleaved
3/ repack the temp buffer into the output buffer as YUV 4:2:2 interleaved on the CPU (see the sketch below)
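
For step 3 the repack is a simple loop on the CPU, roughly like this (assumes the 4:4:4 buffer is tightly packed Y,U,V bytes and an even width; names are made up):

/* Repack tightly packed YUV 4:4:4 (3 bytes per pixel) into YUYV 4:2:2
   (4 bytes per pixel pair). */
void yuv444_to_yuyv(const unsigned char *src, unsigned char *dst,
                    int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; x += 2) {
            const unsigned char *p0 = src + (y * width + x) * 3;
            const unsigned char *p1 = p0 + 3;
            *dst++ = p0[0];                    /* Y0 */
            *dst++ = (p0[1] + p1[1] + 1) / 2;  /* U, averaged over the pair */
            *dst++ = p1[0];                    /* Y1 */
            *dst++ = (p0[2] + p1[2] + 1) / 2;  /* V, averaged over the pair */
        }
    }
}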

The point is to know which one is faster: one additional render pass or one additional pseudo-memcpy. I would bet on the memcpy…

I can’t tell which of your approaches matches what I’m doing… but I’ll describe it in case it helps. My approach is based on making all the features I want feasible.

But I think your question will only be answered by testing both approaches.

My approach is (and this happens in parallel with basic use of RGBA framebuffers to show on screen):

  • bring in each video stream and convert to YUVA 4:4:4:4 to color correct, then through to 16-bit (HALF) linear floating point RGBA.
  • blend the streams (using a shader to further tweak alpha channel) as needed onto a target 16-bit FBO.
  • use a shader to convert to (video gamma and) desired YUV or RGB format in an 8-bit texture attached to a PBO.
  • transfer the PBO back to the CPU, hopefully with minimal per-pixel noodling (the final shader puts the data into a format that GL can transfer directly, usually referred to as the ‘fast path’); a rough sketch of the readback is below.
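
Roughly, the readback part looks like this (convertFBO, readbackPBO, width, height and encode_frame are placeholders; pick the format/type pair your driver actually has a fast path for):

/* After the final conversion pass has rendered into the 8-bit attachment: */
glBindFramebuffer(GL_FRAMEBUFFER, convertFBO);
glBindBuffer(GL_PIXEL_PACK_BUFFER, readbackPBO);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
/* With a pack PBO bound, the last argument is a byte offset into the PBO,
   and the call can return without waiting for the GPU. */
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

/* ...later, ideally a frame or two behind, map it and hand it to the encoder: */
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixels) {
    encode_frame(pixels);   /* hypothetical hand-off to the encoder */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);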

Bruce

  • blend the streams (using a shader to further tweak alpha channel) as needed onto a target 16-bit FBO.

Admittedly, I’m not 100% up on the YUV colorspace, but isn’t it a non-linear colorspace? I’m not sure these are values you can LERP and expect reasonable results from. You need to convert to RGB or something first, right?

in an 8-bit texture attached to a PBO.

There is no such thing as a PBO attached to a texture.
