I think I've baked a design flaw into my video processing pipeline, and it's not a good time to correct it properly, so I'm wondering if there's a clean workaround?
I'm passing video frames in as PBOs, in their native format, colorspace and chroma sub-sampling. So, for instance, I pass a 1280 x 720 YUV 4:2:0 frame in as one 1280 x 720 8-bit luma plane and two 640 x 360 chroma planes (each padded out to 32-byte multiples).
Then a shader converts that to a linear-RGB half-float FBO, with a mild dither. Later I do decent-quality scaling on the linear pixels, then convert back to whatever I need to output.
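For reference, the conversion pass looks roughly like this (simplified, and assuming BT.709 limited-range input; the texture names are illustrative, and the real transfer curve is piecewise rather than the plain pow() I've sketched):

    #version 330 core
    // Rough sketch of my YUV -> linear RGB pass (dither omitted).
    // Assumes BT.709 limited range; a pure 2.4 power stands in for
    // the proper piecewise EOTF.
    uniform sampler2D texY;   // full-res luma plane
    uniform sampler2D texU;   // quarter-res chroma planes
    uniform sampler2D texV;
    in vec2 texCoord;
    out vec4 fragColor;       // rendered into an RGBA16F FBO

    void main() {
        // NB: sampling the quarter-res chroma planes with GL_LINEAR
        // here is exactly the crude upsample I'm unhappy with.
        float y = (texture(texY, texCoord).r -  16.0/255.0) * (255.0/219.0);
        float u = (texture(texU, texCoord).r - 128.0/255.0) * (255.0/224.0);
        float v = (texture(texV, texCoord).r - 128.0/255.0) * (255.0/224.0);

        // BT.709 YCbCr -> gamma-encoded RGB
        vec3 rgb = vec3(y + 1.5748 * v,
                        y - 0.1873 * u - 0.4681 * v,
                        y + 1.8556 * u);

        // Linearize (sketch only)
        fragColor = vec4(pow(max(rgb, 0.0), vec3(2.4)), 1.0);
    }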
The problem is that the 'color' scaling gets poor treatment: I can't scale the U and V planes properly straight from their stored quarter-size resolution to the destination, and I think I'm seeing the result as artifacts: blockiness in strongly saturated colors.
The question is: what scaling can I apply at the YUV -> RGB part of the pipeline without softening things up? In my head I'd use a 'cardinal sine' (sinc) filter to upscale, this being a simple case, and since it's a fixed-ratio upscale I presume the filter weights could be constants. But I'm not sure whether the scaling happening in a video-gamma space means the weights have to vary?
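To make the question concrete, the kind of thing I'm imagining in the shader is a small fixed-tap filter per chroma plane, something like this (untested sketch; it assumes the 'between pixels' chroma siting of my example below, and 'taps' would hold windowed-sinc constants, since each output-pixel parity only ever sees one phase):

    // 4-tap fixed-phase chroma upsample (horizontal shown; vertical is
    // the same idea). xLuma is the full-res x of the output pixel.
    float upsampleChroma(sampler2D plane, int xLuma, int yChroma, vec4 taps) {
        int k = (xLuma - 1) >> 1;   // chroma sample at or left of the pixel
        int w = textureSize(plane, 0).x;
        float c0 = texelFetch(plane, ivec2(clamp(k - 1, 0, w - 1), yChroma), 0).r;
        float c1 = texelFetch(plane, ivec2(clamp(k,     0, w - 1), yChroma), 0).r;
        float c2 = texelFetch(plane, ivec2(clamp(k + 1, 0, w - 1), yChroma), 0).r;
        float c3 = texelFetch(plane, ivec2(clamp(k + 2, 0, w - 1), yChroma), 0).r;
        return dot(taps, vec4(c0, c1, c2, c3));
    }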
Anyway, as an example, if I have output pixels A-E, the contributing pixels would be:
A y0 u0 v0
B y1 u0 v0
C y2 u1 v1
D y3 u1 v1
E y4 u2 v2
So, taking pixel C: y2 is centered right on C, but u1 and v1 are centered between C and D. That puts C a quarter of a chroma sample from u1 and three quarters from u0, so I need a weighted sum of u0 and u1 (and maybe even u2?), something like (wNear * u1) + (wFar * u0), with the weights read off the sinc curve and u1, being nearer, getting the larger one?
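Putting rough numbers on that: with a Lanczos-2 window on the sinc (my assumption; a raw sinc has infinite support, so it needs windowing or truncating anyway), the normalized 4-tap kernels for the two phases work out at roughly:

    // Windowed-sinc taps, w(d) = sinc(d) * sinc(d / 2), normalized to
    // sum to 1 (values approximate).
    // Pixel C (phase 0.75): applied to u[-1], u0, u1, u2
    const vec4 kTapsPhase75 = vec4(-0.018, 0.233, 0.869, -0.084);
    // Pixel D (phase 0.25): applied to u0, u1, u2, u3 (mirror image)
    const vec4 kTapsPhase25 = vec4(-0.084, 0.869, 0.233, -0.018);

    // e.g. uC = upsampleChroma(texU, x, yc,
    //                          (x & 1) == 1 ? kTapsPhase25 : kTapsPhase75);

So u1 dominates, u0 contributes about a quarter, and the outer two samples are small negative lobes, which matches my intuition above once the big weight lands on u1 rather than u0.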
Does it even matter that, when the original 4:4:4 to 4:2:0 conversion was made, they presumably only used the C and D pixels to generate the u1 value?
Hope that’s clear enough that someone can help me see the wood for the trees.
Bruce