Richer ReadPixels()

Sometimes we need to build interlaced animations for NTSC or PAL output. We have to draw one field, ReadPixels() it and send the pixels to a video board, once per field.
We have two approaches: read the whole frame and send alternate lines (reading more lines than needed), or read every needed line, issuing many one-line ReadPixels() commands.
We have found that which approach works better depends on the hardware and its driver version.
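The first approach can be sketched in plain C: read the whole frame once, then copy alternate rows into a field buffer before sending it to the video board. This is a minimal sketch with a hypothetical helper name, assuming RGBA pixels and glReadPixels' bottom-row-first ordering:

```c
#include <string.h>

/* Copy one field (alternate rows) out of a full RGBA frame.
   field = 0 takes rows 0, 2, 4, ...; field = 1 takes rows 1, 3, 5, ...
   Hypothetical helper; row order follows glReadPixels (bottom row first). */
void extract_field(const unsigned char *frame, unsigned char *field_buf,
                   int width, int height, int field)
{
    int row_bytes = width * 4;   /* RGBA: 4 bytes per pixel */
    int out = 0;
    for (int y = field; y < height; y += 2)
        memcpy(field_buf + (out++) * row_bytes,
               frame + y * row_bytes, row_bytes);
}
```

The second approach replaces this copy with height/2 calls to glReadPixels(0, y, width, 1, …), one per line, which avoids transferring the unused rows at the cost of per-call driver overhead.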

Because this happens in real time, every millisecond matters. We lack a way to tell ReadPixels() to read alternate lines instead of the whole rectangle. With that, the driver could optimize the transfer, avoiding the overhead of reading up to twice the needed information or of hundreds of function calls.

um… I don’t know what kind of hardware you have, but my GFFX 5600 does that just fine.
I regularly hook it up to a widescreen TV and watch anime, and it’s pretty nice quality.

One way of solving this is to render two fields:
one on the top of the screen and one on the bottom, with just a tiny bit of offset (about one pixel).
It should produce the same effect.

Originally posted by kcmanuel:
Sometimes we need to build interlaced animations for NTSC or PAL output. We have to draw one field, ReadPixels() it and send the pixels to a video board, once per field.
We have two approaches: read the whole frame and send alternate lines (reading more lines than needed), or read every needed line, issuing many one-line ReadPixels() commands.

As a workaround, can you not simply render to a compressed viewport? E.g. if you want half-height images for 640x480 video, you could render to a 640x240 viewport and read it back in one function call.

Originally posted by kcmanuel:
We lack a way to tell ReadPixels() to read alternate lines instead of the whole rectangle.
There are a bunch of video-format ReadPixels/DrawPixels extensions. Some of them support interlaced transfers, some support YCrCb component packing and color space conversion.

The GL_OML_interlace extension is probably the most up-to-date. It was defined by the Khronos Group as part of OpenML 1.0 (OpenML is a set of APIs for handling video data, including a handful of OpenGL extensions). The GL_INGR_interlace_read extension has similar functionality. Apple may have one that isn’t in the extensions registry yet, I’m not sure.
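For reference, these are the tokens the two extensions define in glext.h, with a usage sketch in the comment. The sketch assumes a driver that actually advertises GL_INGR_interlace_read (and a current GL context), so treat it as illustrative only:

```c
/* Tokens from the OpenGL extension registry (glext.h). */
#define GL_INTERLACE_READ_INGR 0x8568   /* GL_INGR_interlace_read */
#define GL_INTERLACE_OML       0x8980   /* GL_OML_interlace       */
#define GL_INTERLACE_READ_OML  0x8981

/* Sketch, assuming the INGR extension is advertised:
 *   glEnable(GL_INTERLACE_READ_INGR);
 *   glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, field_buf);
 *     -> every other row is skipped, so one field is returned
 *   glDisable(GL_INTERLACE_READ_INGR);
 */
```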

Checking Delphi3d’s database, it looks like 3Dlabs Wildcat cards support the INGR and OML extensions. Not sure what other vendors do.

Originally posted by zeckensack:
As a workaround, can you not simply render to a compressed viewport? Eg if you want half-images for a 640x480 video, you could render to a 640x240 viewport and read it back in one function call.
We have tried it, but we get artifacts when we compose the fields, mainly with text. We believe it is because of antialiasing: it is applied differently at the compressed resolution.
Thanks for your help

Originally posted by oddhack:
The GL_OML_interlace extension is probably the most up-to-date (…) The GL_INGR_interlace_read extension has similar functionality.
This is exactly what I was searching for. Thank you very much.