Video and OpenGL (and YCrCb)

Does anyone know of good directions to look in for dealing with video in OpenGL, specifically handling YCrCb and field-based rendering? I’m also interested in finding out whether there are any movements towards better handling of video in OpenGL; other than here, is there anywhere good to keep track of developments?

Slightly O.T.: is there any way of finding out whether a driver implements a specific OpenGL feature in software or hardware, other than by coding up a test case and seeing how fast it runs? All of this is based on NV cards and Linux, and I’m thinking of the SGIX_ycrcb and NV_ycrcb calls.

Thanks everyone,

Joe

Originally posted by joe_rutledge:
Does anyone know of good directions to look in for dealing with video in OpenGL, specifically handling YCrCb and field-based rendering? I’m also interested in finding out whether there are any movements towards better handling of video in OpenGL; other than here, is there anywhere good to keep track of developments?

Mesa has an extension (MESA_ycbcr_texture) for sending YUV data to the graphics card without converting it to RGB in software (the hardware does the conversion).

It is hardware accelerated on ATI and Matrox hardware when using the open-source DRI drivers on Linux.
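
For reference, a minimal sketch of what an upload looks like with that extension, assuming the driver advertises GL_MESA_ycbcr_texture (the token values are taken from the extension spec):

    #include <GL/gl.h>

    #ifndef GL_YCBCR_MESA
    #define GL_YCBCR_MESA              0x8757   /* from MESA_ycbcr_texture */
    #define GL_UNSIGNED_SHORT_8_8_MESA 0x85BA
    #endif

    /* Upload one packed 4:2:2 YCbCr frame; the driver/hardware converts
     * to RGB at sampling time instead of the CPU converting up front. */
    void upload_ycbcr_frame(const void *pixels, int width, int height)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_YCBCR_MESA, width, height, 0,
                     GL_YCBCR_MESA, GL_UNSIGNED_SHORT_8_8_MESA, pixels);
    }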

I don’t know about the other extensions… but I hope some kind of ARB extension will be defined for sending YUV (video) data to the GPU.

  • Pasi

For your second question, at least, I can give you these links to the developer pages (ATI and NVIDIA).
This PDF from NVIDIA shows all supported extensions and has an extension support table showing which extensions are supported by each hardware/driver combination: http://developer.nvidia.com/object/nvidia_opengl_specs.html

I don’t know whether ATI has anything quite that specific, but they also list their supported extensions: http://www.ati.com/developer/sdk/radeonSDK/html/info/Prog3D.html

greetings
Lars

It is likely that if a feature is exposed using an extension string, it has hardware support. The only exception I know about is ARB_vertex_program, which is exposed on GeForce2 but implemented in software in the NVIDIA driver.

Note that when a feature is folded into core OpenGL, you don’t know whether it’s hardware or software, BUT the appropriate extension is usually still exposed (for programs written for earlier versions of OpenGL), so you can still look for sentinel extensions to determine feature support.

For example, current OpenGL supports 3D texturing, but it’s software rasterized on GeForce2. GeForce2 does not expose the 3D texture extension.
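
To illustrate the sentinel-extension approach, here is a minimal sketch (the helper name is hypothetical) that looks for a complete token in the extension string, e.g. GL_EXT_texture3D as a proxy for hardware 3D texturing:

    #include <GL/gl.h>
    #include <string.h>

    /* Returns 1 if `name` appears as a whole token in the space-separated
     * extension string; a bare strstr() is not enough, since one
     * extension name can be a prefix of another. */
    int has_extension(const char *name)
    {
        const char *start = (const char *)glGetString(GL_EXTENSIONS);
        const char *ext = start;
        size_t len = strlen(name);

        if (!start)
            return 0;
        while ((ext = strstr(ext, name)) != NULL) {
            if ((ext == start || ext[-1] == ' ') &&
                (ext[len] == ' ' || ext[len] == '\0'))
                return 1;
            ext += len;
        }
        return 0;
    }

    /* e.g. if (!has_extension("GL_EXT_texture3D")), expect software 3D texturing */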

Originally posted by jwatte:
It is likely that if a feature is exposed using an extension string, it has hardware support. The only exception I know about is ARB_vertex_program, which is exposed on GeForce2 but implemented in software in the NVIDIA driver.

ARB_imaging is also exposed in the extension string, but implemented in software.

  • Pasi

I think it’s all going to depend on what kind of processing you need to do and what assumptions you can make about the input.

The NV_YCRCB extension is not documented at all, so I’m not sure it’s entirely safe to use. In any case, the extension and/or hardware overlays are only supported on certain cards, and you are restricted to the formats that the card supports.

Another problem with the extension and/or overlays is that they do not support generalized YCrCb components of different precisions. The standard TV formats are supported, but in my case, I need to be able to handle 8-8-8-alpha 32-bit formats as well as 6-5-5 16-bit formats.

Another problem with overlays is that there is usually only one supported plane. This is no big deal if you are only trying to stream video. My Nuon emulator has to display two blended, arbitrarily sized, scaled and placed channels. You simply can’t do that with a single overlay plane (without significant CPU overhead).

I am currently using the ARB imaging subset. As Mark Kilgard confirmed in an email response, NVIDIA cards perform the imaging subset’s blending modes in hardware, but the color matrix is implemented in software.
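
For anyone who hasn’t used that path, a minimal sketch of the color matrix setup (the part NVIDIA runs in software) looks like this, assuming BT.601-style coefficients; the Y and chroma offsets would be handled separately with glPixelTransferf biases:

    #include <GL/gl.h>

    /* Column-major YCbCr -> RGB matrix for the imaging subset's color
     * matrix; it is applied during pixel transfers (glDrawPixels,
     * glTexImage2D). BT.601 coefficients are an assumption here. */
    static const GLfloat ycbcr_to_rgb[16] = {
        1.164f,  1.164f, 1.164f, 0.0f,   /* Y  column */
        0.0f,   -0.392f, 2.017f, 0.0f,   /* Cb column */
        1.596f, -0.813f, 0.0f,   0.0f,   /* Cr column */
        0.0f,    0.0f,   0.0f,   1.0f
    };

    void load_color_matrix(void)
    {
        glMatrixMode(GL_COLOR);      /* ARB_imaging / GL 1.2 imaging subset */
        glLoadMatrixf(ycbcr_to_rgb);
        glMatrixMode(GL_MODELVIEW);
    }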

Given NVIDIA’s stance on adding additional hardware acceleration of the color matrix, I would have to agree with many people that pixel shaders are probably your best bet for performing the color space conversion. This is especially true if you need to deal with invalid color components as part of the input.

One feature that I am still waiting for is support for integer operations. I need support for 6-5-5 YCrCb textures so that I can avoid having to do a CPU conversion to 5-6-5. You can convert using floating point, but it’s kind of ugly and it requires the use of the floor function. I’m hoping that at some point boolean instructions on integer registers are added to the pixel shader functionality.
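
To make the shader route concrete, here is a minimal sketch of the conversion as an ARB_fragment_program (loaded with glProgramStringARB), assuming ARB_fragment_program-class hardware, YCbCr samples arriving in the R/G/B channels of an ordinary texture, and BT.601 studio-range coefficients; all of these are assumptions to adjust for your formats:

    /* Minimal YCbCr -> RGB fragment program; Y in R, Cb in G, Cr in B. */
    static const char *ycbcr_fp =
        "!!ARBfp1.0\n"
        "TEMP ycc, rgb;\n"
        "TEX ycc, fragment.texcoord[0], texture[0], 2D;\n"
        "# remove the 16/255 luma offset and re-center chroma\n"
        "ADD ycc, ycc, { -0.0627, -0.5, -0.5, 0.0 };\n"
        "# one dot product per row of the 3x3 conversion matrix\n"
        "DP3 rgb.r, ycc, { 1.164,  0.0,    1.596, 0.0 };\n"
        "DP3 rgb.g, ycc, { 1.164, -0.392, -0.813, 0.0 };\n"
        "DP3 rgb.b, ycc, { 1.164,  2.017,  0.0,   0.0 };\n"
        "MOV rgb.a, 1.0;\n"
        "MOV result.color, rgb;\n"
        "END\n";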

If you only need to display video and can get your video into a standard texture format with a pre-clamp and bias, you should be able to use register combiners to do the color space conversion. It’s rather ugly on GeForce2-level hardware, but the GeForce3 has plenty of stages to do a single 3x3 matrix multiply. In fact, you might be able to do a clamp and bias in hardware too. On the other hand, it takes six of the eight stages to do the same thing for two planes, and you can forget about a pre-clamp/bias stage.
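
To show the combiner mechanics, here is a sketch of a single matrix row as an AB dot product with NV_register_combiners. Combiner constants are clamped to [0,1], so each coefficient c is encoded as (c + 2) / 4; GL_EXPAND_NORMAL_NV maps that back to c/2 and the GL_SCALE_BY_TWO_NV output scale restores c, limiting coefficients to [-2, 2]. A full 3x3 needs more stages, and more than the two global constant colors, which is where NV_register_combiners2’s per-stage constants come in:

    /* Row 0 of the matrix (R = dot(row0, YCbCr)), encoded as (c + 2) / 4 */
    GLfloat row0[4] = { (1.164f + 2.0f) / 4.0f, (0.0f + 2.0f) / 4.0f,
                        (1.596f + 2.0f) / 4.0f, 0.5f };

    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
    glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, row0);

    /* A = pre-biased YCbCr texture, B = encoded matrix row */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

    /* AB as a dot product into spare0, scaled by two to undo the encoding */
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_SCALE_BY_TWO_NV, GL_NONE,
                       GL_TRUE, GL_FALSE, GL_FALSE);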