YCBCR textures / ARB extension

It would be really nice to have an ARB (or EXT) version of a YCBCR-texture extension…

for using YUV video as a texture (without needing to convert it to RGB before uploading to the gfx card) - to save CPU and bus bandwidth.

There’s already a Mesa version of this kind of thing at: http://www.mesa3d.org/MESA_ycbcr_texture.spec

There are also APPLE and SGIX extensions for doing something like this…

Please comment.

I know you could convert a YUV texture to RGB on the fly with fragment programs, but that’s kind of a kludge, and the hardware supports this, so why not use it directly, as it should be done…

I know you could convert a YUV texture to RGB on the fly with fragment programs, but that’s kind of a kludge, and the hardware supports this, so why not use it directly, as it should be done…

How do you know that the hardware supports it?

It could very well be that, in order to use a YUV format image, it has to be in an overlay, not a texture (an overlay is kinda like a framebuffer, if I understand correctly).

Not only that, you’ve solved your own problem: use a fragment program.

If ARB_imaging or SGI_color_matrix is supported in hardware, then you can get the hardware to do the YCrCb -> RGB translation.
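
Something like this sketch, say - assuming the YCbCr data has already been unpacked to 4:4:4 with Y/Cb/Cr in the R/G/B channels, and using the standard BT.601 coefficients (whether any of this actually runs in hardware is entirely up to the driver):

#include <GL/gl.h>

/* Route pixel transfers through the color matrix so that YCbCr data
 * (Y in R, Cb in G, Cr in B) comes out of the pipeline as RGB. */
static void setup_ycbcr_to_rgb_color_matrix(void)
{
    /* BT.601: R = Y + 1.402 Cr', G = Y - 0.344 Cb' - 0.714 Cr',
     * B = Y + 1.772 Cb', where Cb' = Cb - 0.5 and Cr' = Cr - 0.5.
     * OpenGL matrices are column-major. */
    static const GLfloat m[16] = {
        1.0f,    1.0f,    1.0f,   0.0f,  /* column 0: Y  coefficients */
        0.0f,   -0.344f,  1.772f, 0.0f,  /* column 1: Cb coefficients */
        1.402f, -0.714f,  0.0f,   0.0f,  /* column 2: Cr coefficients */
        0.0f,    0.0f,    0.0f,   1.0f   /* column 3: alpha unchanged */
    };

    /* The scale/bias stage runs before the color matrix, so use it to
     * remove the 0.5 bias from the chroma channels. */
    glPixelTransferf(GL_GREEN_BIAS, -0.5f);
    glPixelTransferf(GL_BLUE_BIAS,  -0.5f);

    glMatrixMode(GL_COLOR);  /* needs ARB_imaging / SGI_color_matrix */
    glLoadMatrixf(m);
    glMatrixMode(GL_MODELVIEW);
}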

Originally posted by Korval:
How do you know that the hardware supports it?

The Mesa extension I mentioned exists because current (and past) hardware supports YUV textures.

YCBCR textures are hardware-accelerated and supported on ATI cards (R128, R100 and R200 chipsets), Matrox G200/G400 and Intel i810/i830 when using the XFree86/DRI OpenGL drivers on Linux.

So this extension allows the card to use YUV textures without the driver or card needing to convert them to RGB first.
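
For what it’s worth, using the Mesa extension looks roughly like this (a sketch based on the spec linked above; the token values come from the spec, and which packed type matches UYVY vs. YUYV byte order is spelled out there - check the extension string for "GL_MESA_ycbcr_texture" first):

#include <GL/gl.h>

/* Tokens from the MESA_ycbcr_texture spec, in case glext.h lacks them. */
#ifndef GL_YCBCR_MESA
#define GL_YCBCR_MESA                  0x8757
#define GL_UNSIGNED_SHORT_8_8_MESA     0x85BA
#define GL_UNSIGNED_SHORT_8_8_REV_MESA 0x85BB
#endif

/* Upload one 4:2:2 frame directly, with no CPU-side RGB conversion.
 * Each pixel pair packs into 4 bytes, so width should be even. */
static void upload_ycbcr_frame(const void *pixels, int width, int height)
{
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_YCBCR_MESA,              /* internal format */
                 width, height, 0,
                 GL_YCBCR_MESA,              /* format */
                 GL_UNSIGNED_SHORT_8_8_MESA, /* packed Y/Cb/Cr bytes */
                 pixels);
}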

There’s more information about this in the dri-devel mailing-list archives.

Originally posted by al_bob:
If ARB_imaging or SGI_color_matrix is supported in hardware, then you can get the hardware to do the YCrCb -> RGB translation.

Does current PC hardware support ARB_imaging in hardware? How about SGI_color_matrix?

Even the ATI Rage 128 or Matrox G400 supports YUV textures (for example, with that Mesa extension).

The Mesa extension I mentioned exists because current (and past) hardware supports YUV textures.

Mesa is not hardware-accelerated. It is a software renderer. It can do whatever it wants.

YCBCR textures are hardware-accelerated and supported on ATI cards (R128, R100 and R200 chipsets), Matrox G200/G400 and Intel i810/i830 when using the XFree86/DRI OpenGL drivers on Linux.

First, who writes these drivers? They don’t look like the vendor-provided ones. As such, who knows what they could be doing wrong, or what combination of state could cause the implementation to fail or crash.

Korval,

Yes I know what Mesa is.

The DRI OpenGL drivers for XFree86/Linux are built on top of Mesa. They are hardware-accelerated (with supported hardware) using the DRI/DRM mechanism in XFree86 and the Linux kernel.

NVIDIA’s Linux drivers do not use Mesa or DRI. ATI’s Linux drivers use DRI, but not Mesa.

The open-source XFree86 (DRI) OpenGL drivers use both DRI and Mesa.

Open-source DRI OpenGL drivers exist for the ATI R128, R100 and R200, the Matrox G200/G400 and some other chipsets.

See http://dri.sourceforge.net for more information.

Hardware-accelerated support for the MESA_ycbcr_texture extension was recently added to the DRI drivers.

The DRI OpenGL drivers are not vendor-made. They are mostly written by the guys at Tungsten Graphics, IBM and other open-source developers.

There is a test program included with mesa-demos which can be used to check whether the hardware-accelerated YCBCR-texture extension runs OK or not. And it does run OK.

If you don’t believe me, you are free to download the code and try it yourself, or even read/modify the code, or ask the people who wrote the support into the drivers.

And if there are bugs in setting up some states, the drivers should (and will) be fixed.

The point was that current and past hardware supports YUV textures. The MESA_ycbcr_texture extension enabled in the DRI OpenGL drivers shows this.

There are also the GL_SGIX_ycrcb and GL_EXT_422_pixels extensions, which deal with YUV-type textures.

Note that for YUV textures to be used natively, the hardware needs a YUV-to-RGBA converter on each texture unit. This isn’t circuitry that’s cheap to build.

Using a fragment program to do that isn’t trivial, because you run into filtering issues: YUV-type colors don’t interpolate as nicely as RGB.

Most likely the hardware (or driver) performs the conversion at the front end. This isn’t too far off from using the color matrix.

The point was that current and past hardware supports YUV textures.

Actually, you haven’t proven that “current” hardware supports this. You don’t mention my R300, or any nVidia card at all. R200s are the most advanced thing you mention.

There’s already a Mesa version of this kind of thing at: http://www.mesa3d.org/MESA_ycbcr_texture.spec

Wow. That’s more limiting than *_texture_rectangle. That’s not even texturing; that’s blitting. There aren’t any per-fragment operations defined for them.

A fragment-program-based approach not only works, but is far superior to this extension. You can actually use the result as a texture, combined with other operations.
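
Roughly like this, for example (a sketch only, untested: it assumes the video has been upsampled to 4:4:4 and uploaded as an ordinary RGB texture with Y in R, Cb in G and Cr in B, and it uses the standard BT.601 coefficients):

#define GL_GLEXT_PROTOTYPES  /* Linux-style linkage; use *GetProcAddress elsewhere */
#include <GL/gl.h>
#include <GL/glext.h>

/* YCbCr -> RGB conversion as an ARB fragment program. */
static const char ycbcr_to_rgb_fp[] =
    "!!ARBfp1.0\n"
    "TEMP yuv, rgb;\n"
    "TEX yuv, fragment.texcoord[0], texture[0], 2D;\n"
    /* remove the 0.5 bias from the chroma channels */
    "SUB yuv.yz, yuv, {0.0, 0.5, 0.5, 0.0};\n"
    /* R = Y + 1.402 Cr; G = Y - 0.344 Cb - 0.714 Cr; B = Y + 1.772 Cb */
    "MAD rgb, yuv.y, {0.0, -0.344, 1.772, 0.0}, yuv.x;\n"
    "MAD rgb, yuv.z, {1.402, -0.714, 0.0, 0.0}, rgb;\n"
    "MOV result.color.xyz, rgb;\n"
    "MOV result.color.w, 1.0;\n"
    "END\n";

static void bind_ycbcr_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)(sizeof(ycbcr_to_rgb_fp) - 1), ycbcr_to_rgb_fp);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}

With real 4:2:2 data you would have to sample luma and chroma separately (or settle for GL_NEAREST), which is the filtering issue mentioned earlier.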

Originally posted by Korval:
Wow. That’s more limiting than *_texture_rectangle. That’s not even texturing; that’s blitting. There aren’t any per-fragment operations defined for them.

A fragment-program-based approach not only works, but is far superior to this extension. You can actually use the result as a texture, combined with other operations.

Well, the R200 is the newest chipset I mention because it’s the newest chipset family currently supported by the DRI OpenGL drivers. NVIDIA cards are not supported by the DRI drivers (because NVIDIA doesn’t give out specs for its hardware).

And why should the imaginary “ARB_ycbcr” thingy be as limiting as MESA_ycbcr_texture? Of course it could/should be more like normal RGB textures.

A fragment-program-based approach works, of course, but YUV textures are possible on old Matrox and ATI hardware where you definitely don’t have hardware-accelerated fragment programs.

I still think this would be a good idea.

Originally posted by Korval:
Wow. That’s more limiting than *_texture_rectangle. That’s not even texturing; that’s blitting. There aren’t any per-fragment operations defined for them.

A fragment-program-based approach not only works, but is far superior to this extension. You can actually use the result as a texture, combined with other operations.

I wrote the extension spec. The extension, as defined, works wonderfully. The idea was to specify texture images in the YCbCr format directly, thus exposing an R200 hardware feature.

From what you’ve written, it’s evident that you don’t know what you’re talking about.

-Brian

I wrote the extension spec. The extension, as defined, works wonderfully.

It may “work wonderfully”, but it is poorly written.

No mention is made as to how it aliases with per-fragment operations, or what the result of a texture fetch operation should even be. In fact, quite the contrary; observe:

There is no support for converting YCbCr images to RGB or vice versa. The intention is for YCbCr image data to be directly sent to the renderer without any pixel transfer operations.

This language implies that the result of a texture fetch is a YCbCr texel, which is not particularly useful to the rest of the pipe unless either:

1: it is converted into an RGB texel by the fragment processing
2: it is passed directly, as is, to the pixel pipeline, which understands YCbCr textures. Since performing an operation on a YCbCr texel mathematically isn’t anything like doing so on an RGB texel, most per-fragment operations on said texels are useless.
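
To make point 2 concrete, this is the sort of math (full-range BT.601, shown here purely for illustration) that has to happen somewhere - every output channel mixes Y with chroma, so RGB-style per-channel operations on a raw YCbCr texel are meaningless:

/* Full-range BT.601 conversion of one texel. */
static void ycbcr_to_rgb(float y, float cb, float cr,
                         float *r, float *g, float *b)
{
    cb -= 0.5f;  /* chroma is stored with a 0.5 bias */
    cr -= 0.5f;
    *r = y + 1.402f * cr;
    *g = y - 0.344f * cb - 0.714f * cr;
    *b = y + 1.772f * cb;
    /* results can land outside [0,1] and must be clamped:
     * the YCbCr cube is bigger than the RGB one */
}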

From your tone, it is evident that this extension does something that your spec doesn’t specify: that the result of a YCrCb texture fetch is an RGB texel (which makes the extension actually texturing rather than blitting). That needs to be explicitly stated somewhere.

Originally posted by Korval:
This language implies that the result of a texture fetch is a YCbCr texel, which is not particularly useful to the rest of the pipe unless either:

1: it is converted into an RGB texel by the fragment processing
2: it is passed directly, as is, to the pixel pipeline, which understands YCbCr textures. Since performing an operation on a YCbCr texel mathematically isn’t anything like doing so on an RGB texel, most per-fragment operations on said texels are useless.

From your tone, it is evident that this extension does something that your spec doesn’t specify: that the result of a YCrCb texture fetch is an RGB texel (which makes the extension actually texturing rather than blitting). That needs to be explicitly stated somewhere.

I think the problem here is that you’re looking at a slightly out-of-date version of the spec. Look at the Revision History at the end. Does it mention the 29 April 2003 update?

The latest spec clearly indicates that the YCbCr values are converted to RGB during the texel fetch operation. And because I don’t say otherwise, all the subsequent per-fragment operations occur normally.

But really, from YOUR tone, you jumped all over this extension saying how bad it is without fully understanding it (i.e.: “That’s more limiting than *_texture_rectangle. That’s not even texturing; that’s blitting.”). Sheesh, give me some credit.

Lots of people are interested in displaying YCbCr image data in an efficient manner with OpenGL. This extension allows one to do that on modern hardware. I can’t disclose the technical specs for modern hardware (NDA), so I can’t prove it (as you demand).

Indeed, today’s new fragment-programmable hardware does offer a nice, new way of doing YCbCr->RGB conversion in hardware, but a lot of people are still using older hardware that doesn’t support fragment programming but DOES support YCbCr texture images.

-Brian

Look at the Revision History at the end. Does it mention the 29 April 2003 update?

The link you posted does not.

How does your extension deal with filtering? Is the filtering done prior to or after the conversion to RGB? How do you handle results outside of the RGB range (the YCrCb color cube is larger than the RGB one)?

> How does your extension deal with filtering?

Filtering will be performed after conversion to RGB 4444. Of course, it may be the case that only GL_NEAREST is supported.

> It would be really nice to have an ARB (or EXT) version of a YCBCR-texture extension…

Irrespective of whether 422->444 conversion is supported in HW, no new extensions are needed - GL_EXT_422_pixels or GL_OML_subsample/resample should be adequate.
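
Roughly, e.g. (a sketch; the token values are the usual glext.h ones, and as I read the EXT_422_pixels spec it only does the 4:2:2 -> 4:4:4 unpacking - the color-space conversion still has to come from the color matrix or similar):

#include <GL/gl.h>

#ifndef GL_422_EXT
#define GL_422_EXT             0x80CC
#define GL_422_REV_EXT         0x80CD
#define GL_422_AVERAGE_EXT     0x80CE
#define GL_422_REV_AVERAGE_EXT 0x80CF
#endif

/* Draw a 4:2:2 image; the shared chroma of each pixel pair is either
 * replicated (GL_422_EXT) or averaged (GL_422_AVERAGE_EXT) during
 * unpacking.  Which byte is luma vs. chroma is defined in the spec. */
static void draw_422_frame(const void *pixels, int width, int height)
{
    glDrawPixels(width, height, GL_422_AVERAGE_EXT, GL_UNSIGNED_BYTE, pixels);
}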

Lastly, although it is possible to use a fragment program to achieve the same thing, it’s also overkill and limited. If you try it, you’ll find that on certain GPUs it only works with textures and not with glDrawPixels.

In summary, I agree that there is a good argument for asking vendors to expose their hardware 422->444 upsampling capabilities. But you probably won’t have much joy because the market is dominated by 3D games, and they don’t have much use for YCbCr textures.

Filtering will be performed after conversion to RGB 4444. Of course, it may be the case that only GL_NEAREST is supported.

Are you telling me that an R200 only filters at 16 bits per pixel? I seriously doubt that. Any conversion done in a texture unit will go to full 32-bit precision, if not higher.

Irrespective of whether 422->444 conversion is supported in HW, no new extensions are needed - GL_EXT_422_pixels or GL_OML_subsample/resample should be adequate.

That’s just forcing the driver writers to write code that could just as easily be written by the user. There’s no real point to it besides user convenience.

If you try it, you’ll find that on certain GPUs it only works with textures and not with glDrawPixels.

Then those GPUs aren’t following the fragment-program specifications (assuming the spec specifies that fragment programs work with glDrawPixels).

In summary, I agree that there is a good argument for asking vendors to expose their hardware 422->444 upsampling capabilities.

Who’s to say that hardware even has this functionality? Sure, the R200s do, but my bet is that the R300s, with their fragment programs, don’t. When asked to draw YCrCb images, they probably use fragment programs to do the RGB conversion. And nVidia hardware may or may not; there’s no info either way.

In short, the extension could be imposing a burden on hardware vendors.