Hardware accelerated video stream decoding in OpenGL

Hi all.
Some modern graphics cards have hardware accelerated decoding of MPEG-2 and MPEG-4 (H.264); I think nVidia calls this “PureVideo”. I would like to utilize this hardware decoding instead of doing the decoding on the CPU. I believe that MS DirectShow makes use of these hardware decoders, but is it possible to access them from OpenGL?

I have worked with OpenGL for many years now, but I have not seen any extension that will do this. My gut feeling is that this is not possible from OpenGL, but maybe someone knows some trick to make it work?

Thank you in advance.

Look into OpenML at khronos.org

Thank you V-man for your suggestion.

I’ve briefly looked at OpenML before, and it looks quite interesting. But I’m not sure how alive that project is; do you have any information on whether there are any (commercial) users of OpenML? I really need something that is supported and can be depended on. It seems that no one has answered a question on the OpenML forum for over a year, I guess that is a sign?

You can use dshow filter graphs, which will use hardware acceleration to decode the frames; all you need to do is add your own render filter that takes the decoded frame and uploads it to your texture. Obviously this involves a readback from the graphics card and then an upload back into the GL texture.
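
To make that more concrete, here is a minimal sketch of what the upload step inside such a custom render filter might look like, assuming the decoded frame arrives in system memory as packed 24-bit RGB. The function and texture names (UploadDecodedFrame, g_videoTex) are made up for illustration, not taken from any real filter.

// Hypothetical upload step for a custom render filter: copy the decoded
// frame into an existing GL texture. Assumes g_videoTex was created once
// with glGenTextures/glTexImage2D at the video resolution, and that the
// decoder delivers tightly packed 24-bit RGB.
#include <GL/gl.h>

GLuint g_videoTex = 0;   // created elsewhere, before playback starts

void UploadDecodedFrame(const unsigned char* rgbPixels, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, g_videoTex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // rows have no padding
    glTexSubImage2D(GL_TEXTURE_2D, 0,           // update the existing storage,
                    0, 0, width, height,        // no per-frame reallocation
                    GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);
}

For more throughput the copy could be streamed through a pixel buffer object, but a plain glTexSubImage2D into a pre-allocated texture is enough to get started.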

Just read a bit about OpenML, and it seems as if it never really took off after the spec was presented 6 years ago. It does not seem to be a safe choice.

Jan.

It is possible to use shaders to accelerate video stream decoding. Some examples:

Motion compensation is very easy to implement: only the previous frame has to be used as a texture for rendering.

YUV -> RGB is a simple shader with a few multiplications (a small example shader follows below).

IDCT: it is possible to implement the butterfly that transforms the DCT-encoded blocks in a few passes.

Scaling

Things that won’t run well with shaders:

  • Parsing the input stream
  • Huffman decompression
  • RLE decompression
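
To make the YUV -> RGB point above concrete, here is a minimal sketch of such a fragment shader (legacy GLSL, embedded as a C++ string). It assumes the Y, U and V planes have been uploaded as three separate luminance textures bound to the samplers texY, texU and texV; the uniform names and the BT.601 full-range coefficients are assumptions for the example, not something fixed by the codec.

// Hypothetical YUV -> RGB fragment shader (BT.601 full-range coefficients).
// Assumes the Y, U and V planes live in three GL_LUMINANCE textures.
const char* yuvToRgbFragmentShader =
    "uniform sampler2D texY;                                   \n"
    "uniform sampler2D texU;                                   \n"
    "uniform sampler2D texV;                                   \n"
    "void main()                                               \n"
    "{                                                         \n"
    "    float y = texture2D(texY, gl_TexCoord[0].st).r;       \n"
    "    float u = texture2D(texU, gl_TexCoord[0].st).r - 0.5; \n"
    "    float v = texture2D(texV, gl_TexCoord[0].st).r - 0.5; \n"
    "    vec3 rgb;                                             \n"
    "    rgb.r = y + 1.402 * v;                                \n"
    "    rgb.g = y - 0.344 * u - 0.714 * v;                    \n"
    "    rgb.b = y + 1.772 * u;                                \n"
    "    gl_FragColor = vec4(rgb, 1.0);                        \n"
    "}                                                         \n";

Sampling the chroma planes at a lower resolution than luma (as in 4:2:0 material) comes for free from the texture filtering, which is part of why this step maps so nicely onto the GPU.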

I’m with V-man on OpenML. Looks top notch, and it’s open source to boot.

If the sea looks calm and withdrawn it’s probably an indication of an impending tsunami, and this one looks gnarly. Get on board and ride the wave, just try to keep your Gimbals about you while all are losing theirs…

Edit: I need one of those context sensitive spell checkers…

Have a look at this old thread:
http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=3;t=013112#000013

Thank you all for the feedback. I’ve tried to contact Khronos about OpenML and will report back if I get an answer.

I’m aware of the possibility of building a dshow filter graph, but the challenge is to write a custom renderer that can output the decoded frame. Afaik, this is not trivial. “knackered”: have you done this? I can probably live with a read-back from graphics memory, as this is quite fast on modern graphics cards.

I’ve put a lot of thought into writing my own GPU-based encoder/decoder, as “oc2k1” suggests. Thank you for sorting out what is possible and what is not. The thing that worries me is: how much work is it to write a GPU-based encoder, and will I (and my team) be able to produce an encoder that is comparable with modern encoders? I’ve not seen any papers or articles describing GPU-based video compression. We are passing the video over the network, so I really would like to keep the video stream size low.

Thank you for the thread link, sqrt[1-], but I think it describes software decoding, right? Sadly the source code mentioned in the thread is no longer available, so I cannot be sure.

I think most of the examples were ways of hooking up DirectShow into OpenGL.

Just looked at my old dshow filter graph code…heck you’re right, it is not trivial - but I managed to do it. If I can do it, any old fool can do it.

Doing YUV to RGB conversion in shaders or displaying DirectShow buffers with OpenGL is all nice, but the heavy work is in the decoding itself. I wonder how one can utilize things like the dulcet-sounding “PureVideo HD” to do H.264 decoding on the GPU.