Video capture as underlay in an OpenGL window

I am writing a program to display captured video as an underlay in an OpenGL window. I have a hardware device that will DMA a video frame into either system memory or video card memory. I am trying to decide the best way to manage the image. So far the only way I see to use the image as an underlay in a window is to bind a buffer using GL_PIXEL_UNPACK_BUFFER, use glTexImage2D to transfer the pixel data to the video card, and finally use the texture on a rectangle that fills my window. My problem is that I need to do this at 60Hz, and I am concerned about transferring a new “texture” every frame.

Is there a way to have OpenGL use the frame image as an underlay directly? That way I could have the capture card DMA directly to the video card. Of course I would like to have two buffers, one to write to while OpenGL is drawing the other.

Any help is appreciated.

[QUOTE=advorak;1272123]I am writing a program to display captured video as an underlay in an openGL window. I have a hardware device that will DMA a video frame into either system memory or video card memory. … GL_PIXEL_UNPACK_BUFFER, and then use glTexImage2D …, and finally to use the texture on a rectangle… My problem is that I need to do this at 60Hz. I am concerned about transferring a new “texture” every frame.

Is there a way to have openGL use the frame image as an underlay directly? This way I can have the capture card DMA directly to the video card. [/QUOTE]

The options you have are going to depend on your system setup.
It sounds like what you need is something very similar to VDPAU.

Background questions:

What OS are you wanting to do this on?
What GPU (make/model) or GPUs?
What bus connects your GPU and CPU/main memory (e.g. PCIe version 3 x16)?
Does your GPU support hardware video decoding and post-processing (scaling, deinterlacing, etc.)?
What format is the video data in now?
Are you open to transcoding it to a format that will perform more optimally (if applicable)?
What is the max resolution of the color video stream?
What is the max bitrate of the compressed video stream?

I wouldn’t get caught up in the specific GL usage yet, because if/how GL is involved depends on specifics we don’t know yet. One minor correction though: you’re not going to call glTexImage2D every frame, as this would force a needless free and reallocation of the texel data storage. Allocate the texture once, then update it each frame with glTexSubImage2D.
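To make that concrete, here is a minimal sketch of the allocate-once / update-per-frame pattern. It assumes an OpenGL context is already current, an RGB8 1920x1080 stream, and the function names are illustrative; error checking is omitted.

```c
/* Sketch: allocate texel storage once, then stream frames into it. */
#include <GL/gl.h>

#define VID_W 1920
#define VID_H 1080

static GLuint video_tex;

/* One-time setup: glTexImage2D allocates the storage (no data yet). */
void video_tex_init(void)
{
    glGenTextures(1, &video_tex);
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, VID_W, VID_H, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

/* Per-frame update: glTexSubImage2D reuses the existing storage,
 * so the driver never frees and reallocates the texture. */
void video_tex_upload(const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VID_W, VID_H,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
```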

What OS are you wanting to do this on?

RedHat Linux

What GPU (make/model) or GPUs?

NVidia Quadro K5200

What bus connects your GPU and CPU/main memory (e.g. PCIe version 3 x16)?

PCIe 3.0 x16

Does your GPU support hardware video decoding and post-processing (scaling, deinterlacing, etc.)?

I don’t know if the GPU supports hardware video decoding. I need to capture four channels of video at 1920x1080 @ 60Hz.
I bought a specific card that captures the video and makes it available as an RGB array via the V4L2 API.
As to the second part, I can’t imagine the card doesn’t support post-processing. I bought one of the biggest/baddest cards NVidia offers.

What format is the video data in now?

RGB array.

Are you open to transcoding it to a format that will perform more optimally (if applicable)?

I’m open to suggestions.

What is the max resolution of the color video stream?

Source data into the capture card channels will be 1920x1080. The input side is fixed.
I have three outputs to support. They are at different resolutions.

What is the max bitrate of the compressed video stream?

Don’t have a compressed video stream.

Ok. A recent nVidia GPU (Kepler-class) on Linux – good. You’re in good shape there.

Is your video stream pre-captured and encoded, or is it a real-time captured video stream?

Just to do some estimating here… Suppose you did upload completely uncompressed video (e.g. RGB8, 24 bpp) at 1920x1080 @ 60Hz non-interlaced. That’s about 356 MB/sec just to upload to the GPU. That’s pretty hefty, but PCIe v3 x16 sports a theoretical bandwidth of 15.75 GB/s, so that’s not likely to be a limiter.

So you may be OK with uncompressed video, particularly if your capture card will DMA data directly into GPU buffers, and probably even if not. Check out nVidia’s “Optimizing Texture Transfers” paper for ideas on streaming video data to the GPU efficiently. There’s also a chapter on this in OpenGL Insights. Also search the forums here on opengl.org for “nvidia video streaming” tips.
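In the spirit of those references, streaming through a pair of ping-ponged pixel buffer objects usually looks something like the sketch below. It assumes a current GL context, an extension loader, and an already-allocated GL_RGB8 1920x1080 texture bound to GL_TEXTURE_2D; the names and buffer count are illustrative.

```c
/* Sketch: double-buffered PBO upload, so the transfer of frame N
 * overlaps with the GPU drawing frame N-1. */
#include <GL/glew.h>   /* or your extension loader of choice */
#include <string.h>

#define VID_W 1920
#define VID_H 1080
#define FRAME_BYTES (VID_W * VID_H * 3)

static GLuint pbo[2];
static int frame = 0;

void pbo_init(void)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, FRAME_BYTES, NULL,
                     GL_STREAM_DRAW);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

/* Call once per captured frame. */
void pbo_upload(const void *captured_pixels)
{
    int write_idx = frame & 1;      /* buffer the CPU fills this frame */
    int read_idx  = 1 - write_idx;  /* buffer the GPU reads this frame */

    /* Kick off the texture update from last frame's PBO; with a PBO
     * bound, the pointer argument is an offset, so NULL means offset 0. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[read_idx]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VID_W, VID_H,
                    GL_RGB, GL_UNSIGNED_BYTE, NULL);

    /* Meanwhile, fill the other PBO. Orphan its old storage first so
     * we don't stall if the GPU is still reading it. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[write_idx]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, FRAME_BYTES, NULL, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, captured_pixels, FRAME_BYTES);  /* or DMA target here */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    ++frame;
}
```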

You might also read up on nVidia’s VDPAU API, which is supported on Linux. It’s used by MythTV, so you can check out its source for example code; I’m sure there are other free packages that use it as well. VDPAU supports real-time video decoding (e.g. H.264 High 4.1, VC-1 Advanced 3, or MPEG-2 MP@HL), post-processing, and playback on the GPU. You’d use it if your video were pre-encoded, to save bandwidth shoveling the stream to the GPU, though it supports completely uncompressed video as well. However, using it is going to tie your solution a bit more closely to Linux than just using raw OpenGL.

Is your video stream pre-captured and encoded, or is it a real-time captured video stream?

It will be a real time captured stream. I have an image generator creating the visual scene in real time. I simply have to pick the video stream(s) used by the operator. If an overlay mode is selected, I may need to blend two of the video streams.
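If fixed-function blending is enough for that overlay mode, mixing the second stream over the first can be sketched as below. This assumes both streams are already in textures drawn as fullscreen quads; the 0.5 mix factor is purely illustrative.

```c
#include <GL/gl.h>

/* After drawing stream A's quad, draw stream B's quad with a constant
 * mix factor. glBlendColor is core since OpenGL 1.4. */
void draw_overlay_pass(void)
{
    glEnable(GL_BLEND);
    glBlendColor(0.0f, 0.0f, 0.0f, 0.5f);   /* illustrative 50/50 mix */
    glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
    /* ...bind stream B's texture and draw the fullscreen quad... */
    glDisable(GL_BLEND);
}
```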

Thank you for your insights. I will read up on your suggestions.