Using the VRAM of a capture card

02-07-2002, 12:50 PM
Can the VRAM of a capture card be accessed from OpenGL to render 3D elements into it?

The goal, again, is to render on top of real-time video frames (see previous postings).

Thanks again !

02-07-2002, 03:03 PM
A capture card typically doesn't have a framebuffer (VRAM), but instead will DMA into system RAM.

GL surfaces typically don't come with physical addresses that would make them available as DMA targets.

Thus, the best you can do with "normal" OpenGL is to DMA from the capture card into some area of system memory, which you upload as a texture every frame. Instead of clearing the screen to black or whatever, you draw this texture across the framebuffer, and then draw all the GL you want on top of it.
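As a sketch of that per-frame path: assuming the capture driver has already DMAed a 720x486 RGB frame into `frameBits`, and `videoTex` is a previously created 1024x512 texture (both names are mine, for illustration; window and context setup omitted), the loop body might look roughly like this:

```cpp
// Hedged sketch: frameBits and videoTex are assumed to exist already;
// this shows only the upload-and-draw-background step.
void drawVideoBackground(GLuint videoTex, const unsigned char* frameBits)
{
    glBindTexture(GL_TEXTURE_2D, videoTex);
    // Upload the 720x486 frame into the lower-left corner of the
    // 1024x512 texture (power-of-two sizes for hardware of this era).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 720, 486,
                    GL_RGB, GL_UNSIGNED_BYTE, frameBits);

    glEnable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);   // background quad; no depth needed

    // Only 720/1024 x 486/512 of the texture holds real pixels.
    const float maxU = 720.0f / 1024.0f;
    const float maxV = 486.0f / 512.0f;

    glBegin(GL_QUADS);          // full-screen quad replaces glClear of color
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(maxU, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(maxU, maxV); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, maxV); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glEnable(GL_DEPTH_TEST);
    // ...then draw the rest of the 3D scene over the video...
}
```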

As far as I can tell, the Matrox G400 used in the RT2000/RT2500 video editing setup relies on special driver knowledge to transfer data more efficiently between capture, DV decode, rendering, and DV encode.

Anyway, once you have the image as a texture, it's easy to do rolls, fades, whatever. And the upload bandwidth of AGP 4x (and even AGP 2x) is high enough that uploading one frame of video per rendered frame (typically with glTexSubImage2D on a 720x486 rectangle into a 1024x512 texture) really isn't that big of a deal.

It's rather harder to get the video capture hardware to cooperate without making extra copies of the data. With DirectShow video capture, however, you can set things up so that you get direct access to the bits in the buffer, and if the driver is good, it will put the bits there directly, so with a little sweat you can make it all work.
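One DirectShow route to the raw bits is the SampleGrabber filter: insert it into the capture graph and either poll GetCurrentBuffer once per rendered frame, or register a callback that hands you each sample as it arrives. A heavily abridged sketch, assuming COM is initialized and the filter graph around the capture device is already built; all error handling is omitted:

```cpp
// Abridged sketch (DirectX 8-era SDK); HRESULTs deliberately unchecked.
#include <dshow.h>
#include <qedit.h>   // ISampleGrabber

void hookSampleGrabber(IGraphBuilder* pGraph, IBaseFilter** ppGrabberFilter)
{
    CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)ppGrabberFilter);
    pGraph->AddFilter(*ppGrabberFilter, L"Grabber");

    ISampleGrabber* pGrabber = NULL;
    (*ppGrabberFilter)->QueryInterface(IID_ISampleGrabber,
                                       (void**)&pGrabber);

    // Ask for uncompressed RGB so the buffer can feed glTexSubImage2D.
    AM_MEDIA_TYPE mt = {0};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype   = MEDIASUBTYPE_RGB24;
    pGrabber->SetMediaType(&mt);

    // Buffer each sample so it can be copied out once per rendered frame.
    pGrabber->SetBufferSamples(TRUE);
    // Later, per frame:  long cb = bufferSize;
    //                    pGrabber->GetCurrentBuffer(&cb, (long*)pixels);
    pGrabber->Release();
}
```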

02-08-2002, 11:20 AM
You may want to check my answer in this earlier thread: http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/007276.html

Some examples of rendering an AVI onto an OpenGL object have been posted on this site; do a search from the main page.

There are different approaches you can take to give you the results you want.

First is the hardware approach: you use a genlock-type device, which overlays one video source on another. Your source video would be overlaid by the video output from your computer.
I am not sure, but the ATI All-in-Wonder card might work for this, since it has both video inputs and outputs plus built-in video processing; you could contact ATI about it.

Second is the all-software approach: decode the video stream into individual frames, then render each frame as the background of your OpenGL scene.

Also, do you want to send the doctored frames back out in a video format or in a streaming format?

If you are using some type of streaming format, you will also have to deal with the security bits in the stream, but that is a whole other issue.