Getting a device context for the back buffer

Let me explain what I’m trying to do, since I’m not even sure this is the best way to do it.
I’m working in Win32. I’m capturing video frames from an attached camera and using each frame as the background in my OpenGL window. Right now I do this by grabbing the image, getting the DC associated with the current OpenGL rendering context (via wglGetCurrentDC()), BitBlt-ing the image into that DC, and then drawing my OpenGL objects.

When I do this with double buffering, I get flickering: it seems I’m BitBlt-ing into the front buffer, which gets replaced when I call SwapBuffers. With only a single buffer it works, but I get all the typical flickering associated with single buffering. So it seems the right thing to do is to double buffer and make sure I BitBlt into the back buffer.

Hence my question: how do I get a DC for the back buffer?

I’m also open to suggestions for a better way to do this.

Thanks
-Rob

Mixing OpenGL and GDI calls is not generally a good idea.

glDrawPixels blits an image into the buffer you specify with glDrawBuffer, and will cooperate with other OpenGL calls much better.
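
Something along these lines is what I mean. This is just an untested sketch: it assumes a double-buffered pixel format is already current, that the camera hands you a tightly packed BGRA frame, and the function and parameter names (DrawBackground, frameData) are made up for illustration:

#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1   /* from the EXT_bgra extension (core GL_BGRA in 1.2) */
#endif

void DrawBackground(int width, int height, const void *frameData)
{
    glDrawBuffer(GL_BACK);   /* blit into the back buffer, not the front */

    /* Put the raster position at the lower-left corner of the window. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glRasterPos2i(0, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    /* If the camera frame arrives top-down, glPixelZoom(1, -1) with the
       raster position on the top row flips it. */
    glDrawPixels(width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, frameData);

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    /* ...now draw the OpenGL objects, then SwapBuffers(hdc)... */
}

Because glDrawPixels goes through the current rendering context, the image lands in whichever buffer glDrawBuffer selects, so the background and your geometry get composed together in the back buffer and appear at the same SwapBuffers, which avoids the flicker.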

Mike

The current version of the code actually does use glDrawPixels, but it’s extremely slow on the PC compared to using BitBlt, on the order of 10 times slower. (It’s fine on the O2s we’ve been running on.)

Originally posted by Rob:
The current version of the code actually does use glDrawPixels, but it’s extremely slow on the PC compared to using BitBlt, on the order of 10 times slower. (It’s fine on the O2s we’ve been running on.)

Yup, this sounds familiar. What video-cards are you using? Both glReadPixels and glDrawPixels are notoriously unoptimised on many consumer cards - it’s a catch-22 situation where nobody uses them because they’re unoptimised and nobody optimises them because they’re not widely used. Take a look inside SGI’s Sample Implementation and you’ll see where all that time is going.

Is this for a commercial product? If so, can you get away with restricting the set of supported/recommended cards?

If it’s for an in-house tool, you can try a variety of things to get performance up to an acceptable level on the cards you have.
The first thing to try is messing with the format in which you pass in your images; if you find one that’s considerably faster, switch to that and do the pixel-format conversion yourself (there’s a rough timing sketch below).
If the card supports suitably-sized textures, doing a glTexSubImage2D followed by rendering a full-screen quad may give OK results.
Finally, and this one is a real wild stab in the dark, you might be able to do something with overlay or underlay planes. I know very little about the subject, so I only mention it as a possibility.
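
For the format experiment, a rough and untested timing harness might look like the following. TimeOneFormat and CompareFormats are made-up helper names, and the formats listed are just a starting point; the idea is simply to measure a few format/type combinations on your own card:

#include <windows.h>
#include <stdio.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

/* Average the cost of several glDrawPixels calls for one format/type pair. */
static double TimeOneFormat(GLenum format, GLenum type,
                            int w, int h, const void *pixels)
{
    LARGE_INTEGER freq, t0, t1;
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (i = 0; i < 50; ++i) {
        glRasterPos2i(0, 0);
        glDrawPixels(w, h, format, type, pixels);
    }
    glFinish();                     /* wait until the driver is really done */
    QueryPerformanceCounter(&t1);

    return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart / 50.0;
}

void CompareFormats(int w, int h, const void *bgraPixels)
{
    /* Colours will look wrong when the format doesn't match the data,
       but for timing purposes that doesn't matter. */
    printf("BGRA/ubyte: %f s per blit\n",
           TimeOneFormat(GL_BGRA_EXT, GL_UNSIGNED_BYTE, w, h, bgraPixels));
    printf("RGBA/ubyte: %f s per blit\n",
           TimeOneFormat(GL_RGBA, GL_UNSIGNED_BYTE, w, h, bgraPixels));
    /* Add GL_RGB, GL_LUMINANCE, packed 16-bit types, etc. with suitably
       converted buffers to widen the search. */
}

Whichever combination comes out fastest, you can convert the camera frames into that layout on the CPU before handing them to glDrawPixels.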

Hope this helps,
Mike


We are currently using an ASUS v6800 deluxe. This card is great for us because it has very good video in/out features as well as overall good OpenGL performance.

Is there some particular pixel format which is typically faster than the others? Right now I believe we pass it in the BGRA format that comes from the camera.

I’ll try doing it as a texture.

Thanks for the suggestions.

Originally posted by Rob:
We are currently using an ASUS v6800 deluxe

That’s a GeForce card, right? Try getting the latest nVidia reference drivers; I remember getting new (beta) GeForce drivers a while ago and getting a massive improvement with glReadPixels, so they may well have done glDrawPixels too.

Mike F

I may try the reference drivers just out of my own curiosity, to see whether glDrawPixels performance is improved, but this isn’t a viable solution for us since the reference drivers won’t support the video in/out functions of the card. Nice thought, though.
(I’m about to walk over to my coworker right now and suggest she try using a texture map, so we’ll see how that goes.)

In my experience, textures have given faster rendering than glDrawPixels. Try making a texture from each frame in the stream and drawing a large quad for the background.
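
To make that concrete, here is a hedged sketch of the texture approach: allocate a texture once, upload each camera frame into it with glTexSubImage2D, and draw it on a window-filling quad before the rest of the scene. The names and sizes (backgroundTex, TEX_W, TEX_H, the two functions) are invented for illustration, and it assumes the card handles 512x512 RGBA textures and the EXT_bgra extension:

#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

#define TEX_W 512
#define TEX_H 512

static GLuint backgroundTex;

/* Call once, with a rendering context current. */
void InitBackgroundTexture(void)
{
    glGenTextures(1, &backgroundTex);
    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Allocate storage once; the frames are uploaded into it later. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEX_W, TEX_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

/* Call every frame, before drawing the rest of the scene. */
void DrawBackgroundFrame(int frameWidth, int frameHeight, const void *frameData)
{
    float s = (float)frameWidth  / TEX_W;   /* portion of the texture in use */
    float t = (float)frameHeight / TEX_H;

    glBindTexture(GL_TEXTURE_2D, backgroundTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, frameData);

    glEnable(GL_TEXTURE_2D);
    glDepthMask(GL_FALSE);                  /* background shouldn't write depth */
    glColor3f(1.0f, 1.0f, 1.0f);            /* avoid tinting with GL_MODULATE */

    /* Draw a quad that fills the window in a simple 0..1 projection. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(s,    0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(s,    t);    glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, t);    glVertex2f(0.0f, 1.0f);
    glEnd();

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glDepthMask(GL_TRUE);
    glDisable(GL_TEXTURE_2D);
}

Uploading into an existing texture with glTexSubImage2D is usually much cheaper than re-creating it with glTexImage2D every frame, and only the sub-rectangle actually covered by the camera image gets mapped onto the quad.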