GL access to desktop pixels, Windows vs. Linux

Hi guys,

I have a project I am working on, and I’m having trouble getting something to work on the Linux side the same way that it does on the Windows side. Here is what I’m trying to do:

Create an OpenGL context that can access the pixels in the front buffer of the desktop / root window. Think “screenshot into a texture”. On Windows XP I can accomplish this by creating a fullscreen window that isn’t visible; the GL context can then access the pixels with glCopyTexSubImage2D, etc. Roughly, the GL side looks like the sketch below (window and context creation omitted; grab_front_buffer is just an illustrative name):
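```c
#include <GL/gl.h>

/* Sketch of the copy step only: assumes a GL context is already current
 * on the hidden fullscreen window covering the desktop. */
GLuint grab_front_buffer(int screenW, int screenH)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenW, screenH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);    /* allocate storage only  */

    glReadBuffer(GL_FRONT);                           /* read the front buffer  */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,             /* copy screen -> texture */
                        0, 0,                         /* texture offset         */
                        0, 0, screenW, screenH);      /* screen region          */
    return tex;
}
```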

I’m trying to accomplish the same thing on Linux, and so far I haven’t had any luck. Any suggestions?

Performance is important, and I need it on the GL side of the fence, so using some other method to copy to system memory and then upload back to the card is not an option.

-Steve

Are you sure it works on Windows? It shouldn’t, actually… this is absolutely undocumented behaviour, and you are just lucky if the driver happens to play along.

Basically, there is no way to accomplish what you want with OpenGL.

Yes, I’m sure it works on Windows. Several “make a video of your game” apps use this method, but you are probably right that it’s “not supposed to work” – I haven’t found any official / documented discussion of this usage.

-Steve

I’m absolutely certain that won’t work under Vista. However, I can understand why it works under XP.

On Windows, the proper way to do it is to get the DC of the desktop with GetDC(NULL) and then you can use GetDIBits to read the entire framebuffer.
That’s the official way to do it and I imagine it would work with Vista as well.
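Something along these lines (just a sketch, error handling omitted; grab_desktop is only an illustrative name). Note that it lands the pixels in system memory, so you’d still have to upload them with glTexSubImage2D afterwards, which is exactly the extra copy you said you want to avoid:

```c
#include <windows.h>
#include <stdlib.h>

/* Sketch of the GDI path: GetDC(NULL) + BitBlt + GetDIBits.
 * Returns a malloc'd 32-bit BGRA buffer (top-down rows); caller frees. */
void *grab_desktop(int *outW, int *outH)
{
    HDC screen = GetDC(NULL);                        /* DC for the whole screen */
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC     mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);

    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);  /* copy screen -> bitmap */
    SelectObject(mem, old);                          /* deselect before GetDIBits */

    BITMAPINFO bi = {0};
    bi.bmiHeader.biSize        = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth       = w;
    bi.bmiHeader.biHeight      = -h;                 /* negative = top-down rows */
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    void *pixels = malloc((size_t)w * h * 4);
    GetDIBits(mem, bmp, 0, h, pixels, &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);

    *outW = w;
    *outH = h;
    return pixels;
}
```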

On the Linux side, I’m pretty sure this is possible with recent Xorg versions using the Composite extension together with the GLX extension GLX_EXT_texture_from_pixmap, and it is really speedy. I would look into these; I haven’t tried it myself. Composite managers such as compiz blend transparent elements of windows into the root window (and other windows) with no problem, and other compiz effects such as the desktop cube render the entire desktop in realtime onto the faces of a cube. From the extension spec, the binding would look roughly like the untested sketch below (FBConfig matching and error handling are cut down, bind_pixmap_to_texture is just an illustrative name, and the pixmap would come from something like XCompositeNameWindowPixmap on a redirected window):
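```c
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glxext.h>

/* Untested sketch: wrap an X pixmap in a GLX pixmap and bind it to a
 * GL texture via GLX_EXT_texture_from_pixmap (no copy through system memory).
 * Assumes a GLX context is already current. */
GLuint bind_pixmap_to_texture(Display *dpy, Pixmap pixmap)
{
    /* Extension entry points must be fetched at runtime. */
    PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
            (const GLubyte *)"glXBindTexImageEXT");

    /* Pick an FBConfig that can be bound as an RGB texture. */
    const int fb_attribs[] = {
        GLX_BIND_TO_TEXTURE_RGB_EXT, True,
        GLX_DRAWABLE_TYPE,           GLX_PIXMAP_BIT,
        None
    };
    int n = 0;
    GLXFBConfig *configs =
        glXChooseFBConfig(dpy, DefaultScreen(dpy), fb_attribs, &n);

    /* Declare the GLX pixmap as bindable to a 2D RGB texture. */
    const int pix_attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGB_EXT,
        None
    };
    GLXPixmap glx_pixmap = glXCreatePixmap(dpy, configs[0], pixmap, pix_attribs);

    /* Bind the pixmap contents to a GL texture. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glXBindTexImageEXT(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);

    XFree(configs);
    return tex;   /* release later with glXReleaseTexImageEXT */
}
```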

On Windows, the proper way to do it is to get the DC of the desktop with GetDC(NULL) and then you can use GetDIBits to read the entire framebuffer.
That’s the official way to do it and I imagine it would work with Vista as well.

It will at least be slow under Vista (slower than under XP), and it won’t work if the background comes from a fullscreen-exclusive 3D rendering, which doesn’t mix with GDI rendering because that happens in a totally different surface.
Video surfaces won’t be grabbed either (video overlay or DRM).
BitBlt-ing screen areas of 3D windows has the same problem: GDI surfaces are disjoint from 3D surfaces, so you’ll get undefined data, grey crap most of the time (probably the default window background color).

Try this: on Vista with Aero on, enable the accessibility tool Magnifier, which shows a zoomed-in area of the screen, then start an animated windowed 3D app (OpenGL or DX9, doesn’t matter) and move the Magnifier over the 3D window. You’ll see… right, either nothing or at least no animation. Great job, Microsoft! GDI is royally screwed under Vista.