grabbing the entire screen

i want to grab the entire content of the actual screen [not just the current window] and use it as a texture. is this possible with opengl, and how good is the performance?
thx in advance, floww

I’ve got the code for it at work. It’s a lot of fun to goof w/ the screen. You can do some pretty neat things. Just - be careful running it at 1600x1200x32 - only the best cards will even deal w/ that.

Tomorrow night I’ll post it. Basically, you create a screen DC using CreateDC( 0,"DISPLAY",0,0 ), create a compatible DC from that screen DC, create a DIB section that matches the screen, select the DIB into the compatible DC, then BitBlt from the screen DC to the DIB’s DC. The screen contents will then be accessible as the DIB’s bits.

Like I said, I’ll post the code here tomorrow when I get home from work, if I remember.

Good luck!

thanks a lot.

Here goes:

void GrabScreenRip( int* DestWidth,int* DestHeight,int* DestBytesPerPixel,void** DestBits ) throw( std::exception )
{
HDC hScreenDC = 0;
HBITMAP hScreenBmp = 0;
HDC hDCCopy = 0;
HGDIOBJ hOriginalBitmap = 0;
try
{
    // gain access to screen contents
    hScreenDC = GetDC( 0 );
    if( !hScreenDC )
        throw std::runtime_error( "Unable to gain direct screen access." );

    int Width = GetSystemMetrics( SM_CXSCREEN );
    int Height = GetSystemMetrics( SM_CYSCREEN );
    int BitsPerPixel = GetDeviceCaps( hScreenDC,BITSPIXEL );
    int BytesPerPixel = BitsPerPixel >> 3;

    // setup format for screen DIB
    int Size = Width*Height;
    BITMAPINFOHEADER bmih =
    {
        sizeof( bmih ),
        Width,-Height,
        1,(WORD)( BitsPerPixel ),
        BI_RGB,Size,
        0,0,0x100,0x100
    };
    std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
    BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );
    pbmi->bmiHeader = bmih;
    memset( pbmi->bmiColors,0,0x100*sizeof( RGBQUAD ) );

    // create main screen DIB
    void* ScreenBits = 0;
    hScreenBmp = CreateDIBSection( hScreenDC,pbmi,DIB_RGB_COLORS,&ScreenBits,0,0 );
    if( !ScreenBits )
        throw std::runtime_error( "Unable to copy screen data." );

    unsigned NumBytes = Size*BytesPerPixel;
    memset( ScreenBits,0,NumBytes );

    // select DIB into an offscreen memory DC
    hDCCopy = CreateCompatibleDC( hScreenDC );
    hOriginalBitmap = SelectObject( hDCCopy,hScreenBmp );

    // grab actual screen rip
    if( !BitBlt( hDCCopy,0,0,Width,Height,hScreenDC,0,0,SRCCOPY ) )
        throw std::runtime_error( "Unable to blit screen data." );

    // copy out bitmap bits
    void* Bits = new unsigned __int8[NumBytes];
    if( !Bits )
        throw std::bad_alloc();
    memcpy( Bits,ScreenBits,NumBytes );

    // update source params
    *DestWidth = Width;
    *DestHeight = Height;
    *DestBytesPerPixel = BytesPerPixel;
    *DestBits = Bits;

    // cleanup
    SelectObject( hDCCopy,hOriginalBitmap );
    DeleteDC( hDCCopy );
    DeleteObject( hScreenBmp );
    ReleaseDC( 0,hScreenDC );
}
catch( std::exception& )
{
    // failure - clean up any allocated resources
    if( hScreenDC )
        ReleaseDC( 0,hScreenDC );

    if( hDCCopy )
    {
        if( hOriginalBitmap )
            SelectObject( hDCCopy,hOriginalBitmap );

        DeleteDC( hDCCopy );
    }

    if( hScreenBmp )
        DeleteObject( hScreenBmp );

    throw;
}

}
//---------------------------------------------------------------------------

[edit 1] D’OH! I forgot the board strips out my formatting. With the blank lines gone it looks a lot worse than it really is - each comment starts a block of code, so mentally add a blank line just before each comment to break it into blocks.

Like I said before:

  1. Get the screen DC (via GetDC( 0 ) rather than CreateDC( 0,"DISPLAY",0,0 )).
  2. Create a DIB section matching the screen’s dimensions and color depth.
  3. Create an offscreen DC that matches the format of the screen DC.
  4. Select the DIB into the DC.
  5. Bitblt from the screen’s DC to the DIB’s DC.

A copy of the screen’s contents is then accessible via the bitmap bits pointer returned by CreateDIBSection.

  6. Cleanup.

I ripped this code from my general library. Normally I use resource wrappers for DCs and HBITMAPs (similar to std::auto_ptr), but that stuff’s not really part of the question. I wouldn’t normally duplicate cleanup code like I did here, but this is just a quick rip to get the point across.
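
Just to give a flavor of what I mean by a wrapper, here’s a rough, untested sketch for the screen DC (the class name and layout are invented on the spot, not lifted from my library):

#include <windows.h>
#include <stdexcept>

// RAII wrapper for the screen DC: acquires it in the constructor and
// releases it in the destructor, so the manual catch-block cleanup goes away.
class ScreenDC
{
public:
    ScreenDC() : m_hDC( GetDC( 0 ) )
    {
        if( !m_hDC )
            throw std::runtime_error( "Unable to gain direct screen access." );
    }
    ~ScreenDC() { ReleaseDC( 0,m_hDC ); }
    operator HDC() const { return m_hDC; }
private:
    ScreenDC( const ScreenDC& );               // not copyable
    ScreenDC& operator=( const ScreenDC& );
    HDC m_hDC;
};

The same idea works for the compatible DC and the HBITMAP; with all three wrapped, most of the try/catch bookkeeping above disappears.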

Anyways, good luck screen ripping!

As an aside, does anyone else have other ways of pulling this off? This way is relatively fast, but it’s slow enough that I don’t mind using exceptions in it, because I’d never, ever call this when frame rates are important.

Thank you for your bandwidth,
– Succinct

[This message has been edited by Succinct (edited 08-18-2003).]

thanks for the code. it works fine except for these lines:

std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );

(Using VS.NET)

I rewrote them as a simple malloc. I don’t know if this is valid, but at least the code compiles.
I get a maximum of 10 fps, which is too low for my application.

I want to use the screen content as a texture for mapping onto a sphere. It should work in real time.

Does anyone have an idea how to realize such an application (similar to the NVIDIA keystone correction tool -> http://www.nvidia.com/object/feature_nvkeystone.html)?

Or how NVIDIA might have done it?

thx in advance, floww

  1. You have to include <vector> - why you’d use malloc I’ll never know, unless you’re going embedded, and you can’t be, because you’re running a Windows OS.
  2. As I said in the last post: This will not run in real time (“I’d never, ever call this when frame rates are important.”).

There are three main contributing factors as to why this method won’t run in real time. The first is that you’re using the Windows GDI. The second is the sheer transfer volume of the BitBlt: moving the contents of the entire screen back across the PCI bus to main system memory is a huge amount of data, and the copy is most likely done by the processor. The last is that in order to use it as a texture, you have to send it back down to the graphics hardware, so now you’re limited by AGP transfer bandwidth as well. Further complicating the issue, if you’re running at larger than 1024x768, only the real high-end cards will support the texture unless you break it up, which means more uploads. The benefit of breaking up, say, 1024x768 is that you can split it into a 4x3 grid of 256x256 textures instead of black-padding the 1024x768 image up to 1024x1024.

The only way to make this run in real time under Windows would be to circumvent the round-trip transfers to the CPU/system memory. I don’t know how to get at the frame buffer contents of other applications in AGP memory, though… Maybe the NVIDIA guys know some hacks to get it? I’m sure it won’t be anything clean, portable, or safe.

The only thing I can suggest is to grab it once, tile it into some textures, then manipulate them. This is how I demoed breaking the screen, as in FFX just before you get into a battle.
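
If you want to go that route, here’s a rough, untested sketch of tiling a 32-bit grab from GrabScreenRip into the 4x3 grid of 256x256 textures mentioned above (Width, Height, BytesPerPixel and Bits are the values the function hands back; I’m assuming BytesPerPixel is 4, the screen is 1024x768, and your GL headers define GL_BGRA_EXT, since the DIB data comes out as BGRA):

const int TileSize = 256;
const int TilesX   = Width  / TileSize;    // 4 at 1024
const int TilesY   = Height / TileSize;    // 3 at 768

GLuint Textures[4*3];                      // sized for the 1024x768 case
glGenTextures( TilesX*TilesY,Textures );

// let GL read each tile straight out of the full-width image
glPixelStorei( GL_UNPACK_ROW_LENGTH,Width );

for( int ty = 0; ty < TilesY; ++ty )
{
    for( int tx = 0; tx < TilesX; ++tx )
    {
        const unsigned char* Tile = static_cast<const unsigned char*>( Bits )
            + ( ty*TileSize*Width + tx*TileSize )*BytesPerPixel;

        glBindTexture( GL_TEXTURE_2D,Textures[ty*TilesX + tx] );
        glTexParameteri( GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR );
        glTexParameteri( GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR );

        // the DIB is top-down, so each tile comes out vertically flipped in
        // GL - compensate in your texture coordinates (or flip the rows)
        glTexImage2D( GL_TEXTURE_2D,0,GL_RGB,TileSize,TileSize,0,
                      GL_BGRA_EXT,GL_UNSIGNED_BYTE,Tile );
    }
}

glPixelStorei( GL_UNPACK_ROW_LENGTH,0 );

You then draw the sphere (or the screen shards) one patch per tile, with the matching texture bound.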

Good luck! Let me know if you get any other ideas.

– Succinct

[This message has been edited by Succinct (edited 08-19-2003).]

just a thought.
you could, in your initialise routine, set the pixel format of the DC you get via screenDC = GetDC(0) to the same format you used to create your OpenGL context, then simply wglMakeCurrent(screenDC, glrc), set a viewport etc., and glCopyTexSubImage the framebuffer into your texture, then wglMakeCurrent(myWindowDC, glrc) to render with it.
I’m sure there are reasons why this won’t work, but it’s just a bit of a laugh.
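
In code, the idea would be something along these lines - completely untested, and myPixelFormat, myPfd, glrc, myWindowDC, ScreenTexture, TexWidth and TexHeight are just placeholders:

// speculative - WGL may simply refuse to do any of this on the screen DC
HDC screenDC = GetDC( 0 );
SetPixelFormat( screenDC,myPixelFormat,&myPfd );    // same format as the GL window

if( wglMakeCurrent( screenDC,glrc ) )
{
    // with the context "on the screen", copy its framebuffer into a texture
    glBindTexture( GL_TEXTURE_2D,ScreenTexture );
    glCopyTexSubImage2D( GL_TEXTURE_2D,0,0,0,0,0,TexWidth,TexHeight );
}

wglMakeCurrent( myWindowDC,glrc );                  // back to normal rendering
ReleaseDC( 0,screenDC );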

i have included <vector>, but the problem is not the vector itself:
error C2440: ‘reinterpret_cast’ : cannot convert from ‘std::vector<_Ty,_Ax>::iterator’ to ‘BITMAPINFO *’
with
[
_Ty=unsigned char,
_Ax=std::allocator<unsigned char>
]

what’s wrong here?

Isn’t it unsafe to assume that vector::begin() will point to contiguous memory? The whole concept of implementation independent containers…

Originally posted by CatAtWork:
Isn’t it unsafe to assume that vector::begin() will point to contiguous memory? The whole concept of implementation independent containers…

it’s safe to assume in every implementation that it is a contiguous memory chunk. it will even be officially guaranteed in the next release of the c++ standard.

and, as he wants to use it as a texture, use glCopyTexSubImage… very, very fast and simple.
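
a minimal sketch of what i mean (untested; Tex, WinWidth and WinHeight are placeholders, and the texture has to be allocated big enough first, power-of-two on older cards):

// one-time setup: allocate an empty power-of-two texture to copy into
glBindTexture( GL_TEXTURE_2D,Tex );
glTexImage2D( GL_TEXTURE_2D,0,GL_RGB,1024,1024,0,GL_RGB,GL_UNSIGNED_BYTE,0 );
glTexParameteri( GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR );

// every frame: render, then copy the framebuffer into the texture - the
// pixels never leave the card, so it's fast. note this only grabs your own
// GL framebuffer, not the rest of the desktop.
glBindTexture( GL_TEXTURE_2D,Tex );
glCopyTexSubImage2D( GL_TEXTURE_2D,0,0,0,0,0,WinWidth,WinHeight );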

Ugh. Looked it up. Requiring that it be contiguous feels wrong. I’m not a standards author, however. No more OT posts, I promise.

std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );

Good God, man! What’s wrong with the good old-fashioned new operator? Or is that too old school for you? And what makes you so sure that the iterator is a straight C pointer? It could be a wrapper class and still conform to the standard. Perhaps this is why bebedizo’s compiler won’t let him reinterpret_cast it. And even if you could be absolutely sure that it works on every implementation of the STL, what’s the advantage over using new (or malloc or calloc, for that matter)? Are you afraid someone else might actually understand what your code is trying to do?

Whew, glad I got that off my chest.

aaron: advantage: safety - there will never be any memory leak, even when you use exceptions in your code… RAII, ya know…

and yes, a vector IS guaranteed to be implemented as a simple dynamically allocated array.
not yet in the standard, but it is in every implementation.

First of all, sorry for thread-jacking, floww…

aaron: advantage: safety - there will never be any memory leak, even when you use exceptions in your code… RAII, ya know…
If he can free OS-allocated resources, like the bitmap and DC, in the catch block, what’s the harm in calling free or delete there too? How is new any more dangerous than CreateDIBSection, for example?

and yes, a vector IS guaranteed to be implemented as a simple dynamically allocated array.
not yet in the standard, but it is in every implementation.
I didn’t say it wasn’t. My concern is that the reinterpret_cast of the iterator assumes that it is a pointer. Consider this code snippet from a hypothetical implementation of vector:

template <class T, class Allocator>
class vector {
public:
    class iterator {
        int dummy;      // anything that makes the iterator more than a bare pointer
        T *pointer;
    public:
        iterator(T *p) { pointer = p; }
        iterator operator ++() {
            return iterator(++pointer);
        }
        T & operator *() {
            return *pointer;
        }
        T * operator ->() {
            return pointer;
        }
        T & operator [](int x) {
            return pointer[x];
        }
        // ... rest of iterator class here
    };
    // ... rest of the vector class here
};

I left out a few functions from the iterator class of course, but the point is that you can write a class that does everything the iterator is supposed to do, but is not just a pointer. Therefore, a reinterpret_cast of the iterator does not have the desired effect.

There must be a better way to safely allocate memory than this abuse of the vector class.
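
For the record, the smallest change that should actually compile is to cast the address of the first element instead of the iterator (still banking on contiguous storage, mind you):

std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
// &Buffer[0] is a plain pointer on every implementation, unlike begin()
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( &Buffer[0] );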

oh, sorry… yeah, the cast… you’re right

well, with the c++ std lib? no better way for a safe array…

with boost?

boost::scoped_array

or write a small wrapper class…
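
for example (a quick, untested sketch):

#include <boost/scoped_array.hpp>

// exception-safe buffer for the BITMAPINFO + palette; freed automatically,
// and get() is a real pointer, so no iterator-casting games
boost::scoped_array<unsigned char> Buffer(
    new unsigned char[sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD )] );
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.get() );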

why he used the vector and still released the resources manually? i guess it’s just convenient in both cases…

he thought: hm, i need memory… right, a vector does it safely…

then he thought: hm, i need some win32 thingies, well… do it the good old way…

together, it makes a funny mix

why don’t you just write your own dynamically allocated array class? should take you no more than an hour.
seems to me that speculating at such length about how a std::vector is implemented kind of defeats the purpose of its invention.
i’ve not ever come across a need to use the standard template library - I’ve had solidly robust home grown implementations of those kind of objects for years.
still, takes all sorts I suppose.

Originally posted by knackered:
why don’t you just write your own dynamically allocated array class? should take you no more than an hour.
seems to me that speculating at such length about how a std::vector is implemented kind of defeats the purpose of its invention.
i’ve not ever come across a need to use the standard template library - I’ve had solidly robust home grown implementations of those kind of objects for years.
still, takes all sorts I suppose.

It’s beyond me why some people would want to reinvent the wheel when the C++ std lib offers so much. Not to even mention the boost lib and the future C++ release.
The C++ std lib is probably more solid and robust (and probably faster) than anything you could write. I mean, does your code come with tons of documentation about exception safety and minimum guaranteed algorithmic and memory complexity?

Granted, writing a vector class is fun… but putting it in production code just to reinvent the wheel - a square one at that?
Plus, not everything is as simple as a vector (maps, multi-reference strings, the C++ I/O lib, graphs, etc…).

It’s not reinventing the wheel - that’s a silly statement. God knows most applications could simply be a huge list of entry-point calls into a huge list of 3rd-party libs.
Writing a vector class most certainly isn’t fun. Among the many exciting things that attracted me to programming, writing vector classes was not one of them.
Fact is, a simple and very fast dynamic array class is easy to write - for most projects there’s no need to involve a third-party library for such basic tasks. It’s enough hassle dealing with the external libraries we’re forced to use without introducing yet more 3rd-party wildcards. I say wildcards because that is exactly what this thread was addressing: like many other threads, people scouring specs looking for clues as to what assumptions can be made about the implementations of various damn simple std lib objects. It’s ridiculous - if you can find a use for the more exotic features of these classes (the features that could potentially threaten the robustness of a home-grown variety), then use them; if you can’t, then give them a wide berth.
Jesus wept, you’re not managing the country’s inland revenue records, you’re writing renderers for games.

I see there’s a lot of “personal programming style” in this topic, but i still can’t see why VS.NET doesn’t want to compile the code (is it just because it’s not in the standard?).

Maybe the best way to quickly access the screen data is to write a display driver.
damn…

i have a dual head card.

is the following possible without writing a driver?

  • render to head 1
  • use the rendered context for projection (maybe Cg can help?)
  • render the projection to head 2

[This message has been edited by bebedizo (edited 08-21-2003).]