
View Full Version : grabbing the entire screen



floww
08-17-2003, 05:01 AM
I want to grab the entire content of the actual screen [not only the current window] and use it as a texture. Is this possible with OpenGL, and how good is the performance?
Thanks in advance, floww

Succinct
08-17-2003, 06:59 AM
I've got the code for it at work. It's a lot of fun to goof w/ the screen. You can do some pretty neat things. Just - be careful running it at 1600x1200x32 - only the best cards will even deal w/ that.

Tomorrow night I'll post it. Basically, you create a screen dc using CreateDC( 0,"DISPLAY",0,0 ), create a compatible DC from that screen DC, create a DIB section that matches the screen, select the dib into the compatible DC, then bitblt from the screen DC to the DIB's dc. The screen contents will be accessible as the DIB's bits.

Like I said, I'll post the code here tomorrow when I get home from work, if I remember.

Good luck!

floww
08-17-2003, 10:04 AM
thanks a lot.

Succinct
08-18-2003, 08:13 AM
Here goes:




void GrabScreenRip( int* DestWidth, int* DestHeight, int* DestBytesPerPixel, void** DestBits ) throw( std::exception )
{
    HDC hScreenDC = 0;
    HBITMAP hScreenBmp = 0;
    HDC hDCCopy = 0;
    HGDIOBJ hOriginalBitmap = 0;
    try
    {
        // gain access to screen contents
        hScreenDC = GetDC( 0 );
        if( !hScreenDC )
            throw std::runtime_error( "Unable to gain direct screen access." );

        int Width = GetSystemMetrics( SM_CXSCREEN );
        int Height = GetSystemMetrics( SM_CYSCREEN );
        int BitsPerPixel = GetDeviceCaps( hScreenDC, BITSPIXEL );
        int BytesPerPixel = BitsPerPixel >> 3;

        // set up the format for the screen DIB (negative height = top-down rows)
        int Size = Width*Height;
        BITMAPINFOHEADER bmih =
        {
            sizeof( bmih ),
            Width, -Height,
            1, (unsigned short)( BitsPerPixel ),
            BI_RGB, Size,
            0, 0, 0x100, 0x100
        };
        std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
        BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );
        pbmi->bmiHeader = bmih;
        memset( pbmi->bmiColors, 0, 0x100*sizeof( RGBQUAD ) );

        // create the main screen DIB
        void* ScreenBits = 0;
        hScreenBmp = CreateDIBSection( hScreenDC, pbmi, DIB_RGB_COLORS, &ScreenBits, 0, 0 );
        if( !ScreenBits )
            throw std::runtime_error( "Unable to copy screen data." );

        unsigned NumBytes = Size*BytesPerPixel;
        memset( ScreenBits, 0, NumBytes );

        // select the DIB into an offscreen memory DC
        hDCCopy = CreateCompatibleDC( hScreenDC );
        hOriginalBitmap = SelectObject( hDCCopy, hScreenBmp );

        // grab the actual screen rip
        if( !BitBlt( hDCCopy, 0, 0, Width, Height, hScreenDC, 0, 0, SRCCOPY ) )
            throw std::runtime_error( "Unable to blit screen data." );

        // copy out the bitmap bits (new[] throws std::bad_alloc on failure)
        unsigned __int8* Bits = new unsigned __int8[NumBytes];
        memcpy( Bits, ScreenBits, NumBytes );

        // update the output parameters
        *DestWidth = Width;
        *DestHeight = Height;
        *DestBytesPerPixel = BytesPerPixel;
        *DestBits = Bits;

        // cleanup
        SelectObject( hDCCopy, hOriginalBitmap );
        DeleteDC( hDCCopy );
        DeleteObject( hScreenBmp );
        ReleaseDC( 0, hScreenDC );
    }
    catch( std::exception& )
    {
        // failure - clean up any allocated resources
        if( hScreenDC )
            ReleaseDC( 0, hScreenDC );

        if( hDCCopy )
        {
            if( hOriginalBitmap )
                SelectObject( hDCCopy, hOriginalBitmap );

            DeleteDC( hDCCopy );
        }

        if( hScreenBmp )
            DeleteObject( hScreenBmp );

        throw;
    }
}
//---------------------------------------------------------------------------



[edit 1] D'oh! I forgot the forum takes out my formatting lines. With all of the blank lines gone it looks a lot worse than it really is. Each comment starts a block of code, so imagine blank lines just before the comments separating the blocks.

Like I said before:
1) Get the screen DC (via GetDC( 0 ) rather than CreateDC( 0,"DISPLAY",0,0 )).
2) Create a DIB section matching the screen's dimensions and color depth.
3) Create an offscreen DC that matches the format of the screen DC.
4) Select the DIB into the DC.
5) BitBlt from the screen's DC to the DIB's DC.

A copy of the screen's contents is then accessible via the bitmap bits pointer returned by CreateDIBSection.

6) Clean up.

I ripped this code from my general library. Normally I use resource wrappers for DCs and HBITMAPs (similar to std::auto_ptr), but that stuff's not really part of the question. I wouldn't normally duplicate cleanup code like I did here, but this is just a quick rip to get the point across.

Anyways, good luck screen ripping!

As a side, does anyone else have any other ways of pulling this off? This way is relatively fast, but it's slow enough that I don't mind using exceptions in it, because I'd never, ever call this when frame rates are important.

Thank you for your bandwidth,
-- Succinct


[This message has been edited by Succinct (edited 08-18-2003).]

floww
08-19-2003, 04:34 AM
Thanks for the code. It works fine except for these lines:

std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );

(Using VS.NET)

I rewrote them to a simple malloc. I don't know if this is valid, but at least the code compiles.
I get a maximum of 10 fps, which is too low for my application.

I want to use the screen content as a texture for mapping on a sphere. it should work in realtime.

Has anyone an idea how to realize such an application (similar to the nvidia keystone correction tool -> http://www.nvidia.com/object/feature_nvkeystone.html)?

Or how nvidia could have done it?

thx in advance, floww

Succinct
08-19-2003, 06:40 AM
1) You have to include <vector> - why you'd use malloc, I'll never know, unless you're going embedded, but you can't be, because you're running a Windows OS.
2) As I said in the last post: this will not run in real time ("I'd never, ever call this when frame rates are important.").

There are three main contributing factors as to why this method won't run in real time. The first is that you're using windows GDI. The second is the sheer transfer volume done in the bitblt. The fill rate needed to transfer the contents of the entire screen back through the PCI bus to the main system memory is pretty big, and most likely done by the processor. The last part is that in order to use it as a texture, you have to send it back down to the graphics hardware. Now we're getting into AGP limiting due to transfer bandwidth. Further complicating the issue is that if you're running at larger than 1024x768, only the real high end cards will support the texture unless you break it up, which means more sends. The benefit to breaking up, say, 1024x768, is that you can break it up into 4x3 256x256 textures, instead of black padding the 1024x768 image to 1024x1024.

The only way to make this run in real time under windows would be to circumvent the round trip transfers to the cpu/system memory. I don't know how to get to the frame buffer contents of other applications in AGP memory, though... Maybe the NVidia guys know some hacks to get it? I'm sure it won't be anything clean, portable, or safe.

The only thing I can suggest is to grab it once, tile it into some textures, then manipulate them. This is how I demoed breaking the screen like FFX does just before you get into a battle.

Good luck! Let me know if you get any other ideas.

-- Succinct

[This message has been edited by Succinct (edited 08-19-2003).]

knackered
08-19-2003, 07:08 AM
Just a thought.
You could, in your initialise routine, set the pixel format of the DC you get via screenDC = GetDC(0) to the same format that you used to create your OpenGL context, then simply wglMakeCurrent(screenDC, glrc), set a viewport etc., and glCopyTexSubImage the framebuffer into your texture, then wglMakeCurrent(myWindowDC, glrc) to render with it.
I'm sure there are reasons why this won't work, but it's just a bit of a laugh.

bebedizo
08-19-2003, 10:44 AM
I have included <vector>, but the problem is not the vector itself:
error C2440: 'reinterpret_cast' : cannot convert from 'std::vector<_Ty,_Ax>::iterator' to 'BITMAPINFO *'
with
[
_Ty=unsigned char,
_Ax=std::allocator<unsigned char>
]

What's wrong here?

CatAtWork
08-19-2003, 10:50 AM
Isn't it unsafe to assume that vector::begin() will point to contiguous memory? The whole concept of implementation independent containers...

davepermen
08-19-2003, 11:35 AM
Originally posted by CatAtWork:
Isn't it unsafe to assume that vector::begin() will point to contiguous memory? The whole concept of implementation independent containers...

It's safe to assume in every implementation that it is a contiguous memory chunk, and it will even be officially guaranteed in an upcoming revision of the C++ standard.

And since he wants to use it as a texture: use glCopyTexSubImage.. very, very fast and simple.

CatAtWork
08-19-2003, 01:17 PM
Ugh. Looked it up. Requiring that it be contiguous feels wrong. I'm not a standards author, however. No more OT posts, I promise.

*Aaron*
08-19-2003, 04:48 PM
std::vector<unsigned __int8> Buffer( sizeof( BITMAPINFO ) + 0x100*sizeof( RGBQUAD ) );
BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );

Good God, man! What's wrong with the good old-fashioned new operator? Or is that too old school for you? And what makes you so sure that the iterator is a straight C pointer? It could be a wrapper class and still conform to the standard. Perhaps this is why bebedizo's compiler won't let him reinterpret_cast it. And even if you could be absolutely sure that it works on every implementation of the STL, what's the advantage over using new (or malloc or calloc, for that matter)? Are you afraid someone else might actually understand what your code is trying to do?

Whew, glad I got that off my chest.

davepermen
08-19-2003, 07:24 PM
aaron: advantage: safety that there will never be any memory leak, even when you use exceptions in your code.. RAII, ya know..

And yes, a vector _IS_ guaranteed to be implemented as a simple dynamically allocated array.
Not yet in the standard, but already in every implementation.

*Aaron*
08-20-2003, 06:55 AM
First of all, sorry for thread-jacking, floww...


aaron: advantage: safety that there will never be any memory leak, even when you use exceptions in your code.. RAII, ya know..

If he can free OS-allocated resources, like the bitmap and DC, in the catch block, what's the harm in calling free or delete there too? How is new any more dangerous than CreateDIBSection, for example?


and yes, a vector _IS_ guaranteed to be implemented as a simple dynamically allocated array. not yet in the standard, but already in every implementation.

I didn't say it wasn't. My concern is that the reinterpret_cast of the iterator assumes that it is a pointer. Consider this code snippet from a hypothetical implementation of vector:



template <class T, class Allocator>
class vector {
    class iterator {
        int dummy;
        T *pointer;
        iterator( T *p ) { pointer = p; }
        iterator operator ++() {
            return iterator( pointer++ );
        }
        T operator *() {
            return *pointer;
        }
        T * operator ->() {
            return pointer;
        }
        T operator []( int x ) {
            return pointer[x];
        }
        // ... rest of iterator class here
    };
    // ... rest of the vector class here
};

I left out a few functions from the iterator class of course, but the point is that you can write a class that does everything the iterator is supposed to do, but is not just a pointer. Therefore, a reinterpret_cast of the iterator does not have the desired effect.

There _must_ be a better way to safely allocate memory than this abuse of the vector class.

davepermen
08-20-2003, 07:16 AM
Oh, sorry.. yeah, the cast.. you're right.

Well, with the C++ standard library? No better way to get a safe array..

With boost?

boost::scoped_array

Or write a small wrapper class..


Why did he use the vector and still release the Win32 resources manually? I guess it's just what was convenient in each case..

He thought: hm, I need memory.. right, vector does it safely..

Then he thought: hm, I need some Win32 thingies, well.. do it the good old way..

Together, it makes a funny mix

knackered
08-20-2003, 07:25 AM
why don't you just write your own dynamically allocated array class? should take you no more than an hour.
seems to me that speculating at such length about how a std::vector is implemented kind of defeats the purpose of its invention.
i've not ever come across a need to use the standard template library - I've had solidly robust home grown implementations of those kind of objects for years.
still, takes all sorts I suppose.

GPSnoopy
08-20-2003, 09:10 AM
Originally posted by knackered:
why don't you just write your own dynamically allocated array class? should take you no more than an hour.
seems to me that speculating at such length about how a std::vector is implemented kind of defeats the purpose of its invention.
i've not ever come across a need to use the standard template library - I've had solidly robust home grown implementations of those kind of objects for years.
still, takes all sorts I suppose.

It's beyond me why some people would want to reinvent the wheel when the C++ standard library offers so much, not to mention the Boost libraries and the next C++ release.
The C++ std lib is probably more solid and robust (and probably faster) than anything you could write. I mean, does your code come with tons of documentation about exception safety and minimum guaranteed algorithmic and memory complexity?

Granted, writing a vector class is fun... but putting it in production code just reinvents the wheel, square.
Plus, not everything is as simple as a vector (maps, reference-counted strings, the C++ I/O library, graphs, etc...)

knackered
08-20-2003, 12:47 PM
It's not reinventing the wheel - that's a silly statement. God knows most applications could simply be a huge list of entry point calls into a huge list of 3rd party libs.
Writing a vector class most certainly isn't fun. Among the many exciting things that attracted me to programming... writing vector classes was not one of them.
Fact is, a simple and very fast dynamic array class is easy to write - for most projects there's no need to involve a third-party library for such basic tasks... it's enough hassle dealing with the external libraries we're *forced* to use, without introducing yet more 3rd-party wildcards. I say wildcards because that is exactly what this thread was addressing, like many other threads: people scouring specs looking for clues as to what assumptions can be made about the implementations of various damn simple std lib objects. It's ridiculous - if you can find a use for the more exotic features of these classes (the features that could potentially threaten the robustness of a homegrown variety), then use them; if you can't, then give them a wide berth.
Jesus wept, you're not managing the country's inland revenue records, you're writing renderers for games.

bebedizo
08-20-2003, 02:13 PM
I see there's a lot of "personal programming style" in this topic, but I can't see why VS.NET doesn't want to compile the code (just because it's not in the standard?).

Maybe the best way to quickly access the screen data is to write a display driver.
Damn...

bebedizo
08-21-2003, 03:38 AM
I have a dual-head card.

Is the following possible without writing a driver?
- render to head 1
- use the rendered context for projection (maybe Cg can help?)
- render the projection to head 2

[This message has been edited by bebedizo (edited 08-21-2003).]

dorbie
08-21-2003, 04:09 AM
There is a clear distinction to be drawn between YOU rendering and then reading back, and WINDOWS (and 3rd-party applications) rendering and you reading back. It's also important to know how you render: is it GDI or is it OpenGL rendering? Is it the same application or a different one?

I think you need to spell out exactly what you're doing, and don't assume responses to another poster (or to a scenario unclear to a poster) apply directly to you.

The various differences may seem subtle but technically they are huge and impact the calls you make and the performance in a big way, and even the feasibility (the requirement for dual head for example).

Hope that helps.

davepermen
08-21-2003, 12:17 PM
Originally posted by knackered:
It's not reinventing the wheel - that's a silly statement.
...
Fact is, a simple and very fast dynamic array class is easy to write
uhm.. which is exactly what std::vector is: a simple, fast dynamic array class..

so you DO reinvent the wheel.


for most projects there's no need to involve a third-party library for such basic tasks... it's enough hassle dealing with the external libraries we're *forced* to use, without introducing yet more 3rd-party wildcards.

On any C++-compliant compiler, this IS THE STANDARD. It's not just another third-party library.
And so it is, actually, the way to do exactly that.


It's ridiculous - if you can find a use for the more exotic features of these classes (the features that could potentially threaten the robustness of a homegrown variety), then use them; if you can't, then give them a wide berth.

Exotic features? Constructing a dynamic array with the required size.. what is so exotic about this? And then.. accessing it.. even more exotic

What I see is that he isn't really consistent in the code, using new shortly afterwards.. while he could have used a static array where he uses the vector.. you could attack his lack of consistency, THAT is very dangerous.

But the use of vector there is correct and good and simple.



Jesus wept, you're not managing the country's inland revenue records, you're writing renderers for games.

Which should be written in a good way. So be, and stay, consistent. That's the most important thing.

The second most important thing is safety. Your code should not fail in ANY way, and IF it does, it should be able to clean up all of its own resources.

I don't want to download patches for your app.


I think that's about it for the off-topic part in here..

Robert Osfield
08-21-2003, 01:05 PM
std::vector<T>::begin() returns an iterator, which in old STL implementations used to be a straightforward T*, so it could safely be cast to T*.

However, Visual Studio .NET is among a new breed of standards-compliant compilers where std::vector<T>::begin() returns an iterator that is a class in its own right; it's no longer a T*, so it's not safe to cast it as one.

What I have found most convenient is to cast the address of std::vector<T>::front().

So the original code:

BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( Buffer.begin() );

becomes:

BITMAPINFO* pbmi = reinterpret_cast<BITMAPINFO*>( &Buffer.front() );


This will compile fine on both new and old compilers.

Robert.

*Aaron*
08-21-2003, 01:41 PM
I'm curious why it is necessary to create the DC and DIB section and then destroy them every frame. It seems to me that you could break the GrabScreenRip function into three functions: initialize, grab (called each frame), and cleanup. Maybe that would make it fast enough to use in real time.