I want to render pictures with a resolution bigger than the screen resolution and save the pictures to a bmp file. I tried to use a window width and height bigger than the current visible window, render the scene into the default draw buffer, read the buffer, and save it to a file. But when I looked at the bmp file, it had only the lower left corner of the picture and the rest of the area was all gray. The lower left part seemed correctly rendered (color, scale, etc.) and its size was exactly the same as the visible window size when I did the rendering and saving.
Is that because the frame buffer is the same size as the visible window? How can I use a frame buffer that is larger? Does Windows support aux buffers? How do I specify the buffer size?
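(For what it's worth: yes, the default framebuffer is clipped to the window, which is exactly why only the window-sized corner survived. A common workaround that needs no special buffers is tiled rendering: render the big image in window-sized tiles by shrinking the projection frustum per tile, read each tile back, and stitch. A minimal sketch of just the per-tile frustum math, plain C with no GL calls; the struct and function names are mine, not from any poster:)

```c
/* Split a full glFrustum() volume into an nx-by-ny grid of sub-frusta.
   Rendering tile (ix, iy) with its sub-frustum into a window-sized
   viewport produces one tile of the large (e.g. 2400x2400) image. */
typedef struct { double l, r, b, t; } Frustum;

Frustum tile_frustum(Frustum full, int nx, int ny, int ix, int iy)
{
    Frustum f;
    double w = (full.r - full.l) / nx;   /* width of one tile  */
    double h = (full.t - full.b) / ny;   /* height of one tile */
    f.l = full.l + ix * w;
    f.r = f.l + w;
    f.b = full.b + iy * h;               /* iy = 0 is the bottom row,  */
    f.t = f.b + h;                       /* matching GL's origin       */
    return f;
}
```

Each tile would then be drawn with glFrustum(f.l, f.r, f.b, f.t, zNear, zFar), read back with glReadPixels, and copied into the right region of the big image in client memory.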
Thanks for your reply, but I may not have quite understood what you said.
What I want to do is render the current scene in the current window again at a higher resolution, say 2400x2400, and save the newly rendered image to a bmp file. So I still need the “window”. Is it possible to “change the pixel format”, as you said, while the current window is still up? And how do I specify the resolution of the frame buffer? Or do I need to?
I am not sure I understand your question…
My guess:
You want to know whether OpenGL graphics are drawn directly in the window or copied from another place? Well, I would say it depends, but with accelerated GL it is all handled by the video card in hardware. That's one of the reasons why glReadPixels is quite slow. “Pixels have to go back up the stream,” as somebody on this board said.
For pbuffers, there are several ways to use the result (from slowest to fastest):
glReadPixels, then do whatever you want with it
glCopyTex[Sub]Image to a texture
use the render-to-texture extension, so that the pbuffer is usable directly as a texture for another target like the framebuffer.
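(Since the original goal of the thread was saving the read-back pixels to a bmp file, here is a hedged sketch of the "do whatever you want with it" step: writing a 24-bit BMP from a bottom-up RGB buffer, which is the row order glReadPixels with GL_RGB returns. Plain C, no GL dependency; the helper names are mine:)

```c
#include <stdio.h>
#include <stdint.h>

/* Little-endian field writers for the BMP headers. */
static void put_u16(FILE *f, uint16_t v) { fputc(v & 0xff, f); fputc(v >> 8, f); }
static void put_u32(FILE *f, uint32_t v) {
    fputc(v & 0xff, f); fputc((v >> 8) & 0xff, f);
    fputc((v >> 16) & 0xff, f); fputc((v >> 24) & 0xff, f);
}

/* Write w*h bottom-up RGB pixels as a 24-bit .bmp.
   Returns 0 on success, -1 on I/O failure. */
int save_bmp(const char *path, const unsigned char *rgb, int w, int h)
{
    int row = (3 * w + 3) & ~3;                 /* rows padded to 4 bytes */
    uint32_t size = 14 + 40 + (uint32_t)row * h;
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fputc('B', f); fputc('M', f);               /* BITMAPFILEHEADER */
    put_u32(f, size); put_u32(f, 0); put_u32(f, 14 + 40);
    put_u32(f, 40);                             /* BITMAPINFOHEADER */
    put_u32(f, (uint32_t)w);
    put_u32(f, (uint32_t)h);                    /* positive h = bottom-up,
                                                   same as GL's origin */
    put_u16(f, 1); put_u16(f, 24);              /* 1 plane, 24 bpp */
    put_u32(f, 0); put_u32(f, (uint32_t)row * h);
    put_u32(f, 2835); put_u32(f, 2835);         /* ~72 dpi */
    put_u32(f, 0); put_u32(f, 0);
    for (int y = 0; y < h; y++) {               /* BMP stores BGR */
        for (int x = 0; x < w; x++) {
            const unsigned char *p = rgb + 3 * (y * w + x);
            fputc(p[2], f); fputc(p[1], f); fputc(p[0], f);
        }
        for (int pad = 3 * w; pad < row; pad++) fputc(0, f);
    }
    return fclose(f) == 0 ? 0 : -1;
}
```

Because both BMP and glReadPixels put the bottom row first, no vertical flip is needed.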
In fact, I would like to read the pixels of my result, to do some statistical calculations; I would need the RGBA bits.
Therefore, if I want to read pixels (one at a time), only glReadPixels can do it!?
Because with a texture:
“It doesn’t make sense to get a direct pointer to a buffer, because how data (typically pixels) are stored in a buffer is implementation dependent,” as Ysaneya said on this board.
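(A side note on the statistics use case: one glReadPixels call for the whole image into a client-side RGBA buffer is far cheaper than one call per pixel, and once the data is in client memory the statistics are plain array indexing. A small sketch with made-up names, assuming the buffer was filled by something like glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf):)

```c
#include <stddef.h>

/* Average each channel of a w*h tightly packed RGBA byte buffer,
   e.g. one filled by a single glReadPixels call. */
void channel_means(const unsigned char *buf, int w, int h, double mean[4])
{
    size_t n = (size_t)w * h;
    double sum[4] = {0, 0, 0, 0};
    for (size_t i = 0; i < n; i++)
        for (int c = 0; c < 4; c++)          /* c: 0=R 1=G 2=B 3=A */
            sum[c] += buf[4 * i + c];
    for (int c = 0; c < 4; c++)
        mean[c] = sum[c] / (double)n;
}
```

Any per-pixel statistic (histogram, min/max, variance) follows the same pattern: read back once, then loop over the buffer.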
Are those the only choices for rendering off-screen on Windows: either software rendering or pbuffers? The trouble with pbuffers is that the feature is not available on all video cards. On Mac OS, I understand that one can just move a window off the screen and render to it. Is there no such trick on Windows?
Originally posted by James W. Walker: On Mac OS, I understand that one can just move a window off the screen and render to it.
Such behavior is typically undefined; that is, there is no guarantee that non-visible pixels will actually be drawn.
On most implementations (all the NVIDIA ones under Windows I am aware of, from the TNT2 to the GeForce 6800), it will end up as black.