Drawing to off-screen areas.

Basically, I’m using Win2k and OpenGL 1.4 (latest NVIDIA drivers), and we use a rendering technique where I render stuff into the back buffer, then read it into a texture and use that to render the ‘real’ scene before swapping. Now, when windowed, with the window partially off the screen, OpenGL is ‘optimizing’ away the off-screen rendering, meaning I read back garbage into part of the texture. Is there any way to force it to actually render into this area?
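For reference, the technique being described looks roughly like this (a minimal sketch, not the poster’s actual code; it assumes a current OpenGL context and a pre-allocated texture object, and the function and parameter names are illustrative):

```c
#include <GL/gl.h>

/* Sketch of the back-buffer-to-texture pass described above.
 * `tex` must be a texture already allocated at >= width x height. */
void render_to_texture_pass(GLuint tex, int width, int height)
{
    /* 1. Draw the intermediate scene into the back buffer. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw intermediate geometry here ... */

    /* 2. Copy the back buffer into the texture. This is the step
     * that reads garbage when part of the window is off-screen:
     * those pixels fail the pixel ownership test, so their
     * contents are undefined. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

    /* 3. Draw the 'real' scene using `tex`, then SwapBuffers. */
}
```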

(Yes, I have an alternate rendering path using pbuffers, but half the cards around don’t seem to support them so…)

To my knowledge there’s no way to do it. The OpenGL spec even explicitly states that only the visible portions of a window are valid. I’m guessing you will probably have the same problem when your window is partially overlapped by another window (depends on the driver, though).

Originally posted by wintal:
(Yes, I have an alternate rendering path using pbuffers, but half the cards around don’t seem to support them so…)

Really? All ATIs since the Rage128 and all NVIDIAs since the TNT have pbuffers. Even the Intel integrated chipsets support them. I wouldn’t use lack of hardware support as an excuse not to use pbuffers.

– Tom

I wouldn’t use lack of hardware support as an excuse not to use pbuffers.

Yeah, their speed on Nvidia-based cards is more the concern :slight_smile:

Y.

Originally posted by Asgard:
To my knowledge there’s no way to do it. The OpenGL spec even explicitly states that only the visible portions of a window are valid. I’m guessing you will probably have the same problem when your window is partially overlapped by another window (depends on the driver, though).

I suspected as much, but thought I’d ask anyway. Thanks for confirming my suspicion.

Originally posted by Tom Nuydens:
[b] Really? All ATIs since the Rage128 and all NVIDIAs since the TNT have pbuffers. Even the Intel integrated chipsets support them. I wouldn’t use lack of hardware support as an excuse not to use pbuffers.

– Tom[/b]

Hmm… interesting. Specifically, I’m running into issues on a laptop chipset, the GeForce4 440 Go. I create a very large pbuffer; perhaps the limits are smaller on these cards?

Is there anywhere I can get a definitive list of capability differences for different chipsets?

Originally posted by wintal:
Hmm… interesting. Specifically, I’m running into issues on a laptop chipset, the GeForce4 440 Go. I create a very large pbuffer; perhaps the limits are smaller on these cards?

How large is “very large”?

– Tom

Very large is 2048x2048. Stupid card limitations won’t seem to let me go any higher (ideally, I want to be able to go up to about 4096x4096).

I’ve actually worked out the problem now. It appears that while I can create a pbuffer with a 32-bit depth buffer, it won’t let me actually glClear it. If I create it with a 16-bit depth buffer, it works fine on the ‘MX’-style cards. (On a GeForce4 Ti, it works fine with a 32-bit Z-buffer.)
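For reference, the depth size in question is requested in the attribute list passed to wglChoosePixelFormatARB when picking the pbuffer’s pixel format. A sketch (assuming the entry point has been fetched via wglGetProcAddress; changing the 16 below to 32 is what reproduced the glClear failure on the MX-class cards):

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  /* WGL_* tokens and function pointer typedefs */

extern PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB;

/* Pick a pbuffer-capable RGBA format with a 16-bit depth buffer. */
int choose_pbuffer_format(HDC hdc)
{
    const int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      32,
        WGL_DEPTH_BITS_ARB,      16,  /* 32 here failed on MX-class cards */
        0
    };
    int format = 0;
    UINT count = 0;

    if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count)
        || count == 0)
        return 0;  /* no matching format */
    return format;
}
```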

>>Is there anywhere I can get a definitive list of capability differences for different chipsets?<<

Not really, that’s why you can query capabilities. From http://oss.sgi.com/projects/ogl-sample/registry/ARB/wgl_pbuffer.txt:

To query the maximum width, height, or number of pixels in any
given pbuffer for a specific pixel format, use
wglGetPixelFormatAttribivEXT or wglGetPixelFormatAttribfvEXT with
<attribute> set to one of WGL_MAX_PBUFFER_WIDTH_ARB,
WGL_MAX_PBUFFER_HEIGHT_ARB, or WGL_MAX_PBUFFER_PIXELS_ARB.
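A sketch of that query in C (assuming wglGetPixelFormatAttribivARB has already been fetched via wglGetProcAddress, and that `hdc` and `pixel_format` are valid; error handling kept minimal):

```c
#include <stdio.h>
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  /* WGL_MAX_PBUFFER_* tokens */

extern PFNWGLGETPIXELFORMATATTRIBIVARBPROC wglGetPixelFormatAttribivARB;

void print_pbuffer_limits(HDC hdc, int pixel_format)
{
    const int attribs[3] = {
        WGL_MAX_PBUFFER_WIDTH_ARB,
        WGL_MAX_PBUFFER_HEIGHT_ARB,
        WGL_MAX_PBUFFER_PIXELS_ARB,
    };
    int values[3] = { 0, 0, 0 };

    /* Query all three limits for this pixel format in one call. */
    if (wglGetPixelFormatAttribivARB(hdc, pixel_format, 0, 3,
                                     attribs, values))
        printf("max pbuffer: %dx%d, %d pixels\n",
               values[0], values[1], values[2]);
}
```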

Originally posted by wintal:
Very large is 2048x2048. Stupid card limitations won’t seem to let me go any higher (ideally, I want to be able to go up to about 4096x4096).

Don’t forget that the thing needs to fit in video memory. 2048x2048 with 32-bit color and 16-bit depth = 24 MB!

– Tom

Originally posted by Relic:
[b]>>Is there anywhere I can get a definitive list of capability differences for different chipsets?<<

Not really, that’s why you can query capabilities. From http://oss.sgi.com/projects/ogl-sample/registry/ARB/wgl_pbuffer.txt:

To query the maximum width, height, or number of pixels in any
given pbuffer for a specific pixel format, use
wglGetPixelFormatAttribivEXT or wglGetPixelFormatAttribfvEXT with
<attribute> set to one of WGL_MAX_PBUFFER_WIDTH_ARB,
WGL_MAX_PBUFFER_HEIGHT_ARB, or WGL_MAX_PBUFFER_PIXELS_ARB.[/b]

Yeah, I realise this, but it’s more from the point of view of being able to find out what hardware is suitable for what I do without getting a card in my grubby little hands. My code actually tests this stuff… I still don’t see any reason why the code should crash, though; I might throw together a trivial sample and send it to NVIDIA.

Originally posted by Tom Nuydens:
[b] Don’t forget that the thing needs to fit in video memory. 2048x2048 with 32-bit color and 16-bit depth = 24 MB!

– Tom[/b]

Yeah… for our higher-end stuff we limit people to very high-end cards. I just wish they’d give us some cards with decent amounts of VRAM. I’d settle for a GB or so (on something that isn’t made by SGI and costs less than the net worth of my company, for preference).