PBuffer problem

Hi.

I use pbuffers for off-screen rendering, but for some reason I can’t make them work as expected.

If I create a ‘small’ pbuffer (512x512 or smaller on ATI cards, 1024x1024 on nVidia), everything works as expected. However, if I make it larger than that (but no larger than the max texture size), performance drops below 1 FPS, and if I try to read from the pbuffer, I get all black pixels!

I check all return values when creating the pbuffer and keep checking for OpenGL errors, but I never get any errors, and when querying the pbuffer’s size, I am also told that it has the size I requested on creation!
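
For reference, the checking I describe looks roughly like this (a trimmed sketch, not the actual DPBuffer code; hdc, pixelFormat, width and height stand in for the real values, and the ARB entry points are assumed to be loaded already):

    // Create the pbuffer and verify every handle along the way.
    const int attribs[] = { 0 };                       // empty attribute list
    HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, pixelFormat,
                                              width, height, attribs);
    if (!pbuffer) { /* report failure */ }

    HDC   pbufferDC = wglGetPbufferDCARB(pbuffer);
    HGLRC pbufferRC = wglCreateContext(pbufferDC);
    if (!pbufferDC || !pbufferRC) { /* report failure */ }

    // Ask the pbuffer how big it actually is.
    int actualW = 0, actualH = 0;
    wglQueryPbufferARB(pbuffer, WGL_PBUFFER_WIDTH_ARB,  &actualW);
    wglQueryPbufferARB(pbuffer, WGL_PBUFFER_HEIGHT_ARB, &actualH);

    // And keep polling for GL errors; in my case this stays GL_NO_ERROR.
    GLenum err = glGetError();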

Does anyone know what is wrong (is there a max pbuffer size I am not aware of?), and how to fix it?

Thanks
Anders

P.S. If you want to see some code, look at
www.daimi.au.dk/~rip/DPBuffer.h &
www.daimi.au.dk/~rip/DPBuffer.cpp

The WGL_ARB_pbuffer extension provides queries to determine the maximum allowed pbuffer size: WGL_MAX_PBUFFER_WIDTH_ARB,
WGL_MAX_PBUFFER_HEIGHT_ARB, and WGL_MAX_PBUFFER_PIXELS_ARB. Look them up in the spec.
However, I don’t think you should even be able to create a pbuffer larger than these values, so I don’t think it’s the cause of your problem.
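
For example (a sketch; it assumes the wglGetPixelFormatAttribivARB entry point is already loaded, and hdc/pixelFormat are whatever you created the pbuffer from):

    const int attribs[] = { WGL_MAX_PBUFFER_WIDTH_ARB,
                            WGL_MAX_PBUFFER_HEIGHT_ARB,
                            WGL_MAX_PBUFFER_PIXELS_ARB };
    int values[3] = { 0, 0, 0 };
    // Layer plane 0 = main plane; note the limits are per pixel format.
    wglGetPixelFormatAttribivARB(hdc, pixelFormat, 0, 3, attribs, values);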

Maybe your pbuffer is lost, due to not enough video memory being available. Have you tried calling wglQueryPbufferARB with WGL_PBUFFER_LOST_ARB before you read back the pixel data?
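
Something like this (again a sketch with placeholder names):

    int lost = 0;
    wglQueryPbufferARB(pbuffer, WGL_PBUFFER_LOST_ARB, &lost);
    if (lost) {
        // Contents are gone (e.g. after a mode switch); recreate the
        // pbuffer and re-render before reading anything back.
    }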

Just queried for max width and height. Apparently, with the pixel format I use, pbuffers of sizes up to 2048x2048 are supported.

Also, I checked whether the buffer is lost, both before enabling it and before reading from it. The answer is always no, the buffer isn’t lost!

Anders

You are probably running out of memory on the video card. One quick test you can try is to set your display properties to 16-bit color, if they aren’t already, and rerun the program. If you are close to the limit with 32-bit, this should pull you back significantly, and hopefully you’ll see different behaviour.

Hope this helps
Heath.

Hmmm. Seems you are right. Changing the screen resolution helped; however, that leaves me with a couple of new questions:

Why can’t I detect that this happens? I still don’t get any errors at all!

How can memory be a problem? I have a small scene with 35 small textures (64x64), 6 medium-size textures (256x256), two of size 1024x1024, and a 32-bit framebuffer (approx. 1400x1000) with z-buffer. That’s all, and I am using a 128 MB Radeon card!

Anders

These calculations are approximate and may not match exactly what you’re doing… I am also not an expert on video card memory usage, but a rough estimate yields the following numbers.

Desktop                 1400x1000x3 = 4.2 MB
OpenGL front buffer     1400x1000x3 = 4.2 MB
OpenGL back buffer      1400x1000x3 = 4.2 MB
24-bit Z for front      1400x1000x3 = 4.2 MB
24-bit Z for back       1400x1000x3 = 4.2 MB

Pbuffer front           2048x2048x3 = 12.0 MB
Pbuffer back            2048x2048x3 = 12.0 MB
Pbuffer front Z         2048x2048x3 = 12.0 MB
Pbuffer back Z          2048x2048x3 = 12.0 MB

35 tex 64x64x4 + mipmaps       = 0.76 MB
6 tex 256x256x4 + mipmaps      = 2.09 MB
2 tex 1024x1024x4 + mipmaps    = 10.64 MB

Total = 21.0 + 48.0 + 13.49 = 82.49 MB

Of course, you may not be quite so extravagant with pbuffers, but you may have a stencil buffer by accident…
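
If you want to play with the numbers yourself, a rough helper (same assumptions as above: 4/3 mipmap overhead and no driver padding or alignment, so treat the result as a floor) might look like this:

    // Back-of-the-envelope video memory estimate; not exact, and the
    // figures above round with a mix of decimal and binary megabytes.
    double megabytes(double bytes) { return bytes / (1024.0 * 1024.0); }

    double bufferMB(int w, int h, int bytesPerPixel)
    {
        return megabytes((double)w * h * bytesPerPixel);
    }

    double textureMB(int count, int w, int h, int bytesPerTexel)
    {
        // A full mipmap chain adds roughly one third over the base level.
        return megabytes((double)count * w * h * bytesPerTexel * 4.0 / 3.0);
    }

    // 5.0 * bufferMB(1400, 1000, 3)    -> ~20 MB (desktop + GL buffers)
    // 4.0 * bufferMB(2048, 2048, 3)    -> ~48 MB (pbuffer color + depth)
    // textureMB(35, 64, 64, 4) + textureMB(6, 256, 256, 4)
    //     + textureMB(2, 1024, 1024, 4) -> ~13.4 MB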

Hope this helps
Heath.

P.S.

Actually, I just took a quick look at the code, and it appears that an RGBA pbuffer was requested, and stencil too… although not double buffered… so you’ll have to change my numbers a little. Sorry, I’m getting a little tired of all the math.

Good luck.


> Desktop 1400x1000x3 = 4.2 MB

Note: the card will pretty much always store all these buffers as 4-byte pixels, with one possibly “wasted” byte. That’s why stencil is pretty much free if you have a 24-bit depth buffer.

Thanks for all the answers.

Heath, I redid your calculations, taking into account that all framebuffers are 32-bit (I use RGBA and stencil), that pbuffers are not double buffered, and that the problem already appears when allocating a 1024x1024 buffer on my Radeon card. I get a total of 51 MB used, which is nowhere near the 128 MB I have on the card!

Furthermore, I tested on my GeForce 3 (64 MB) at home yesterday.
First of all, the problem doesn’t appear until I request a 2048x2048 pbuffer (same scene). In this case I can understand why, because then I’d be using almost 74 MB out of 64.
Second, it actually appears the problem no longer exists on GeForce cards: when asking for a 2048x2048 pbuffer, it simply gave me a software pixel format!
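
For what it’s worth, one way to check for that fallback up front (a sketch, not taken from DPBuffer; hdc and pixelFormat are placeholders) is to ask WGL how the chosen format is accelerated:

    int attrib = WGL_ACCELERATION_ARB;
    int accel  = 0;
    wglGetPixelFormatAttribivARB(hdc, pixelFormat, 0, 1, &attrib, &accel);
    if (accel != WGL_FULL_ACCELERATION_ARB) {
        // WGL_NO_ACCELERATION_ARB (software) or
        // WGL_GENERIC_ACCELERATION_ARB - expect it to be slow.
    }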

This problem is actually something I ran into some 6 months ago, but after days of debugging I ended up just blaming the drivers. Then, after I got a Radeon card, I rediscovered the problem and thought that if they both fail the same way, it probably isn’t a driver problem after all!
Looks like I’ll be going back to blaming the driver ;)

Thanks again for all the help.

Anders

I have done this, and asking for 2048x2048 succeeds, but rendering slows to a crawl.
4096x4096 fails.

I thought GeForces are supposed to be able to handle textures that big, so why not a pbuffer?

And really, why make pbuffers so different from a simple 2D texture?
They need their own context and their own pixel format setup. Why is it really necessary for them to have a separate GL context?
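
For context, the dance I mean looks roughly like this (names are placeholders; wglShareLists is what makes the same texture objects visible from both contexts):

    wglShareLists(mainRC, pbufferRC);        // once, right after creation

    // Every frame: switch contexts, render, copy into the shared texture
    // (which must already have storage), then switch back to the window.
    wglMakeCurrent(pbufferDC, pbufferRC);
    renderScene();                           // whatever the app draws
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    wglMakeCurrent(mainDC, mainRC);

A plain 2D texture needs none of this.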