pbuffer problems, again.

It looks like there is a problem with the NVIDIA Detonator drivers:

When I create more pbuffers than the card can handle (when its memory is full, I guess), my app gets stuck for a few seconds while drawing into the last created pbuffer, and then I get an error when I call wglBindTexImageARB on it.
Normally it should fail in wglCreatePbufferARB instead, which should do something like SetLastError(ERROR_NO_SYSTEM_RESOURCES);
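For illustration, this is the kind of check I would like to be able to rely on app-side (just a sketch; it assumes the WGL_ARB_pbuffer entry points were already loaded with wglGetProcAddress, and create_pbuffer_checked is only an illustrative helper name):

  #include <windows.h>
  #include <GL/gl.h>
  #include "wglext.h"  // HPBUFFERARB, PFNWGLCREATEPBUFFERARBPROC, ...

  // Loaded earlier with wglGetProcAddress("wglCreatePbufferARB")
  extern PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB;

  // Create a pbuffer and report why it failed instead of limping on
  HPBUFFERARB create_pbuffer_checked(HDC hdc, int pixel_format,
                                     int width, int height,
                                     const int *pb_attr)
  {
    HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, pixel_format,
                                              width, height, pb_attr);
    if (!pbuffer)
    {
      if (GetLastError() == ERROR_NO_SYSTEM_RESOURCES)
        OutputDebugStringA("pbuffer creation failed: out of resources\n");
      else
        OutputDebugStringA("pbuffer creation failed: unknown error\n");
    }
    return pbuffer;  // NULL on failure, so the caller can fall back
  }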

Anyway, I wrote a short application to demonstrate this; here is the source:
http://thomasbesson.free.fr/rendertotexture_2004-06-18/ or http://thomasbesson.free.fr/rendertotexture_2004-06-18.zip (vc6 src + bin) if you want to try it.

When you run the app, you can press + / - to add/remove pbuffers nested inside each other, or use the arrow keys to grow/shrink the pbuffers.

Also, on my ATI Radeon 9600 there is a big constant memory leak, something like 500 KB/second. I already emailed ATI about it, but they are not very responsive… yeah, I know, I'm not from id Software :stuck_out_tongue:

Can anyone spot a mistake in the source, or suggest how I can work around the problem (e.g. how to tell when there is no more memory)?

Thanks.

Same “wglBindTexImageARB: Win32 error : unknown error” for me. No apparent memory leak: when adding pbuffers, RAM usage increases; when removing pbuffers, it goes back down to the initial value.

GeForce3 Ti 200, Detonator 45.23

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012110

I had a similar problem: the driver fails to allocate the largest possible buffer (WGL_PBUFFER_LARGEST_ARB) when there are not enough resources. As far as I know this works on a Radeon 7500 and fails on the four GeForces tested. Of the threads here, at gamedev.net, and at NVIDIA's devfeedback, the only response was at gamedev (the app was tested on an R7500 and GeForces), so I'm interested in whether this is a driver bug too…
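For reference, here is the path I mean, roughly as it should work per the WGL_ARB_pbuffer spec (sketch only; hdc, pixel_format, requested_w and requested_h are assumed to come from earlier setup):

  // Ask for a pbuffer, letting the driver return the largest one it
  // can allocate instead of failing (WGL_ARB_pbuffer spec behaviour)
  const int pb_attr[] =
  {
    WGL_PBUFFER_LARGEST_ARB, 1,  // shrink instead of failing
    0
  };
  HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, pixel_format,
                                            requested_w, requested_h,
                                            pb_attr);
  if (pbuffer)
  {
    // The driver may have given less than requested, so query it
    int actual_w = 0, actual_h = 0;
    wglQueryPbufferARB(pbuffer, WGL_PBUFFER_WIDTH_ARB, &actual_w);
    wglQueryPbufferARB(pbuffer, WGL_PBUFFER_HEIGHT_ARB, &actual_h);
  }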

M/\dm/, this is not what I was talking about, but I had the same problem. To give more details:

To find the biggest size I can use for a pbuffer, I double-check with something like this:

  // The maximum texture size in the current context
  // (I use pbuffers for render-to-texture, so the pbuffer must not
  // exceed the largest texture the GL can handle)
  int max_size;
  glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);

  // Check the maximum pbuffer size for the chosen pixel format.
  // Do this after wglChoosePixelFormatARB().
  int pf_attr[] =
  {
    WGL_MAX_PBUFFER_WIDTH_ARB,
    WGL_MAX_PBUFFER_HEIGHT_ARB,
    0
  };
  int size[2];
  wglGetPixelFormatAttribivARB(_old_hdc, pixel_format, 0, 2, pf_attr, size);

  // Clamp the requested size to what the driver claims to support
  if (width > size[0])
    width = size[0];
  if (height > size[1])
    height = size[1];

On ATI (R9600) this works fine, but on NVIDIA (FX5700) it reports 4096 for the max width and height while actually accepting only 2048. I guess this is a bug, but it is not very important for me.
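Since the reported maximum apparently can't be trusted, one possible workaround (my own idea, not something from the spec) is to probe: try to create a pbuffer at the reported size and halve it until creation succeeds. Something like this, where find_working_pbuffer_size is just an illustrative name and hdc, pixel_format and pb_attr come from the usual setup:

  // Probe for a pbuffer size the driver will really accept, since the
  // advertised WGL_MAX_PBUFFER_* values can be larger than what works
  int find_working_pbuffer_size(HDC hdc, int pixel_format,
                                const int *pb_attr, int reported_max)
  {
    for (int size = reported_max; size >= 64; size /= 2)
    {
      HPBUFFERARB pb = wglCreatePbufferARB(hdc, pixel_format,
                                           size, size, pb_attr);
      if (pb)
      {
        wglDestroyPbufferARB(pb);  // just probing, release it again
        return size;               // this size actually works
      }
    }
    return 0;  // nothing worked at all
  }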