Inconsistent OpenGL implementations, part 2

Sorry that I have to start another thread; my previous one was hijacked and got too long.

Korval, your points are well taken. I should rely only on defined behavior. Who is to blame? I blame the spec for allowing undefined behavior, which leads to inconsistency.

Humus, thanks for the info about ATI's swap behavior; I hadn't thought about all those cases. Basically, the vendors are inconsistent among themselves. I do like your view on the pixel ownership test: I think the test should only affect the front buffer, and the back buffer should be guaranteed to be updated, because that's how people think about it.

It's very frustrating when a solution developed on one OpenGL driver doesn't work as expected on another.

Swap behavior and the pixel ownership test aren't isolated cases; I see a pattern. Pbuffers are another example. I developed my pbuffer solution on NVIDIA, and it seemed to work fine, but when tested on ATI or Intel it doesn't.

1). I can't successfully create a pbuffer on the Intel card even though it lists 'WGL_ARB_pbuffer' as a supported extension and passes the wglChoosePixelFormatARB() test. I'll have to spend more time looking into why.

2). ATI doesn't support the 'WGL_NV_render_texture_rectangle' extension on pbuffers even though it supports GL_EXT_texture_rectangle on the main framebuffer.

3). Why does a pbuffer have to be a power of two? Why can't it be like any other buffer? I am creating a backing store for an OpenGL window; since my window can be obscured or moved off-screen, I can no longer rely on the back buffer to capture its contents, so I use a pbuffer (the creation path I use is sketched below). If I have a 1600x1200 window, I end up creating a 2048x2048 pbuffer, which is very wasteful of video memory. The same is true of textures: since ATI doesn't support NPOTD textures in a pbuffer, my texture ends up 2048x2048 just like the pbuffer. I often get unpredictable behavior from running out of video memory.
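
For reference, here is roughly the creation path I use. This is a sketch rather than my exact code: it assumes the WGL_ARB_pixel_format / WGL_ARB_pbuffer entry points have already been loaded through wglGetProcAddress() and that <GL/wglext.h> is available, and error handling is trimmed to early returns.

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// Entry points assumed to be loaded elsewhere via wglGetProcAddress().
extern PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB;
extern PFNWGLCREATEPBUFFERARBPROC     wglCreatePbufferARB;
extern PFNWGLQUERYPBUFFERARBPROC      wglQueryPbufferARB;

HPBUFFERARB CreateBackingStore(HDC hdc, int width, int height)
{
    // Ask only for a pbuffer-capable RGBA format; no render-to-texture.
    const int fmt_attr[] =
    {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      24,
        WGL_DEPTH_BITS_ARB,      24,
        0
    };

    int  pixel_format = 0;
    UINT num_formats  = 0;
    if (!wglChoosePixelFormatARB(hdc, fmt_attr, NULL, 1,
                                 &pixel_format, &num_formats)
        || num_formats == 0)
        return NULL;                      // no pbuffer-capable format

    const int pb_attr[] = { 0 };          // plain off-screen surface
    HPBUFFERARB pbuf =
        wglCreatePbufferARB(hdc, pixel_format, width, height, pb_attr);
    if (pbuf == NULL)
        return NULL;

    // The driver may not honor the requested size exactly; query it.
    int real_w = 0, real_h = 0;
    wglQueryPbufferARB(pbuf, WGL_PBUFFER_WIDTH_ARB,  &real_w);
    wglQueryPbufferARB(pbuf, WGL_PBUFFER_HEIGHT_ARB, &real_h);

    return pbuf;
}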

In the end, my pbuffer solution only works on NVIDIA.

There is WGL_ATI_render_texture_rectangle. It’s not documented but works like the NV extension.

I tried it, but I couldn't make it work. I thought it would behave just like NVIDIA's, but since there isn't a spec, I don't know why it didn't. My ATI card does expose that extension.

Why does a pbuffer have to be a power of two? Why can't it be like any other buffer?
I don't recall that pbuffers need to be powers of two, in and of themselves. Now, if you're going to later use WGL_ARB_render_texture on them, then they need to follow all of the conventions for the texture format you're binding them as.
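
For contrast (just a sketch of the attribute list, not tested code): a pbuffer intended for wglBindTexImageARB() has to declare a texture format and target up front, and a GL_TEXTURE_2D target is exactly what drags in the power-of-two rule on hardware without NPOT support:

int rtt_attr[] =
{
    WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB, // bindable as an RGBA texture
    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,   // as a 2D texture target
    WGL_MIPMAP_TEXTURE_ARB, GL_FALSE,
    0
};
// ...and later: wglBindTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);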

BTW, you may want to consider FBO instead of the WGL stuff. It’s much easier to work with (particularly for render-to-texture ops), and it will (eventually) have support for multisampling and so forth.
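
The basic render-to-texture path with EXT_framebuffer_object looks roughly like this (a sketch, assuming the EXT entry points are loaded, e.g. through GLEW or wglGetProcAddress, and that the driver exposes ARB_texture_non_power_of_two for the NPOT size):

GLuint tex, fbo;

// Create the texture that will receive the rendering.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1600, 1200, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Attach it to an FBO and render into it directly.
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
    GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // draw here; results land in 'tex', no context switch required
}

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  // back to the window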

Thank you, Korval

I checked the spec, and you are right: it never says that a pbuffer needs to be a power of two. After further investigation, I figured out what my problem was.

It turns out that I was creating the pbuffer with WGL_ARB_render_texture attributes. ATI doesn't support 'WGL_NV_render_texture_rectangle', so there are no NPOTD pbuffers for WGL_ARB_render_texture, and Intel doesn't support WGL_ARB_render_texture at all.

Once I revised the attribute list as below, I was able to create a pbuffer with NPOT dimensions on the ATI card, and also to successfully create a pbuffer on the Intel card.

int pb_attr[] =
{
    WGL_PBUFFER_LARGEST_ARB, GL_FALSE,            // fail rather than silently shrink
    WGL_TEXTURE_FORMAT_ARB,  WGL_NO_TEXTURE_ARB,  // no render-to-texture binding
    WGL_TEXTURE_TARGET_ARB,  WGL_NO_TEXTURE_ARB,  // so no texture-target rules apply
    WGL_MIPMAP_TEXTURE_ARB,  GL_FALSE,
    0
};
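
Creating the pbuffer with that list then works at the window's real size, something like this (a sketch, with 'hdc' and 'pixel_format' coming from the earlier wglChoosePixelFormatARB() step):

HPBUFFERARB pbuf = wglCreatePbufferARB(hdc, pixel_format,
                                       1600, 1200,   // NPOT size is accepted now
                                       pb_attr);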

I guess this time I was spoiled by NVIDIA's OpenGL pbuffer implementation, which is more forgiving of mistakes than the others.