Pbuffers & multiple threads

Hey all,

In one of the products I develop at work, we’re using pbuffers for RTT functionality (and keeping OpenGL requirements minimal).

I’ve had this pbuffer code working quite well for months now; however, it’s only recently that I’ve started using it in a multi-threaded environment.

Specifically, the problem I’m having is this: if I create the hidden window and OpenGL context for the pbuffer in Thread A, I cannot bind or destroy them from Thread B.

I’m handling device-context and OpenGL-context management carefully: Thread A has released both the window and pbuffer DCs, and has made the OpenGL context not current (i.e. the DCs have been released, and Thread A has no OpenGL context current).

Yet Thread B still fails to obtain a valid DC to the hidden window, thus Thread B can’t bind the OpenGL context either…
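
For reference, this is roughly the handoff I’m attempting - handle names are just placeholders, and the ARB pbuffer entry points are assumed to be loaded already via wglGetProcAddress:

// --- Thread A: stop using everything before handing over ---
wglMakeCurrent( NULL, NULL );                      // no GL context current on this thread any more
wglReleasePbufferDCARB( hPbuffer, hPbufferDC );    // give the pbuffer's DC back
ReleaseDC( hHiddenWnd, hWindowDC );                // release the hidden window's DC

// --- Thread B: try to pick it all up again ---
HDC hdc = GetDC( hHiddenWnd );                     // this is what fails to return a usable DC for me
if ( hdc && wglMakeCurrent( hdc, hGLRC ) )
{
    // render to the pbuffer from Thread B...
}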

This is currently causing issues as our code-base assigns arbitrary threads to arbitrary jobs, and we can’t guarantee that the context/window will be created/destroyed in the same thread.

Am I right to assume it’s not even remotely possible to create a window/glcontext in one thread, and use/destroy it in another?

Thanks in advance,

P.S. there’s nothing special about how I create the hidden window & context, just the plain old RegisterClass/CreateWindow/ChoosePixelFormat/SetPixelFormat/wglCreateContext.
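
For completeness, here’s a condensed sketch of that setup (error checks omitted, names are placeholders):

WNDCLASS wc = { 0 };
wc.lpfnWndProc   = DefWindowProc;
wc.hInstance     = GetModuleHandle( NULL );
wc.lpszClassName = TEXT( "HiddenGLWindow" );
RegisterClass( &wc );

HWND hwnd = CreateWindow( TEXT("HiddenGLWindow"), TEXT(""), WS_POPUP,
                          0, 0, 1, 1, NULL, NULL, wc.hInstance, NULL );
HDC hdc = GetDC( hwnd );

PIXELFORMATDESCRIPTOR pfd = { 0 };
pfd.nSize      = sizeof( pfd );
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
int fmt = ChoosePixelFormat( hdc, &pfd );
SetPixelFormat( hdc, fmt, &pfd );

HGLRC hglrc = wglCreateContext( hdc );
wglMakeCurrent( hdc, hglrc );     // context is current on the creating thread (Thread A)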

I managed to refine the issue down to one fact:

After releasing my pbuffer’s DC, I can’t get it back again (wglGetPbufferDC always returns NULL after the original has been released, no matter what I try).
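
In code terms (placeholder handles, ARB entry points assumed loaded), it boils down to:

wglReleasePbufferDCARB( hPbuffer, hPbufferDC );    // release the DC from the old thread

// ... later, from whichever thread wants to use the pbuffer next ...
HDC hdc = wglGetPbufferDCARB( hPbuffer );          // from here on, this is always NULL for me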

And clearly I can’t use a single DC across multiple threads, so I’m somewhat lost as to what I can do here, besides completely cutting off the handful of clients we have that can’t meet the OpenGL 2.0 requirements for FBOs.

How (and why) are you releasing your Pbuffer’s DC?

wglReleasePbufferDC - and because DCs are thread-specific, if I need to use the pbuffer in another thread, I have to release the pbuffer’s DC from the old thread first.

I’ve managed to get this working flawlessly on XP, however Vista now has issues after rendering the first frame… even if I never release/re-get any of the DCs (eg: just keep the first one I get forever).

I’m not seeing it, Smokey.

I created a pbuffer in the main app thread… then, in a render thread, rendered a triangle to it and copied the lot to a texture, without any of the releasing business - all systems go.

[Vista32, G80/181.20]
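
Roughly what that test boils down to (condensed, placeholder names, ARB pbuffer functions assumed loaded):

// --- main app thread ---
int attribs[] = { 0 };
HPBUFFERARB pbuf = wglCreatePbufferARB( hWindowDC, pixelFormat, 512, 512, attribs );
HDC   pbufDC = wglGetPbufferDCARB( pbuf );
HGLRC pbufRC = wglCreateContext( pbufDC );

// --- render thread (no releasing in between) ---
wglMakeCurrent( pbufDC, pbufRC );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
// ... draw the triangle ...
glBindTexture( GL_TEXTURE_2D, tex );
glCopyTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512 );   // copy the pbuffer contents into the texture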

That works for me too in Windows XP - not in Vista, however.
Also, technically, nVidia’s driver clearly isn’t enforcing the Win32 rules on device contexts there - the Win32 API documentation clearly states that a DC is thread-specific.

In any case, I don’t care about implementation laziness; I care more about getting this to work on ‘both’ XP and Vista, whether I properly release the DC or not.

I’m a bit lost as to what’s going wrong. I’ve got glGetError checks after every single gl* call, I check the return values of all wgl* calls, and I DOUBLE-check with GetLastError even when the return value checks out. Not a single problem is reported when I don’t get/release the DC properly.
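
To give an idea, the checks look more or less like this (simplified; the helper names are made up):

static void CheckGL( const char* where )
{
    GLenum err = glGetError();
    if ( err != GL_NO_ERROR )
        fprintf( stderr, "GL error 0x%X after %s\n", err, where );
}

static void CheckWGL( BOOL ok, const char* where )
{
    DWORD last = GetLastError();    // double-checked even when the return value looks fine
    if ( !ok || last != ERROR_SUCCESS )
        fprintf( stderr, "%s: ok=%d, GetLastError=%lu\n", where, (int)ok, last );
}

// usage:
// CheckWGL( wglMakeCurrent( hdc, hglrc ), "wglMakeCurrent" );
// glClear( GL_COLOR_BUFFER_BIT );  CheckGL( "glClear" );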

If I ‘do’ get/release the DC properly, Vista returns 0/NULL for the second call to wglGetPbufferDC, but GetLastError indicates the function completed successfully - which just adds to my confusion.

I’ve spent the last 6 hours trying all sorts of random things in Vista, with no luck :frowning: (again, XP working fine from the same code base.)

In all cases, the first frame renders fine - and all frames thereafter don’t seem to render ‘anything’. Clearing, swapping buffers, etc. still work, since I get the correct values when reading back the depth and colour buffers (the depth buffer is exactly what I cleared it to, and the colour buffer is the same colour I cleared it to - in this case, pure blue). That would indicate the depth test is being affected, or maybe my rendering commands are somehow being ignored - but even re-setting the OpenGL states each frame doesn’t help… :confused:

XP32 / Vista32 - also with a G80 running 181.20.

Run your application with GLIntercept in full debug mode. It checks automatically for OpenGL errors (no need to call glGetError after each OpenGL call yourself), and it also does thread checking and error logging.

Okay, I managed to get GLIntercept working.

With their full debugging configuration, GLIntercept reports no errors on anything other than the OpenGL calls it makes internally to double-check everything (i.e. calls not made by me) - specifically, it’s calling wglDescribePixelFormat and wglGetPixelFormat quite a lot under incorrect circumstances. Besides that, no errors from my code.

Still very confused, the exact same code works fine under Windows XP, and the code that isn’t specific to pbuffers (eg: rendering code) works fine under Vista when not rendering to a pbuffer.

Edit: If I remove all of my own debugging code, GLIntercept with its full debug configuration runs fine, without reporting a single error at all, when running the application for over a minute.

Well, if you’re having problems, why don’t you just destroy it completely, instead of just releasing it?

You know:
wglDeleteContext( … );            // destroy the rendering context first
wglReleasePbufferDCARB( … );      // then hand the pbuffer’s DC back
wglDestroyPbufferARB( … );        // and finally destroy the pbuffer itself

btw, I don’t know if this is helpful, but if you check the Wine sources on wglReleasePbufferDCARB(), it actually calls DeleteDC(hdc).

And if you want to further debug your DC handle, I would call GetObjectType() on it. It’ll let you know whether it’s still valid or not.
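
Something along these lines (placeholder handle name):

DWORD type = GetObjectType( hPbufferDC );
if ( type == 0 )
{
    // GetObjectType() returns 0 for an invalid handle - the DC is gone
}
// a live device context comes back as OBJ_DC (or OBJ_MEMDC for a memory DC)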

All this time, it appears my original pbuffer code was flawless.

<s>This actually appears to be a problem with another library I’m using, which is giving me completely bogus camera transformation values in Vista… to the point where anything I render isn’t projected anywhere on the screen at all - explaining why I’m getting the correct cleared depth/colour values, but nothing I’m rendering is being displayed.

Sorry for the mix-up… now I just have to figure out why this other library doesn’t like Vista.</s>

Edit: I spoke too soon - the library I’m using is working fine; my method for rendering isn’t though, at least not on Vista. I replaced my VBO code with vertex-pointer code, with no luck, then replaced my vertex-pointer code with immediate-mode rendering calls (glBegin/glVertex3fv/etc.) - and it’s now rendering properly…

So I have to figure out what I’m doing wrong with both my VBO and vertex pointer code, which somehow freaks out Vista ‘only’ when rendering to a pbuffer -_-

ah-ha!

I’ve finally figured it out. CUDA seems to be leaving the OpenGL state somewhat mangled after locking/unlocking a PBO for use by CUDA.

The reason my vertex-pointer (glDrawArrays) code wasn’t working is that there was a buffer still bound to GL_ARRAY_BUFFER, despite the fact that I had completely disabled all VBO-related code in my OpenGL renderer.

My application ‘does’ however do quite a lot of CUDA-related processing after each frame I render, including OpenGL interop (I copy from the pbuffer to PBOs, so CUDA can process the results of my renders)…

It appears (although this isn’t documented in the CUDA docs at all, and is clearly driver-version/OS dependent) that after I register/lock a PBO for use by CUDA and then unlock it, the PBO is left bound to GL_ARRAY_BUFFER. And because my OpenGL renderer made the naive assumption that nothing else would mess with its own context, I didn’t bother clearing buffer states and the like before rendering - so CUDA’s PBO was still bound…
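
For anyone hitting the same thing, here’s a sketch of the offending sequence plus the workaround. Names are placeholders, this uses the old cudaGL* interop API I’m on, and glBindBuffer is assumed to come from your usual extension loader (e.g. GLEW):

#include <GL/glew.h>
#include <cuda_gl_interop.h>

void ProcessFrameWithCuda( GLuint pbo )   // pbo was registered earlier with cudaGLRegisterBufferObject()
{
    void* devPtr = 0;
    cudaGLMapBufferObject( &devPtr, pbo );    // "lock" the PBO for CUDA
    // ... run CUDA kernels on devPtr ...
    cudaGLUnmapBufferObject( pbo );           // "unlock" it again

    // Workaround: at this point the driver can leave the PBO bound to GL_ARRAY_BUFFER,
    // which silently breaks later glVertexPointer/VBO rendering that assumes a clean
    // binding state - so unbind it explicitly before handing control back to the renderer.
    glBindBuffer( GL_ARRAY_BUFFER, 0 );
}

Clearing the binding at the start of each rendered frame works just as well.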

Biggest waste of ~12 working hours ever, due to yet another undocumented ‘feature’ (cough side-effect/bug cough) of CUDA.

Edit: nVidia forum link to my problem report, for those who’re interested

That’s why GL needs to be cleaned up, so that the only way to render is VBOs - instead of having vertex arrays, display lists, compiled vertex arrays, immediate mode… and on top of all that, the multiple ways to submit (glDrawArrays, glDrawElements, glDrawRangeElements, glMultiDrawElements).

PS: since you are relying on PBOs, that goes hand in hand with GL 2.1 drivers anyway, which always support FBOs.

Yup, but we already had a 1.5 code base - it was easier for us to add ‘optional’ support for some 2.1 features (PBOs) to the 1.5 renderer, without changing anything else, than it was to write a 2.1 renderer to accommodate CUDA.

Heh, thinking about it - if this issue hadn’t taken the last day and a half to resolve, I could’ve had a nice OpenGL 2.1/3.0 renderer up and running by now.
