
View Full Version : Multithreading in Opengl under windows



abhinav_80
07-18-2002, 07:39 PM
How do I implement multiple threads in an OpenGL application on Windows? I have a sample application from Microsoft called "GLTHREAD.EXE", but it crashes on the first run, failing in glViewport(). How do I solve this problem?

immy
07-20-2002, 04:59 AM
Hi,
You can do multithreading in OpenGL, but a rendering context can be current in only one thread at a time. You have to create as many rendering contexts as you have threads, then share them using
wglShareLists(rc1, rc2);

Using this function, you can access textures and display lists created in rc2 from the thread that has rc1 current.

Everything else works the same as normal multithreading in Windows programming (I assume you are using a Windows OS). This info is complete but not detailed (you must do some work yourself, too).

Good luck!

ChrisBond
07-23-2002, 03:49 AM
*Or*, you can treat a single HGLRC the same way you would any other resource shared between threads and only access it in a thread-safe manner. That way you can avoid wglShareLists, if you like. Just make sure only one thread can ever have the context current (i.e., wrap your calls to wglMakeCurrent() so that they must lock a CRITICAL_SECTION first, for example).

ChrisBond

ToolChest
07-23-2002, 02:49 PM
Does wglShareLists share all OpenGL resources, like texture objects? And can you create two threads, each with its own context, on the same window? This would still be slow, right? If thread 1 is in glFinish() and thread 2 is trying to work on the next frame, say with a call to glVertex*(), would thread 2 wait until the glFinish() completed?

hope this is clear...

Thanks...

John.

ToolChest
07-25-2002, 04:41 AM
Never mind, I've got it covered…

Chris, I definitely like your idea. Sharing the lists will work for things like loading textures, but not when you're trying to break up your rendering code…

Thanks…

John.

ChrisBond
07-26-2002, 07:28 AM
Glad you like my idea, but be aware of its limitations. The phrase "break up my rendering code" makes me worry you're going to run out and try having two threads render to the same window at once, which is legal (when done correctly), but probably won't work the way you expect.

First off, rather than have several threads that each render different parts of the scene, I tend to have one thread that does all the actual rendering, and one or more "helpers" that do background work that requires a resource context but won't actually put anything in the framebuffer.

Actually letting two threads draw to the same window gets complicated fast (meaning, it's too tricky for me). Who decides when to call SwapBuffers()? Who makes sure that if part of object1 is in front of object2, then the thread drawing object1 goes first?

I'm not saying it can't be done... just that it's always been easier for our projects to let the "helper" threads make life so easy for the rendering thread that it's waiting for *them* to finish, rather than vice versa. More clearly: our bottlenecks always wind up being in the "decide what to draw and how to draw it" process rather than in the actual drawing itself.

The typical example is a visibility-test thread for my objects. Given information about the user's movement, it can do whatever needs to be done to put objects in the "to be drawn next frame" buffer. Obviously this will require calls like gluUnProject() that need a current context. As long as this thread has actually called wglCreateContext() and gotten a unique HGLRC/DC pair, it's perfectly legal to have two different HGLRC/DC pairs current in two different threads. (I say that, but older references mention that some hardware crashes if you do this... I've never had a problem, but I've only been doing OpenGL since around 2000.)

BTW, it's a good idea to use the least memory-hungry pixel format possible for the threads that won't be doing any drawing... gluUnProject() really doesn't care whether you have a stencil buffer or not, or what your color buffer depth is, but you'd better have an identical depth buffer.

Done correctly, the potentially frame-rate reducing job of visibility determination can be done for the next frame *while* the current frame is being rendered.

Obviously this is fudging a bit, as the CPU is really only running one thread at a time. But in my experience, the CPU has a lot of "idle" time waiting for someone else to finish. I'd rather it spend that time doing some trigonometry to see which objects need to be drawn in the next frame.

Once you've got this working, it's trivial to extend the idea to things like occlusion tests, shadow calculations, etc. In my team's case, this approach has sped things up enough that we've been able to focus on our actual project goals rather than designing the fastest possible quadtree traversal algorithm, or whatever.

BTW using wglShareLists() along with this approach can let you load and bind your textures to particular texture objects in one thread and actually use them in another.

Whew - more than I really intended to say. Hope it helps!

-Chris Bond

ToolChest
07-27-2002, 10:27 AM
Chris,

Thanks for the reply. I think that my pipeline may restrict me to one thread for rendering. Breaking up the code becomes an instant mess of wrong alpha blending or wrong depth testing. The problem is that the pipeline is far too order-dependent. It was worth a try.

Thanks for the info and I wish you luck on your project.

John.

abhinav_80
07-29-2002, 09:36 PM
Thanks, Chris, for your valuable suggestion. I tried it and it worked fine.
Thanks again.
Abhinav