xlib/glX dedicated rendering thread

Hi all,

My situation is as follows: my program (using Xlib) has an X window. Widgets are drawn inside the window using the XRender extension (in the main thread). Inside the window there is a small sub-window. The sub-window has an OpenGL(R) context (glX, compatibility profile) and renders a single triangle (for demonstration purposes). The triangle is redrawn whenever an expose event is received for the sub-window (also in the main thread; the context is made current for redrawing and released immediately afterwards). So far so good.

Now I added a second sub-window with an equivalent OpenGL(R) context. A thread is created that makes the context of the second sub-window current and continuously redraws the scene in a loop. A single mutex ensures that only one thread at a time makes Xlib or glX calls. Using ConnectionNumber( ) and select( ), I made sure that the mutex is unlocked while the main thread is waiting for XEvents.
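To make the setup concrete, here is a minimal sketch of the main-thread event wait (not my actual code; xlib_lock/xlib_unlock are placeholder wrappers around the shared mutex, and the real dispatch lives inside the widget toolkit):

    /* Hold the mutex only while actually touching Xlib; block in select() without it. */
    #include <X11/Xlib.h>
    #include <sys/select.h>

    extern void xlib_lock(void);    /* placeholder: locks the shared Xlib/glX mutex   */
    extern void xlib_unlock(void);  /* placeholder: unlocks the shared Xlib/glX mutex */

    static void wait_and_dispatch(Display *dpy)
    {
        for (;;) {
            XEvent ev;

            xlib_lock();
            /* Drain everything that is already queued while we hold the lock. */
            while (XPending(dpy)) {
                XNextEvent(dpy, &ev);
                /* ... dispatch ev to the widgets / redraw the GL sub-window ... */
            }
            xlib_unlock();

            /* Block on the connection fd with the mutex released, so the
             * render thread can make Xlib/glX calls in the meantime. */
            fd_set fds;
            int fd = ConnectionNumber(dpy);
            FD_ZERO(&fds);
            FD_SET(fd, &fds);
            select(fd + 1, &fds, NULL, NULL, NULL);
        }
    }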

When using an NVIDIA GPU with their proprietary libGL implementation, everything works as expected. When using Mesa with an Intel GPU (Optimus system), the second sub-window doesn't display anything (only whatever was at that screen location before), two CPU cores go to 100%, and when dragging the main window around, it lags behind the mouse cursor.

I know that the glXMakeContextCurrent call in the second thread returns. I also know that the thread's drawing loop is executed and glXSwapBuffers returns. I don't get any errors. After reading the glX specification, I can't find any obvious issues (obvious to me, that is). Google didn't bring up anything related to my question.

Does the description sound like any obvious synchronisation (or whatever) problem? I can show any part of the source code on request; I just didn't post any because we are talking about some blob of widget toolkit library here. Also, I could retry it using an AMD card with their proprietary implementation sometime this week.

I'm thankful in advance for any insights on the issue.

David

Sorry for the double post, but apparently I cannot edit my own post anymore.

Anyway: I tried it on an AMD card today and it showed the same behaviour as on Mesa. By playing around later, I found out that it works perfectly with Mesa when binding the context within the thread's drawing loop[strike], so I guess it must be some silly wait-for-this-event-first thingy.

Do I have to wait for an expose event first? Do I have to wait until the window is mapped? I would still be happy about any ideas, as repeatedly binding/unbinding the context in the drawing loop is obviously not the way to go.[/strike]

EDIT: Okay, I think I figured it out. The sub-window's context was single-buffered. According to the spec, this makes glXSwapBuffers a no-op on that drawable (and potentially it also skips the implicit flush). Some implementations apparently let the command queue grow without bound when they never receive a flush. Replacing glXSwapBuffers with glFlush (or creating a double-buffered context) solved the problem on Mesa (I can't try AMD right now).
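For reference, a rough sketch of the render-thread loop after the fix (placeholder names again; xlib_lock/xlib_unlock wrap the shared mutex, and the triangle drawing stands in for the real scene):

    #include <GL/gl.h>
    #include <GL/glx.h>

    extern void xlib_lock(void);
    extern void xlib_unlock(void);

    static void render_loop(Display *dpy, Window win, GLXContext ctx)
    {
        xlib_lock();
        glXMakeCurrent(dpy, win, ctx);   /* bind once; the context stays current in this thread */
        xlib_unlock();

        for (;;) {
            /* Plain GL calls don't touch the X connection, so no lock needed here. */
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
            glEnd();

            xlib_lock();
            glFlush();   /* single-buffered drawable: flush explicitly instead of swapping */
            /* glXSwapBuffers(dpy, win);  -- only meaningful for a double-buffered drawable */
            xlib_unlock();
        }
    }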

BTW, do you know that a GLX context is not bound to one window? As in your case, you can use a single context to draw into more than one window by issuing glXMakeCurrent for each in turn.
I mention this in case it would be easier for you than maintaining two separate contexts (and threads).
I say this because some people seem to be unaware of it.
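Roughly like this (assuming both windows were created with a visual/FBConfig compatible with the context):

    #include <GL/gl.h>
    #include <GL/glx.h>

    static void draw_both(Display *dpy, Window win_a, Window win_b, GLXContext ctx)
    {
        glXMakeCurrent(dpy, win_a, ctx);
        /* ... draw scene A ... */
        glFlush();

        glXMakeCurrent(dpy, win_b, ctx);
        /* ... draw scene B ... */
        glFlush();

        glXMakeCurrent(dpy, None, NULL);  /* release the context when done */
    }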
