About Contexts

Hi there

I just arrived in the interesting but sometimes scary world of multithreading.

Well, what I do is asynchronously (pre-)load data, so that the latency of disk accesses and the like is hidden from the user. Of course this only works if I know in advance which data I will need.

Anyway, one thing I need to do is make the GL context current on different threads. Usually it’s current in the rendering thread, but sometimes I need it in the preloading thread to upload textures.

The question is: how important is it to keep context switches to a minimum? My app is not very demanding at the moment, so I don’t see a difference between switching the context once every frame and not switching it at all.

Should I do everything possible to only switch the context when it is absolutely necessary, or does it not matter if I sometimes switch it even though I don’t need to?

Thanks,
Jan.

Well, do you mean you want to switch contexts just for pleasure? :slight_smile:

Sorry for not answering your question, but…

Why don’t you just use the second thread to load the data from disk and create the texture in the main thread?

And, just out of interest, what is the total amount of texture data that you have to load?

Originally posted by Jan:
Well, what I do is asynchronously (pre-)load data, so that the latency of disk accesses and the like is hidden from the user. Of course this only works if I know in advance which data I will need.

Anyway, one thing I need to do is make the GL context current on different threads. Usually it’s current in the rendering thread, but sometimes I need it in the preloading thread to upload textures.

That’s where a second context and wglShareLists come into play. Just don’t switch away from the current render context, and be careful to never update a texture object that is currently in use in the renderer thread. That’s all.

Search for answers about how to do this in the Windows forum. I answered this multiple times with pseudo code.

Here’s one:
http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=9;t=000302#000007
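
In short, the idea looks roughly like this (a simplified sketch, not the exact pseudo code from that thread; all names are made up for illustration and error checking is omitted):

```cpp
// Simplified sketch (Win32/WGL): a second context that shares its objects
// with the render context, used only from the loader thread.
#include <windows.h>
#include <GL/gl.h>

struct LoaderArgs {
    HDC         hDC;        // device context compatible with the render context
    HGLRC       hLoaderRC;  // second context created via CreateLoaderContext()
    GLuint      texture;    // texture name to fill (shared between both contexts)
    int         width, height;
    const void* pixels;     // already read from disk and decoded
};

// Called once from the main thread, before the loader context owns any objects.
HGLRC CreateLoaderContext(HDC hDC, HGLRC hRenderRC)
{
    HGLRC hLoaderRC = wglCreateContext(hDC);
    wglShareLists(hRenderRC, hLoaderRC);   // share textures, display lists, VBOs, ...
    return hLoaderRC;
}

// Loader thread: uses the second context; the render thread never releases its own.
DWORD WINAPI LoaderThread(LPVOID param)
{
    LoaderArgs* a = static_cast<LoaderArgs*>(param);

    wglMakeCurrent(a->hDC, a->hLoaderRC);

    glBindTexture(GL_TEXTURE_2D, a->texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, a->width, a->height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, a->pixels);
    glFinish();   // make sure the upload is finished before the renderer uses it

    wglMakeCurrent(NULL, NULL);
    return 0;
}
```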

My system is already up and running, and it is absolutely safe. It is only for loading resources that are not available yet; it is not for changing things that are already in use.

And it is not only for textures, but for everything that has to be loaded, INCLUDING OpenGL data (textures, VBOs, shaders, …).

Therefore the shared lists won’t work.

The thing is, the moment the loading thread and the main thread BOTH try to load the SAME data, they synchronize with each other, and the OpenGL context is given to whichever one started loading first.

However, as long as they access different data, the main thread has priority. Therefore, the loading thread only gets the context if the main thread suddenly needs the same data, or if it gives the context away just for pleasure.

That would be the easy solution. The slightly more difficult solution would be to tell the main thread that there are loading threads that need the context, so it should give it away after the next frame. The thing is, as long as there is no real penalty in switching the context every few frames, I would like to keep the system simple.
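
Roughly, the hand-off in the easy solution looks like this (a heavily simplified sketch; it leaves out the priority rule, and all names are made up just for illustration):

```cpp
// Heavily simplified sketch: ONE context handed between the render thread and
// a loading thread, guarded by a critical section.
#include <windows.h>
#include <GL/gl.h>

static CRITICAL_SECTION g_contextLock;   // InitializeCriticalSection() once at startup
static HDC              g_hDC;
static HGLRC            g_hRC;

// Render thread: holds the context for the duration of a frame.
void RenderFrame()
{
    EnterCriticalSection(&g_contextLock);
    wglMakeCurrent(g_hDC, g_hRC);
    // ... issue all rendering commands for this frame ...
    SwapBuffers(g_hDC);
    wglMakeCurrent(NULL, NULL);           // context is free between frames
    LeaveCriticalSection(&g_contextLock);
}

// Loading thread: grabs the context only for the actual upload.
void UploadTexture(GLuint tex, int w, int h, const void* pixels)
{
    EnterCriticalSection(&g_contextLock);
    wglMakeCurrent(g_hDC, g_hRC);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                           // the data really is on the card afterwards
    wglMakeCurrent(NULL, NULL);
    LeaveCriticalSection(&g_contextLock);
}
```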

So, does anyone know how heavy context-switching really is? Doesn’t one need to switch contexts all the time when doing render-to-texture with pbuffers? I’ve never done that, but maybe someone knows.

Jan.

My system is already up and running, and it is absolutely safe.
Taking bets? :wink:

And it is not only for textures, but for everything that has to be loaded, INCLUDING OpenGL data (textures, VBOs, shaders, …).
Therefore the shared lists won’t work.
wglShareLists shares almost any OpenGL object, including display lists, texture objects, VBOs, and shaders. (Not occlusion queries, which are considered too small.)

The thing is, the moment the loading thread and the main thread BOTH try to load the SAME data, they synchronize with each other, and the OpenGL context is given to whichever one started loading first.
My impression of this is that only one thread should be responsible for loading, so that you do not run into this “both try to load the same” case.
Only acquire the GL context for the short time it takes to download the data to the GL.
Well, that sounds simple enough. If you don’t do other work in parallel during the data transfer to the GL anyway, what you gain is the time spent asynchronously loading data in the thread not holding the GL context, minus the context-switching overhead.

Switching one context between threads is probably more expensive than switching between two contexts in one thread, as you would for p-buffers. Either way, it should be much faster than loading data from disk, so just give it a try.

Two threads with two shared contexts is not a lightweight setup to begin with either, but it has no need for additional make-current calls.

Actually, yes, I’m taking bets. :stuck_out_tongue:

And yes, of course I only acquire the context for downloading the data to the GPU. All disk accesses and other data processing are completely asynchronous.
There are still minor delays, because uploading a big texture takes some time (~0.2 seconds), but that’s a huge improvement, and as long as GL doesn’t support asynchronous data uploads, it will always be this way.

When loading data in parallel, you always need to be prepared for the case where the same data is requested by several threads. Especially in a system like this, because its intended use is to preload data when I know it will be needed shortly. And then, who says the preloading is finished by the time you actually need the data?
So I would say that the fact that my system handles this situation not as an exception but as a typical case is a good thing.

Well then, if no one cries out loud that I should never, ever do this, I will go with my simple system.

Jan.

Deal. Send me your program and I’ll take it on. :smiley:

When it’s done. :smiley:

Generally it’s the threads that reduce the FPS, not the context switches, although those might have an impact too. What I can say is that in my tests years ago, each new thread reduced the framerate, whether it needed and used a context or not. Problems can also occur when the threads ‘interfere’ with each other, i.e., when the rendering thread hasn’t finished sending its commands to GL but the OS scheduler decides to switch to another thread. That is definitely a bad thing. And making things atomic here isn’t doable (by atomic I mean a piece of work that is executed completely before any switch can happen). A solution would be to make the waiting threads sleep until the rendering is finished.

From what I understood from the posts, I guess the best thing would be to have a low-priority thread that loads things without any context, and once the data is loaded, use a context to put it into GL as new objects (textures, vertex data, …). Once that step is done, notify the rendering thread so that it can use the new objects and let the low-priority thread destroy the old ones.
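
A rough sketch of that flow, assuming a second shared context as discussed above (all names are made up for illustration):

```cpp
// Rough sketch: the low-priority loader does disk I/O with no context current,
// uploads via its own shared context, then hands the finished texture name
// to the render thread through a small guarded queue.
#include <windows.h>
#include <GL/gl.h>
#include <queue>

static CRITICAL_SECTION   g_readyLock;      // InitializeCriticalSection() once at startup
static std::queue<GLuint> g_readyTextures;  // finished textures for the render thread

void LoaderThreadBody(HDC hDC, HGLRC hLoaderRC,
                      int w, int h, const void* pixelsFromDisk)
{
    // 1) Disk I/O and decoding happen before this point, with NO context current.

    // 2) Upload: take the loader's own shared context only for the transfer.
    wglMakeCurrent(hDC, hLoaderRC);
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixelsFromDisk);
    glFinish();
    wglMakeCurrent(NULL, NULL);

    // 3) Tell the render thread that the new object exists.
    EnterCriticalSection(&g_readyLock);
    g_readyTextures.push(tex);
    LeaveCriticalSection(&g_readyLock);
}

// Render thread, once per frame: pick up whatever finished since last frame.
void PickUpFinishedTextures()
{
    EnterCriticalSection(&g_readyLock);
    while (!g_readyTextures.empty()) {
        GLuint tex = g_readyTextures.front();
        g_readyTextures.pop();
        // ... start using 'tex' and delete the old object it replaces ...
    }
    LeaveCriticalSection(&g_readyLock);
}
```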

Hope that helps and hope that’s understandable.

Yes, that’s understandable. It’s essentially what I’m doing.

I don’t think threads reducing the framerate will be a problem, because the loading threads only live for the short time they are loading something.

In the future, maybe I’ll add additional threads to do some other tasks in parallel (sound, AI, … I don’t know), but only because that should be faster on multi-core CPUs. Today, on single-core CPUs, having several threads is definitely not such a good thing.

Jan.