Is an asset loader thread possible?

Hello there,

I am developing a game with an open world. I want to load world information, new models, textures, etc. from another thread so the main render thread can keep up the framerate. My idea was to inform the second thread when the player is near a new zone, so it can load a few things. The problem is that, as described in the wiki, I cannot make the same context current in two or more threads. How would I do this? Or how would you do it? I thought of just loading the data and preparing it (reordering stuff), so the loader can then inform the main thread that it has to upload the new resources in the next frame. Is this the only way to do it?

It’s not the only way to do it, but it would work.

What would also work would be to create a second, off-screen rendering context that you use for loading data. It would need to share objects between contexts, of course, so that the other context can pick them up.

Thank you for your answer. Is there a list somewhere of the things two contexts can share? And how would I do this? Do I only need to set the two contexts up to share everything they can share, or would I need to actively share everything I upload to the second context?

PS: Sorry for those stupid questions; I probably could find the answer in the documentation, but it's hard to find if you don't know what to search for.

Since you’re looking for what object types an OpenGL context can share, you could try searching the OpenGL wiki for “Context”. Or for “Object” (though to be fair, I just now created that redirect).

Wow, I just went through many permutations of "share" and "object", and the right page only showed up in the middle of the results (if at all). MediaWiki's search functionality is really terrible.

Thank you, now I only have to find out how to create a shared (child?) context. But I think Google will help me with this.
Thanks a lot.

SDL2, for example, comes with shared GL contexts out of the box. If you don't use SDL2, you can take a look at their code.

The things you would update/create in a separate thread would be buffers and textures. If you use persistent mapping (ARB_buffer_storage), I think you could use the buffer memory pointers in a thread that does not have an OpenGL context.

After doing anything with OpenGL objects, you must always make sure you wait on a fence before touching them from another thread!
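The fence idiom looks roughly like this (a sketch only; it assumes two shared contexts are already current on their respective threads, and passing the `GLsync` between threads needs its own safe publication, e.g. through a locked queue):

```c
/* Loader thread: after the last GL call that touches the new object,
 * insert a fence and flush so the commands are actually submitted. */
GLsync upload_done = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();  /* make sure the fence command reaches the GPU */

/* Render thread: before the first use of the shared object, wait on it
 * (timeout in nanoseconds; here ~1 second). */
GLenum r = glClientWaitSync(upload_done, 0, (GLuint64)1000000000);
if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
    glDeleteSync(upload_done);
    /* safe to bind and use the shared buffer/texture now */
}
```

This fragment needs a live GL context and loaded function pointers, so it is not runnable as-is.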

If you use persistent mapping (ARB_buffer_storage), I think you could use the buffer memory pointers in a thread that does not have an OpenGL context.

You don’t need persistent mapping to do that. As long as the buffer doesn’t get unmapped while the thread is filling it, you’re fine.

Isn’t repeatedly mapping/unmapping one of those extreme performance killers that you should avoid in any case?

Mapping or unmapping a buffer each frame is not free, but it’s not going to kill you. The main thing is to map the buffer with invalidate, so that the driver can reallocate behind the scenes for you if the buffer is still in use.

Persistent mapping effectively makes this all free, but it was certainly usable before that.
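For reference, the two idioms discussed here might be sketched as follows (fragments that assume a current context and a created buffer object; error checking omitted):

```c
/* Map-with-invalidate: INVALIDATE_BUFFER lets the driver hand back fresh
 * storage if the old contents are still in use by in-flight commands. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                             GL_MAP_WRITE_BIT |
                             GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, newData, size);
glUnmapBuffer(GL_ARRAY_BUFFER);

/* Persistent mapping (GL 4.4 / ARB_buffer_storage): immutable storage,
 * mapped once; the pointer stays valid, and the memcpy itself can then
 * happen on a thread without a context -- fences are still needed. */
glBufferStorage(GL_ARRAY_BUFFER, size, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                GL_MAP_COHERENT_BIT);
void *persistent = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                    GL_MAP_WRITE_BIT |
                                    GL_MAP_PERSISTENT_BIT |
                                    GL_MAP_COHERENT_BIT);
```

These fragments require a live GL context, so they are illustrative rather than runnable.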

Chapter 5 of the OpenGL 4.5 compatibility profile specification says

Objects that may be shared between contexts include buffer objects, display lists, program and shader objects, renderbuffer objects, sampler objects, sync objects, and texture objects (except for the texture objects named zero).

The core profile specification omits display lists (which don’t exist in the core profile), but is otherwise identical.

It then goes on to say

Objects which contain references to other objects include framebuffer, program pipeline, query, transform feedback, and vertex array objects. Such objects are called container objects and are not shared.

Okay, one more question:
If I generate, e.g., a buffer in the loader thread with the loader context bound, fill it, flush it (glFlush/glFinish or whatever is best for this purpose), and then pass the GLuint over to my main render thread, will the GLuint work there?

Why am I asking? I am not sure how to think about two contexts. Are they more like processes, where each has its own virtual address space and they are synchronized somehow, or are they more like threads, which share an address space (so that the GLuint, which I guess is a handle to data in VRAM, still points to the same data)?

If the two threads share objects, then what you say will work. If they do not, then it won’t.

Also, there’s no need to finish or flush (unless you’re using persistent mapping without coherency, in which case you need to do some synchronization). As long as the receiving thread binds the buffer, then the contents previously set will be visible.

I am still talking about two different contexts which are set up to share objects, not one single context which I hand over to another thread.
(For me, that's secondContext = glfwCreateWindow(1024, 768, "", NULL, firstContext);, where both contexts are bound by two threads and thread #2 loads while thread #1 waits for the resulting GLuint to render.)

Edit: Oh, you said “binds the buffer” not “binds the context”. Sorry, misread this :frowning:
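Put together, the GLFW variant described above might look like this (a sketch under assumptions: the data to upload already exists, error checking and the safe publication of `vbo`/`done` between threads are omitted, and a live GL context plus loaded function pointers are required):

```c
/* Main thread: create two contexts that share objects. The fifth
 * argument of glfwCreateWindow is the context to share with. */
GLFWwindow *renderWin = glfwCreateWindow(1024, 768, "game", NULL, NULL);
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            /* off-screen */
GLFWwindow *loaderWin = glfwCreateWindow(1, 1, "", NULL, renderWin);

/* Loader thread: make the loader context current, create and fill
 * a buffer, then fence and flush. */
glfwMakeContextCurrent(loaderWin);
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
GLsync done = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();
/* ...publish vbo and done to the render thread... */

/* Render thread (renderWin current): wait for the fence; the shared
 * name is then usable here too. */
glClientWaitSync(done, 0, (GLuint64)1000000000);
glDeleteSync(done);
glBindBuffer(GL_ARRAY_BUFFER, vbo);  /* same GLuint, same data */
```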

Conceptually, a context is effectively a (large) “struct”; see the state tables in the final chapter of the specification for what it contains. However, some of the fields are “handles” to “objects” which are dynamically allocated. In the absence of sharing, each context has its own set of objects, i.e. any given object is only usable with a specific context. When two or more contexts share data, some objects may be usable with multiple contexts.

In practice, an implementation is free to duplicate data, but this should (mostly) be invisible to the client. Regardless of whether an implementation does so, the asynchronous nature of OpenGL means that changes to shared objects made from one thread aren’t guaranteed to have taken effect at the point the function returns (however, they are guaranteed to have taken effect by the time that any subsequent command on the same context is processed). If you need to ensure that they have taken effect (e.g. in the sense that commands executed in a different thread will see the updated data), you need to call glFinish() or use a fence. For more specific details, refer to chapter 5 in the OpenGL 4.5 specification.

Well, but that's really bad for me. If I load 1 GB and have to use 2 GB of VRAM for it, that's a lot…

You aren’t going to see implementations duplicate large amounts of “stable” data, particularly in VRAM.

What can realistically happen is that updating data may result in both the old and new versions existing in VRAM simultaneously, so that the update doesn’t have to wait until prior commands have completed (prior commands which have been issued but not yet completed must use the data as it existed at the time they were issued).