Any reason not to share GL context's resources

I am working on a program that uses multiple OpenGL targets and ran into the problem of creating/destroying GL objects (textures, VBOs, PBOs, shaders, etc.) while the correct context is active. I find the destruction of such objects particularly critical. A possible solution would be not to delete such objects immediately when requested, but to put them in a queue belonging to the appropriate GL context, and then perform the actual destruction the next time that context is activated/used. The drawback is that when you create a resource you have to keep track of which GL context it belongs to.

So I arrived at the most obvious solution: why not always share GL context resources, using commands like wglShareLists? This way I don’t care which GL context is active when I create/destroy an object (as long as there is at least one active).

From this very simple consideration comes my question: what are the drawbacks or “cons” of sharing GL context resources, if there are any?
Is there a reason not to do it, such as performance penalties or undesirable side effects to be aware of?

Thanks

I do not believe that anyone has actually done a study on this matter; still, list sharing is not an often-used feature, so I would assume there could be a performance hit associated with it. Anyway, it wouldn’t be difficult to write a test app and benchmark the differences…

Obviously, the con is that all your contexts have to be able to share with each other. For example, when rendering to contexts with different pixel formats, wglShareLists() may fail.

If you are sure that this will not happen, the easiest way is to create a hidden context on app startup and share all visible contexts with it. This has the side effect that your resources stay alive even if all visible contexts are destroyed.

I did that sophisticated context management in the past, but dropped it in favour of a hidden context. For me, it was not worth the effort, and now life is easy.

Also, keep in mind that with GL3 coming up, this whole sharing business may change. (Well, of course that seems true for any new OpenGL development at the moment…)

CatDog

I have had good experiences with wglShareLists(), on both ATI and NVIDIA. I have used it in conjunction with pbuffers (for tonemapping, tile-based rendering and other postprocessing stuff) that were unable to share their rendering contexts directly. This way I sometimes had up to 5 contexts sharing their lists. It basically works without problems, as long as the contexts that are to share their lists reside on the same graphics card.

On the other hand, today with FBOs there’s no reason to use more than one context per GPU. You either just don’t need it (no pbuffers anymore) or have got something else wrong (there’s no point in talking to the same gfx card from two threads concurrently).

Thanks guys!
Also based on my experience, resource sharing seems to be a quite safe and stable feature, so I will probably go that way. But just in case, I will post another question about possible techniques for dealing with the problem without sharing GL contexts’ resources.

It’s true that, thanks to FBOs, the need for multiple GL contexts is now less likely, but if your application opens and closes multiple windows they can still be quite useful.

Thanks

Why not use the same render context for all windows? It works :slight_smile:

Errr… how exactly?

CatDog

You just rebind the context to a different DC when needed.

I don’t get it. How do you “rebind” a context to a different DC?

This is how I create an OpenGL context (on Windows):
RC = wglCreateContext( DC1 );

Now what to do to bind RC to some DC2?

CatDog

wglMakeCurrent(dc1, rc)

… draw stuff

wglMakeCurrent(0, 0)

wglMakeCurrent(dc2, rc)

… draw stuff

The DCs have to have the same pixel format.

wglMakeCurrent(dc1, rc)
wglMakeCurrent(dc2, rc)

Aaah… omg!

Thanks a lot! Gonna do some coding now. :slight_smile:

CatDog

I also didn’t know that one could do this under Windows; any chance one can do something similar under Linux/X11?

Unfortunately this is not supported by GUI libraries like GLUT, Qt, wxWindows etc. as far as I know.

I just tried it out and it really seems to work. (On Windows. Sorry, don’t know about Linux, HollanErno.)

But somehow, it appears like a hack to me… :slight_smile:

CatDog

It is not. This is what MSDN says:

The hdc parameter must refer to a drawing surface supported by OpenGL. It need not be the same hdc that was passed to wglCreateContext when hglrc was created, but it must be on the same device and have the same pixel format.

The same works on Linux. But I have only tried it with direct rendering contexts on Pbuffers of the same display.

Thanks skynet, it’s a very interesting feature indeed. From what the docs say, it should also work on GLX:

glXMakeCurrent( Display * dpy,
GLXDrawable drawable,
GLXContext ctx);

glXMakeCurrent does two things: It makes ctx the current GLX rendering context of the calling thread, replacing the previously current context if there was one, and it attaches ctx to a GLX drawable, either a window or a GLX pixmap.

Maybe by hacking in some wgl/glx conditional code I could get it to work with Qt/wxWindows/GLUT too…

CatDog, this feature is openly described in the wgl documentation; it is puzzling that only a few people appear to know about it (well, ok, it is not so puzzling: who really reads the documentation anyway?) :slight_smile:

Heck, of all puzzled people, I’m the most puzzled! I was pretty sure that I had read the docs.

Well, to be honest: some time ago, I wondered why on earth wglMakeCurrent() needs both the DC and the RC. This mystery is now solved. :slight_smile:

CatDog

Well, I for one hope this whole resource-sharing business is changed for the better in the future.

It’s one of those things that should be taken care of in the background. I shouldn’t have to worry about whether I want it or not.

@skynet: Having two contexts is useful when you want to do background loading (streaming) of textures and/or other objects. But it can also be a can of worms because the drivers are buggy / not really thread-safe.

In that case the modern way to do it would be to glMapBuffer() in thread A and then hand the pointer over to thread B, which would then stream in the data. Or am I wrong?