Sharing Contexts

I am trying to write a program that models an aircraft system and opens three windows on three different screens of the system.

It has been suggested that I share contexts between the windows, but I need to specify a different screen for each window, which yields a unique XVisualInfo struct for each screen.

If I wanted to share the context across multiple windows so that I don’t have to create a text shader in each context, how do I reconcile the different XVisualInfo structs returned by glXChooseVisual?

The visual drives XCreateColormap, XCreateWindow, and ultimately glXCreateContext.
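Simplified, the per-screen chain looks something like this (illustrative only; the attribute list is just an example and error handling is stripped out):

[code]
#include <GL/glx.h>

/* One window + context per X screen. The attribute list and
 * width/height are placeholders, not my real values. */
static Window make_gl_window(Display *dpy, int screen, int w, int h,
                             GLXContext *ctx_out)
{
    int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, screen, attrs);

    Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, screen),
                                    vi->visual, AllocNone);
    XSetWindowAttributes swa;
    swa.colormap = cmap;
    swa.border_pixel = 0;
    Window win = XCreateWindow(dpy, RootWindow(dpy, screen), 0, 0, w, h, 0,
                               vi->depth, InputOutput, vi->visual,
                               CWColormap | CWBorderPixel, &swa);

    *ctx_out = glXCreateContext(dpy, vi, NULL, True);
    return win;
}
[/code]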

I am not using a high-level windowing toolkit because there will be no user interaction via keyboard or mouse. All input will be handled via a “GRIP” that the user manipulates, as in the aircraft.

You don’t share the context. You give each window a separate context, then share the data (textures, shaders, buffers, etc.) between contexts by setting the [var]shareList[/var] parameter of glXCreateContext() to an existing context (except for the first one).
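For example (a minimal sketch, not your actual setup code; the array names are placeholders and error handling is omitted):

[code]
#include <stddef.h>
#include <GL/glx.h>

/* One context per window; every context after the first shares its
 * data (textures, programs, buffers) with contexts[0]. */
static void create_contexts(Display *dpy, XVisualInfo *vis[3],
                            GLXContext contexts[3])
{
    for (int i = 0; i < 3; i++) {
        GLXContext share = (i == 0) ? NULL : contexts[0];
        contexts[i] = glXCreateContext(dpy, vis[i], share, True);
    }
}
[/code]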

When a group of contexts share data, objects created by glCreateShader, glCreateProgram, glGenTextures, or glGenBuffers are shared across all contexts in the group. This doesn’t apply to all object types, only those with what OpenGL considers “data” (as opposed to “state”). “Container” objects (e.g. VAOs and FBOs) aren’t themselves shared, even if the contained objects (VBOs and textures/renderbuffers) are. When an object is shared, both its data and state are shared; e.g. for a texture, both the pixel arrays (glTexImage) and the parameters (glTexParameter) are shared.
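To illustrate (hypothetical names; error handling omitted):

[code]
#include <GL/glx.h>

/* Sketch only: an object with "data" created in one context of a share
 * group is visible from the others under the same name. dpy, win[],
 * ctx[] and pixels are assumed to come from the setup code. */
static void demo_sharing(Display *dpy, Window win[2], GLXContext ctx[2],
                         const void *pixels)
{
    glXMakeCurrent(dpy, win[0], ctx[0]);
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);            /* data is shared */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* so is state */

    glXMakeCurrent(dpy, win[1], ctx[1]);
    glBindTexture(GL_TEXTURE_2D, tex);  /* same name refers to the same texture */
    /* A VAO or FBO, by contrast, would have to be generated again in this context. */
}
[/code]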

However: it’s unspecified whether you can share data between contexts which refer to different screens. If all screens are driven by the same video card, there’s no fundamental reason why that can’t work (if it doesn’t, you may be able to use Xinerama/TwinView/etc to merge multiple monitors into a single X “screen”). If different screens use different video cards, then it probably won’t work.

The most flexible solution is to allow for either case. When creating a context, try to share data with the existing contexts, but allow for the case where you can’t. Windows would be collated into groups whose contexts share data, and each shader, texture, buffer, etc would be uploaded to each group.
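A rough sketch of that approach (it assumes glXCreateContext() simply returns NULL when sharing isn’t possible; on some implementations you instead get a BadMatch X error, which you would need to trap with an X error handler):

[code]
#include <stddef.h>
#include <GL/glx.h>

#define NWIN 3

/* group[i] records which share group window i ended up in; each shader,
 * texture, buffer, etc. is then uploaded once per group. */
static void create_context_groups(Display *dpy, XVisualInfo *vis[NWIN],
                                  GLXContext ctx[NWIN], int group[NWIN])
{
    int ngroups = 0;
    GLXContext leader[NWIN];            /* first context of each group */

    for (int i = 0; i < NWIN; i++) {
        ctx[i] = NULL;
        /* Try to join an existing group. */
        for (int g = 0; g < ngroups && ctx[i] == NULL; g++) {
            ctx[i] = glXCreateContext(dpy, vis[i], leader[g], True);
            if (ctx[i] != NULL)
                group[i] = g;
        }
        /* Otherwise start a new group. */
        if (ctx[i] == NULL) {
            ctx[i] = glXCreateContext(dpy, vis[i], NULL, True);
            leader[ngroups] = ctx[i];
            group[i] = ngroups++;
        }
    }
}
[/code]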

I think I am starting to understand.

All the screens are driven by the same video card, so that shouldn’t be a problem. I forced the system to create the screens separately because I didn’t want to worry about a technician changing the resolution of one screen. (When they are all on a single screen, I have to calculate the [x,y] of the upper-left corner for the create-window call. With separate screens, I just open a window at [0,0] of each screen.)

I integrated the more efficient text shader code into my load, and it works.

But…

It seems I must call glUseProgram every cycle before I can call render_text. Does the shader somehow get lost if I do regular glBegin/glEnd blocks? If I move the glUseProgram up a level (trying to call it only once) and then execute a glBegin/glEnd block, the screen does not draw at all. The documentation says the program will be installed in the current rendering state. Does that not span buffer swaps?

Yes, it should work (sharing data between the per-screen contexts on one card), but I wouldn’t expect good performance from this.

Inherently, GPUs can only service rendering for one context at a time. If you’ve got multiple contexts all blasting to the same GPU, expect inefficiency due to “context swapping”.

On the other hand, if the separate screens (again, each with its own context) were rendered by “different” GPUs, you can expect exceptional performance on some vendors’ drivers (e.g. NVIDIA).

The current program shouldn’t change other than by calls to glUseProgram(). But the process of rendering a complete frame typically requires multiple programs, so it’s normal to call that function prior to the rendering operations which use the program. It’s sufficiently uncommon to leave a single program active across multiple frames that if it was getting reverted by e.g. a buffer swap (it shouldn’t be), the chances are that no-one would have noticed.

Roughly speaking, for a program which displays a single “scene”, objects such as programs, textures, buffers, etc are created once at startup, then “activated” (glUseProgram, glBindTexture, etc) at appropriate points during the process of rendering a frame.

Also, it’s quite common to revert such state changes (e.g. glUseProgram(0) etc) before the end of the function which made them, in order to avoid potentially-confusing consequences from “left over” settings. OpenGL has a lot of state, and there’s a tendency to assume that any state is at its default value unless you can actually see a function which changes it.
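Putting those last few points together, one frame for one window might look something like this (a hypothetical sketch: draw_gauges() stands in for your existing glBegin/glEnd drawing, render_text() and text_program for the text shader you already have; GL_GLEXT_PROTOTYPES is one way to get the glUseProgram prototype on Linux, but however your existing shader code resolves it will do):

[code]
#define GL_GLEXT_PROTOTYPES 1
#include <GL/glx.h>

void draw_gauges(void);   /* your fixed-function drawing */
void render_text(void);   /* your shader-based text drawing */

static void draw_frame(Display *dpy, Window win, GLXContext ctx,
                       GLuint text_program)
{
    glXMakeCurrent(dpy, win, ctx);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(0);             /* fixed-function for the glBegin/glEnd parts */
    draw_gauges();

    glUseProgram(text_program);  /* activate the shader just before it is needed */
    render_text();
    glUseProgram(0);             /* revert before leaving, so nothing is "left over" */

    glXSwapBuffers(dpy, win);
}
[/code]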

Thanks. I will add code to revert the state change.