Rendering via FBO to a shared cube map

As the title implies, this topic is Windows-specific, so I am not sure whether the thread belongs here or in the Windows OpenGL topics.

I have a program which uses FBOs to render into a cube map in order to get “realistic” reflections. This works very well in windowed or fullscreen mode. However, the application can also render in stereo by creating two windows on two monitors (a kind of side-by-side stereo). Each window has its own context, and the two contexts are shared. This also works well - on a GeForce 8800 card. On a GeForce 7800 card it doesn’t: the cube map is not updated in the secondary context (rendering the cube map via FBO is done in the primary context). The cube map in the secondary context looks as if it was never initialized, although both contexts share the same namespace and, from what I understand, should also share the same texture memory.
Is there any constraint I might have overlooked? Any ideas what could fix this (apart from rendering the cube map in both contexts, which would totally kill performance)? Or is it perhaps a driver bug (tested on the latest driver, 182.08)?

The bug can be seen in this image:

The left side shows the correctly rendered primary context, the secondary context on the right side does not use the correct cube map.

OK, what I wrote may have been a bit confusing. Actually, the part about the application being stereo doesn’t matter. What matters is that there are two different windows, each with its own context, and the contexts are shared. The setup works in general: all the shaders work (although they are only initialized in the primary context) and VBOs work too. Only rendering to the cube map via FBO leaves the cube map of the secondary context uninitialized.
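For reference, this is roughly the kind of setup I mean - a hedged sketch, not my actual code; `hdcA`/`hdcB` and the function name are placeholders. One detail worth double-checking in scenarios like this: `wglShareLists` should be called before any objects are created in either context, and FBO objects themselves are *not* shared between contexts - only the textures and renderbuffers attached to them are.

```c
/* Sketch of two Windows GL contexts sharing their object namespace.
 * hdcA/hdcB are the device contexts of the two windows (placeholders). */
#include <windows.h>
#include <GL/gl.h>

HGLRC create_shared_contexts(HDC hdcA, HDC hdcB, HGLRC *out_secondary)
{
    HGLRC rcA = wglCreateContext(hdcA);   /* primary context   */
    HGLRC rcB = wglCreateContext(hdcB);   /* secondary context */

    /* Share textures, display lists, shaders, VBOs between the contexts.
     * Call this BEFORE creating any objects; FBOs are container objects
     * and are per-context, so each context needs its own FBO even though
     * the attached cube map texture is shared. */
    if (!wglShareLists(rcA, rcB)) {
        /* handle error: sharing failed */
    }

    *out_secondary = rcB;
    return rcA;
}
```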

Another thing I just found out: it does not seem to depend on the graphics card used, as it also fails on a Quadro 4600 - which is equivalent to the GeForce 8800 on which it works correctly.
The only difference I see at this point is that the two computers where it works are running Windows Vista 64, whereas the computers where it doesn’t work are running Windows XP 32.

the two computers where it works are running Windows Vista 64, whereas the computers where it doesn’t work are running Windows XP 32.

Usually it is the other way round. :slight_smile:

I hacked the Q3 source code to do two-screen stereo with one vertical mirror, with a setup inspired by this.
However, I did it in horizontal span mode, meaning one fullscreen window spanning both screens, with the two different views done via glViewport/glScissor. It worked great, with no “context sharing” hassle, but of course for non-fullscreen applications it is less than ideal.
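The span approach boils down to splitting one wide window into a left and a right rectangle and rendering each eye into its own half. A minimal sketch (all names illustrative; the GL calls per eye are shown as comments):

```c
#include <assert.h>

/* One window spanning both screens, split into two side-by-side views. */
typedef struct { int x, y, w, h; } Rect;

/* Compute the left/right viewport rectangles for a window of the given
 * size. The right half absorbs the odd pixel if the width is odd. */
void split_span(int win_w, int win_h, Rect *left, Rect *right)
{
    int half = win_w / 2;
    left->x  = 0;    left->y  = 0; left->w  = half;         left->h  = win_h;
    right->x = half; right->y = 0; right->w = win_w - half; right->h = win_h;
}

/* Per eye you would then do roughly:
 *   glViewport(r.x, r.y, r.w, r.h);
 *   glScissor (r.x, r.y, r.w, r.h);
 *   glEnable(GL_SCISSOR_TEST);
 *   ... render that eye's view ...
 */
```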

Sorry that will probably not help you.

I find your screenshots pleasing, and I don’t know why.
If it’s any consolation, I have some very strange behaviour with FBOs and desktops in dualview on Quadros. Dragging textures updated with FBOs from one monitor to the other gives the appearance of a texture with uninitialised contents, similar to your right cube map. Not the same scenario, but an indication of how flaky NVIDIA’s drivers can be. Things are made worse by having stereo enabled in the NVIDIA display properties. In fact, all sorts of sh*t happens with many and varied commercial non-stereo applications when stereo is enabled.
I guess they must have unwittingly employed someone a bit crap on the driver team recently. Either that or all the competent OpenGL driver writers have walked out in protest over GL3.

I have two quadros installed and have seen something like that, too.
Imagine having 2 (non-affinity) contexts with “Multidisplay Performance Mode” enabled. One context is bound to an invisible window. That context is used in a background thread which does the main rendering into an FBO-attached texture.
The other context is bound to a visible window that can be dragged around. Both contexts share their lists, thus the FBO-attached texture.

The scenario is used like this: the background thread is doing “heavy” rendering and will provide a new image through the FBO-attached texture (in fact, there are two of them, for double buffering) from time to time. The main thread is doing lightweight rendering of a few triangles and transfers newly available images of the background-thread through a shader into the visible window.

Now, what happens if both windows reside on two separate monitors (two different GPUs)? The background thread renders its output into a texture that resides on GPU_A and tells the main thread that a new image is available. The main thread on GPU_B now renders from its local copy of the texture, but the driver forgot to copy the data over! What you get is garbage.

We now move the invisible window along with the visible one, which circumvents the problem.
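Besides moving the window, a synchronization pattern on the handoff sometimes helps in setups like this. A hedged sketch (names illustrative; the re-bind trick relies on driver behaviour and is not guaranteed by the spec):

```c
#include <GL/gl.h>

GLuint shared_tex;  /* texture attached to the producer's FBO (shared) */

/* Producer thread (invisible window's context, GPU_A). */
void producer_publish(void)
{
    /* ... render into the FBO-attached texture ... */
    glFinish();  /* ensure rendering has completed before the handoff */
    /* signal the main thread that shared_tex holds a new image */
}

/* Consumer thread (visible window's context, GPU_B). */
void consumer_draw(void)
{
    /* Re-binding the shared texture on the consumer context can prompt
     * the driver to refresh its local copy across GPUs - driver-dependent,
     * which is why moving the invisible window was needed at all. */
    glBindTexture(GL_TEXTURE_2D, shared_tex);
    /* ... draw a quad sampling shared_tex through the shader ... */
}
```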

I guess they must have unwittingly employed someone a bit crap on the driver team recently. Either that or all the competent OpenGL driver writers have walked out in protest over GL3.

No, they put all their effort into implementing SSAO in their driver. WTF?! *punches forehead* That is almost as stupid as when ATI put a sepia-shader mode into their driver.

At least it seems I am not alone with this problem. Does anybody know how I can contact the NVIDIA driver devs so they at least know about it?

You can, for example, submit a form or access the developer site (registration required).