View Full Version : multiple monitors and glsl

05-30-2008, 07:50 PM
Let's say I've got two video cards. One is an NVIDIA 8800 GT and the other is a Radeon 2600 XT. Each has a flat screen attached, and mirroring is not on. Same machine.

An innocent user clicks on the OpenGL application. It pops open a window. The program sniffs out the renderer: Renderer: ATI Radeon HD 2600 PRO OpenGL Engine. The program takes all the necessary action to load in the correct GLSL. The card on the main display says it can do FBOs with depth. Windows tend to pop open on the main display.

Now, the user drags the program's window to the other monitor. It's now running on another driver with different limitations. Unfortunately, the shader choice is now wrong. Or let's say the user drags the window halfway onto both monitors. Is the OS going to grab the compiled GLSL and load it into the other driver? What if it has the wrong #ifdef _incompatible_case_42_ #else #endif in it?

Am I supposed to scan for all these displays and load in separate GLSL scripts for each display?

Also, under 10.5.3, it appears that shadow2DProj works on my 9800 card. Yet, on the 2600 XT in another box, it does not work -- it almost works, but it looks like a projection without the depth test. So I use shadow2D and do the comparison in the shader. Now the 2600 works. But now there are too many instructions for the 9800 card to handle, and I get a green rainbow color. :) It's funny, too, because I don't think these two cards can run in the same computer -- different bus?

05-30-2008, 08:03 PM
If you don't specify a renderer when you create your context, I believe the OS will attempt to migrate your GL stuff between cards at appropriate times. You receive a notification when this happens, which you could use to reload more appropriate resources. If you don't want it to happen (e.g. because you've made careful decisions based on renderer capability), you should be able to specify the renderer in your pixel format and avoid the migration, though you'll then get very slow blits when your window is on a different card from the one doing your GL rendering.
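For illustration, pinning a context to a single renderer with CGL looks roughly like this. This is a sketch, not tested code: the `rendererID` is assumed to come from a prior `CGLQueryRendererInfo()` pass, and the buffer/depth attributes are placeholder choices.

```c
/* Sketch: restrict a CGL pixel format to one renderer so the OS never
 * migrates the context between cards. Error handling omitted;
 * rendererID is assumed to come from CGLQueryRendererInfo(). */
#include <OpenGL/OpenGL.h>

CGLPixelFormatObj pin_to_renderer(GLint rendererID) {
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFARendererID, (CGLPixelFormatAttribute)rendererID,
        kCGLPFADoubleBuffer,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLChoosePixelFormat(attribs, &pix, &npix);
    return pix; /* NULL if no renderer matched */
}
```

The trade-off mentioned above applies: a pinned context keeps working when the window moves, but the pixels get blitted across to the other card.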

I think.

05-30-2008, 09:07 PM
Read the documentation about virtual screens (http://developer.apple.com/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_pg_concepts/chapter_2_section_4.html#//apple_ref/doc/uid/TP40001987-CH208-SW16).

You can control the set of renderers that your application will see with pixel format attributes (http://developer.apple.com/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_pixelformats/chapter_8_section_1.html#//apple_ref/doc/uid/TP40001987-CH214-SW9).

If you detect a virtual screen change (http://developer.apple.com/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_contexts/chapter_7_section_3.html#//apple_ref/doc/uid/TP40001987-CH216-SW1), you should re-query the renderer version and extensions (http://developer.apple.com/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_general/chapter_6_section_1.html#//apple_ref/doc/uid/TP40001987-CH211-SW7), and choose new rendering paths (shaders, etc.)

This heterogeneous renderer behavior has always been supported in Mac OS X, going back to a Rage128 + GeForce2 in the same machine. It is up to applications to code defensively for it, though.

05-30-2008, 09:12 PM
Also, under 10.5.3, it appears that shadow2DProj works on my 9800 card. Yet, on the 2600 XT in another box, it does not work -- it almost works, but it looks like a projection without the depth test.

Have you set TEXTURE_COMPARE_MODE to match the sampler type you're using?
I've tested this extensively, and I believe it works correctly in 10.5.3 on the HD 2600.
If you see otherwise, please file a bug (http://bugreporter.apple.com) and attach a simple sample program showing the problem.
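For reference, the depth-compare state a shadow2D* sampler expects looks like this under GL 2.x. This is a generic sketch (the `shadow_tex` handle is a placeholder), not the poster's actual settings:

```c
/* Typical depth-texture setup for a shadow2D* sampler. With
 * COMPARE_R_TO_TEXTURE enabled, the sampler returns the result of the
 * LEQUAL comparison rather than the raw depth value. */
glBindTexture(GL_TEXTURE_2D, shadow_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
```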

05-31-2008, 03:17 PM
I've been careful, but these are the settings:

//if these are not right things will not work at all

The code works on the ATI 9800, NV 7300 GT, and NV 8800 GT. The interesting part is if I change the GLSL from:

// put it into a subroutine to avoid indirection on ATI...
float shadow_sample( in float dx, in float dy, in vec4 c ) {
    vec4 _coord1 = c;
    _coord1.x += dx;
    _coord1.y += dy;
    return shadow2DProj( shadowmap, _coord1 ).r;
}

to:

// the code is bigger now, so older cards might not have enough room
float shadow_sample( in float dx, in float dy, in vec4 c ) {
    // break it out and do it manually --
    // this is roughly what shadow2DProj is supposed to do
    vec4 _coord1 = c;
    _coord1.x += dx;
    _coord1.y += dy;
    _coord1.xyz /= _coord1.w;
    float _val = shadow2D( shadowmap, _coord1.xyz ).r;
    // LEQUAL condition to pass the test
    return ( _val >= _coord1.z ) ? 1.0 : 0.0;
}


It works on the HD 2600 and the other cards except the 9800. I think it's too big. There is self-shadowing. If it were just an object casting a shadow on a plane, there would be no problem -- a projection with no need to compare. Before, on the 2600 in 10.5.2, it would always say it's in shadow. So things are better in 10.5.3, but still not quite right. My #ifdef stuff is getting more complicated -- thus the question about multiple monitors. I do add a small depth-map offset to try to get rid of any shadow acne.

I've read Paul's project about 80 times, the Cg tutorial book, etc.

05-31-2008, 04:02 PM
Hmm, I did some more testing.

I think it has to do with initially loading on one monitor and then dragging that window to another monitor. In that case, shadow2DProj fails, but the manual divide works. Otherwise, it works as long as it stays on the initial display, which is what most users have. I just got my new video card, so I started using two monitors and dragging the window to test things -- which caused the freak-out. :)

Note: the helicopter does not self-shadow, but the tanks do. Just config settings.