Multiheaded OpenGL with NVIDIA and ATI

I have a machine with dual PCIe slots. I have an ATI card in one and a NVIDIA card in another.

Is it possible to configure the X server such that I can get fully hardware accelerated OpenGL rendering to each card from the same program?

I would like to understand the runtime architecture of the X server and the OpenGL drivers from NVIDIA/ATI better, specifically how these components would interact with my program at runtime. It seems like NVIDIA and ATI each provide libraries (GL and GLX implementations) that might interfere with the operation of the other. I have been unable to track this information down; any pointers would be greatly appreciated.

On Windows, the system arbitrates access to the drivers and chooses the appropriate driver based on where the window is located. I am able to write a Windows program that can render to both cards in an accelerated fashion. Can I do this on Linux?

Thanks,

Chris

You need to configure X so that it renders to one display for one card and to another display for the other card. Unfortunately, I do not know how to do this. I tried to do better than this, i.e. have one card render to one desktop and the other card to another, but at the time (years ago) I failed. I haven't tried it since.

For the X architecture, I guess the official website could help. Yes, having two such cards might produce unpredictable results. You could try installing each vendor's GL and GLX libraries in separate directories, though it might not be easy.

On Windows, I guess you use DirectX. On Linux, Mesa might help (I'm very unsure).

Hope that helps a bit.

Hi,

everything is possible under Linux… it only depends on how messy you want things to get… and they can get messy very quickly :wink:

The only possible option I can think of (after only one cup of coffee on a rainy Sat. morning) is to have two full X.ORG installations and to use a very cool Linux program called 'x2x' to control the displays and go back and forth between them.

The thing is that libGLCore.so is specific to the vendor's GLX implementation and cannot be shared. Plus, there is no mechanism to specify more than one GLX configuration. Maybe it will come. But on the other hand, it's easier to recommend buying a multi-display card than to code that functionality into X11.

There are some issues that must be addressed first:

  1. X fonts must be configured via the /etc/X11/xorg.conf file instead of the xfs service
  2. creating two X config files in the /etc/X11/
    maybe /etc/X11/Xorg-nv.conf and /etc/X11/Xorg-ati.conf !
  3. starting each X server with the proper X display (:0 for the first and :1 for the second) and its -config file (just man Xorg for more info)
  4. run x2x to bind the two displays as one (mouse, keyboard and clipboard events)

<taken from the x2x website: http://x2x.dottedmag.net/trac/do>

x2x allows the keyboard, mouse on one X display to be used to control another X display. It also shares X clipboards between the displays. Current project aims is to add the missing features: namely, hotkeys to switch focus between screens, and proper support of X clipboard.

I would probably do the following steps to get the solution up and running…

  1. having a working X.ORG 6.8.x installation :wink:
  2. quit X and make sure that all applications that make use of X libs are closed.
  3. copy /usr/X11R6 to /usr/X11R6-2 <careful about symlinks; read the 'cp' man page first>
  4. disable the xfs daemon (if applicable) and configure X fonts by hand in the /etc/X11/xorg.conf file.
  5. create a second X config file for your second X.ORG installation
  6. start each X server with its own display number and config-file parameter,
    e.g. -config /etc/X11/ati_xorg.conf
  7. start x2x to go from one X server to the other.

have fun and keep us posted!

cheers!

me again!

here are the real commands to get you going:

startx -- :0 -config /etc/X11/nv-xorg.conf
startx -- :1 -config /etc/X11/ati-xorg.conf

I would suggest starting to experiment with a single installation, launching the X server with several config files and different display numbers.

cheers!

jide,

I should have mentioned that I was referring to OpenGL on Windows, not DirectX (though DirectX also has facilities for addressing multiple displays). Using a window located on the appropriate display and calling GetDC, I can call ChoosePixelFormat/DescribePixelFormat and I will get back an appropriate pixel format that, together with the DC, will allow me to create an accelerated OpenGL rendering context specific to the card where the window is located.
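Roughly, the Windows-side code looks like this (a minimal sketch with error handling omitted; it assumes a window hwnd that was created on the monitor driven by the target card):

#include <windows.h>

HGLRC CreateContextForWindow(HWND hwnd)
{
    HDC dc = GetDC(hwnd);                      // the DC is tied to the device the window sits on

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(dc, &pfd);  // Windows routes this to the driver for that device
    SetPixelFormat(dc, format, &pfd);

    HGLRC rc = wglCreateContext(dc);           // accelerated context on the card driving the window
    wglMakeCurrent(dc, rc);
    return rc;
}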

I'm sure I could do various Mesa configurations or software workarounds, but my objective is not just to get something to display but for it to be hardware accelerated.

I get the feeling that two independent X displays (not just different X screen numbers which would seem to be more convenient) will be required.

Thanks,

Chris

Well, based on my 20+ years of experience in X and computer graphics… that should do the trick. As for the <I was referring to OpenGL on Windows>… hmm, it's the 'OpenGL on Linux' forum, and your first post was referring to the X server, which does not come with Winblows by default.

The solution described above is a fully hardware-accelerated solution that other people use in the visualization/simulation world to build 4x4 and even 8x4 display walls for immersive simulation.

cheers!

Yes, this is where things hurt, I'm afraid. Try doing what ameza described, but as we both stated, you'll have two GUIs (like two GNOME sessions), which might not be what you were looking for. If you have two monitors, though, things might go more the way you wish. One of the problems I encountered (also years ago) is that I wasn't able to have two displays at once (I needed to switch from one monitor to the other with CTRL-ALT-F{7,8}).

One other thing you might take into account is how the kernel will behave.

ameza,

I was sort of afraid that multiple X displays would be required. I was hoping that there would be a way to use different X screen numbers instead, so that although I'd need different visuals/FBConfigs, I could at least use the same display.

I am interested in libGLCore, but have only found bits of information. Is this a standard ABI that contains the hardware-specific functionality? How does libGL interact with libGLCore and/or the X server? Does libGL (which I assume completely contains the GL and GLX functions) delegate to libGLCore to do the work? Is hardware-accelerated rendering possible if one vendor's libGL calls another vendor's libGLCore, or are libGL and libGLCore linked by implementation details? Can one libGL call multiple libGLCore implementations? Are there performance implications?

Assuming I did use two X servers to provide two X displays:

What impact would this have on how I called GL functions in my program? Normally a program links against libGL and calls the GL/GLX functions there. It seems like even if I created two independent displays I would also need to call the functions in the particular vendor's library, which would make all of this extremely dependent on the local configuration. At runtime I would need to get a pointer to each of the GL functions in the library associated with each screen, but how would I know which library was associated with each screen?

Even two independent programs (each configured with a different DISPLAY) would have this problem of calling the correct vendor's library. Or perhaps I am off track about libGL being vendor-specific.
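(One thing I could imagine trying is to probe each screen for its server-side GLX vendor string and use that as a hint for which driver is behind it, something like the sketch below, though I don't know how reliable that would be in practice:)

#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main()
{
    Display* dpy = XOpenDisplay(":0");   // display name here is just an example
    if (!dpy) return 1;

    for (int s = 0; s < ScreenCount(dpy); ++s) {
        const char* vendor = glXQueryServerString(dpy, s, GLX_VENDOR);
        std::printf("screen %d: server-side GLX vendor = %s\n", s, vendor ? vendor : "unknown");
    }
    std::printf("client-side GLX vendor (the libGL I am linked against) = %s\n",
                glXGetClientString(dpy, GLX_VENDOR));

    XCloseDisplay(dpy);
    return 0;
}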

Also, I am concerned about the performance of this configuration: how would it affect the “directness” (glXIsDirect) and/or performance of the GLX contexts?

I know anything can be done in Linux, I’m just trying to wrap my head around how things are implemented so I know where to place my effort in experimenting. :slight_smile:

Thanks for the information and I’d appreciate any more you can give,

Chris

You guys are quick! :slight_smile:

I was responding to each individually. I really do appreciate the help.

Chris

ameza,

I’m sorry for the confusion. In my original post I was comparing what I know how to do with OpenGL on Windows to what I am trying to learn how to do on Linux.

Rest assured, I am looking for assistance with the Linux side of things.

Chris

I understand that these things are not clear-cut!

Let me try to explain what we normally do.

–> We assume that we have more than one monitor!

First, we need a mechanism to switch from one display to the other, regardless of whether it's a multi-monitor card or a multi-card setup.

If it's managed by Xinerama, well, then it's easy, but there is no guarantee that you will get hardware acceleration on all screens. This is even more true when you mix card vendors. :wink:

If it's old-school, then we end up with several X displays (:0, :1, :2, …) that may or may not be logically connected to each other. The advantage of having them logically connected (via xorg.conf) is that you can travel the mouse from one to another. But this setup gives you multiple root windows, which can be a <beep> when you are developing an application that spans a series of screens.

The x2x software that I was referring to allows you to connect them regardless of their logical and physical (different card or host) locations. Have a look; it's really a nice tool to have.

As for OpenGL development under X.ORG, well, the current implementation supports only one GLX vendor direct path to the hardware. When you compile an OpenGL program, you link against the libGL, libGLU and libglut libraries, not libGLCore. The libGLCore library is vendor-specific and is mostly used for the GLX path to the hardware.

Hope this helps, and welcome to the world of computer graphics under Linux…

cheers!

Originally posted by ameza:
[b]
As for OpenGL development under X.ORG, well, the current implementation supports only one GLX vendor direct path to the hardware. When you compile an OpenGL program, you link against the libGL, libGLU and libglut libraries, not libGLCore. The libGLCore library is vendor-specific and is mostly used for the GLX path to the hardware.

Hope this helps, and welcome to the world of computer graphics under Linux…

cheers! [/b]
When I install the ATI and NVIDIA drivers, they each install an independent implementation of libGL, correct? My main question is: isn't my program linking against exactly one of these libraries at runtime for GL/GLX functionality? Is this a problem?

Based on your description, let me see if I understand the situation. I would use a vendor-supplied libGL, which will call libGLCore to make a direct context on the screens/displays driven by that vendor's hardware (because the libGL matches the libGLCore). On screens/displays driven by another vendor's hardware, the same libGL will create an indirect context. Therefore I can use a libGL from any vendor I want; they are interchangeable (assuming the same GL/GLX versions), and there will just be a performance penalty when driving hardware from a different vendor (because the context will be indirect). Is the performance impact easy to characterize?
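(If that's roughly right, I suppose I could verify it empirically by creating a context on each screen and asking whether it came out direct; a minimal sketch, using the classic GLX calls, would be something like:)

#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>

bool ScreenHasDirectRendering(const char* displayName, int screen)
{
    Display* dpy = XOpenDisplay(displayName);
    if (!dpy) return false;

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(dpy, screen, attribs);
    if (!vi) { XCloseDisplay(dpy); return false; }

    // Ask for a direct context; GLX silently falls back to indirect if it can't.
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    bool direct = (ctx != NULL) && glXIsDirect(dpy, ctx);

    if (ctx) glXDestroyContext(dpy, ctx);
    XFree(vi);
    XCloseDisplay(dpy);
    return direct;
}

int main()
{
    // display/screen values below are only examples
    std::printf(":0 screen 0 direct? %d\n", (int) ScreenHasDirectRendering(":0", 0));
    std::printf(":0 screen 1 direct? %d\n", (int) ScreenHasDirectRendering(":0", 1));
    return 0;
}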

I think the next thing to do is dive in and try some things. I’m actually not completely inexperienced on Linux, I just haven’t tried OpenGL programming on anything more exotic than a single card before.

I’ll try out several methods.

Thanks,

Chris

It can be a problem. If I'm not wrong, those two libraries will overwrite each other, at least libGL.so, which is a symlink to the full libGL.so.x.y.

This is why I suggest renaming the libraries, at least one of them, and then choosing the appropriate library at runtime in your program (using dlopen). It will surely be the same for libGLcore, libglx and so on; check the documentation from NVIDIA and ATI for more information about which libraries their drivers install and need. Also check whether you can install the drivers into separate directories, to ensure the latest installation won't overwrite the previous one.

Another thing to check is whether either of your drivers needs DRI (DRI can do bad things when enabled on NVIDIA hardware).

From what you said about interchanging drivers, if I understood correctly, I'm not sure you ought to do so. Current drivers are really specific and communicate directly with the hardware at specific addresses on the graphics card, so mixing them might produce really unpredictable (and surely bad) results on the other card. In my view, that rules it out.

I can't think of anything else for the moment.

jide,

Is the hardware communication in libGL or in a subordinate library like libGLCore?

If I use dlopen on one library (or more, if I want to drive multiple displays simultaneously), the setup of my program would be non-trivial, because this would need to be configurable and would likely break when the implementation of GL/GLX changes in the future. My goal is to be able to run on an arbitrary display configuration. One program could drive multiple displays. If I wanted to render to multiple displays I would need to load a different library for each display, but mapping a display to an appropriate library might be difficult and likely to break, which is why I'd like some system component to do that for me. I hope that there is a better way.

My understanding of how GL<->GLX<->X server communication works is fuzzy. I've seen DRI mentioned while doing research on the web, but I don't understand what its role is or why it is incompatible with what NVIDIA does. Perhaps if I understood it better I would see the big picture.

Chris

Well, just to recap and make sure I understand. Basically, you would like a single program to be run twice (or even once) and run on two different cards simultaneously, all on a single monitor (so a single display). Am I wrong somewhere here? Also, could you give an example so that things will be definitely clear to me?
Another thing: what things could you accept that wouldn't put you in a difficult situation?

I guess libGL is responsible for communicating with the hardware, but maybe libGLCore does some of that too. libglx is also responsible for some things (mainly contexts and so on). I personally don't know enough about libGLCore to tell you more. You can check which functions are in libGLCore, but I don't remember the command for that. I'll try to have a look.

What you say about dlopen might not be true. You can make a sort of wrapper for GL, i.e. load all the GL functions through glXGetProcAddress (even GL core functions) and then bind them depending on the card you have (C++ polymorphism can help for that purpose, but it is not necessary). In my view, this won't break in the future.
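Just to illustrate roughly what I mean (only a sketch; the struct name and the two entry points are just examples):

#include <GL/glx.h>

// A per-card table of entry points, filled at runtime instead of linking
// the functions directly.
struct GLDispatch
{
    void (*Begin)(GLenum);
    void (*End)(void);

    void Load()
    {
        Begin = (void (*)(GLenum)) glXGetProcAddressARB((const GLubyte*) "glBegin");
        End   = (void (*)(void))   glXGetProcAddressARB((const GLubyte*) "glEnd");
        // ... and so on for every entry point the program uses
    }
};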

When you say one program could drive multiple displays, what exactly do you mean? Is it something like this:

./myprogram -display1 1 -display2 2

which would run your program over, let's say, two GNOME sessions, so over two X connections?
Or do you mean this program would run over a single GNOME session, but would open two windows, one using one card and the other using the other card?
If you use dlopen, mapping displays to a specific library might not hurt so much (I have never done it, however).

For more information about DRI:

http://dri.freedesktop.org/wiki/
http://www.xfree86.org/current/DRI.html

(I couldn't find the page that documents DRI well with respect to X and GLX.)
To be quick and not entirely precise, DRI is an interface that lets GL communicate directly with the hardware without going through the underlying layers of X. This is done mostly to ensure fast queries and replies. DRI is a full part of X, and the Linux kernel also provides facilities for it, so it is a 100% X/GNU framework. But some drivers are not compatible with it (let's say they provide their own 'DRI'), which is different. DRI can modify some things in X and in the kernel, so using DRI on a system that does not support it can result in very bad things.
As far as I know, ATI provides two drivers: a free one that works through DRI, and another which is, I guess, like NVIDIA's, not based on DRI.

Finally I think that what ameza said might help.

Sorry for the poor information. Hope it helps a bit, however.

I'm glad you mentioned glXGetProcAddress, because its operation is related to the questions I have. My understanding is that the pointer returned is context-independent. If this is true, how does the function returned by glXGetProcAddress know which implementation to call? It would need to dispatch the call based on the current context. Since glXGetProcAddress is implemented in libGL, I assume it would be tied to working with contexts that are specific to the vendor who provided that libGL. I think this means I would still have the problem I described of loading the correct library, bootstrapping with dlopen to get the address of the glXGetProcAddress function (which could then be used to get the pointers for all of the other functions).

In terms of one program driving multiple displays, I’m still trying to determine what my prospects are so I’m not sure exactly how it would work. Given n displays (X connections) I can determine how many X screens are attached to each X display. I’d like to be able to place a window on an arbitrary X screen and X display. For my experiment purposes, I don’t need to be able to drag the windows between the screens.

./myprogram -out1 :0.0 -out2 :0.1 -out3 :4.0 -out4 myremotehost:3.0

out1 is on localhost display 0, screen 0
out2 is on localhost display 0, screen 1
out3 is on localhost display 4, screen 0
out4 is on myremotehost display 3, screen 0

Chris

Let me give it a try:

#include <dlfcn.h>
#include <GL/glx.h>

// generic GL entry-point type returned by glXGetProcAddressARB
typedef void (*GLFunc)(void);
typedef GLFunc (*GetProcAddressFn)(const GLubyte*);

namespace nv
{
// NVIDIA's own glXGetProcAddressARB, pulled out of its libGL
GetProcAddressFn GetProcAddressARB = 0;

void Init()
{
   // "path_to_nv" is a placeholder for wherever NVIDIA's libGL was installed
   void* handle = dlopen("path_to_nv/libGL.so", RTLD_NOW | RTLD_LOCAL);
   GetProcAddressARB = (GetProcAddressFn) dlsym(handle, "glXGetProcAddressARB");
   // ... then use it to get all the other function addresses
}
}

namespace ati
{
// the same, with path_to_ati/libGL.so
}

Even though I have never done such a thing, I really hope this is doable.

But given how hard this is getting, why do you need two different cards? Why not use two GeForces or two Radeons?

The path (path_to_nv) would be hardcoded (or the user would specify it). My goal would be to run on an arbitrary configuration with an arbitrary number of different displays, everything being determined at runtime. This would force the user to give a path to the library (and the mapping from the host/display/screen to the library).

./myprogram -out1 :0.0 -gllib1 path_to_nv/libGL.so -out2 :0.1 -gllib2 path_to_nv/libGL.so -out3 :4.0 -gllib3 path_to_ati/libGL.so -out4 myremotehost:3.0 -gllib4 path_to_nv/libGL.so

#include <dlfcn.h>
#include <GL/glx.h>

typedef void (*GLFunc)(void);
typedef GLFunc (*GetProcAddressFn)(const GLubyte*);

class glfuncs
{
public:
    glfuncs(const char* host, int display, int screen, const char* library)
    {
      // note: RTLD_LOCAL may be safer than RTLD_GLOBAL here, to avoid symbol
      // collisions between the two vendors' libGL implementations
      void* handle = dlopen(library, RTLD_NOW | RTLD_GLOBAL);
      getProcAddress = (GetProcAddressFn) dlsym(handle, "glXGetProcAddress");
      // use getProcAddress to fetch the remaining GL/GLX function addresses
    }

    // GL/GLX function pointers ...
    GetProcAddressFn getProcAddress;
};

Then I would create an instance of glfuncs for each unique (host, display, screen) triple that the user specified, using the library the user specified. I'm not sure that this would work (there might still be difficulties related to collisions between the GL library implementations; the only way to find out would be to test), but it's the closest thing to working I can think of.
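For example (the library paths here are purely hypothetical; the real mapping would come from the user's command line as above):

glfuncs out1("localhost", 0, 0, "path_to_nv/libGL.so");
glfuncs out2("localhost", 0, 1, "path_to_nv/libGL.so");
glfuncs out3("localhost", 4, 0, "path_to_ati/libGL.so");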

Without user assistance in configuring the mapping, I think the answer is that it can’t be done.

I don't need two cards from different vendors, and obviously it is easier if I restrict myself to one. This is just an exercise in determining what is possible. This appears to be a limitation of the current Linux vendor implementations of GL/GLX.

Well, how does Windows do that? In my view, I guess there must be a layer between the application and the drivers, a component that chooses the appropriate driver depending on which graphics card is targeted. But this also implies that the DLLs are properly named (libGLNV.dll and libGLATI.dll) [I don't really remember how they would be named, since it's been years since I used Windows].

When you say it will force the user to give the paths to the libraries, that might not be entirely true. When a user who has two kinds of graphics cards installs the drivers, she must take the paths into account anyway; otherwise, as I said before, the libraries will most certainly overwrite each other.

Finally, you could use the default library if the user does not specify which one she wants to use with a given display.

Windows does it by acting as a layer between the application and the driver (drivers from NVIDIA or ATI never replace the core opengl32.dll on Windows, while it seems like replacing libGL.so may be common practice on Linux). This layer is basically the wgl relative of GLX (wgl is essentially fixed functionality on Windows that can't be extended by vendors except through wglGetProcAddress). Given a window's device context (DC), where the window is located on the desired device, the Windows functions for choosing and setting a pixel format (similar to a Visual in X) will route things to the appropriate driver.

wglGetProcAddress returns context-dependent pointers (unlike glXGetProcAddress). Inconsistently, the GL functions exported from opengl32.dll are context-independent (and they are quite old, only the 1.1 functions) and route to the appropriate driver.

I’m starting to think that the same basic thing may work on X on Linux, the difference being that only one driver can take the optimal path that avoids routing traffic through the X server. Now I just need to find some time to verify this by configuring my machine. The only thing that makes me uncertain is since I don’t understand the architecture is how well the drivers actually work in this configuration. Then there’s the question of if it is possible to optimize things further as we have been discussing by using more than one library.