Multi-GPU separate work problem

I want to do the following work with multiple GPUs: the primary GPU runs a GLSL shader program and produces the first resulting image, while the other GPU runs a different GLSL shader program at the same time and produces its own resulting image. Next, the second image is transferred to the primary GPU, and the primary GPU composites the images to get the final image and displays it in the window. My development environment is VC6 + OpenGL 2.0 + Windows XP SP2. I have two GPUs and one monitor. The primary GPU is a 6600GT, the second GPU is a Quadro FX 540. My motherboard is an NF4 SLI board and the bus is PCI-E. My questions are: 1. Can I realize this? 2. If so, can someone give me some advice, example code, or a URL? Any reply is appreciated.

I'm unaware of anything in the OpenGL API that would allow you to target a specific GPU for commands. That's not how multi-GPU implementations work anyway, is it? It seems likely that the driver would be running the show and doing the right thing, whatever that may be.

happy holidays,
bonehead

See NVIDIA's GPU Programming Guide and the code samples at http://developer.nvidia.com/object/sli_home.html. They describe the standard modes of using SLI, such as SFR mode.

Thanks to all who replied. My goal is to use multiple GPUs but without SLI mode, because in SLI mode I can't control what each GPU is working on; the work is allocated automatically by the GPU driver.
I have tried the following: first, I use the EnumDisplayDevices function to get each GPU's name; second, I create a DC using each GPU's name as the parameter of CreateDC; third, I use this DC to create an OpenGL RC for each GPU, and so on. I think this method makes sure that each RC is bound to its GPU. Then, in the application, I execute the shaders on each GPU by calling wglMakeCurrent with the corresponding RC. Is my idea right? There is an error when executing the wglMakeCurrent function. Can anybody give me some advice to fix it? Example code:

const int MaxCard = 8; //must be a compile-time constant to size the arrays below
int i = 0, j = 0;
DISPLAY_DEVICE Dd[MaxCard];
HDC hDc[MaxCard];
DEVMODE CardMode[MaxCard];
HGLRC CardRc[MaxCard];
PIXELFORMATDESCRIPTOR pfd[MaxCard];
for(i=0;i<MaxCard;++i)
{
	Dd[i].cb=sizeof(DISPLAY_DEVICE);
}
//Get GPU Name
i=0;
while (EnumDisplayDevices(NULL, i, &(Dd[j]), 0))
{
	//GPU Name
	if(Dd[j].StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
	{
		++j;
	}
	++i;
}
for(int CardId=0;CardId<j;++CardId)
{
	EnumDisplaySettingsEx(Dd[CardId].DeviceName, ENUM_CURRENT_SETTINGS, &CardMode[CardId], 0);
	//Create DC by GPU Name
	hDc[CardId] = CreateDC(Dd[CardId].DeviceName, 0, NULL, &CardMode[CardId]);
	//Set pfd parameters
	……
	int PixelFormat = ChoosePixelFormat(hDc[CardId], &pfd[CardId]);
	SetPixelFormat(hDc[CardId], PixelFormat, &pfd[CardId]);
	CardRc[CardId]=wglCreateContext(hDc[CardId]);
}
//above is OK
//other initial work
……
//when you want some shaders to execute on GPU i
wglMakeCurrent(hDc[i], CardRc[i]); //??? this operation fails, the error is "invalid operation". Why? How can I resolve this error? ???
//execute some shader
shader.enable();
……
shader.disable();
//other work
……  

thanks

The CreateDC looks fishy.

From the Microsoft docs on CreateDC:
Windows 2000 and later:
If there are multiple monitors on the system,
calling CreateDC(TEXT(“DISPLAY”),NULL,NULL,NULL)
will create a DC covering all the monitors.

You cannot use OpenGL on a DC which belongs to another process’ window (the desktop).
The driver has to run in a mode which supports OpenGL on both display adapters.
You would need to use a window per display.
Since the driver never knows if you’re going to move windows, the OpenGL data won’t be locked to one GPU, but you can give it a try.
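Something like this is what I mean by a window per display; it is only an untested sketch, the helper name CreateContextOnDevice and the window class "GpuWnd" are made up (you would register the class yourself with DefWindowProc), and all error checking is left out:

HGLRC CreateContextOnDevice(const DISPLAY_DEVICE &dd, HWND *outWnd, HDC *outDc)
{
	//Where does this display device sit on the virtual desktop?
	DEVMODE dm;
	ZeroMemory(&dm, sizeof(dm));
	dm.dmSize = sizeof(dm);
	EnumDisplaySettingsEx(dd.DeviceName, ENUM_CURRENT_SETTINGS, &dm, 0);

	//A window of our own, placed on that device's rectangle, so the driver
	//associates its DC with that GPU. Note: GetDC on our window, not a
	//desktop DC from CreateDC.
	HWND hWnd = CreateWindowEx(0, "GpuWnd", "offscreen", WS_POPUP,
	                           dm.dmPosition.x, dm.dmPosition.y,
	                           dm.dmPelsWidth, dm.dmPelsHeight,
	                           NULL, NULL, GetModuleHandle(NULL), NULL);
	HDC hDc = GetDC(hWnd);

	PIXELFORMATDESCRIPTOR pfd;
	ZeroMemory(&pfd, sizeof(pfd));
	pfd.nSize      = sizeof(pfd);
	pfd.nVersion   = 1;
	pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
	pfd.iPixelType = PFD_TYPE_RGBA;
	pfd.cColorBits = 32;
	pfd.cDepthBits = 24;
	SetPixelFormat(hDc, ChoosePixelFormat(hDc, &pfd), &pfd);

	*outWnd = hWnd;
	*outDc  = hDc;
	return wglCreateContext(hDc);
}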

I had the same idea of enumerating the display adapters and the display devices connected to each adapter and using CreateDC. Unfortunately I didn't get the chance to play with it.

For multi-GPU without SLI:

I too have 2 GPUs and a monitor connected to each GPU. In this configuration, creating the windows in appropriate locations will automatically map them to the corresponding GPUs (e.g. Window1 on Monitor1 and Window2 on Monitor2). Just create the DC using GetDC, and likewise the RC, for each window. The only way you can communicate data between GPUs is to read from one and write to the other.

I had never tried using 2 GPUs with only one of them connected to a monitor.
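A rough sketch of that read-from-one, write-to-the-other step; the function name and parameters are only placeholders, the texture tex is assumed to have been created on the primary context with glTexImage2D already, and error checking is omitted:

void TransferAndComposite(HDC hDc2, HGLRC hRc2, HDC hDc1, HGLRC hRc1,
                          GLuint tex, int w, int h, unsigned char *pixels)
{
	//1. Render and read back on the secondary GPU.
	wglMakeCurrent(hDc2, hRc2);
	//... run the second shader program and draw here ...
	glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

	//2. Upload to the primary GPU and blend it over the primary image.
	wglMakeCurrent(hDc1, hRc1);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

	glEnable(GL_TEXTURE_2D);
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glBegin(GL_QUADS); //full-screen quad over the primary GPU's image
		glTexCoord2f(0, 0); glVertex2f(-1, -1);
		glTexCoord2f(1, 0); glVertex2f( 1, -1);
		glTexCoord2f(1, 1); glVertex2f( 1,  1);
		glTexCoord2f(0, 1); glVertex2f(-1,  1);
	glEnd();
	glDisable(GL_BLEND);
	glDisable(GL_TEXTURE_2D);

	SwapBuffers(hDc1); //display the composited result
}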

What dimensionX says is correct. I had already tried the same method earlier. I also asked an NVIDIA developer; he said the second GPU is not active if it is not connected to a monitor, which means the OS will not recognize the second GPU if it is not active. Now I do it as follows: two GPUs and one monitor, with the primary GPU attached to the monitor. I set the desktop to extend onto the second monitor (the second monitor does not exist, but this can be set through the display properties), which makes the second GPU active. Then I create the second window, set its position in the appropriate location, and hide the window. Now we can use the second GPU to do what we want. There is one minor problem: when you run another application, you may not be able to see its interface (it is displayed on the second monitor, which isn't there). Can somebody solve this problem?
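For reference, a sketch of how I place the second window on the phantom monitor; Dd[1] is assumed to be the second adapter from the earlier enumeration, "GpuWnd" is an already-registered window class, and whether SW_SHOWNOACTIVATE or SW_HIDE works best seems to depend on the driver, so treat this only as a starting point:

DEVMODE dm2;
ZeroMemory(&dm2, sizeof(dm2));
dm2.dmSize = sizeof(dm2);
EnumDisplaySettingsEx(Dd[1].DeviceName, ENUM_CURRENT_SETTINGS, &dm2, 0);

HWND hWnd2 = CreateWindowEx(WS_EX_TOOLWINDOW, //keep it off the taskbar
                            "GpuWnd", "second GPU", WS_POPUP,
                            dm2.dmPosition.x, dm2.dmPosition.y, //rectangle of the extended (non-existent) monitor
                            dm2.dmPelsWidth, dm2.dmPelsHeight,
                            NULL, NULL, GetModuleHandle(NULL), NULL);
ShowWindow(hWnd2, SW_SHOWNOACTIVATE); //or SW_HIDE, as described above
//...create the DC/RC for hWnd2 as before and render on the second GPU...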

I think somewhere in Windows you can specify parent/child/popup window placements so that a window does not end up in the second monitor's space. I don't remember how to do it, but I am pretty sure the OS lets the user specify this explicitly. Try the Display Properties when the 2 GPUs are active.

OK, thanks for all the replies. I will search for it and try it.