OpenGL on multiple windows: bad performance

So I managed to create two windows and get them both to render my OpenGL world.

The only problem is that the process takes 90% CPU, compared to a lowly 3% with one window.

What I've done is: after each frame is drawn, I switch to the second window's rendering context.


else if(isFirstWindow)				// Not Time To Quit, Update Screen
{
    if(!wglMakeCurrent(hello1,hello2))					// Try To Activate The Rendering Context
    {
        KillGLWindow();								// Reset The Display
        MessageBox(NULL,"Can't Activate The GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
        return FALSE;								// Return FALSE
    }
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hello1);				// Swap Buffers (Double Buffering)
    //SwapBuffers(hDC);				// Swap Buffers (Double Buffering)

    isFirstWindow=false;
}
else								// Not Time To Quit, Update Screen
{
    if(!wglMakeCurrent(hDC,hRC))					// Try To Activate The Rendering Context
    {
        KillGLWindow();								// Reset The Display
        MessageBox(NULL,"Can't Activate The GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
        return FALSE;								// Return FALSE
    }
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hDC);				// Swap Buffers (Double Buffering)
    isFirstWindow=true;
}

hello1 is the device context handle for window 1
hello2 is the rendering context handle for window 2

Something about the switching is draining the CPU,
so I wonder: is there some way to just swap the buffers and make it work?

Something like this:


else if(isFirstWindow)				// Not Time To Quit, Update Screen
{
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hello1);				// Swap Buffers (Double Buffering)
    isFirstWindow=false;
}
else								// Not Time To Quit, Update Screen
{
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hDC);				// Swap Buffers (Double Buffering)
    isFirstWindow=true;
}

Switching between GL contexts that address the same GPU is expensive.

Your code implies you are using multiple GL contexts from the same thread. Consider using one GL context and just flipping between windows with wglMakeCurrent. This might buy you some performance back. Also make sure you minimize the number of context switches per frame; ideally zero or one.
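A minimal sketch of that one-context approach, assuming both windows were created with the same pixel format (a requirement for binding one HGLRC to two HDCs). The names hDC1 and hDC2 stand in for the two windows' device contexts; error handling is omitted:

```cpp
// One rendering context (hRC) shared by two windows.
// Requires <windows.h> and an hRC created against a compatible pixel format.
wglMakeCurrent(hDC1, hRC);   // bind the single context to window 1
DrawGLScene();               // draw window 1's view
SwapBuffers(hDC1);

wglMakeCurrent(hDC2, hRC);   // re-bind the same context to window 2
DrawGLScene();               // draw window 2's view
SwapBuffers(hDC2);
// Only the drawable changes each time; no second HGLRC is involved,
// so driver state duplicated per-context is created only once.
```

This keeps one wglMakeCurrent per window per frame, which is the minimum when two separate windows must each be drawn.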

Also note that when you have multiple GPUs, you can give each GPU its own thread with its own context, and this is very fast and low overhead (with NVIDIA at least).

No, it didn't really work. The correct image displays on both windows, but CPU usage still climbs to 90%.

But isn't there some way to just SwapBuffers on both windows?
Why do I have to deal with window handles and rendering contexts at all?
How do programs like 3ds Max do it with their top, bottom, left, and right viewports?


else if(isFirstWindow)				// Not Time To Quit, Update Screen
{
    if(!wglMakeCurrent(hello1,hRC))					// Try To Activate The Rendering Context
    {
        KillGLWindow();								// Reset The Display
        MessageBox(NULL,"Can't Activate The GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
        return FALSE;								// Return FALSE
    }
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hello1);				// Swap Buffers (Double Buffering)
    //SwapBuffers(hDC);				// Swap Buffers (Double Buffering)

    isFirstWindow=false;
}
else								// Not Time To Quit, Update Screen
{
    if(!wglMakeCurrent(hDC,hRC))					// Try To Activate The Rendering Context
    {
        KillGLWindow();								// Reset The Display
        MessageBox(NULL,"Can't Activate The GL Rendering Context.","ERROR",MB_OK|MB_ICONEXCLAMATION);
        return FALSE;								// Return FALSE
    }
    DrawGLScene();					// Draw The Scene
    SwapBuffers(hDC);				// Swap Buffers (Double Buffering)
    isFirstWindow=true;
}

What about using only one GL window, split into multiple views with glViewport + glScissor?
That way you have only one RC and one SwapBuffers call.
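A sketch of that split-screen idea, assuming a hypothetical 800x600 window divided into left and right halves; glScissor is needed so each glClear only touches its own half:

```cpp
// Two side-by-side views in ONE window: one context, one SwapBuffers.
glEnable(GL_SCISSOR_TEST);

// Left half of an assumed 800x600 client area.
glViewport(0, 0, 400, 600);
glScissor (0, 0, 400, 600);   // restrict the clear to this region
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawGLScene();                // view 1 (e.g. front camera)

// Right half.
glViewport(400, 0, 400, 600);
glScissor (400, 0, 400, 600);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawGLScene();                // view 2 (e.g. side camera)

glDisable(GL_SCISSOR_TEST);
SwapBuffers(hDC);             // a single swap presents both views
```

This is essentially how multi-viewport editors lay out several camera views without paying for multiple contexts or multiple swaps.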

Well, it's not what I want. But consider this: if I start two instances of the same OpenGL application that each use only one window, together they take only 2-4% CPU time, yet they still manage to display what they are supposed to every frame. What makes that work?

I thought I knew how OpenGL decides which window to render to, but I must be mistaken, because my way is much slower.

I don't know about your CPU problems, but with multiple double-buffered windows driven from one thread, you only get a max frame rate of

refresh rate of the screen / number of double-buffered windows

So for a 60 Hz display with 2 double-buffered windows, you get a max of 30 fps for each window. The multiple viewports in one window seem like a better idea.
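That cap exists because each SwapBuffers blocks until the vertical retrace. One possible workaround, if the driver exposes the WGL_EXT_swap_control extension, is to disable vsync on all but one window. This is a sketch; the function pointer must be fetched at runtime after a context is current, and its availability is not guaranteed:

```cpp
// WGL_EXT_swap_control: look up wglSwapIntervalEXT and turn vsync off
// for the current context. Requires <windows.h> and a current GL context.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

PFNWGLSWAPINTERVALEXTPROC pwglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

if (pwglSwapIntervalEXT)
    pwglSwapIntervalEXT(0);   // 0 = do not wait for vertical retrace
// Leave the interval at 1 on one window so the app still paces
// itself to the display instead of spinning flat out.
```

With vsync off, SwapBuffers returns immediately, so the two windows no longer serialize on the retrace; the trade-off is possible tearing in the un-synced window.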

But why is it that two windows created by the same process get fewer fps than two windows created by two different processes?

This doesn’t make sense.