OpenGL rendering in Windows XP with multiple video cards

I’m developing an OpenGL application for Windows XP. The target
machine has 2 NVIDIA GeForce 9800GT video cards, which are needed
because the application needs to output 2 streams of analog
video.

The application itself has two OpenGL windows, one for each video
card. Each video card is connected to one monitor. As for the code,
it’s based on a minimal OpenGL example (http://www.opengl.org/
resources/code/samples/win32_tutorial/minimal.c).

How can I know if the application is utilizing both video cards for
rendering?

At the moment, I don’t care if the application only runs on Windows XP
or only with NVIDIA video cards, I just need to know how the two are
working.

It should be enough to create each window on each screen.
Throw a bunch of large textured polygons on screen and benchmark to be sure.
Have a look at the Nvidia display control panel to check what options are available to you.
I have some experience with single-card dual-head rendering; in that case the best option was the ‘span’ mode with full hardware acceleration.

On Windows, you can’t select a single GPU, other than using the GPU affinity extension, which is only available for QuadroFX cards.

More detail can be found at http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html#multigpu

Thanks for your response…

@ZbuffeR
I have already tested my application with the machine specified above, and it works. I just want to know why and how.
Dual-head is not relevant to my question since I’m using a Single-head configuration for each of the cards.

@elie
Consider the fact that two monitors are connected, one for each GeForce card. Does this mean that the OpenGL rendering will occur only on one of the cards?

Do you see a big perf difference between single window and dual window ?

What options are available to you ? Of course I do not have the same setup as yours, that is why I am asking.

On Windows there is no explicit way of specifying which window maps to which video card. If you want to use both GPUs for OpenGL rendering with hardware acceleration, this is what you do:

  1. Create Window1 on Monitor1 which is connected to GPU1. Windows automatically maps Window1 to GPU1.
  2. Create Window2 on Monitor2 which is connected to GPU2. Windows automatically maps Window2 to GPU2.
  3. Create DC1 & RC1 for Window1.
  4. Create DC2 & RC2 for Window2.

If you want to render to Window1 (GPU1), make (DC1, RC1) current and do whatever you want to do. The OpenGL rendering calls will be automatically forwarded to GPU1 since (DC1, RC1) is the current rendering context. Same thing with Window2 (GPU2).

You cannot make both (DC1, RC1) and (DC2, RC2) current simultaneously in the same thread. Nor can you make the SAME DC & RC current simultaneously in 2 different threads.
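The four steps above can be sketched in Win32/WGL code. This is a minimal sketch, not a complete program: it assumes a window class named "GLWin" has already been registered, that Monitor2 starts at x=1280 on the virtual desktop, and it omits SetPixelFormat, the message loop, and error handling.

```c
/* Sketch: one window per monitor, each with its own DC/RC.
 * Assumes a registered window class "GLWin"; pixel format setup,
 * message loop, and error handling are omitted. */
#include <windows.h>
#include <GL/gl.h>

static HWND CreateGLWindow(int x, int y, int w, int h)
{
    return CreateWindow("GLWin", "GL", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                        x, y, w, h, NULL, NULL, GetModuleHandle(NULL), NULL);
}

void SetupAndRender(void)
{
    /* Steps 1-2: place each window on its own monitor; Windows
     * automatically maps each window to the GPU driving that monitor.
     * (Monitor2 assumed to start at x=1280 on the virtual desktop.) */
    HWND win1 = CreateGLWindow(0,    0, 640, 480);  /* monitor on GPU1 */
    HWND win2 = CreateGLWindow(1280, 0, 640, 480);  /* monitor on GPU2 */

    /* Steps 3-4: one DC and RC per window (SetPixelFormat omitted). */
    HDC dc1 = GetDC(win1);  HGLRC rc1 = wglCreateContext(dc1);
    HDC dc2 = GetDC(win2);  HGLRC rc2 = wglCreateContext(dc2);

    /* Render to GPU1, then GPU2 -- only one context can be current
     * in a thread at a time. */
    wglMakeCurrent(dc1, rc1);
    glClear(GL_COLOR_BUFFER_BIT);   /* ...draw scene for window 1... */
    SwapBuffers(dc1);

    wglMakeCurrent(dc2, rc2);
    glClear(GL_COLOR_BUFFER_BIT);   /* ...draw scene for window 2... */
    SwapBuffers(dc2);

    wglMakeCurrent(NULL, NULL);     /* release before another thread binds */
}
```

Alternatively, each window can get its own thread with its own context made current once, which avoids the per-frame wglMakeCurrent switches.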

The OpenGL commands are always sent to both cards. If your window is only on one card, of course the rasterization will only produce fragments on that card.

Not quite true. The OpenGL commands are sent to both cards, which is why you can drag the window around and see something on both monitors. The fragment load happens only where the window is.

I’ve seen a 15% speedup when using GPU affinity on a window which covers only one GPU of two due to the removed dispatch overhead.
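For reference, the GPU-affinity path mentioned above looks roughly like this. It is a sketch only: WGL_NV_gpu_affinity is Quadro-only, and the extension entry points (wglEnumGpusNV, wglCreateAffinityDCNV, wglDeleteDCNV) are assumed to have already been fetched via wglGetProcAddress.

```c
/* Sketch: pin a rendering context to a single GPU with
 * WGL_NV_gpu_affinity (Quadro only). Assumes the extension's
 * function pointers were already loaded via wglGetProcAddress. */
HGPUNV gpu;
if (wglEnumGpusNV(0, &gpu)) {                  /* enumerate first GPU */
    HGPUNV gpuList[2] = { gpu, NULL };         /* NULL-terminated list */
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);

    /* SetPixelFormat(affinityDC, ...) would go here, then: */
    HGLRC rc = wglCreateContext(affinityDC);   /* RC bound to that GPU */
    wglMakeCurrent(affinityDC, rc);

    /* ...render: commands are dispatched to this GPU only, which is
     * where the ~15% dispatch-overhead saving comes from... */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(rc);
    wglDeleteDCNV(affinityDC);
}
```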

@kshahar :
I don’t know what kind of video processing app you are trying to make, but from my experience, one 9800 GT card is powerful enough to process two HD video streams in real time.
With a smart pipeline you can upload images at more than 2 GB/sec and download ~2 GB/sec. Such a pipeline might have high latency (2-3 frames).

When you get the result in sysmem you can copy it to the video card’s framebuffer.
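A “smart pipeline” here usually means streaming through pixel buffer objects, so the driver can DMA the frame while the CPU keeps working. A minimal upload sketch, assuming a current GL ≥ 2.1 context, an already-allocated texture `tex`, and a source frame `pixels` of `w*h*4` BGRA bytes (all names are illustrative):

```c
/* Sketch: asynchronous texture upload through a pixel buffer object.
 * Assumes a current GL >= 2.1 context, an allocated GL_TEXTURE_2D
 * named `tex`, and a source frame `pixels` of w*h*4 bytes (BGRA). */
#include <string.h>
#include <GL/gl.h>

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, NULL, GL_STREAM_DRAW);

/* Map, fill from the video frame, unmap -- no GL stall yet. */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, pixels, w * h * 4);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

/* With a PBO bound, the "data" argument is an offset into the PBO,
 * and the transfer can overlap with later CPU work. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```

Downloads work the same way in reverse with GL_PIXEL_PACK_BUFFER and glReadPixels; double-buffering two PBOs is what hides the latency and produces the 2-3 frame pipeline delay mentioned above.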

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.