Performance difference between desktop and target



RS3799
04-14-2009, 02:29 PM
I am using GL Studio 3.2 to generate an OpenGL 2.0 application running at 30 Hz.
It is compiled under Windows XP SP3 with Visual Studio 2005 SP1 (C++).
On the desktop (AMD Athlon 64X4 Core 5200+ 2.6 GHz, ATI X1300) it takes about 5% CPU.
On the target platform (XP SP3, dual-core AMD Opteron 2.2 GHz, Tyan S2927 motherboard with NVIDIA Ethernet and PCI drivers, three NVIDIA FX5200 PCI cards with DualView enabled, using the NVIDIA 175.19 driver from July 2008, the latest I've found), one instance takes 50% CPU.

The application is a simple user interface, with fairly static data and simple graphics that don't move much. It makes heavy use of the built-in GL Studio functions for populating lists, setting highlights in the lists, etc.

I've seen some postings saying that NVIDIA DualView has issues with OpenGL (see the NVIDIA developer forum:
http://developer.nvidia.com/forums/index.php?showtopic=2257
).
Reducing the video cards to single-monitor mode has not helped.


Any leads or suggestions would be appreciated.

Thank you.

Ron

Ilian Dinev
04-14-2009, 05:26 PM
Did you also try disabling "Threaded Optimization"? It's an option in the NVIDIA Control Panel.

RS3799
04-15-2009, 04:29 AM
The FX5200 uses the nView control panel, and I did not find Threaded Optimization there. I am getting the ForceWare control panel and will let you know.

Searching for information on Threaded Optimization did lead to a registry hack I am also trying:
http://forums.nvidia.com/lofiversion/index.php?t17646.html
Thanks.

RS3799
04-15-2009, 04:35 AM
I also forgot a detail: the motherboard's onboard video remains enabled for maintenance access. It is an XGI Volari Z7 and serves as the primary display.

RS3799
04-15-2009, 06:25 AM
I got the ForceWare control panel and disabled Threaded Optimization. No joy.

RS3799
04-15-2009, 08:27 AM
Just found that OpenGL on the target is 1.1, not 2.0. Could that explain the increase in CPU usage?

RS3799
04-16-2009, 04:15 AM
OpenGL 1.1 was being forced by having the XGI as the primary card. I guess the primary card determines the OpenGL level for all cards.
Still have performance problems, though.

Ilian Dinev
04-16-2009, 10:37 AM
:) You found out the reason. Here's the hardware vendor's own description of the Volari: "it offers a high value, reliable graphics solution for server and cost conscious systems. Why pay for something you don't need?"

In other words, it's basically just a RAMDAC: no acceleration whatsoever. An ATi Rage3D from 1995 will murder this card in performance.