My application runs much, much slower on Linux (Red Hat and SUSE) with the same graphics card.
My machine is dual boot.
Are there any environment variables that I can tweak to enhance performance?
Not sure if it’s the OpenGL driver or the compiler, but the difference in performance is so huge that the compilers I am using (Visual C for Windows and Absoft for Red Hat Enterprise) cannot explain the gap.
I do not use any optimization on the slow Linux build.
I think you should create some simple test programs using different techniques and compare their performance on both OSes.
Start with a very simple program that only clears the color and depth buffers. Its performance should be nearly the same on both OSes, since clearing and swapping buffers takes place on the graphics hardware and can hardly be influenced by the driver.
Then increase the program’s complexity (display lists, VBOs, drawing a huge number of polygons, etc.) until you reach the point where the performance starts to differ.
By the way, can you say a bit more about your app? Is it pure Xlib/GLX, or does it use LessTif/Motif? Which techniques does it use (lighting, texturing, display lists, vertex arrays, VBOs)?
Originally posted by kobebryant: Not sure if it’s the OpenGL driver or the compiler.
But the difference in the performance is so huge,
That is a strong sign that you haven’t installed any graphics card drivers and the application is falling back to software rendering…
Try this in a console window from inside your X server and post the output: