TheDestroyer

05-18-2011, 04:37 AM

Hello guys, :-)

I've created a small OpenGL simulation that involves 3-dimensional axes and some vectors moving around. The problem is the following:

The axes also include a sphere centered at the origin (whose creation involves many edge-splitting and normalizing steps starting from a tetrahedron, i.e. a high load). During the simulation I have the option to hide the axes (and with them the sphere). When I do this, the simulation runs faster, which doesn't make sense to me. The simulation's speed also depends on the computer I run it on. On my laptop it's slow, but on my desktop (which has 2x GTX 295) it's so fast that I need to lower the speed manually to see anything meaningful.

At first I thought this was due to vertical sync, but disabling VSync in the display card's settings changed nothing.

Any ideas how I can make the simulation's speed independent of the rendering load and of the computer it runs on?

Any help is highly appreciated. Thank you! :-)