Trouble with Radeon cards

I wrote a terrain engine in OpenGL, but it runs really slowly on Radeon cards. Here are the specifics.

My card:
GeForce2 MX 400 AGP, 64 MB DDR

Friends' cards:
Radeon 7200
Radeon 8500

My engine breaks a heightmap up into polygons of varying sizes depending on how rough the terrain is. On a 512x512 map it draws about 50,000 polys.

After generating these polys, it puts them into a compiled display list.

The list is then called each frame, with lighting and fog applied (there is no texturing involved in this).
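
For reference, this is roughly what that setup looks like (just a sketch; buildTerrainList, terrainList and the verts/norms arrays are placeholder names, not my actual code):

    #include <GL/gl.h>

    static GLuint terrainList;

    /* run once at load time: compile the tessellated terrain into a display list */
    void buildTerrainList(int triCount, const float *verts, const float *norms)
    {
        int i;
        terrainList = glGenLists(1);
        glNewList(terrainList, GL_COMPILE);
        glBegin(GL_TRIANGLES);
        for (i = 0; i < triCount * 3; ++i) {
            glNormal3fv(&norms[i * 3]);   /* normals are needed for lighting */
            glVertex3fv(&verts[i * 3]);
        }
        glEnd();
        glEndList();
    }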

Each frame I also draw some text, which requires lighting to be disabled and texturing enabled; in total I call glEnable and glDisable about 10 times per frame.
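
The per-frame toggling looks roughly like this (again just a sketch; drawText is a placeholder for my text routine):

    void drawText(void);            /* 2D text drawing lives elsewhere (placeholder) */

    /* per frame: lit, fogged terrain from the list, then unlit textured text */
    void drawFrame(void)
    {
        glEnable(GL_LIGHTING);
        glEnable(GL_FOG);
        glDisable(GL_TEXTURE_2D);
        glCallList(terrainList);    /* the compiled terrain */

        glDisable(GL_LIGHTING);     /* text pass: lighting off, texturing on */
        glDisable(GL_FOG);
        glEnable(GL_TEXTURE_2D);
        drawText();
    }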

My computer and the 8500 friend's computer are about equivalent except for the cards (1.5 GHz Athlon, 256 MB DDR, Win2k).

However, while I get between 150 and 200 fps, the 8500 machine gets around 20 and the 7200 gets about 3.

Is there some magical way I can fix this, or do I need to write it in a different way?

Thanks.

[This message has been edited by InfestedFurby (edited 03-25-2003).]

The ATI 8500 is a better graphics card, so the fps should be higher. Check that OpenGL really gets hardware support. Can he play games like Quake3?

Yeah, I know the 8500 is much better, which is why I'm really confused.

Yeah, he can play Quake3 and Warcraft3 just fine (under OpenGL), so I know it gets hardware acceleration OK.

Maybe there is something wrong with the fps counter.

I'm pretty sure the fps counter works fine, but even if it doesn't, you can easily tell when something is running at 3 fps. You don't need an fps counter for that.

Is your terrain engine fill rate or transformation limited?

Try your engine with reduced geometry and then increase it in “reasonably” small increments until you get to 50,000 polygons. That should help you narrow down the problem faster.

Try increasing the fill-rate load to see if the 8500 can outperform the other card.
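
One quick way to tell the two apart (just a sketch; drawScene and the window size arguments are placeholders): draw the same 50,000-poly scene into a much smaller viewport. If the fps jumps, you are fill-rate limited; if it barely moves, you are transform limited.

    void drawScene(void);   /* placeholder for the normal frame */

    void fillRateTest(int winWidth, int winHeight)
    {
        glViewport(0, 0, winWidth / 8, winHeight / 8);  /* ~1/64th of the pixels */
        drawScene();                                    /* same geometry as before */
        glViewport(0, 0, winWidth, winHeight);          /* restore */
    }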

Another common problem (for me at least) is that when you transfer your engine to your friend's computer, your program may not set up the rendering state properly. That is: is all the data loaded properly, are the pixel format and resolution set properly, is the depth buffer set up properly, and are the stencil and other buffers enabled or disabled as you expect? Your friend's computer probably has a different default rendering state, and perhaps your program does not explicitly set all the required states.
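
For example, something like this at startup makes the important states explicit instead of relying on driver defaults (the exact values here are only examples, and you should also check the pixel format you request when creating the window, e.g. depth bits and no stencil if you don't need one):

    void initGLState(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClearDepth(1.0);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
        glDisable(GL_STENCIL_TEST);          /* not used, so make sure it is off */
        glShadeModel(GL_SMOOTH);
        glEnable(GL_CULL_FACE);              /* only if your winding is consistent */
        glCullFace(GL_BACK);
        glFrontFace(GL_CCW);
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    }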

The last and “sure-fire” way to solve the problem is to examine the specifications for the 8500 and see whether you are giving it what it “likes”.

Hope at least some of the above helps.

Almost forgot,

I had the same problem with my programs when moving from nVidia to ATI. I am not sure (at the moment) if this is true, but one card may perform culling or otherwise throw out some geometry (depending on whether it will be visible) while the other still transforms it all. I had a situation where the nVidia card ran twice as fast but would slow down (to ATI speed) when all of the geometry was visible, while the ATI card ran consistently slow (as if no culling was done at all).

This situation was only noticed when “display lists” were used (this is where checking the specifications for both cards would be handy).

*** Correction to the above ***: The problem even occurred with “vertex arrays”. I never really spent more time on it, but I did determine that my program was transform limited, so I suspect that culling may be the problem.
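
If culling does turn out to be the issue, one workaround (only a sketch; Chunk, chunkVisible and so on are made-up names, not anything from either driver) is to split the terrain into several smaller display lists and skip whole chunks yourself instead of leaving it to the driver:

    #include <GL/gl.h>

    typedef struct {
        GLuint list;         /* compiled display list for this chunk */
        float  center[3];    /* bounding-sphere centre */
        float  radius;       /* bounding-sphere radius */
    } Chunk;

    /* crude distance cull: skip chunks entirely beyond the far/fog distance */
    static int chunkVisible(const Chunk *c, const float eye[3], float farDist)
    {
        float dx = c->center[0] - eye[0];
        float dy = c->center[1] - eye[1];
        float dz = c->center[2] - eye[2];
        float reach = farDist + c->radius;
        return dx * dx + dy * dy + dz * dz < reach * reach;
    }

    void drawVisibleChunks(const Chunk *chunks, int numChunks,
                           const float eye[3], float farDist)
    {
        int i;
        for (i = 0; i < numChunks; ++i)
            if (chunkVisible(&chunks[i], eye, farDist))
                glCallList(chunks[i].list);
    }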

[This message has been edited by hkyProgrammer88 (edited 03-25-2003).]

At the moment the engine is fill-rate dependent. So far it doesn't recalculate the polygons dynamically as you move around; it just generates them when the engine loads and displays them each frame from the display list.

The 8500 does run faster with fewer polys (10,000 or so), but still only about 1/3 as fast as the GF2.

Increasing the poly count into the hundred thousands, the 8500's framerate actually drops much faster than the GF2's does.

Thanks for the tip on rendering states, hkyProgrammer88, I'll go give that a look.

Have all machines involved got the latest drivers installed?