Performance disparity between very similar boxes

Hey everybody, I’ve been bringing in my OpenGL demos to show my coworkers at work. I’m on a software test team, so we’ve got >150 machines in our test lab, including some nice beefy Windows boxes.

So I’m running my latest demo on a couple of different machines here in the lab, and the performance difference is almost unbelievable. Here are the setups:

Both boxes:
Dual Athlon XP 1800+
1 or 1.5 gig RAM
Tyan Tiger Mobo (S2640)

Both boxes have 32 MB Radeons in them, with the same driver (6166, the latest supported from ATI). The OpenGL versions are identical, as are the supported extensions.
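(For reference, that comparison is just from dumping the GL strings at startup, something like the sketch below; this isn’t my exact demo code, and it needs a current GL context:)

/* Quick sanity check: dump the GL strings so the two boxes can be
   compared side by side. Requires a current GL context. */
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

void DumpGLInfo(void)
{
    printf("GL_VENDOR:     %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER:   %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:    %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_EXTENSIONS: %s\n", (const char *)glGetString(GL_EXTENSIONS));
}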

The differences between the two boxes: one runs WinXP Pro, the other Win2k Server. The only other difference is that one has a Radeon SDR and the other a Radeon DDR.

Now, I’d expect some performance difference there, maybe as much as 150% better performance out of the DDR. But I’m seeing a framerate of 11 fps on the SDR while the DDR shows 60 fps (I can’t get vsync disabled on the XP box). On my Radeon 8500 at home (Win2k Pro, 1.1 GHz Athlon), I get ~750 fps with the same demo, so I’m sure the DDR box here at work is getting well over 60; it’s just capped by vsync.
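In case I’m just doing it wrong: the only way I know to kill vsync from code is wglSwapIntervalEXT from WGL_EXT_swap_control. A minimal sketch (assuming a current GL context; the function and extension are real, the wrapper is made up for illustration):

#include <windows.h>
#include <GL/gl.h>

/* wglSwapIntervalEXT comes from WGL_EXT_swap_control. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

/* Try to disable vsync. Returns 1 on success, 0 if the extension
   is missing or the driver refuses the call. wglGetProcAddress
   only works with a current GL context. */
int DisableVsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (!wglSwapIntervalEXT)
        return 0;

    return wglSwapIntervalEXT(0) ? 1 : 0;  /* 0 = swap without waiting for vblank */
}

Even when that call succeeds, the driver control panel can force vsync on and override the app, which might be what’s happening on the XP box.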

Does anyone have an idea why the SDR would perform so badly? A performance difference of at least 5x between the DDR and the SDR is just blowing my mind. Is there something in Win2k Server that’s the bottleneck?

Thanks for any input you might have.

Alright, now this is just plain ridiculous. I ran my demo on my laptop (P3-650, 384 MB RAM) with a Rage Mobility-M chipset, which supports OpenGL 1.1.3, and it runs at ~20 fps, twice as fast as the Radeon SDR in the Win2k box! Granted, the GL_CLAMP_TO_EDGE_EXT check failed, as did the cube mapping, but still, twice as fast?
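For what it’s worth, the kind of clamp-mode check/fallback I mean is something like this (a minimal sketch, helper names made up; GL_CLAMP_TO_EDGE_EXT comes from GL_EXT_texture_edge_clamp):

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

#ifndef GL_CLAMP_TO_EDGE_EXT
#define GL_CLAMP_TO_EDGE_EXT 0x812F  /* from GL_EXT_texture_edge_clamp */
#endif

/* Crude substring check against the extension string. */
static int HasExtension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* Prefer clamp-to-edge; fall back to GL_CLAMP on old 1.1 drivers
   like the Rage Mobility's, where the extension isn't exposed. */
GLenum PickClampMode(void)
{
    if (HasExtension("GL_EXT_texture_edge_clamp"))
        return GL_CLAMP_TO_EDGE_EXT;
    return GL_CLAMP;  /* seams may show at texture edges */
}

The result just gets passed to glTexParameteri() for GL_TEXTURE_WRAP_S/GL_TEXTURE_WRAP_T, so the demo still runs on the laptop, it just looks a bit off at the texture edges.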