I just got a new GeForce3 Ti200. The fill rate compared to a GF2 MX is really amazing, but I'm wondering about the triangles-per-second rate. I was trying out NVIDIA's BenMark5 ( http://developer.nvidia.com/view.asp?IO=BenMark5 ) and I only got 17 million triangles per second. That's not much; I think my old MX even got 24 million.
Hmm, I think that must be a problem with AGP. Is there a way to see in Windows XP whether 4x AGP is enabled for the card? I remember having a similar problem with the GF2 MX, where I had to enable AGP manually in the driver settings, but I don't know how to do that in XP with the standard NVIDIA drivers.
Or could someone with a GF3 Ti200 please try BenMark5 and tell me how many tris/sec they get?
Thanks a lot in advance, and sorry for posting an off-topic question.
Don't worry about it. Neither of your GeForces is likely to reach its theoretical triangle throughput. In real-world conditions, your GeForce3 will come out ahead, since it has less of a bandwidth bottleneck.
I get almost exactly the same result with the GF3 (it's not a Ti200, just an original GF3), but both the GF2 MX400 and the GF3 seem to get almost exactly 16M tris with this test.
BTW, I had problems with AGP 2x vs. 4x on 98SE; now that I'm on XP, it seems to be reporting the 4x I set in my BIOS. I use the free version of the SiSoft Sandra benchmark utilities to check my system. I also use WCPUID 3, which reports the AGP command mode I'm actually in rather than just what's supported. It reports 4x but says fast writes and sideband addressing are disabled. Hmmm… let me check my BIOS here.
PH, I tried the GF2 MX on an 800MHz PIII and the GF3 on an Athlon 1900+. The CPU is not the issue in this benchmark. I'm going to check whether I have fast writes disabled in my BIOS; I'll run again and get back to you. Your GF3 results are in line with mine, just a smidgen over 16M/sec.
[This message has been edited by dorbie (edited 05-13-2002).]
I have fast writes enabled on the Athlon system (at least that's what I specified in the BIOS). My P3 unfortunately doesn't support fast writes.
What about the AGP aperture size? Would that be an issue in this benchmark? It gave quite a boost in the CodeCreatures benchmark (I changed the setting from 64 MB to 256 MB), probably due to the large number of textures.
Well, I looked, and fast writes are enabled in the BIOS. The WCPUID chipset utility reports them as supported but disabled. More annoying chipset/driver/graphics card quirks.
I’m sure I checked this a while back and it was reported as enabled.
I just read about this the other day when I noticed FW/SBA disabled even though I'd enabled them in the BIOS.
Supposedly, even if they're enabled in the motherboard BIOS, most (retail/OEM?) NVIDIA cards don't enable them by default because they can reduce stability. The solution is to flash the video card's BIOS, but that comes with warnings, since it's a risky process. I didn't read about the process in much depth and haven't had time to think about doing it myself, but I probably will in the next few weeks.
Using the default 1024x768x16, with fast writes on and sideband disabled, on the 28.32 drivers, I get 22.64M tris/sec on a Duron 800 + GF2.
Using 640x480x16 I get 22.79M tris/sec, and for a laugh I tried 320x200x16 and got 22.97M tris/sec.
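Those nearly identical numbers across resolutions suggest the test is transform (vertex) limited rather than fill limited, so shrinking the window barely helps. The reported rate is just triangles per frame times frames per second; a minimal sketch of the arithmetic, with made-up numbers (BenMark5 reports the rate directly):

```python
def mtris_per_sec(tris_per_frame: int, fps: float) -> float:
    """Triangle throughput in millions of triangles per second."""
    return tris_per_frame * fps / 1e6

# Hypothetical example: 120,000 triangles per frame at ~190 fps
# works out to roughly the GF2 numbers above.
rate = mtris_per_sec(120_000, 190.0)
print(f"{rate:.2f} Mtris/sec")  # 22.80 Mtris/sec
```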
Oh, if you want to play around with fast writes and SBA, go to http://www.geforcetweak.com/ . They have a utility to toggle lots of options, and it works with the latest drivers.
Also, I think ffish is correct: SBA is disabled by the video card's BIOS, since it caused more problems than it gained in speed.
On the GF3 machine I'm using the 28.32 drivers.
Fast writes and AGP 4x are on, and the latest VIA drivers are installed.
On the GF2 machine I don't remember exactly which drivers it was, but it was from this month, 29.xx something.