GeForce3 - Why the bad results?

(Let it be known that I am a BIG nVidia fan and I am in no way bashing the GeForce3 in this post)

I was reading about the benchmarking done by Avault, and I was not at all impressed with their benchmark results. Maybe that’s because it was a test board given to them by nVidia and doesn’t have all the features enabled just yet. But, like Avault said, and looking at the benchmarks, I don’t see a reason to spend $500 on a card that performs poorly (relative to the GeForce2 Pro) at lower resolutions (keep in mind, not everybody has a 19" monitor to play games at 1600x1200).

So I was wondering if I was missing something, or if the nVidia guys here can clear things up about the benchmark results I’ve seen. (This is from Avault.com, and they have a pretty good reputation.)

Anyway, the link to the article is here: http://www.avault.com/hardware/getreview.asp?review=geforce3

All comments are welcome, of course.

I assume that the selling point of the GeForce3 is the image quality enhancements. Its register combiners don’t take as big a hit as the previous GeForces’. It also does vertex programs in hardware, as well as texture shaders. And it accelerates all of DirectX 8 (if that sort of thing is important to you). Outside of those enhancements, it isn’t much different from a GeForce2.
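(For reference, “vertex programs in hardware” means you can replace the fixed-function transform path with a short assembly-style routine through the NV_vertex_program extension. Here’s a rough, untested sketch of the trivial “transform and pass the color through” case; it assumes a current GL context and that the NV_vertex_program entry points have already been set up.)

```c
#include <string.h>          /* strlen */
#include <GL/gl.h>
#include <GL/glext.h>        /* NV_vertex_program tokens and prototypes */

/* Minimal NV_vertex_program sketch: transform by the tracked
 * modelview-projection matrix and pass the primary color through. */
static const char vp_source[] =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "MOV o[COL0], v[COL0];\n"
    "END";

/* Assumes a current GL context and that the NV_vertex_program
 * entry points have been fetched (e.g. via wglGetProcAddress). */
void setup_trivial_vertex_program(void)
{
    GLuint id;
    glGenProgramsNV(1, &id);
    glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
    glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id,
                    (GLsizei)strlen(vp_source),
                    (const GLubyte *)vp_source);

    /* Have the driver keep c[0]..c[3] tracking the current
     * modelview-projection matrix. */
    glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                    GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);

    glEnable(GL_VERTEX_PROGRAM_NV);
}
```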

Oh well, then I completely misunderstood the whole concept of the GeForce3.

But still, one of the big features is hardware vertex and pixel shaders. Does this REALLY demand a $500 price tag? I am interested in HW shaders, BUT I’m a big fan of raw speed too. Guess it’s just me.

Anything else?

Drivers are still far from perfect … performance will probably rise over time.

HardOCP did some tests that specifically targeted the new DX8 features: http://www.hardocp.com/reviews/vidcards/nvidia/gf3ref/

  • Tom

>>not everybody has a 19" monitor to play games at 1600x1200 res<<

i assure you anyone who’s got $500 US to spend on a video card has a 19-inch monitor, minimum

A lot of the benchmarks on that site looked CPU-limited. They couldn’t get above 90 fps in low-res Quake on a P3-800!

It’s pointless to look at CPU-limited benchmarks.

Old apps don’t use new features.

And yes, it should go without saying that things should improve in the future. The first shipping GeForce drivers were the 3.xx series. Compare 3.xx scores with 6.xx scores, and you will see a pretty big difference.

  • Matt

Matt: OK, I don’t get what you mean by “CPU limited.” A P3 800 isn’t exactly a slow processor. Unless you mean something else?

Since Quake 3 doesn’t send enough triangles per frame to really tax a good card like the GeForce2 or better, the only thing that can choke them is having to draw so many pixels that memory bandwidth becomes an issue.

At low resolutions, the card is sitting idle a lot of the time, because it can draw 640x480 pixels without breaking a sweat. The only thing that holds framerates back at this res is the CPU… even an 800 MHz Pentium can’t throw triangles fast enough.

Take a look at benchmark pages sometime. Framerates at low res usually depend on what CPU you have. Framerates at high res depend mostly on what graphics card you have.

At high res, cards that do tricks to help memory bandwidth (usually by avoiding overdraw or compressing data) usually shine. Both the GeForce3 and the Kyro II have some very nice tricks.
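A quick back-of-the-envelope check makes the point (the overdraw factor and the usable fill rate here are assumed round numbers, just for illustration, not measured GeForce figures):

```c
#include <stdio.h>

/* Back-of-the-envelope: how much of a card's fill rate does each
 * resolution actually ask for?  The overdraw factor and the usable
 * fill rate below are assumptions picked for illustration. */
int main(void)
{
    const double overdraw   = 3.0;    /* assumed average overdraw in a Quake 3 scene */
    const double fill_rate  = 800e6;  /* assumed usable fill rate, pixels/sec */
    const double target_fps = 100.0;
    const int res[][2] = { {640, 480}, {1024, 768}, {1600, 1200} };

    for (int i = 0; i < 3; ++i) {
        double needed = (double)res[i][0] * res[i][1] * overdraw * target_fps;
        printf("%4dx%-4d: %4.0f Mpixels/s needed (%3.0f%% of the assumed fill rate)\n",
               res[i][0], res[i][1], needed / 1e6, 100.0 * needed / fill_rate);
    }
    return 0;
}
```

In other words, at 640x480 the card has fill rate to spare, so the framerate is whatever the CPU can feed it; at 1600x1200 the pixel side starts to dominate.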

– Zeno

This is just my opinion, but maybe we should look more at the features (the GeForce3 has many new and good ones) and how they improve render quality, instead of always judging what is good or bad from only one or two benchmarks that don’t use those new features.

  • Royconejo.

It’s not just that. It’s whether these features demand the $500 price tag. I’m really sorry to say this (and I’ll probably make a lot of enemies on this board), but I am not willing to pay an extra $200 for hardware pixel and vertex shaders and other nifty little effects when there are no games on the market right now (or within a year’s time) that support all those features.

What I’m saying is, right now I don’t think the price tag is really justified. Probably in a year’s time, when I really do see fully GeForce3-compatible games, I might buy one. Hopefully by then the GeForce3 will have dropped in price to around $300. (Hell, I can get a 1.2 GHz T-Bird + motherboard for $500!)

Originally posted by zed:
i assure you anyone who’s got $500 US to spend on a video card has a 19-inch monitor, minimum

No, they don’t.

Originally posted by TheGecko:
It’s not just that. It’s whether these features demand the $500 price tag. I’m really sorry to say this (and I’ll probably make a lot of enemies on this board), but I am not willing to pay an extra $200 for hardware pixel and vertex shaders and other nifty little effects when there are no games on the market right now (or within a year’s time) that support all those features.

You’re right, but you’re forgetting that this is a developer forum. Most people here are probably NOT buying a GeForce3 primarily to play games with it. If that were the case, they would take your advice and wait until the end of the year.

  • Tom

umm…
it’s not 500 bucks.
more like 600 something.
i think 630 with tax?

buy yourself a PS2, a few DVDs, a couple of games, and stick with your GeForce 1-series card, IMHO.
just my advice

laterz.

Originally posted by royconejo:
This is just my opinion, but maybe we should look more at the features (the GeForce3 has many new and good ones) and how they improve render quality, instead of always judging what is good or bad from only one or two benchmarks that don’t use those new features.

Well, there is this thing that some of us enjoy called raw performance. I would rather break 15-20 million polygons per second (PPS) than have 4x-multitextured, 8-stage register combiner, 128-instruction vertex program polygons but only be able to push 1 million PPS.

Admittedly, if all these nifty effects turn you on, feel free to pay $500 for the benefit of using them. I, on the other hand, will stick to my GeForce 2.

The problem is, 15-20 million triangles per second is not easily beatable… you have to push all that data over the AGP bus, and that just doesn’t get any faster… unless you have it stored in the GF3’s RAM.

Floating-point vertices: 12 bytes per vertex (assuming w is always 1 and not stored).
64 MB of RAM => roughly 5,592,405 vertices is the most you could store ON the GF3…

You can cram more and more effects into the GPU, there’s no trouble doing that (a 1.6 GHz Duron is possible today!), but just pushing the data through is the huge problem… and you can’t boost that much without a completely new technique, and that just isn’t here today…

So why not push up the quality of the current triangles WHEN WE JUST CAN’T GET MORE… and hey, 15 million triangles per second is MORE THAN ENOUGH! That’s 600,000 triangles per frame for smooth graphics (25 fps)… you can do enough with them, can’t you? We’re in a dev forum here, as said before… so now it’s YOUR job to get nice graphics out of those 600,000 triangles…
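(The arithmetic, spelled out. Treating all 64 MB as vertex storage obviously ignores the framebuffer and textures, so it’s an upper bound only:)

```c
#include <stdio.h>

/* The arithmetic above, spelled out.  Treating all 64 MB as vertex
 * storage is only an upper bound, since the framebuffer and textures
 * live there too. */
int main(void)
{
    const double mem_bytes        = 64.0 * 1024.0 * 1024.0; /* 64 MB on-card */
    const double bytes_per_vertex = 12.0;  /* x,y,z floats; w assumed to be 1 */
    const double tris_per_sec     = 15e6;  /* claimed sustainable rate */
    const double fps              = 25.0;

    printf("Vertices storable on-card : %.0f\n", mem_bytes / bytes_per_vertex);
    printf("Triangles per frame       : %.0f at %.0f fps\n",
           tris_per_sec / fps, fps);
    return 0;
}
```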

I’m pretty sure the AGP bus can handle 15-20 million polygons per second. A vertex (and, with good stripping, a polygon) using one texture coordinate, one normal, and one color takes up 36 bytes. For 20M PPS, that is approximately 720 MB per second. I’m fairly sure that an AGP 4X bus can handle that.
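(A quick sanity check of that, taking roughly 1066 MB/s as the theoretical AGP 4x peak; sustained throughput is of course lower in practice:)

```c
#include <stdio.h>

/* Bandwidth check for the 36-byte vertex described above.
 * The AGP 4x figure is the theoretical peak (~1066 MB/s);
 * real sustained throughput will be lower. */
int main(void)
{
    const double bytes_per_vertex = 12 + 12 + 4 + 8; /* position + normal + packed color + texcoord */
    const double verts_per_sec    = 20e6;            /* ~1 vertex per triangle with good stripping */
    const double agp4x_peak       = 1066e6;          /* bytes/sec */

    double needed = bytes_per_vertex * verts_per_sec;
    printf("%.0f MB/s needed, about %.0f%% of the AGP 4x peak\n",
           needed / 1e6, 100.0 * needed / agp4x_peak);
    return 0;
}
```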

You are right, though. We are coming up on the end of improving sheer polygon throughput, thanks to the architecture of the PC. But, even so, the performance spec on nVidia’s web site doesn’t fill me with hope of seeing code running with a relatively large vertex program and lots of register combiners breaking 10M PPS.

As I said, “Admittedly, if all these nifty effects turn you on, feel free to pay $500 for the benefit of using them. I, on the other hand, will stick to my GeForce 2.”

Well, the GF3 does outperform the GF2 (including GF2 Ultra) at raw triangle rate. A GF2 Ultra can only set up 31 million triangles per second. A GF3 can set up 40 million triangles per second.

This is, of course, a peak setup rate. Real triangle rates depend on pushing/pulling vertices and indices to the HW quickly, vertex rates, vertex reuse (ranges from 3 vertices per triangle to 2 triangles per vertex), primitive lengths, etc. This also doesn’t count back-end bottlenecks (fill and memory).

I have written a program that actually does get 40 million triangles per second on a GF3, and it doesn’t even use VAR! It uses CVAs, in fact.

However, I cheated. I was using CullFace(FRONT_AND_BACK), i.e., discarding all triangles rather than even bothering to rasterize them. No pixels rendered. It was just to make sure we weren’t botching something and that the peak rate was actually achievable in a real GL app, though, so my cheating was excusable.
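(The skeleton of that kind of peak-setup test looks roughly like this; it’s a sketch, not the actual test code, and the vertex/index data, timing, and extension setup are placeholders.)

```c
/* Sketch of a peak triangle-setup test in the spirit described above:
 * cull everything so nothing is rasterized, lock the arrays (CVAs via
 * EXT_compiled_vertex_array), and hammer glDrawElements.  Assumes a GL
 * context, the CVA entry points, and vertex/index data already set up. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT_AND_BACK);        /* every triangle is discarded at setup */

glLockArraysEXT(0, num_vertices);     /* compiled vertex arrays */
for (int frame = 0; frame < num_frames; ++frame)
    glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, indices);
glFinish();                           /* wait for the GPU before stopping the timer */
glUnlockArraysEXT();

/* triangles/sec = num_frames * (num_indices / 3) / elapsed_seconds */
```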

  • Matt

I don’t want to piss everyone off, but let me play devil’s advocate here for a minute.

The Gecko: People always say that they can get a xxx chip + yyy motherboard for $500, implying that this card costs too much. What is the transistor count of the Athlon compared to the GF3? How much 3.8 ns DDR memory does your CPU+MB purchase include? What is the bandwidth of that motherboard? Do you see what you’re getting for your $500? Maybe what you mean is “It’s not worth it to me… I don’t play games or program graphics as often as I run seti@home, so the processor speed is more important to me”.

kaber0111: The card will not be 600 something. They are currently listed on ebworld.com for $530, and if you head over to rivastation.com, there is news that Elsa will be releasing their card at $400.

Korval: Why is polygon throughput so much more important to you than how it looks? Take a look at the chameleon demo that nVidia has. Not many polygons, but LOTS of effects. Personally, I think it looks much better than a super-high-poly chameleon with Gouraud shading would.

Matt: Don’t you think it’s a bit misleading to tell people that a card can “set up” 40 million triangles per second if you’re not actually rendering them afterwards? How often do people do this in “real” OpenGL apps?

Now, here’s my question: What the heck is up with all the release delays, and why doesn’t nVidia ever SAY anything about it? When will it REALLY be available? I have heard end of April, early May, and mid-May. Are you guys not actually able to make these chips? Heh, I guess there’s always that one on eBay for $1500.

Cheers,
– Zeno

Zeno: You missed my point about the motherboard+processor thing. I was merely stating that for $500 I can get either a graphics card with features that no games in the next year will use (i.e. it would be kind of useless), or I could spend my money on something else that would do me more good. A 1.2 GHz T-Bird + motherboard will do me more good in the short term (one year) than a GeForce3 that wouldn’t be taken full advantage of until a year later. And when that year DOES come, nVidia will have created the GeForce4! See what I’m getting at?

[This message has been edited by TheGecko (edited 04-07-2001).]

Gecko -

I agree. I think there are basically 3 reasons you might want to buy the card:

  1. You want to start programming the thing
  2. You want to play games with anti-aliasing enabled.
  3. You have a crappy old video card and want to upgrade to something a bit future-proof.

(I actually fit all three categories )

If you’ve already got a decent card (GF2 line or Radeon), you can already play today’s games just fine. Buying a GF3 would probably be a waste of money since, as you say, the GF4 or whatever may be out by the time games really start needing pixel/vertex shaders.

– Zeno