
View Full Version : GeForce3 - Why the bad results?



TheGecko
04-06-2001, 10:18 AM
(Let it be known that I am a BIG nVidia fan and I am in no way bashing the GeForce3 in this post)

I was reading about the benchmarking done by Avault and I was not at all impressed by their results. Maybe that's because it was a test board given to them by nVidia and doesn't have all the features enabled just yet. But, as Avault said, and looking at the benchmarks, I don't see a reason to spend $500 on a card that performs poorly (relative to the GeForce2 Pro) at lower resolutions (keep in mind, not everybody has a 19" monitor to play games at 1600x1200 res).

So I was wondering if I'm missing something, or if the nVidia guys here can clear things up about the benchmark results I've seen. (This is from Avault.com, and they have a pretty good reputation.)

Anyway,the link to the article is here: http://www.avault.com/hardware/getreview.asp?review=geforce3

All comments are welcome of course http://www.opengl.org/discussion_boards/ubb/smile.gif

Korval
04-06-2001, 10:26 AM
I assume that the selling point of the GeForce3 is the image quality enhancements. Its register combiners don't take as big a hit as the previous GeForces'. It also does vertex programs in hardware, as well as texture shaders. And it accelerates all of DirectX 8 (if that sort of thing is important to you). Outside of those enhancements, it isn't much different from a GeForce2.

TheGecko
04-06-2001, 10:31 AM
Oh well then I completely misunderstood the whole concept of the GeForce3 http://www.opengl.org/discussion_boards/ubb/smile.gif

But still, one of the big features is hardware vertex and pixel shaders. Does this REALLY demand a $500 price tag? I am interested in HW shaders, BUT I'm a big fan of raw speed too. Guess it's just me.

Anything else?

Humus
04-06-2001, 10:59 AM
Drivers are still far from perfect ... performance will probably rise over time.

Tom Nuydens
04-06-2001, 10:59 AM
HardOCP did some tests that specifically targeted the new DX8 features: http://www.hardocp.com/reviews/vidcards/nvidia/gf3ref/

- Tom

zed
04-06-2001, 02:52 PM
>>not everybody has a 19" monitor to play games at 1600x1200 res<<

i assure you, anyone who's got US$500 to spend on a video card has a 19-inch monitor, minimum

mcraighead
04-06-2001, 05:35 PM
A lot of the benchmarks on that site looked CPU-limited. They couldn't get above 90 fps in low-res Quake on a P3-800!

It's pointless to look at CPU-limited benchmarks.

Old apps don't use new features.

And yes, it should go without saying that things should improve in the future. The first shipping GeForce drivers were the 3.xx series. Compare 3.xx scores with 6.xx scores, and you will see a pretty big difference.

- Matt

TheGecko
04-06-2001, 06:17 PM
Matt: OK, I don't get what you mean by "CPU limited". A P3 800 isn't exactly a slow processor http://www.opengl.org/discussion_boards/ubb/wink.gif Unless you mean something else.

Zeno
04-06-2001, 06:39 PM
Since Quake 3 doesn't send enough triangles per frame to really tax a good card like the GeForce2+, the only thing that can choke them is having to draw so many pixels that memory bandwidth becomes an issue.

At low resolutions, the card is sitting idle a lot of the time because it can draw 640x480 pixels without breaking a sweat. The only thing that holds framerates back at this res is the CPU... even an 800 MHz Pentium can't throw triangles fast enough.

Take a look at benchmark pages sometime. Framerates at low res usually depend on what CPU you have. Framerates at high res depend mostly on what graphics card you have.

At high res, cards that do tricks to help memory bandwidth (usually by avoiding overdraw or compressing data) usually shine. Both the GeForce3 and the Kyro II have some very nice tricks http://www.opengl.org/discussion_boards/ubb/smile.gif

-- Zeno

royconejo
04-06-2001, 07:00 PM
This is just my opinion, but maybe we should look more at the features (the GeForce3 has many new and good ones) and how they improve render quality, instead of always judging what is good or bad from one or two benchmarks that don't use those new features.


- Royconejo.

TheGecko
04-06-2001, 08:10 PM
It's not just that. It's whether these features demand the $500 price tag. I'm really sorry to say this (and I'll prolly make a lot of enemies on this board), but I'm not willing to pay an extra $200 for hardware pixel and vertex shaders and other nifty little effects when there are no games out in the market right now (or in a year's time) that support all those features.

What I'm saying is, right now I don't think the price tag is really justified. Prolly in a year's time, when I really do see fully GeForce3-compatible games, I might buy one. Hopefully the GeForce3 will have dropped to around $300 by then. (Hell, I can get a 1.2GHz T-Bird + motherboard for $500!)

Tom Nuydens
04-06-2001, 10:40 PM
Originally posted by zed:
i assure you, anyone who's got US$500 to spend on a video card has a 19-inch monitor, minimum

No they don't http://www.opengl.org/discussion_boards/ubb/frown.gif


Originally posted by TheGecko:
It's not just that. It's whether these features demand the $500 price tag. I'm really sorry to say this (and I'll prolly make a lot of enemies on this board), but I'm not willing to pay an extra $200 for hardware pixel and vertex shaders and other nifty little effects when there are no games out in the market right now (or in a year's time) that support all those features.

You're right, but you're forgetting that this is a developer forum. Most people here are probably NOT buying a GeForce3 primarily to play games with it. If that were the case, they would take your advice and wait until the end of the year.

- Tom

kaber0111
04-06-2001, 11:26 PM
umm..
it's not 500 bucks.
more like 600 something.
i think 630 with tax?

buy yourself a PS2, a few DVDs, a couple of games, and stick with your GeForce 1x-series card, imho.
just my advice

laterz.

Korval
04-06-2001, 11:30 PM
Originally posted by royconejo:
This is just my opinion, but maybe we should look more at the features (the GeForce3 has many new and good ones) and how they improve render quality, instead of always judging what is good or bad from one or two benchmarks that don't use those new features.

Well, there is this thing that some of us enjoy called raw performance. I would rather break 15-20 million polygons per second (PPS) than have 4x-multitextured, 8-stage register combiner, 128-instruction vertex program polygons, but only be able to push 1 million PPS.

Admittedly, if all these nifty effects turn you on, feel free to pay $500 for the benefit of using them. I, on the other hand, will stick to my GeForce 2.

davepermen
04-07-2001, 02:25 AM
the problem is, 15-20 million triangles per sec is not easily beatable.. you have to push all the data through the AGP bus, and when you do that it just doesn't get any faster.. either that, or you keep the data stored in the GF3's own RAM..

floating-point vertices: 12 bytes per vertex.. (assume w is always 1 and not stored)
64 MB of RAM => about 5,592,405 vertices is the most you can store IN the GF3..

you can keep boosting the GPU and piling more effects in, there's no stress in doing that (we could have a 1.6 GHz Duron today!), but just pushing the data through is the huge problem.. and you can't boost that much without a completely new technique, and that just isn't here today..

so why not push up the quality of the current triangles WHEN WE JUST CAN'T GET MORE.. and heh, 15 million triangles per sec is MORE THAN ENOUGH! that's 600,000 triangles per frame for smooth graphics (25 fps).. you can do enough with them, can't you? we're in a dev forum, as said before.. so now it's YOUR job to get nice graphics out of those 600,000 triangles..
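
(Not part of davepermen's post -- just a quick C sanity check of the arithmetic above. It assumes the entire 64 MB is free for vertex data, which it obviously isn't in practice.)

/* Redo the storage and per-frame arithmetic from the post above. */
#include <stdio.h>

int main(void)
{
    const double mem_bytes   = 64.0 * 1024.0 * 1024.0; /* 64 MB of on-card RAM   */
    const double vertex_size = 12.0;                    /* 3 floats, w not stored */
    const double tris_per_s  = 15000000.0;              /* ~15 Mtris/sec          */
    const double fps         = 25.0;

    printf("vertices storable on card: %.0f\n", mem_bytes / vertex_size);     /* ~5,592,405 */
    printf("triangles per frame at %.0f fps: %.0f\n", fps, tris_per_s / fps); /* 600,000    */
    return 0;
}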

Korval
04-07-2001, 09:03 AM
I'm pretty sure the AGP bus can handle 15-20 million polygons per second. A vertex (and, with good stripping, a polygon) using one texture coordinate, one normal, and one color takes up 36 bytes. For 20M PPS, that is approximately 720 MB per second. I'm fairly sure an AGP 4X bus can handle that.
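
(A quick back-of-the-envelope check of that claim -- my own sketch, not Korval's; the ~1 GB/s AGP 4X peak figure is an assumption, not something quoted in the thread.)

/* Does AGP 4X have room for 20M vertices/sec at 36 bytes each? */
#include <stdio.h>

int main(void)
{
    const double bytes_per_vertex = 36.0;           /* pos + normal + color + texcoord */
    const double verts_per_sec    = 20e6;
    const double agp4x_peak       = 266e6 * 4.0;    /* 266 MT/s x 4 bytes ~= 1064 MB/s */

    const double needed = bytes_per_vertex * verts_per_sec;  /* ~720 MB/s */
    printf("needed: %.0f MB/s, AGP 4X peak: %.0f MB/s\n", needed / 1e6, agp4x_peak / 1e6);
    return 0;
}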

You are right, though. We are coming up on the end of improving sheer polygon throughput, thanks to the architecture of the PC. But, even so, the performance spec on nVidia's web site doesn't fill me with hope of seeing code running with a relatively large vertex program and lots of register combiners breaking 10M PPS.

As I said, "Admittedly, if all these nifty effects turn you on, feel free to pay $500 for the benefit of using them. I, on the other hand, will stick to my GeForce 2."

mcraighead
04-07-2001, 09:27 AM
Well, the GF3 does outperform the GF2 (including GF2 Ultra) at raw triangle rate. A GF2 Ultra can only set up 31 million triangles per second. A GF3 can set up 40 million triangles per second.

This is, of course, a peak setup rate. Real triangle rates depend on pushing/pulling vertices and indices to the HW quickly, vertex rates, vertex reuse (ranges from 3 vertices per triangle to 2 triangles per vertex), primitive lengths, etc. This also doesn't count back-end bottlenecks (fill and memory).

I have written a program that actually _does_ get 40 million triangles per second on a GF3, and it doesn't even use VAR! It used CVAs, in fact.

However, I cheated. I was using CullFace(FRONT_AND_BACK), i.e., discarding all triangles rather than even bothering to rasterize them. http://www.opengl.org/discussion_boards/ubb/smile.gif No pixels rendered. It was just to make sure we weren't botching something and that the peak rate was actually achievable in a real GL app, though, so my cheating was excusable.
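
(A minimal sketch of how such a setup-rate test might look -- Matt's actual program isn't posted, so the shape below is only a guess. It assumes a GL context is current and that the EXT_compiled_vertex_array entry points have already been fetched, e.g. via wglGetProcAddress.)

#include <GL/gl.h>

#define VERT_COUNT 30000

/* EXT_compiled_vertex_array entry points, assumed fetched elsewhere. */
extern void (*glLockArraysEXT)(GLint first, GLsizei count);
extern void (*glUnlockArraysEXT)(void);

static GLfloat verts[VERT_COUNT][3];   /* any data will do; nothing is ever drawn */

void setup_rate_pass(void)
{
    /* The "cheat": cull both faces, so every triangle goes through setup
       and is then thrown away before rasterization -- no pixels written. */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT_AND_BACK);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glLockArraysEXT(0, VERT_COUNT);      /* compiled vertex arrays (CVAs) */

    /* Time a large number of these calls, then divide triangles by seconds. */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, VERT_COUNT);

    glUnlockArraysEXT();
    glDisableClientState(GL_VERTEX_ARRAY);
}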

- Matt

Zeno
04-07-2001, 11:17 AM
I don't want to piss everyone off, but let me play devil's advocate here for a minute.

TheGecko: People always say they can get an xxx chip + yyy motherboard for $500, implying that this card costs too much. What is the transistor count of the Athlon compared to the GF3? How much 3.8ns DDR memory does your CPU+MB purchase include? What is the bandwidth of that motherboard? Do you see what you're getting for your $500? Maybe what you mean is "It's not worth it to me... I don't play games or program graphics as often as I run SETI@home, so processor speed is more important to me".

kaber0111: The card will not be 600 something. They are currently listed on ebworld.com for $530, and if you head over to rivastation.com, there is news that Elsa will be releasing their card at $400.

Korval: Why is polygon throughput so much more important to you than how it looks? Take a look at the chameleon demo that nVidia has. Not many polygons, but LOTS of effects. Personally, I think it looks much better than a super-high-poly chameleon with Gouraud shading would.

Matt: Don't you think it's a bit misleading to tell people that a card can "set up" 40 million triangles per second if you're not actually rendering them afterwards? How often do people do this in "real" OpenGL apps?


Now, here's my question: what the heck is up with all the release delays, and why doesn't nVidia ever SAY anything about it? When will it REALLY be available? I have heard end of April, early May, and mid-May. Are you guys not actually able to make these chips? Heh, I guess there's always that one on eBay for $1500.

Cheers,
-- Zeno

TheGecko
04-07-2001, 11:23 AM
Zeno: You missed my point about the motherboard + processor thing. I was merely stating that for $500 I can get either a graphics card with features that no games in the next year will implement (i.e. it would be kind of useless), or I could spend my money on something else that would do me more good. A 1.2GHz T-Bird + motherboard will do me more good in the short term (one year) than a GeForce3 card that wouldn't be taken full advantage of until a year later. And when that year DOES come, nVidia will have created the GeForce4! See what I'm getting at?

[This message has been edited by TheGecko (edited 04-07-2001).]

Zeno
04-07-2001, 11:52 AM
Gecko -

I agree. I think there are basically 3 reasons you might want to buy the card:

1. You want to start programming the thing
2. You want to play games with anti-aliasing enabled.
3. You have a crappy old video card and want to upgrade to something a bit future-proof.

(I actually fit all three categories http://www.opengl.org/discussion_boards/ubb/smile.gif )

If you've already got a decent card (GF2 line or Radeon), you can already play today's games just fine. Buying a GF3 would probably be a waste of money since, as you say, the GF4 or whatever may be out by the time games really start needing pixel/vertex shaders.

-- Zeno

TheGecko
04-07-2001, 12:02 PM
Yeah, that's exactly what I was thinking. I do have a GeForce2 GTS and it still serves me well.

Now, about the 3 points you listed, they're not reason enough for me to throw away my GeForce2 and buy a GeForce3 for $500.

What I'm prolly saying is that hopefully nVidia and the card manufacturers will drop the price DRASTICALLY, to about $350 (that's as much as I'm willing to pay for a VIDEO card).

But OH, to program a GeForce3 would be a big dream come true! It would turn out to be a pretty expensive dream, though http://www.opengl.org/discussion_boards/ubb/wink.gif

Anyway, as with all nVidia cards, I tend to stay away from the first generation of cards that implement a big new feature (the GeForce3 in this case, since it's the first card to implement HW shaders; I stayed away from the GeForce256 and got the second-generation GeForce, the GeForce2). I find it best to do it this way since I regard the first generation of cards as test subjects, and nVidia always does a great job with their second generation. This prolly means I'll most likely wait till the GeForce4 comes out. That will most likely be my next video card http://www.opengl.org/discussion_boards/ubb/smile.gif (Unless nVidia starts coming out with HW-accelerated holographic projector cards!)

mcraighead
04-07-2001, 04:18 PM
It's not at all misleading to talk about setup rates. Triangle setup rate is a technical term referring to the speed of a specific graphics pipeline stage. Other pipeline stages have speeds too, but those speeds are measured in different units. The vertex unit can process X million vertices per second, where X is a function of the transform/light state. The rasterizer can output X pixels per second (X usually a constant). The pixel pipelines can output X million fully-shaded pixels per second (dependent on many factors). And the memory interface can handle X gigabytes of data per second.

Performance numbers are performance numbers. That performance number is a true performance number. Saying that the GF3 can set up 40M triangles/sec is just like claiming that a PC2100 DDR memory system has 2.1 GB/s of memory bandwidth, or that a 32-bit, 33 MHz PCI bus has 133 MB/s of bandwidth, or that an 800 MHz CPU with 4 functional units could do as many as 3.2 billion operations per second.
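
(The arithmetic behind those peak figures, spelled out -- my own sketch; the bus widths and transfer rates are the usual published ones, not something from the thread.)

/* Peak = width x rate, for each of the examples above. */
#include <stdio.h>

int main(void)
{
    const double pc2100 = 133e6 * 2 * 8;  /* 133 MHz x 2 (DDR) x 8 bytes   ~= 2.1 GB/s */
    const double pci    = 33.33e6 * 4;    /* 33.33 MHz x 4 bytes           ~= 133 MB/s */
    const double cpu    = 800e6 * 4;      /* 800 MHz x 4 functional units   = 3.2 Gops */

    printf("PC2100: %.1f GB/s  PCI: %.0f MB/s  CPU: %.1f Gops/s\n",
           pc2100 / 1e9, pci / 1e6, cpu / 1e9);
    return 0;
}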

- Matt

Adrian
04-07-2001, 05:12 PM
Gecko, ELSA have cut the price of their GF3 from $550 to $400.
http://www.rivastation.com/index_e.htm

TheGecko
04-07-2001, 05:58 PM
But I want a Creative Labs GeForce3 card http://www.opengl.org/discussion_boards/ubb/frown.gif

[This message has been edited by TheGecko (edited 04-07-2001).]

zed
04-07-2001, 06:01 PM
40 million tris!!
i thought i read the xbox can do 125 million tris a second. what's up?

mcraighead
04-07-2001, 07:51 PM
The XBox chip and the GF3 are not the same chip.

- Matt

royconejo
04-07-2001, 08:14 PM
I know that right now there aren't games that take advantage of the new features, but it's also true that the GeForce3 is paradise for a game developer, or for an OpenGL programmer... I mean, we're talking about features implemented in hardware that would have sounded like science fiction some time ago! ...well, on the PC at least, and certainly at $500 http://www.opengl.org/discussion_boards/ubb/wink.gif
That's my point of view..


- Royconejo.

royconejo
04-07-2001, 08:39 PM
Originally posted by mcraighead:
Performance numbers are performance numbers. That performance number is a true performance number

You're starting to talk like my maths teacher.. http://www.opengl.org/discussion_boards/ubb/smile.gif

Seriously speaking, I'm getting 15M dynamic, lit and textured triangles (GL_TRIANGLES, not triangle strips) on my GeForce2 MX, the same number I got with the 'VAR-Fence' nVidia demo... so maybe that 'maximum theoretical number' could be reached by a real app.


- Royconejo.

zed
04-08-2001, 01:12 AM
not the same chip http://www.opengl.org/discussion_boards/ubb/smile.gif obviously
let me get the maths straight http://www.opengl.org/discussion_boards/ubb/smile.gif
geforce3: 40 mil polygons, $500
xbox: 125 mil polygons, $400
i luv it http://www.opengl.org/discussion_boards/ubb/smile.gif
yes, i realise MS are gonna sell the xbox at a big loss per unit, but still, you've gotta laugh http://www.opengl.org/discussion_boards/ubb/smile.gif