Poor Performance of NVidia Cards

http://www.tomshardware.com/business/20030911/index.html

Any Comments?

EDIT: NVidia’s response, courtesy of Madman. Hopefully the new drivers will fix the performance issues. http://www.gamersdepot.com/hardware/video_cards/ati_vs_nvidia/dx9_desktop/HL2_benchmarks/003.htm

[This message has been edited by mmshls (edited 09-15-2003).]

It’s all in the shaders – and we already knew that the GeforceFX had problems there, compared to ATI. IMHO, these results come as no surprise.

Except that NVIDIA PR have done whatever they could to “demonstrate” that shader performance problems were due to the benchmarks, not their hardware. Now they have their real-world, shipping DX9 game.

somehow i don’t think the thread name you chose is gonna help convince staunch nvidiots…

Originally posted by mattc:
somehow i don’t think the thread name you chose is gonna help convince staunch nvidiots…

What’s wrong with my thread name? If it doesn’t help, that says something about nvidiots.

It was always embarrassing to watch the IHVs’ dirty fights, tricks, accusations, lack of respect for competitors, etc. Now that an ISV is joining these politics, the embarrassment reaches unprecedented levels.

>> Valve stated that the development of that special code path took 5 times the development time of the standard DX9 code path.

What’s the point of such an announcement? Is this a developer conference? Stinks like FUD.

>> Special optimizations for ATI cards were not necessary.

because they optimized for ATI from the beginning?

>> Valve was able to heavily increase the performance of the NVIDIA cards with the optimized path

Isn’t it his friggin’ job to do so? What’s the reason for inventing a special “Mixed Mode” label for this? This conference? FUD as hell.

Anyway, it will be interesting to see the shader code when game ships.

>> but Valve warns that such optimizations won’t be possible in future titles, because future shaders will be more complex and will thus need full 32-bit precision.

And what’s the reason for such an announcement there, apart from FUD? How on earth does this “warning” relate to Half-Life 2?

>> 32-bit precision

Oops, a typo? He meant 24-bit, of course? Some people should be more careful, as this would (if true) dismiss ATI hardware for future games… A careful reader will notice a table on the previous page saying that ATI does not support 32-bit precision, but the FUD effect remains for most people, I guess.

>> Newell strictly dismissed rumors that his comments about the disappointing performance of NVIDIA cards were based on a deal between Valve and ATi

Yes, I believe in your honesty fully. The whole conference was related purely to HL2. And the conference would still have been held even if there had been no deal with ATI at all. In PR we trust.

Valve “warns”, Valve whines about “5 times the development time”; it is all bent in one obvious direction. Have we ever experienced such an attitude towards any hardware vendor from John Carmack? You can take him as a reference point to judge Valve’s impartiality.

I’m truly disgusted. Valve, stick your FUD in your arse.

Well, if that’s true then NVIDIA sucks, but FX5900 == Ti4600 is simply ridiculous (can you believe it?). Moreover, the 16-bit mode isn’t showing any kind of real improvement, which is strange. Something like ATI & Valve echoing each other somewhere, and NVIDIA & id Software doing the same. I’d like to see numbers on Matrox… For a quick comparison, BTW, fixed clip planes & so on, I guess that’s aimed at NVidia, but then they are so HELLLLOOOOOWA fast to write cheats for a yet-unreleased benchmark; I’d like to learn that too.
Well, the FPS world is in a mess, but I can’t see that bad performance on my FX5200 at home. We’ll see.

Crap, GPU world has gone terribly wrong.

“Valve stated that the development of that special code path took 5 times the development time of the standard DX9 code path”

How much time do you need to write a shader if you know the algorithm already? 10 minutes? And 50 minutes to add prefixes for register precision? It’s crazy!

And the FX5600 in mixed mode is slower than in full precision??? I thought I liked Valve…

Originally posted by Zengar:
[b] “Valve stated that the development of that special code path took 5 times the development time of the standard DX9 code path”

How much time do you need to write a shader if you know the algorithm already? 10 minutes? And 50 minutes to add prefixes for register precision? It’s crazy!

[/b]

uhm, to optimize for speed, you have to play around with all possible ways the shader could get rewritten, and check where you can go to what precision, etc. they want highest quality at highest performance, that takes some time, yes…
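to make it concrete: the “prefixes for register precision” zengar means are the _pp instruction modifier in dx9 asm (half instead of float in hlsl/cg). a made-up minimal sketch, not from hl2, just two versions of the same trivial shader as c++ string constants; the slow part is deciding where fp16 is safe, not typing the suffix:

[code]
// Two versions of the same (made-up) modulate-by-constant pixel shader.
// On NV3x the _pp hint lets instructions run at FP16 instead of FP32;
// on R3xx both strings end up on the same FP24 hardware path anyway.
const char* psFullPrecision =
    "ps_2_0\n"
    "dcl t0.xy\n"            // texture coordinate
    "dcl_2d s0\n"            // diffuse texture sampler
    "texld r0, t0, s0\n"     // full-precision fetch
    "mul r0, r0, c0\n"       // full-precision modulate
    "mov oC0, r0\n";

const char* psPartialPrecision =
    "ps_2_0\n"
    "dcl t0.xy\n"
    "dcl_2d s0\n"
    "texld_pp r0, t0, s0\n"  // FP16 is plenty for a colour fetch
    "mul_pp r0, r0, c0\n"    // the modulate survives FP16 fine
    "mov oC0, r0\n";
[/code]

the typing really is ten minutes; proving that image quality doesn’t break anywhere across all the shader and asset combinations is what eats the time.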

oh, and MADMAN… haven’t i told you?

gpu world didn’t go wrong. only nvidia, who based their future development effort on marketing campaigns and cheat-coders instead of good hw designers and driver developers…

well, mostly. of course there ARE good men at nvidia, too… but i guess they’ve been on holiday for the last few years…

Zengar, you need to implement your system to handle multiple code paths, and perhaps spend a lot of time figuring out where you can afford to make your optimizations without losing quality & possibly alter game assets to support it. On the face of it, it sounds simple, but it depends on your starting point and the complexities it introduces to your rendering system + how many effects & custom shaders you have and how programmable they are at the asset level. Then there’s the added complexity and what that means for cross-platform testing & debugging. Not everyone has a monolithic shader codepath they can just swap out.
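To make the multiple-code-path point concrete, here is a hypothetical sketch of just the path-selection step on the GL side. The names and fallback order are invented (this is not Valve’s or id’s code), and a real engine would also key off measured performance and driver quirks, not just extension strings:

[code]
#include <GL/gl.h>
#include <string>

enum RenderPath { PATH_ARB_FP, PATH_NV_RC, PATH_FIXED_FUNCTION };

// Hypothetical startup check: prefer the generic ARB fragment-program path,
// fall back to register combiners, then to fixed function.
RenderPath ChooseRenderPath()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    std::string extensions = ext ? ext : "";

    if (extensions.find("GL_ARB_fragment_program") != std::string::npos)
        return PATH_ARB_FP;        // R3xx and NV3x expose this, but NV3x may still be slow on it
    if (extensions.find("GL_NV_register_combiners") != std::string::npos)
        return PATH_NV_RC;         // GF3/GF4-class fallback
    return PATH_FIXED_FUNCTION;
}
[/code]

And that is only the selection; every material, effect and asset then has to produce acceptable output on every path, which is where the testing and debugging cost comes from.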

It is interesting watching the nvidiots trying to shoot the messenger. Yes this was an ATI event, but I think Valve has a bit more integrity than to be unduly influenced by that. Do they have to attend and give this talk? Hell no, and they would certainly be free to present their version of events rather than something slanted in ATI’s favor. This is a contribution to public knowledge disseminating their experience, take it or leave it, but don’t blame them. Are developers supposed to avoid discussing this kind of thing because nvidiots get upset?

Other high profile developers have corroborated some of Valve’s comments and Valve presented real data, not vague opinion. You could easily make your own measurements with a couple of graphics cards.

Originally posted by FSAAron:
[b]>> Special optimizations for ATI cards were not necessary.

because they optimized for ATI from the beginning?[/b]

If you’re so confident that there was optimization for ATI, would you be so kind as to list the possible optimizations that may have been implemented?

[b]>> but Valve warns that such optimizations won’t be possible in future titles, because future shaders will be more complex and will thus need full 32-bit precision.

And what’s the reason for such an announcement there, apart from FUD? How on earth does this “warning” relate to Half-Life 2?

>> 32-bit precision

Oops, a typo? He meant 24-bit, of course? Some people should be more careful, as this would (if true) dismiss ATI hardware for future games… A careful reader will notice a table on the previous page saying that ATI does not support 32-bit precision, but the FUD effect remains for most people, I guess.[/b]

Now this is the problem when you read Tom’s Hardware. No mention of 32-bit precision was ever actually made. If you look at the presentation slides you’ll see this:

“. . . new DX9 functionality will be able to use fewer and fewer partial precision functions”

ATI has only one precision, and apparently it is sufficient. Full precision is simply whatever the card natively runs at, and the PS 2.0 specification requires that to be at least 24-bit.

[b]>> Newell strictly dismissed rumors that his comments about the disappointing performance of NVIDIA cards were based on a deal between Valve and ATi

Yes, I believe in your honesty fully. The whole conference was related purely to HL2. And the conference would still have been held even if there had been no deal with ATI at all. In PR we trust.

Valve “warns”, Valve whines about “5 times the development time”; it is all bent in one obvious direction. Have we ever experienced such an attitude towards any hardware vendor from John Carmack? You can take him as a reference point to judge Valve’s impartiality.

I’m truly disgusted. Valve, stick your FUD in your arse.[/b]

Heh… so how about the Doom 3 benchmark? Is that automatically valid because NVidia won? In any case, do these results not simply confirm -everything- we’ve seen, read, and heard regarding NVidia’s pixel shader performance? 3DMark03, Tomb Raider: AoD, ShaderMark, RightMark3D, etc… All have pointed to the same deficiencies. Are they all wrong?

[This message has been edited by Ostsol (edited 09-11-2003).]

Originally posted by Ostsol:
Are they all wrong?

sure, or not? and the humus demo running at 50 fps on a one-year-old card runs on a gfFX 5200 at 3-4 fps. right/wrong? definitely WRONG. and that’s why it was a big topic in here, wasn’t it?

some people will never learn

Nvidiots? Atidiots?

Bah.

Maybe if the original poster would insert some code, we can see who is better? heh.

Now where is the moderator hiding? :P

>> Heh… so how about the Doom 3 benchmark? Is that automatically valid because NVidia won?

Well, if the Doom III benchmark results had been presented at an nVidia PR event, with Carmack participating in a speech tailored to explicitly show how ATI sucks, blaming ATI for the necessity of multiple code paths, inventing special ™ names for ATI paths to show them on charts, “warning” about future games unrelated to Doom 3, and having a bundle deal with nVidia - then you would have reason to question Carmack’s credibility. But you don’t have any. Don’t compare him to Valve.

FYI, I personally believe NV won because D3 uses OGL and because Carmack has the will to fully optimise for a hardware architecture whether he likes its design or not. The latter is something some coding fanboys keep refusing to understand.

Originally posted by FSAAron:
[b]
>> Heh… so how about the Doom 3 benchmark? Is that automatically valid because NVidia won?

Well, if the Doom III benchmark results had been presented at an nVidia PR event, with Carmack participating in a speech tailored to explicitly show how ATI sucks, blaming ATI for the necessity of multiple code paths, inventing special ™ names for ATI paths to show them on charts, “warning” about future games unrelated to Doom 3, and having a bundle deal with nVidia - then you would have reason to question Carmack’s credibility. But you don’t have any. Don’t compare him to Valve.

FYI, I personally believe NV won because D3 uses OGL and because Carmack has the will to fully optimise for a hardware architecture whether he likes its design or not. The latter is something some coding fanboys keep refusing to understand.[/b]

LMAO!! You are absolutely hilarious!

Hmm… which video card runs absolutely fine and with great performance using Carmack’s ARB2 path? Which video card -must- utilize FX12 and FP16 registers in order to achieve decent performance? Is the ARB2 path really a vendor-specific path? Currently it is, but only because it is being used by default for only one vendor. The GeforceFX could certainly run the game using it, but that would not produce practical framerates. The truth of the matter is that if NVidia’s floating-point performance were on par with ATI’s, it would also be using the ARB2 path.

Also, consider that there is only one ATI-specific path in Doom 3, but two for NVidia. Based on these numbers, which vendor is more the cause of the game having so many render paths?

Back to Half-Life 2: since you did not specify any ATI-specific optimizations you thought Half-Life 2 might have… I can see only two possible ways in which Valve may have slanted the game towards ATI. The first is that they used the Radeon 9800 Pro as the standard for what full detail should perform at (based on a predetermined feature set): floating-point pixel shaders, textures, etc., all on at a particular detail level (presumably maximum) and achieving 60 fps. Either that, or Valve intentionally chose to use features that the GeforceFX is known to be weak in.

The only ATI optimization they could do would be to make use of co-issue, and that’s not really an ATI-specific optimization (since, AFAIK, the NV3x supports co-issue instructions too). They can’t use another precision, since ATI only has one, and ATI doesn’t have any special hacks/extensions to D3D/OpenGL that would produce any greater speed increase…
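For anyone unfamiliar with co-issue: hardware with split RGB and alpha ALUs can execute a 3-component colour op and a scalar alpha op in the same cycle when the writemasks don’t overlap. A hand-written illustration, not from any real shader; if I remember the asm rules right, the explicit ‘+’ pairing syntax only exists in ps_1_x, while for 2.0-level shaders the compiler/driver is left to pair disjoint-mask instructions on its own:

[code]
// Illustration only: an RGB instruction and an alpha instruction with
// disjoint writemasks, which split-ALU hardware can run in one cycle.
const char* psCoIssue =
    "ps_1_1\n"
    "tex t0\n"               // sample the diffuse texture
    "mul r0.rgb, t0, v0\n"   // RGB pipe: modulate by vertex colour
    "+mul r0.a, t0, v0\n";   // alpha pipe: co-issued alpha multiply
[/code]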

it’s a standard dx9 game. so it’s an “ati optimized game”, as ati simply rocks at standard dx9 tasks

same is true for standard arb gl1.4/1.5 apps with standard arb extensions.

problem is with those “ati optimized games”… they will still run well in years, while ati could be dead by then. as long as gl / dx survives.

the same cannot be said for all the proprietary rc/ts/nvfp/nvvp/nvtex coded paths… they will die out with nvidia. after that, there is no support for them. proprietary ****.

i prefer to “optimize for ati”, as at the same time it means “optimize for dx9 or arb gl” and thus “optimize for a safe future”

Valve’s results match the results I get with my application when using ARB_fragment_program for floating-point shaders. A Radeon 9700 Pro is double or more the speed of an NV35. This is on an application that was originally developed on GeForce4 Ti 4200 hardware using NV_register_combiners and then upgraded with ARB_fragment_program.
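For reference, the ARB_fragment_program side of such an upgrade looks roughly like this (the shader is a trivial made-up modulate, not my application’s code, and error handling is omitted):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   // enums and PFN... typedefs for ARB_fragment_program
#include <cstring>

// In a real app these pointers are fetched once at startup with
// wglGetProcAddress / glXGetProcAddress after checking the extension string.
static PFNGLGENPROGRAMSARBPROC   pglGenProgramsARB   = 0;
static PFNGLBINDPROGRAMARBPROC   pglBindProgramARB   = 0;
static PFNGLPROGRAMSTRINGARBPROC pglProgramStringARB = 0;

// Trivial made-up program: modulate a texture by a local parameter.
static const char* kFragProg =
    "!!ARBfp1.0\n"
    "TEMP col;\n"
    "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, col, program.local[0];\n"
    "END\n";

GLuint LoadFragmentProgram()
{
    GLuint prog = 0;
    pglGenProgramsARB(1, &prog);
    pglBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    pglProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                        (GLsizei)std::strlen(kFragProg), kFragProg);
    // Real code checks glGetError() and GL_PROGRAM_ERROR_STRING_ARB here.
    return prog;
}
[/code]

The point is that the shading work moves into small text programs like the one above; on R3xx that generic path is already fast, while on NV3x you are pushed back towards vendor-specific precision tricks to get comparable speed.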

I’ve no reason to doubt any of Valve’s statements as they match my own experience. Those who do doubt Valve, please state your experience and how it differs rather than making accusations with nothing to back them up.

I cannot help but feel that this forum has gone downhill significantly when people start referring to others as ‘nvidiots’.

Such name calling is pretty useless.

The title of this post, ‘Poor Performance of NVidia Cards’, is poorly chosen. It should be more like: “Valve Reports Poor Performance of NV3X Cards Compared to ATI 9XXX Cards in Half-Life 2.” But that would imply a much more limited failing of nVidia and not get as much attention, now would it?

I found Valve’s results to be startling, with the ATI card being 100 percent better. Nothing I’ve seen so far ever suggested that ATI’s cards were more than 20 or 25 percent better. This is extraordinary, but everyone should take it with a grain of salt, because extraordinary claims require extraordinary proof.

I cannot help but feel that Valve is whining. This is the second time they have made a big deal about something in DX9. Is it fair for me to feel this way, or is Valve just standing up as a developer and saying they aren’t going to take crap from Microsoft or the IHVs anymore? That is, they want things to be easier because it really is getting too expensive and time-consuming to develop game software (not because they are lazy).