
View Full Version : Thoughts on the NV35?



Lurking
05-12-2003, 12:58 PM
what do you think?

Humus
05-12-2003, 02:50 PM
9 months later, nVidia is finally on par. I wonder how long it will hold.

JD
05-12-2003, 04:18 PM
Anand, I think, mentioned something about Ati releasing a faster 9800 soonish. The Digit-life news section mentions the 9800 256 meg model being a limited production run. Nv35 does well so far. I'll have to check the [H] review for shadermark scores, as last time the gffx was up to 6x slower in them compared to the r300. My only problem is Ati's lack of gl support. It seems like ati is going with d3d and gl might suffer. Nvidia, on the other hand, favors gl much more than d3d, understandably so, as they've said that nv35 is built to run doom3 (super shadows) and it shows.

Korval
05-12-2003, 04:57 PM
My only problem is Ati's lack of gl support. Seems like ati is going with d3d and gl might suffer.

First of all, if it weren't for ATI, you'd have no choice but to use NV_fragment_program/vertex_program_2, and glslang would, almost certainly, never be well-supported by nVidia (compared to proprietary nVidia program extensions coupled with Cg), and therefore, by the gaming graphics community at large. ATI pushes for ARB extensions, while nVidia is perfectly willing to go for proprietary ones.

Secondly, it's about time nVidia actually gave ATI some real competition http://www.opengl.org/discussion_boards/ubb/wink.gif

Thirdly, I'm not sure that it matters terribly much that the $500 card from nVidia beats the $500 card from ATI. The $200 cards will sell more, and the $100 or less cards will sell even more than that. ATI has the edge in the mid-range market, with their 9500Pro (get'em fast before they're gone. The best $175 you can spend on a graphics card) and 9600Pro. nVidia's 5200 FX, at the very least, offers DX9 support, which ATI's 9200 does not. Sure, Doom3 will kill both of these cards put together without batting an eyelash, but you get what you pay for http://www.opengl.org/discussion_boards/ubb/wink.gif

[This message has been edited by Korval (edited 05-12-2003).]

titan
05-12-2003, 07:41 PM
Originally posted by Lurking:
what do you think?

I think we need moderators if the advanced OpenGL discussion is talking about how a $500 product from company N, which 99% of the market will ignore, is better/worse than a similar product from company A.

It's a nice card. Congratulations nVidia.

More importantly, thank you for the 5200! Making a sub-$100 card that runs fragment/vertex programs is awesome. HUGE congratulations on that!

According to Anandtech, in Doom3 at 1024x768 the 5200 scores 37fps. Quite playable. Switch to 1280x1024, enable 4x AA/8x anisotropic filtering, and its performance drops to 9fps, while the brand new 5900 drops to 38fps.

So you can spend $80 on a 5200 or $499 on a 5900 and get 35-40fps. That extra $400 gets you the same performance as the 5200, but you get anti aliasing and anisotropic filtering. yay.

I can't justify spending $400 for better filtering. I think I would get more enjoyment out of $400 by getting a GameCube, Mario Party, a case of beer, and having some friends over. Hell $400 will get you a weekend holiday in Mexico.

Sorry, I'm ranting. It's nice. I'd love to have one. I never will though.

JONSKI
05-12-2003, 08:18 PM
Originally posted by Korval (aka ATi fanboy):
First of all, if it weren't for ATI, you'd have no choice but to use NV_fragment_program/vertex_program_2, and glslang would, almost certainly, never be well-supported by nVidia (compared to proprietary nVidia program extensions coupled with Cg), and therefore, by the gaming graphics community at large. ATI pushes for ARB extensions, while nVidia is perfectly willing to go for proprietary ones.
Are you on the ARB, or are you talking out of your *ss?


Secondly, it's about time nVidia actually gave ATI some real competition http://www.opengl.org/discussion_boards/ubb/wink.gif
They took a year off after pounding them for three years in a row. Despite ATi's recent success, their stock still stagnates.


Thirdly, I'm not sure that it matters terribly much that the $500 card from nVidia beats the $500 card from ATI.
Bragging rights are the best PR you can get.


The $200 cards will sell more, and the $100 or less cards will sell even more than that. ATI has the edge in the mid-range market, with their 9500Pro (get'em fast before they're gone. The best $175 you can spend on a graphics card) and 9600Pro. nVidia's 5200 FX, at the very least, offers DX9 support, which ATI's 9200 does not. Sure, Doom3 will kill both of these cards put together without batting an eyelash, but you get what you pay for http://www.opengl.org/discussion_boards/ubb/wink.gif
I agree.

davepermen
05-12-2003, 09:40 PM
i think its at least a good card by nvidia, something not seen over a long long time. good job nvidia.

now i want a .13 high end card by ati http://www.opengl.org/discussion_boards/ubb/biggrin.gif

JackM
05-12-2003, 09:43 PM
9 months later, nVidia is finally on par. I wonder how long it will hold.

On par in terms of what?

Features? NV30 had similar feature set.

Performance? The 5900 is faster, sometimes significantly (Doom 3).

Driver support? Now that's where ATI is not up to par, as my experience with Radeon 8500 shows.

My opinion on 5900? Totally overpriced for what it is. For development, the best card for the money is FX5200, and for gaming it is Radeon 9500 Pro (if you can find one).



[This message has been edited by JackM (edited 05-12-2003).]

m2
05-12-2003, 11:06 PM
I agree with JackM. Even if ATI has a better card (when you sit down for a couple of days with both of them, you have to admit that ATI's product is better overall), NVIDIA has better drivers. ATI needs to bore this into their skulls: they have to match NVIDIA's driver offering, point for point. Even if NVIDIA's solution is not perfect, it's quite acceptable. ATI's, at the present time, is laughable at best. Until they fix that, ATI poses no relevant competition to NVIDIA. Get this: if someone comes to me asking for advice, I'll recommend they buy NVIDIA hardware, even if I think ATI's is better.

I hear the summer might hold a surprise in this department, but summer is too far away, and maybe even too late.

davepermen
05-13-2003, 01:04 AM
i just think it's funny to see everyone still hail and praise the nvidia drivers, after seeing that they needed about half a year to develop a detonatorFX driver that actually works. i have seen tons of people buying FX cards only to see them crash all around, have different graphic errors, and be buggy, slow, and ugly.

hey dudes! they started with the "nv30 rocks" last august and NOW there is a good driver out for them!!

not that they don't build good drivers. but for one, they are not perfect (crashes at home and at work), and second, especially for FX cards, it really took them a very, very long time till they got it working!

during all this time, i had about 3 full crashes while running on the radeon9700pro, and in the first drivers some small image-glitches. quite okay..

both companies now have good drivers. cat3.4 was not ready yet for the benchmarks, that was bad..

i just somehow dislike how nvidia really had a blackout for over 8 months now, with hw (nv30), with drivers (no detFX, no WHQL), and with cg (sorry.. that thing simply doesn't..).

now it looks all quite okay. still, the list of proprietary nv-extensions is too long for me.

Adrian
05-13-2003, 01:58 AM
Originally posted by davepermen:
the list of proprietary nv-extensions is too long for me.

What's wrong with this? ATI feel some of them are so good they adopt them too (e.g. NV_occlusion_query). I wish they would adopt more.

So far every project I have done has been made faster because NV have an extension to help (VAR, point sprites, occlusion query, PDR).

The main reason I use NVidia cards is the extensions.

The fact that NVidia are pioneering these extensions and developers are using them will hopefully get them more widely supported. The functionality they have added has been very useful, to me at least.
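To give an idea of how little code one of these takes, here's a minimal occlusion query sketch against NV_occlusion_query. It's only a sketch: it assumes the entry points have already been fetched via wglGetProcAddress/glXGetProcAddressARB and that a GL context is current, and drawBoundingBox() is a placeholder for your own proxy geometry.

GLuint queryId;
GLuint pixelCount = 0;

glGenOcclusionQueriesNV(1, &queryId);

/* Draw a cheap proxy (a bounding box) with all writes disabled and count
   how many fragments would have passed the depth test. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginOcclusionQueryNV(queryId);
drawBoundingBox();   /* placeholder for your own proxy geometry */
glEndOcclusionQueryNV();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

/* This read blocks; a real app would poll GL_PIXEL_COUNT_AVAILABLE_NV
   and do other work in the meantime. */
glGetOcclusionQueryuivNV(queryId, GL_PIXEL_COUNT_NV, &pixelCount);
if (pixelCount == 0)
{
    /* bounding box not visible: skip the expensive object this frame */
}

glDeleteOcclusionQueriesNV(1, &queryId);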

[This message has been edited by Adrian (edited 05-13-2003).]

Tom Nuydens
05-13-2003, 02:25 AM
Indeed. Most ARB extensions are generalized versions of existing vendor extensions, and I would say that you can't have one without the other. Before there was VBO, there were VAR and VAO. Before there was ARB_vertex_program, there were NV_vertex_program and ATI_vertex_shader. Before there was ARB_fragment_program, there were reg combiners/tex shaders and ATI_fragment_shader. And so on.

Any newly released card will have more features than can be covered by the current set of ARB extensions or core GL functionality. Would you rather have it that new cards didn't expose their new functionality at all until new ARB extensions are ratified that cover it? Both vendors already support the exact same set of ARB extensions (with the exception of ARB_shadow_ambient which is not supported by NVIDIA, and ARB_imaging which is not supported by ATI). If you don't like vendor extensions, don't use 'em.
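Picking a path at runtime doesn't cost much either. A rough sketch of the kind of fallback selection I mean - hasExtension() is just a made-up helper, and the naive strstr() check glosses over prefix collisions:

#include <string.h>
#include <GL/gl.h>

/* Naive extension-string check; a made-up helper, good enough for a sketch. */
static int hasExtension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

enum VertexPath { PATH_VBO, PATH_VAR, PATH_VAO, PATH_PLAIN };

/* Prefer the ARB path, fall back to the vendor paths, then to plain
   vertex arrays, which work everywhere. */
static enum VertexPath chooseVertexPath(void)
{
    if (hasExtension("GL_ARB_vertex_buffer_object")) return PATH_VBO;
    if (hasExtension("GL_NV_vertex_array_range"))    return PATH_VAR;
    if (hasExtension("GL_ATI_vertex_array_object"))  return PATH_VAO;
    return PATH_PLAIN;
}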

-- Tom

[This message has been edited by Tom Nuydens (edited 05-13-2003).]

Nutty
05-13-2003, 03:55 AM
So how's this new UltraShadow enabled then? http://www.opengl.org/discussion_boards/ubb/smile.gif

Nice card, but the new doom3 trailer gets me more excited! http://www.opengl.org/discussion_boards/ubb/smile.gif

Tom Nuydens
05-13-2003, 04:07 AM
Originally posted by Nutty:
So how's this new UltraShadow enabled then? http://www.opengl.org/discussion_boards/ubb/smile.gif

GL_NV_depth_bounds_test. Interestingly, it's in the NV30 emulator but not on NV30 itself: http://www.delphi3d.net/hardware/extsupport.php?extension=GL_NV_depth_bounds_test
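For the curious, the call itself is tiny. A sketch of what using it amounts to, going by the form the feature was later published under (EXT_depth_bounds_test), so treat the exact names here as an assumption; hasExtension() and drawShadowVolumes() are placeholders:

/* Sketch only - names follow the published EXT_depth_bounds_test form. */
if (hasExtension("GL_EXT_depth_bounds_test"))
{
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);

    /* zMin/zMax: the light volume's extent mapped into window depth.
       Fragments whose stored depth falls outside this range are culled,
       which is exactly what you want for stenciled shadow volumes. */
    glDepthBoundsEXT(zMin, zMax);
    drawShadowVolumes();

    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
}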

-- Tom

KuriousOrange
05-13-2003, 04:47 AM
I have to say, I agree with adrian and tom about proprietary extensions. These extensions are the main reason for me sticking with opengl. If I used d3d, then I would have to wait for M$, nvidia, ATI, matrox and whoever else to agree on a new feature to be added to the next release of dx - there's no such thing as an extension in d3d.

davepermen
05-13-2003, 05:01 AM
i'm not talking about vendor specific extensions, but about proprietary extensions. hw near, but not at all nice to use. and often just "bugfixing" some tiny feature.. nvidia even releases extensions to their own extensions, for about every new card... 1,1_1,2,3 etc..

their extension spec is longer than the gl spec (i don't remember which version, i think they beat the opengl1.3 spec) or something like this. currently i can't see clearly, today was just much too much work. can't concentrate brain.....

KuriousOrange
05-13-2003, 05:04 AM
Originally posted by davepermen:
i'm not talking about vendor specific extensions, but about proprietary extensions.

I don't understand the difference - and I'm english! http://www.opengl.org/discussion_boards/ubb/smile.gif
They mean the same thing.

tfpsly
05-13-2003, 07:44 AM
Originally posted by JackM:
Driver support? Now that's where ATI is not up to par, as my experience with Radeon 8500 shows.

True. And that's even worse under Linux, where ATI drivers just sux : verrryyy slow and unstable.

[OT] a new doom3 trailer : http://www.gamershell.com/download_filehell.pl?hellID=2124&mirror=2&a=3137774211260.61&b=init
This is a screener, so don't expect very high quality. Still interesting and shows some new monsters and moves

NitroGL
05-13-2003, 09:15 AM
Originally posted by tfpsly:
And that's even worse under Linux, where ATI drivers just sux : verrryyy slow and unstable.

Huh? They've been VERY stable and VERY fast for me.

Humus
05-13-2003, 11:01 AM
Originally posted by JackM:
On par in terms of what?

Features? NV30 had similar feature set.

Performance? The 5900 is faster, sometimes significantly (Doom 3).

Driver support? Now that's where ATI is not up to par, as my experience with Radeon 8500 shows.

Features. Well, NV30 was on par already, though the D3D drivers lack loads of features, making them useless for developers. NV35 AA still sucks: only 4x MS and not gamma correct.

Performance. NV30 was slow; NV35 seems better. DoomIII performance is very questionable. The game is far from done, and the test was arranged by nVidia, who had access to the build, while ATi did not. Far from being a meaningful test. Also, it seems nVidia's drivers are doing some ****ty stuff. The xbitlabs review highlighted some oddities going on in OpenGL with AF, though not in D3D. AA in OpenGL seems to be nothing but blur, and not real AA. In fact, in some games enabling AA improved performance. http://www.opengl.org/discussion_boards/ubb/eek.gif
Also, while shader performance has improved it still does not seem to match that of a R9800pro. Shader performance is the single most important performance indicator for me as a developer.

Driver support. I keep hearing how badly ATi drivers suck, but seldom do I hear any specifics. Interestingly though, lately I've heard more complaints about NV30 drivers, despite its market penetration being far lower than the 9x00's.

Humus
05-13-2003, 11:05 AM
Originally posted by KuriousOrange:
I don't understand the difference - and I'm english! http://www.opengl.org/discussion_boards/ubb/smile.gif
They mean the same thing.

Being vendor specific means (obviously) that it's specific to a particular vendor. It does not, however, mean anything in terms of ownership or legal rights to implement it. Proprietary, however, means you'll have to license it in order to implement it.

KuriousOrange
05-13-2003, 12:01 PM
Oh, so it's a legal term?
But as developers (non-driver developers) we don't need to be concerned with the legal ownership of an extension - therefore "vendor-specific" and "proprietary" mean the same thing.

Lev
05-13-2003, 01:13 PM
Originally posted by Tom Nuydens:
GL_NV_depth_bounds_test. Interestingly, it's in the NV30 emulator but not on NV30 itself: http://www.delphi3d.net/hardware/extsupport.php?extension=GL_NV_depth_bounds_test

-- Tom

Tom,

do you know if the spec is available somewhere?

Best regards,
-Lev

cass
05-13-2003, 02:13 PM
Originally posted by Lev:
Tom,

do you know if the spec is available somewhere?

Best regards,
-Lev



Hi Lev,

Not yet, but it will be soon. All the relevant information is in the optimized stenciled shadow volumes talk I gave at GDC this year. You can find that ppt on the NVIDIA web site.

Thanks -
Cass

Korval
05-13-2003, 04:30 PM
But as developers (non-driver developers) we don't need to be concerned with the legal ownership of an extension - therefore "vendor-specific" and "proprietary" mean the same thing.

Not really.

If an extension is nVidia proprietary, then someone who wants to implement it has to pay nVidia money (or get nVidia to agree to a non-money settlement). If it isn't proprietary, they don't have to do anything with nVidia to support it. Ergo, it is more likely that someone will support a non-proprietary extension than they will a proprietary one.

cass
05-13-2003, 08:20 PM
Somewhere along the way we got the mindset that proprietary == bad under all circumstances.

If you program for a living, there's a good chance you write (or have written) proprietary software.

The idea is that if you (or a company, or whatever) develop something, you own exclusive rights to that thing. It doesn't mean that licensing that thing must be difficult or expensive, it simply means that it requires a license.

ARB extensions can have proprietary technology in them, but in order for them to be ARB extensions, that technology must be licensed at no cost to all ARB members.

Also, consider that just because documents aren't marked "proprietary" doesn't mean they aren't. Floating point frame buffers are a good example of this.

Companies that develop technology usually want to retain control of it - at least long enough to recoup their R&D costs. Usually, they'd like to make a profit too. This is not inherently evil. http://www.opengl.org/discussion_boards/ubb/smile.gif

Thanks -
Cass

KuriousOrange
05-13-2003, 11:48 PM
Originally posted by Korval:
Not really.

If an extension is nVidia proprietary, then someone who wants to implement it has to pay nVidia money (or get nVidia to agree to a non-money settlement). If it isn't proprietary, they don't have to do anything with nVidia to support it. Ergo, it is more likely that someone will support a non-proprietary extension than they will a proprietary one.

I repeat, to us as non-driver developers (I take it you're using the OpenGL API, not implementing it), there is no difference between proprietary and vendor specific extensions. Hate to be pedantic, but really it's just a legal difference only of concern to the vendors.

tfpsly
05-14-2003, 12:25 AM
Originally posted by NitroGL:
Huh? They've been VERY stable and VERY fast for me.

Really? I'll have to make up my mind again, maybe... I presume you're using the binary drivers from ATI. Maybe they made some good improvements lately. Last time I tried an ATI card with those drivers, it was, ahem, unacceptable for such a costly card. That was on a Radeon 9700 pro.

Lurking
05-14-2003, 05:25 AM
First off, to judge ati's new card based on the new doom3 engine would be wrong. This is due to the fact that the new and current ati drivers didn't work right with doom3 (at least that was my impression from [H]); they only allowed half of the memory to be used. But as I think about it, it surprised me that the drivers were causing issues for the game, as I also understand the doom3 engine is about done (engine wise). I love nvidia drivers and their opengl support. Though lately I have been thinking about shifting to ati because of the NV30 and not knowing how the NV35 would turn out. After seeing the previews I think I'll wait for the new NV35!

V-man
05-14-2003, 06:31 AM
Personally, I'm unsure of what my next card will be or which company will be the preferred choice by the masses.

Bottom line:
Nvidia has a lot more neat extensions than any other company. They obviously spend a lot in R&D. For some reason, they decided a long time ago to put together a good team of GL driver developers. And I think they encouraged ATI to pay attention to GL and create a few extensions of their own.

Too bad ATI is the only one in the game. Matrox could have been something as well.

Tom Nuydens
05-14-2003, 06:32 AM
Originally posted by Humus:
DoomIII performance is very questionable. The game is far from done, and the test was arranged by nVidia who had access to the build, while ATi had not. Far from being a meaningful test.

The performance figures may be irrelevant because of this, but I wouldn't dismiss this story altogether. After all, neither ATI nor NVIDIA ever gets early access to my apps, yet I still expect them to work with every new driver release -- preferably at somewhat normal performance levels.

-- Tom

dorbie
05-14-2003, 06:48 AM
Cass when it comes to graphics interfaces we use to create portable code and abstract hardware differences, proprietary is VERY, VERY bad. Interfaces need to be both open and *the same* on different platforms, as much as they can be. This is at the core of why people use OpenGL in the first place.

And I thought it was just the coolaid at Apple that was spiked, allegedly :-)

brute
05-14-2003, 07:09 AM
I just don't get what the problem is with proprietary extensions. If you want platform independence just dont use them.
As far as i'm concerned i like checking out the latest ideas and extensions from nvidia and seeing how i could use them. I'm not really bothered about my programs not being portable, and if the extension becomes ARB someday then all the better.

NitroGL
05-14-2003, 07:22 AM
Originally posted by tfpsly:
Really? I'll have to make up my mind again, maybe... I presume you're using the binary drivers from ATI. Maybe they made some good improvements lately. Last time I tried an ATI card with those drivers, it was, ahem, unacceptable for such a costly card. That was on a Radeon 9700 pro.

I've been using them since the first binary release (originally for the FireGL 8800, used them on my 8500), and I don't recall any major problems... Only thing I have to complain about is that there aren't any public XFree 4.3 drivers out yet.

KuriousOrange
05-14-2003, 07:41 AM
Originally posted by dorbie:
Cass when it comes to graphics interfaces we use to create portable code and abstract hardware differences, proprietary is VERY, VERY bad. Interfaces need to be both open and *the same* on different platforms, as much as they can be. This is at the core of why people use OpenGL in the first place.

Speak for yourself, not for the rest of us.
Your problem should be with the ARB being slow to act, not individual vendors, who are free to add whatever extensions they want, exposing more features and flexibility throughout the periods between arb releases (and directX releases).
Personally, that's the major reason I stick with OpenGL.

dorbie
05-14-2003, 08:05 AM
Let me explain what I was talking about since you're responding to one extreme interpretation of my statement :-).

I'm saying when there is a chance for convergence of APIs it should be taken. The unfettered early access to innovative hardware features is a good thing and requires proprietary extensions, but the lack of convergence over time is one of the things we need to work against. NIH resistance to the ideas and implementations of others is something we suffer from daily as OpenGL developers.

As for the ARB's lack of motion, it is directly tied to the intransigence of vendors over proprietary extensions.

NVIDIA and ATI could agree on their own de facto API extensions tomorrow; the delay is in THEM reaching agreement. It's a self-fulfilling position to say proprietary extensions are good and open EXT or non-proprietary ones are slow to arrive.

They're as slow to arrive as the people making the argument for proprietary babel make them.

[This message has been edited by dorbie (edited 05-14-2003).]

KuriousOrange
05-14-2003, 09:35 AM
Good point well made.


[This message has been edited by KuriousOrange (edited 05-15-2003).]

Humus
05-14-2003, 05:37 PM
Yay! Time for Quack, part II. http://www.opengl.org/discussion_boards/ubb/rolleyes.gif

NVidia caught cheating: http://www.extremetech.com/article2/0,3973,1086025,00.asp

Pretty much throws all benchmark results out of the window.

JackM
05-14-2003, 06:02 PM
Pretty much throws all benchmark results out of the window.


What benchmarks? Doom3 benchmark was written by ID itself. Do you have any proof about other benchmarks?

Using the same logic, ATI Quake hackery throws all benchmark results out of the window as well. And I'd rather have Nvidia hack the useless (and inefficient ) synthetic benchmark like 3DMark than popular games.



[This message has been edited by JackM (edited 05-14-2003).]

rgpc
05-14-2003, 06:27 PM
Originally posted by dorbie:
proprietary is VERY, VERY bad.

Dx is proprietary isn't it? http://www.opengl.org/discussion_boards/ubb/wink.gif

Humus
05-14-2003, 06:46 PM
Originally posted by JackM:
What benchmarks? Doom3 benchmark was written by ID itself. Do you have any proof about other benchmarks?

Using the same logic, ATI Quake hackery throws all benchmark results out of the window as well. And I'd rather have Nvidia hack the useless (and inefficient ) synthetic benchmark like 3DMark than popular games.

I'd rather have them not cheating at all. Cheat in one benchmark => Trust == 0. Though we don't have any proof about other benchmarks, it's only reasonable to take other benchmarks with a grain of salt. Yes, when ATi cheated, benchmark results in every other OpenGL app should have been questioned, and to some extent were. How could we know that it was only Q3 that was affected? Now with nVidia cheating, how do we know that this cheat is limited to 3dmarks03? We don't.

V-man
05-15-2003, 04:09 AM
Nice article. It's a fun read.

They are suggesting that nvidia's driver is tracking 3DMark03's animation and that they are optimizing by not doing complete depth clears in some areas and also inserting "clipping planes" to reduce fill.

What I said still holds. Don't trust 3DMark03 or any other benchmarks. Run a handful of your most used apps and games and see if you like the overall performance. Or ask these benchmarkers to do it.

The main problem is that those article writers are predictable.
They all use the same ****ing benchmark suite and the same ****ing games.

Most important, is to visit many of these sites and gather opinion.

Thanks again for the link Humus

nystep
05-15-2003, 05:01 AM
Hmm,


I was also impressed by the 88% performance gain between the 43.45 drivers and the 44.03 drivers in the ut2003 flyby, although other games with random views don't see anything like that improvement. I don't know what to think about nVidia and benchmarks.

you can check performance increases reported on this site (french link sorry, but numbers are still numbers after all):
http://www.hardware.fr/articles/462/page4.html

regards,

Robbo
05-15-2003, 05:18 AM
The benchmarking scam as seen on Slashdot is fraudulent in the extreme. 3DMark should produce a longer demo with "average" results for a more random fly-by; otherwise it's just pointless, because both ATI and NVIDIA are demonstrating how good they are at hacking special cases into their drivers rather than how good their hardware/drivers actually are.

dorbie
05-15-2003, 06:06 AM
I'm not convinced that this is a deliberate cheat.

It looks to me like 3DMark has some eyespace rendering that is used to clear the screen, and when these guys went 'off the rails', as they say, it somehow broke the screen clear method.

It's difficult to be sure. Cheating on benchmarks is real low, but I need more than this to convince me. If they preculled the sky dome then yes it's a cheat, I wouldn't have thought they saved much but maybe they did it with the whole scene in some generic way. All those lists of which parts of the scene are never drawn (or whatever) should be in the driver if it's a cheat.


[This message has been edited by dorbie (edited 05-15-2003).]

DopeFish
05-15-2003, 06:14 AM
Hacked in clip planes... it would increase the size of the drivers substantially to do that in every frame of the demo (having fixed positions for them saved in the drivers.. as was suggested in the article).

Also, the driver would need to know when to enable and disable the planes.. this would add considerable bloat to the drivers having to have the info for this for each frame in the benchmark.

Personally I found the article to be a laugh.

dorbie
05-15-2003, 06:19 AM
Dopefish, yeah, the article didn't seem very good, and I don't think hacked clip planes would even be a win, but what do you expect. They find some issue, poke it with a stick and yell "release the hounds". FWIW I don't think driver bloat would be the issue. Identifying the demo based on graphics info has been done before - checks of the display list bounds in an earlier case, AFAIK. The real bloat would be the culling information, but I don't think that would stop anyone. A clip plane seems unlikely, but anything that's a view-specific cheat would bloat the driver with the 'knowledge'.

[This message has been edited by dorbie (edited 05-15-2003).]

Adrian
05-15-2003, 12:22 PM
Originally posted by Nutty:
Nice card, but the new doom3 trailer gets me more excited! http://www.opengl.org/discussion_boards/ubb/smile.gif

You should check out the half life2 trailers then. They are even better imo.

You can get them from here. You'll need bittorrent though. http://aixgaming.com/

roffe
05-15-2003, 12:41 PM
Originally posted by V-man:
The main problem is that those article writers are predictable.
They all use the same ****ing benchmark suite and the same ****ing games.

I get sick and tired of this too. The people at digit-life tested the NV35 with a beta of RightMark3D, though. I don't know how accurate the results are, but at least it's something besides 3dmark, q3, and ut. They even put the shader tests in zips so you can run them yourself.
http://www.digit-life.com/articles2/gffx/5900u.html#p5

Elixer
05-15-2003, 12:54 PM
I wonder, if they had renamed the .exe, would the same thing happen in 3dmark?

It may also be some form of caching that isn't updated correctly. We just don't know, and that web site seems to be a bit pissed off at nvidia since they didn't get the doom 3 bench demo like some other sites did. [H] talks more about this.

All benchmarks are pretty much a joke, and all of them can be 'tweaked'. It is a shame that they all use the same canned tests, and don't try something that doesn't follow the norm.

The best news that I have seen is that the 5200 supports DX9 features, so people can finally drop those damn MX cards & the pre-9500 cards. (Which makes you think that ATI will soon have an answer for that.)

Nutty
05-15-2003, 01:12 PM
You should check out the half life2 trailers then. They are even better imo.

Yes, very nice.. although the lighting looks a lot more static in HL2, but very groovy nonetheless.. Both fantastic games to look forward to, for sure.

This nvidia driver cheating. I take it, this is prolly why Matt left the driver division. He always maintained that application specific "optimizations" were bad. I guess it was either do this hackery, or quit your position.

I have to say I'm rather disapointed in nv for it. If they managed to get the excellent doom3 performance, using id's benchmark demo, then surely the hardware must be capable of doing decently in 3dmark03, without resorting to this.

Even losing at 3dmark03, and claiming its a bad benchmark would've been better IMO.

Nutty

tfpsly
05-15-2003, 11:39 PM
Originally posted by Elixer:
The best news that I have seen is that the 5200 supports DX9 features

Yeah, that seems to be a nice pack of features at a low price. I'm just worried about the performance.

About HL2 vs Doom3:
* Doom3 has nicer dynamic lighting, whereas HL2 lighting seems to be more static (only moving entities have shadow volumes and dynamic lighting).
* Both seem to have good physics engines.
* HL2 AI seems to be at the top (judging from the CS-like video).

knackered
05-16-2003, 12:29 AM
Who actually takes a single benchmark app on face value? Surely people average up a number of benchmarks before making a purchasing decision.

Humus
05-16-2003, 03:36 AM
I'm sure nobody in here makes a decision on 3dmarks alone, or on any other single benchmark either. However, the unfortunate truth is that 3dmark scores sell cards. The average Joe sees a long bar and assumes it means a good card. Many don't read online reviews, but reviews in printed magazines, where the reviews tend to be way shorter and only a couple of benchmarks are published; generally 3dmark plus one other app.

Korval
05-16-2003, 09:30 AM
Humus, people who are willing to spend $500 on a video card are, almost certainly, educated about them. These people do go and read reviews, take into account numerous benchmarks, and compare them to other things out there.

Now, if we were talking about the 5600, that's a different story. Those are more likely to be bought by the frugal gamer who might get suckered into a purchase based on 3DMark scores alone.

JD
05-16-2003, 01:22 PM
I think we all should think carefully about which company our money goes to. I research stuff before I buy. I did this with my other hobby as well and that worked great. For example, the digit-life 5200fx review shows how one company is using very slow components, and unsuspecting buyers probably think that each 5200 is made the same way, which is untrue. It's even worse with the r200. There the memory speeds differ across a single vendor and you never know what you get. You might get 183MHz or 200MHz or 250MHz; you don't know until you open up the package. I read about one guy going into a retail store and having the tech guy open each of ten boxes to see if he could get a decent memory speed; all of them had the slower memory. It's not mentioned on the package either. Take Gainward as well: they don't tell you the clockspeeds of their gffx cards and the packaging is extremely confusing. They have a 5600 ultra and a 5600 ultra xp or fx, I think. You would think right away that having an 'ultra' in the name means faster memory, which is simply not true. Also, many of the 256MB models are slower than the 128MB ones. This was the case for the gf4ti as well. You just have to be careful and read a lot to notice these things. So having seen 5200fx benchies doesn't mean that every 5200 will perform the same, as some also have 64-bit buses and slow memory. Caveat emptor!!!

Elixer
05-16-2003, 03:27 PM
Originally posted by Nutty:
[snip]
This nvidia driver cheating. I take it, this is prolly why Matt left the driver division. He always maintained that application specific "optimizations" were bad. I guess it was either do this hackery, or quit your position.

[snip]

Nutty

Now this is news, I didn't know he left nvidia... or did he just transfer to another part of Nvidia?

Nutty
05-16-2003, 03:47 PM
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009432.html

jwatte
05-16-2003, 04:05 PM
JD: there's the FX5200, and the FX5200 Ultra. Those may be the difference in speed you notice.

cass
05-16-2003, 04:38 PM
Originally posted by Nutty:
I take it, this is prolly why Matt left the driver division. He always maintained that application specific "optimizations" were bad. I guess it was either do this hackery, or quit your position.


Pretty unfounded conjecture there, Nutty. http://www.opengl.org/discussion_boards/ubb/smile.gif

Matt's working in another group because he had been doing OpenGL driver work for several years (while he was in school full time at MIT to boot) and wanted to try something different for a while. Let's not turn it into a soap opera please.

Thanks -
Cass

Humus
05-23-2003, 07:18 AM
For those who doubted, it is now official: NVidia cheated in 3dmarks. No fewer than 8 different cheats have been detected, and the score is 24.1% higher than it is with these cheats disabled.

Info from Futuremark themself: http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf
http://www.futuremark.com/news/?newsarticle=200305/2003052308#200305/2003052308

There is also a 1.9% difference on ATi; something seems fishy about game test 4, though it hasn't been confirmed as a cheat just yet and is being investigated.

The important thing is, shader execution speed is cut in half with these cheats disabled. In other words, floating point shader performance is still crap with NV35, though benchmarks indicate otherwise.

[This message has been edited by Humus (edited 05-23-2003).]

KRONOS
05-23-2003, 08:25 AM
The important thing is, shader execution speed is cut in half with these cheats disabled. In other words, floating point shader performance is still crap with NV35, though benchmarks indicate otherwise.


Yes, that's what you get when you do things with more precision...
Oh well...


[This message has been edited by KRONOS (edited 05-23-2003).]

MZ
05-23-2003, 08:59 AM
from http://www.futuremark.com/news/?newsarticle=200305/2003052308#200305/2003052308
We have just published a patch 330 for 3DMark03 that defeats the detection mechanisms in the drivers

LOL, this means war!
(not a cold war anymore)

dorbie
05-23-2003, 11:50 AM
This is very sad. I guess you can't trust any of them now. Looks like they detect rendered features and disable backbuffer clears, substitute shaders or clip pixel fill in various demos.

It puts all that sanctimonious bull5hit from NVIDIA about the validity of 3DMark03 in perspective.

Zeno
05-23-2003, 12:28 PM
I just wish my apps were popular enough for NVIDIA to rewrite my shaders for me http://192.48.159.181/discussion_boards/ubb/smile.gif.

If this weren't a benchmark, I'd say that's a fair thing to do as long as the shader produces equivalent results.

dorbie
05-23-2003, 12:56 PM
I'd say it's an optimization that should be done for all cards by the software developer in conjunction with typical development practices and the usual developer relations groups at the companies involved..... Oh wait that's what the 3DMark03 beta program was all about. Sigh.

NO, the driver *shouldn't* supplant the shader under any circumstances. If it can analyze the shader and recompile it in a way that would work in a generic way with other equivalent shaders then it should IF the results are visually equivalent. Then it's a generic optimization.

Let's not confuse what's fair game and what isn't in a driver. This was clearly WAY over the line in several ways. They didn't just rewrite the shader to something that wasn't visually equivalent, they eliminated screen clear selectively and clipped hidden overdraw, most if not all of this has nothing to do with a lack of vendor specific optimization in a benchmark, it's just a flat out low down dirty cheat. Consumer fraud in my opinion.

NVIDIA said that 3DMark03 was a misleading benchmark and then went on to make it truly misleading by ensuring they didn't draw what the benchmark asked every other card to draw.

NVIDIA is in good company of course: 3DLabs, ATI and now NVIDIA have joined the ignoble ranks of those who are prepared to cheat on benchmarks. You'd think they'd learn from each other's mistakes, but it looks like the memory of the last scandal fades pretty rapidly.

P.S. ah what the stuff, lie back, relax and enjoy the show, if they weren't this keen to win maybe their cards wouldn't be so cool.


[This message has been edited by dorbie (edited 05-23-2003).]

Zeno
05-23-2003, 03:00 PM
Originally posted by dorbie:
I'd say it's an optimization that should be done for all cards by the software developer in conjunction with typical development practices and the usual developer relations groups at the companies involved..... Oh wait that's what the 3DMark03 beta program was all about. Sigh.

You're right, the shaders in 3DMark03 should have been optimized better by the people who made it. Fact is, though, it looks like the Futuremark people turned out a lot of bad code.

Are you getting on NVIDIA's case for not being in the 3DMark03 beta program? I've heard that costs hundreds of thousands of dollars. I think they could spend that money in much better ways, and it seems like a rather underhanded tactic on the part of Futuremark. It's like saying "Pay us to not advertise against your product". What kind of a business model is that?


NO, the driver *shouldn't* supplant the shader under any circumstances. If it can analyze the shader and recompile it in a way that would work in a generic way with other equivalent shaders then it should IF the results are visually equivalent. Then it's a generic optimization.

So you're okay with compiler-type optimizations but not with people-type optimizations? Why? If both produce equivalent results, it seems to me that either one is fine. People can sometimes do better by optimizing on an algorithmic level instead of an instruction level.



Let's not confuse what's fair game and what isn't in a driver. This was clearly WAY over the line in several ways. They didn't just rewrite the shader to something that wasn't visually equivalent, they eliminated screen clear selectively and clipped hidden overdraw, most if not all of this has nothing to do with a lack of vendor specific optimization in a benchmark, it's just a flat out low down dirty cheat. Consumer fraud in my opinion.

My position on this is the same as on the shaders. If it has no effect on the output, then any hack/optimization/cheat that they want to perform is ok...when I look at the monitor, all I see is the output. That's all that matters.

HOWEVER, given the fact that time, energy, and money are limited resources, I wish that they would spend it on generic optimizations instead. I'd rather have a 5% all-round improvement than a 40% improvement in 3DMark or one specific game.

Hopefully the fact that 3dmark seems willing to change their benchmark to break such optimizations will steer them away from that type of thing in the future.

dorbie
05-23-2003, 06:16 PM
Hopefully the fact that they were caught red-handed, and that it will hurt their reputation and future sales, will steer them away from cheating.

Taking your argument to its logical conclusion, a high-speed video playback device would be an acceptable graphics solution.

I'm over it, but I'm not going to have this called optimization in any way, shape or form. Let's at least be able to recognize the difference between a cheat and an optimization. If it were just the shaders I'd have perhaps conceded that it might have been optimization on a very narrow path (I'd need more information and it's bloody unlikely), but the range and nature of the cheats paint a very different picture of orchestrated wholesale cheating on most of these tests. This has nothing to do with optimization and everything to do with simply not running the code supplied by the benchmark.

[This message has been edited by dorbie (edited 05-23-2003).]

Ysaneya
05-24-2003, 01:02 AM
So what about these clipping plane issues? They produce the same result, but ONLY on the camera path. Enter free-fly mode, and bam, 20% speed loss. Certainly doesn't sound fair to me.

The water shader made me laugh a lot. FutureMark said NVidia replaced it with a much simpler shader, that doesn't even output the same results.. hum.

Y.

t0y
05-24-2003, 04:07 AM
I have nothing against game specific code in drivers, but for benchmarking that's another story.

In no way should a shader be replaced by a "better" one, even if they produce the exact same outputs. Shader optimization should not go beyond the normal compilation process. This is not the job of a driver, but the job of game developers.

IMO it should be mandatory (pass a law on this http://192.48.159.181/discussion_boards/ubb/smile.gif ) to have a driver option that disables all App-specific code. This is the only way we can benchmark the card and drivers in a fair way.

I tend to think that driver developers spend too much time in fixed code-paths in order to get that benchmarking margin over the competition, instead of focusing on *real*, general-purpose drivers.

Just my thoughts...

zeckensack
05-24-2003, 05:11 AM
Originally posted by t0y:
I have nothing against game specific code in drivers, but for benchmarking that's another story.

In no way should a shader be replaced by a "better" one, even if they produce the exact same outputs. Shader optimization should not go beyond the normal compilation process. This is not the job of a driver, but the job of game developers.

It just so happens that a 'Pixel Shader 2.0' can't be coded in the optimum way for both NV3x and R3xx chips. These architectures are just too different; you're always by default going to favor the one you developed on.

I expect any quality driver to reorder shader code in the best possible way for the target hardware, simply because I can't do that.

But the point is to make it universal and dynamic and not tied to stupid 'app recognition' crap. Intelligence beats databases, period.
NV have been trumpeting their various shader compilers for years, and now they can't reorder instruction streams? You'd think they're experienced enough by now.
On the DX 'texture blend stages' front, they're pretty far ahead in the field of valid optimizations, as far as I was told. Or are these just "Find the combiner setup for xyz.exe" table lookups too?

Humus
05-24-2003, 05:57 AM
Originally posted by Ysaneya:
The water shader made me laugh a lot. FutureMark said NVidia replaced it with a much simpler shader, that doesn't even output the same results.. hum.

Indeed. http://www.beyond3d.com/forum/viewtopic.php?t=6042

t0y
05-24-2003, 06:01 AM
Originally posted by zeckensack:
It just so happens, that a 'Pixel Shader 2.0' can't be coded in the optimum way for both NV3x and R3xx chips. These architectures are just too different, you're always by default going to favor the one you developed on.

I expect any quality driver to reorder shader code in the best possible way for the target hardware, simply because I can't do that.



Isn't that part of the compilation process?
I agree it's ok to reorder, remove, even change some instructions for the sake of optimization and/or adapt the code to a certain architecture. That's any compiler's job isn't it? Even if code gets changed significantly.

zeckensack
05-24-2003, 06:31 AM
Originally posted by t0y:
Isn't that part of the compilation process?

I feel it should be. Apparently (on ATI hardware and using DX Graphics), it's not, or at least not in a way that's general enough to catch all cases. This is exemplified by ATI's scores dropping, too, with the 'cheating defeating' 3DMark patch (9ish per cent in GT4). The situation might be similar for NV3x, but the numbers are skewed by the other, more severe cheats going on there.

Originally posted by t0y:
I agree it's ok to reorder, remove, even change some instructions for the sake of optimization and/or adapt the code to a certain architecture. That's any compiler's job isn't it? Even if code gets changed significantly.

I disagree http://192.48.159.181/discussion_boards/ubb/smile.gif
I was thinking along the lines of scheduling optimizations and other architecture specific things (register allocation, color/alpha instruction pairing, fetching textures early before use to hide latency etc). However, I don't like the idea of real code changes.
A shader should do what was intended, and it should produce identical results before and after 'optimizing' (!=close enough). Just like an optimizing C compiler does. It will fold constants, remove dead code, eliminate redundant computation, ..., but it will never change semantics.

This is probably somewhat more tricky with floating point operands than it would be without. FP results can be different depending on the order of operations: a*b*c can differ from c*a*b. Personally I couldn't care less about this sort of issue, because the differences across different hardware are greater, but still irrelevant in practice.
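For anyone who doubts that last point, a tiny standalone C program shows it; the effect is much more dramatic with addition (absorption) than with multiplication, where it only touches the last bits:

#include <stdio.h>

int main(void)
{
    /* volatile keeps the compiler from folding these at build time */
    volatile float a = 1e30f, b = -1e30f, c = 1.0f;

    printf("(a + b) + c = %g\n", (a + b) + c);  /* 1: a and b cancel first */
    printf("a + (b + c) = %g\n", a + (b + c));  /* 0: c is absorbed by b   */
    return 0;
}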

dorbie
05-24-2003, 08:36 AM
zeckensack, I don't disagree with your main point: reordering of shader instructions etc. is optimization, even if it's on the fringes of what might be considered visually equivalent, but simply substituting a detected shader with a simpler one is a cheat; it's not optimization at all.

As for the ability to generically optimize shaders in the driver, this is already done in HLSL, these guys are supposed to write optimizing compilers and perform all sorts of clever tricks under the covers, but they're supposed to execute the darned shader, not one of their own they manually wrote for one case. This is blatant fraud in my opinion.

If the smarter than average folks around here can't spot a blatant cheat I guess it's open season. Tear up your benchmarks and just read the manufacturers numbers on the packaging, it's going to mean as much. You can trust these guys, right?

P.S. zeckensack, hmm... I see you've changed your position (or clarified it). Anyhoo, with unspecified precision issues in hardware etc. it's not as clear-cut what is permissible in shader optimization and what isn't. I'd cut a bit more slack, but in general I'd say it should be very close to mathematically equivalent, and probably has to be as close as they can make it, since they cannot know the intent of the shader in the general case. This is another thing that makes NVIDIA's mendacity so pernicious: even if they cut corners in one benchmark, the same modification could be inappropriate in another and they'd never do it.


[This message has been edited by dorbie (edited 05-24-2003).]

t0y
05-24-2003, 08:42 AM
This is what I mean:


Therefore, any code optimization performed on a function that does not change the resulting value of the function for any argument, is uncontroversially considered a valid optimization. Therefore, techniques such as instruction selection, instruction scheduling, dead code elimination, and load/store reordering are all acceptable. These techniques change the performance profile of the function, without affecting its extensional meaning.
-- Tim Sweeney

from http://www.beyond3d.com/forum/viewtopic.php?topic=6041&forum=9

Many of these operations must be performed anyway since no card can execute PS2 or GL shaders natively (unless in cases where you use proprietary exts).

dorbie
05-24-2003, 08:55 AM
t0y, we don't need a law; they got caught and they're suffering the consequences. What we need is for people to stop acting as apologists for cheats.


[This message has been edited by dorbie (edited 05-24-2003).]

V-man
05-24-2003, 09:26 AM
Dorbie, it was you who said that this kind of cheat would mean the driver would be bloated a bit.

I think nobody mentioned that this is the case.
At some point (29.xx) the setup file was like 9MB, then 11MB (3x.xx), then 14MB, then it got smaller (I think down to 10MB, 4x.xx), and the current 44.xx is like 18MB.

Very disappointing and a very stupid move.
They should can the guy responsible for this.

But since the FX5200 is so cheap and they have some solid drivers and so many features, it's like candy. I don't think they will lose market share.

Zeno
05-24-2003, 12:12 PM
Originally posted by dorbie:
Taking your argument to its logical conclusion, a high-speed video playback device would be an acceptable graphics solution.

Interesting point, Dorbie. I guess for my view to be consistent I'd have to say it was 'ok' to replace 3dmark with a bunch of movies with different res/AA/AF settings.

Of course I won't say that and I think doing that would not be fair at all, not to mention the effect it would have on the size of the drivers http://192.48.159.181/discussion_boards/ubb/wink.gif.

Perhaps my opinion should be changed to this:

Any optimization that could be done by looking at the OpenGL/DX rendering code only is ok. Any optimization that depends on specific inputs to this code is NOT ok.

Therefore, I say that it is fair game to replace shaders with mathematically equivalent ones, but it is NOT ok to throw in clip planes whose positions are only good for certain sets of input. The latter is cheating.

Does this make sense? Any ridiculous cases of abuse that this position doesn't take into account? http://192.48.159.181/discussion_boards/ubb/smile.gif

Humus
05-24-2003, 02:07 PM
I'm with dorbie on this one. Ripping out a shader and replacing it with another a la

  if (is3dmark03 && strcmp(shader, refShader) == 0) {
      shader = handWrittenOptimizedShader;
  }

should be considered a cheat. On the other hand, if the driver team looked at the shader provided, found that there are cases that can be optimized, and wrote a generic optimization that can take that non-optimal code and replace it with better code, then everything is all right. If simply swapping registers disables the "optimization", you either have a serious bug or have cheated (more likely).

Dr^Nick
05-24-2003, 02:07 PM
http://www.rage3d.com/articles/atidawning/
This pretty much settles the issue for me.

kehziah
05-25-2003, 11:28 PM
Official statements by ATI & NVIDIA

Guess what? They handle the 'problem' differently...

http://www.tomshardware.com/technews/20030523_192553.html

Korval
05-26-2003, 09:44 AM
nVidia takes an opportunity to plug their 5900 card, and, of course, to ignore the benchmark itself. Meanwhile ATi actually responds to the question at hand. Not only that, ATi says they are removing what I consider to be a legitimate optimization from their drivers, just because this crap from nVidia spilled over onto them.

Zengar
05-26-2003, 11:33 AM
Maybe what I'm going to say is paranoid, and I don't want you guys to think that I really believe in it; no, it's only speculation. It's curious that no one has spoken this idea out in the thread yet, so I'll take the risk of doing it. I don't want to offend either ATI or Futuremark with this, but:

What if Futuremark and ATI are jointly trying to remove Nvidia from the market? ATI was supporting 3dmark, while Nvidia isn't so fond of the benchmark. It's really funny, because although NV35 is slower in 3dmark, it's faster in apps like games (and these are demanding apps), as the tests definitely show. Maybe NV3x was really developed for real-life applications, so Nvidia has nothing left to do but cheat (as someone said already: 3dmark scores sell cards).

As I said above, it's only speculation. I wanted to point out one possible state of things. I sincerely ask you not to take this post too seriously.

dorbie
05-26-2003, 11:37 AM
What ATI are admitting to sounds like exactly what I meant by 'narrow path optimization'. It's borderline, but if it's triggered generically by the shader content and remains mathematically equivalent, ATI should leave it in and perhaps try to broaden its scope.

As for NVIDIA, I'm not saying they don't have a more general point (they'd have to get more specific), but WOW, this epitomizes "the best means of defense is attack". They have completely ignored the fact that they just got caught applying the most extensive and coordinated set of cheats in industry history. Those games don't use particularly advanced shaders IMHO. If I were at Futuremark I'd be getting pissed off right about now. Imagine if Microsoft put code in a library to execute a different codepath than the one YOU issued, and only for YOUR application, after attacking your business in public statements. Not the same, you might say, except that this really undermines Futuremark's business and their code is copyright work. Then NVIDIA are caught red-handed and just blame Futuremark!? This is quite a show. NVIDIA is saying they have a right to cheat because of a perceived injustice.

Since NVIDIA are saying Futuremark is written to be slow on NVIDIA hardware, we need another datapoint (surely they mean shaders here): look at the "Dawn" demo now ported to ATI using wrappers (sorry, but this response from NVIDIA deserves further analysis). Dawn appears to run faster on ATI cards, where the shaders running on the NVIDIA card were hand-written by NVIDIA engineers. That seems to torpedo the claim that NVIDIA are generically faster than ATI when you start to get into advanced shading, and here ATI has a disadvantage due to the stubbing of stuff like nvfence. When NVIDIA's own demos run faster on ATI cards they should be a bit more reticent about slinging mud at Futuremark, especially when they've effectively modified someone's copyright code without permission with deleterious results, IMHO; they're taking a risk going on the offensive.

This debacle just gets worse. NVIDIA, put down your shovel, please.

[This message has been edited by dorbie (edited 05-26-2003).]

Korval
05-26-2003, 12:59 PM
Maybe NV3x was really developed for real-life applications, so Nvidia has nothing left to do but cheat (as someone said already: 3dmark scores sell cards).

No, they had another option: bite down on the fact that their card just doesn't run complicated shader programs (which, if I understand correctly, is what 3DMark2K3 tests) very fast. That means, they accept whatever the 3DMark score is. That is the correct choice. They chose otherwise.

If they want, they can post Quake3/UT2K3/Doom3 performance benchmarks on the boxes of their cards.


It's borderline, but if it's triggered generically by the shader content and remains mathematically equivalent, ATI should leave it in and perhaps try to broaden its scope.

Actually, I think it sounded more like a "peephole" optimizer. This is an optimizing stage of most CPU compilers that walks over code and looks at it, say, 5 instructions at a time. If those 5 instructions fit one of several patterns, it will change them into a more optimized form that does the same thing.

At the very least, ATi said that it was generic, not 3DMark-specific.
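To make the idea concrete, a peephole rule can be as dumb as "a MUL immediately feeding an ADD becomes a MAD". A toy sketch in C (not anyone's actual driver code; the instruction format is made up to resemble fragment program assembly):

#include <string.h>

typedef struct {
    char op[8];
    char dst[8];
    char src0[8], src1[8], src2[8];
} Ins;

/* Collapse "MUL t, a, b; ADD d, t, c" into "MAD d, a, b, c".
   Returns the new instruction count. A real pass would also verify that
   t (the MUL's destination) is not read again later - omitted here. */
static int peephole(Ins *code, int n)
{
    int i, out = 0;
    for (i = 0; i < n; ++i) {
        if (i + 1 < n &&
            strcmp(code[i].op, "MUL") == 0 &&
            strcmp(code[i + 1].op, "ADD") == 0 &&
            strcmp(code[i + 1].src0, code[i].dst) == 0) {
            Ins mad;
            memset(&mad, 0, sizeof(mad));
            strcpy(mad.op,   "MAD");
            strcpy(mad.dst,  code[i + 1].dst);
            strcpy(mad.src0, code[i].src0);
            strcpy(mad.src1, code[i].src1);
            strcpy(mad.src2, code[i + 1].src1);
            code[out++] = mad;
            ++i;                /* skip the ADD we folded in */
        } else {
            code[out++] = code[i];
        }
    }
    return out;
}

The point is that the rule keys off the instruction pattern, not off which .exe is running; that's the line between an optimization and the strcmp-the-whole-shader trick Humus described above.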

As for nVidia's hardware, why don't they just come out and admit what we all know? That, clock-for-clock, their 32-bit float processing speed is simply not up to the speed of ATi's 24-bit float processing. And, because both D3D's PS2.0 and ARB_fragment_program default to 32-bit floats on GeForceFX's, any ALU intensive fragment programs (outside of NV_fragment_program) will be slower on FX hardware.

nVidia made a mistake with their FX hardware (relying on 16-bit floats when all the API specs required greater-than-16-bit accuracy). This mistake is akin to their FX launch problems that gave ATi both a head-start and credibility. Unlike that mistake, however, they seem to be doing everything they can to hide this one.

As for Doom3 performance... who cares? Because they use stencil shadows (which requires breaking up long fragment shaders into smaller ones simply because of the 1-pass-per-light principle), they can't stress the fragment programmability of the hardware much more than UT2K3 on any individual pass. At that point, you're looking at vertex transfer/transform speed and driver efficiency.

Oh, and ATi hasn't had a look at Doom3 code since they accidentally leaked an early build of it. Meanwhile, nVidia has been all over it, and knows the code path inside and out.

Somehow, I don't think it is any coincidence that Half-Life 2 (which definitely stresses all kinds of hardware, from the CPU to the fragment units) was shown at ATi's booth at E3, and not nVidia's.

[This message has been edited by Korval (edited 05-26-2003).]

Nutty
05-26-2003, 01:42 PM
The ATI optimization was not a generic optimization. It was a specific hack for 3DMark03, which replaced the shaders with slightly better-arranged ones.

If it was a fully automatic and general optimization, why would they have any cause to remove it?


The 1.9% performance gain comes from optimization of the two DX9 shaders (water and sky) in Game Test 4. We render the scene exactly as intended by Futuremark, in full-precision floating point. Our shaders are mathematically and functionally identical to Futuremark's and there are no visual artifacts; we simply shuffle instructions to take advantage of our architecture. These are exactly the sort of optimizations that work in games to improve frame rates without reducing image quality and as such, are a realistic approach to a benchmark intended to measure in-game performance. However, we recognize that these can be used by some people to call into question the legitimacy of benchmark results, and so we are removing them from our driver as soon as is physically possible. We expect them to be gone by the next release of CATALYST.




Dawn appears to run faster on ATI cards even though the shaders running on the NVIDIA card were hand-written by NVIDIA engineers; that seems to torpedo the claim that NVIDIA are generically faster than ATI once you get into advanced shading, and here ATI is at a disadvantage due to the stubbing out of extensions like NV_fence. When NVIDIA's own demos run faster on ATI cards they should be a bit more reticent about slinging mud at Futuremark

It is my understanding that the wrapper prevents it from rendering as fully intended. The majority of ppl who have compared the images seem to suggest it looks a lot better on the GFFX.

Unless you have proof to the contrary?

Nutty

[This message has been edited by Nutty (edited 05-26-2003).]

Zeno
05-26-2003, 03:26 PM
Oh, and ATi hasn't had a look at Doom3 code since they accidentally leaked an early build of it. Meanwhile, nVidia has been all over it, and knows the code path inside and out.


I've seen this type of thing said over and over, but is it true? Do we have confirmation that the alpha leaked from ATI? And, if so, do we have confirmation that id is "punishing" them for it in some way?

Links please http://192.48.159.181/discussion_boards/ubb/smile.gif

Nutty
05-26-2003, 03:52 PM
I also find it hard to believe that ATI don't know how to optimize for Doom3. It's not that complicated an engine, really. Its beauty comes from the consistent lighting model applied to everything. I'm sure they know as much about the Doom3 engine as nvidia do.

dorbie
05-26-2003, 11:12 PM
Nutty, I don't think you're correct w.r.t. ATI; they look for a shader instruction combination and replace it with something mathematically identical, AFAIK. The issue for me is just exactly how narrow this test was. Clearly there's a big difference between this approach and one that replaces shaders with something different and does many other nasty things that are unambiguous cheats.

I wish it hadn't happened, but it's disturbing that blatant NVIDIA cheating is being smokescreened by borderline ATI infractions, and worse that NVIDIA is attacking Futuremark.

Let's call a cheat a cheat, at least doing that makes it less likely in future.

Humus
05-26-2003, 11:56 PM
Originally posted by Zengar:
What if Futuremark and ATI are mutually trying to remove Nvidia from the market? ATI was supporting 3DMark, while Nvidia isn't so fond of the benchmark. It's really funny, because although NV35 is slower in 3DMark, it's faster in apps like games (and these are demanding apps), as the tests definitely show. Maybe NV3x was really developed for real-life applications, so Nvidia has nothing left to do but cheat (as someone said already: 3DMark scores sell cards).

As I said above, it's only speculation.

ATi obviously would prefer to push nVidia off the market in good old capitalistic style. Futuremark on the other hand has zero interest in forcing vendors off the market. In fact, it's more in their interest that the competition is more balanced since that makes their benchmark more interesting and useful.

Humus
05-27-2003, 12:01 AM
Originally posted by Nutty:
It is my understanding that the wrapper prevents it from rendering as fully intended. The majority of ppl who have compared the images seem to suggest it looks a lot better on the GFFX.

Unless you have proof to the contrary?

I've heard the exact opposite: that it renders just as well, if not better, on ATI cards. I've run it myself and it looks just as good as the screenshots from the GFFX, AFAICT.

Humus
05-27-2003, 12:09 AM
Originally posted by Zeno:
I've seen this type of thing said over and over, but is it true? Do we have confirmation that the alpha leaked from ATI? And, if so, do we have confirmation that id is "punishing" them for it in some way?

Links please http://192.48.159.181/discussion_boards/ubb/smile.gif



It is true. An ATI employee confirmed on Rage3D that they did not have access to the Doom III build used in the benchmarks.

"Anyways.... Doom III.
Interesting little game, even more interesting that reviews would appear on an unreleased game. All I can say at this point is that we have not had that particular benchmark before the review sites started using it. What a shame that people are getting to look at something that we havent had a chance to play around with a bit.

Anyways please dont pay any attention to that benchmark. It is on an unfinished product that we have not looked at yet. Wait till it really comes out and we will be ready to rock." http://www.rage3d.com/board/showthread.php?s=&threadid=33685071&perpage=20&highlight=Doom&pagenumber=2

During my time at ATi I also got it confirmed that indeed the leaked Doom build had come from ATI. Something they hoped us new employees would not repeat.

There is no confirmation, though, that id is trying to punish anyone, and I think that's a little paranoid to believe. It's probably just that Carmack has always had a bit of a special relationship with nVidia.

harsman
05-27-2003, 03:42 AM
No, the demos don't look entirely correct on ATI cards. Dawn's eyelashes don't render with the wrapper, for one. That's probably because of bugs in the wrapper rather than ATI's fault, though. Something else which is more likely to be an ATI bug (though you can't really be sure) is that Dawn's hair doesn't fade out correctly. My guess is nvidia uses SAMPLE_ALPHA_TO_COVERAGE_ARB or something similar to get the line-strip hairs to feather out correctly at the tips. This doesn't seem to work on the Radeons. Should be easy enough to test under more controlled conditions but I'm too lazy to do it http://192.48.159.181/discussion_boards/ubb/smile.gif If you want to compare screenshots look here (http://stl.caltech.edu/nv30/full/dawn6.png) for GeforceFX/correct hair and here (http://www.rage3d.com/articles/atidawning/) for Radeon/incorrect hair. Besides the hair and eyelash issues, the ATI version looks just as good.
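If anyone wants to try it, something like this would do as a quick test; it assumes a multisampled pixel format is already set up and is only a sketch, not code from the demo:

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_MULTISAMPLE_ARB, GL_SAMPLE_ALPHA_TO_COVERAGE_ARB */

void draw_feathered_strip(void)
{
    glEnable(GL_MULTISAMPLE_ARB);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE_ARB);   /* alpha drives sample coverage */

    /* Vertex alpha ramps from 1.0 at the root to 0.0 at the tip; with
       alpha-to-coverage working the tip should feather out instead of
       ending in a hard line. */
    glBegin(GL_LINE_STRIP);
    glColor4f(0.4f, 0.3f, 0.1f, 1.0f); glVertex3f(0.0f, 0.0f, 0.0f);
    glColor4f(0.4f, 0.3f, 0.1f, 0.5f); glVertex3f(0.0f, 0.5f, 0.0f);
    glColor4f(0.4f, 0.3f, 0.1f, 0.0f); glVertex3f(0.1f, 1.0f, 0.0f);
    glEnd();

    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE_ARB);
}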

dorbie
05-27-2003, 06:11 AM
Well spotted on the Dawn demo, and good theory on why. I doubt fixing it would make much difference to overall performance.

Nutty
05-27-2003, 09:58 AM
Nutty, I don't think you're correct w.r.t. ATI; they look for a shader instruction combination and replace it with something mathematically identical, AFAIK. The issue for me is just exactly how narrow this test was. Clearly there's a big difference between this approach and one that replaces shaders with something different and does many other nasty things that are unambiguous cheats.
I wish it hadn't happened, but it's disturbing that blatant NVIDIA cheating is being smokescreened by borderline ATI infractions, and worse that NVIDIA is attacking Futuremark.

Let's call a cheat a cheat, at least doing that makes it less likely in future.


Well, exactly. Looking for small, often-used groups of instructions and replacing them with more efficient versions is quite OK, as this is bound to work across other games and apps, and therefore is not misleading as such.

However, scanning for an entire specific shader isn't exactly an optimization technique in my book. It relies on the shader staying fixed in order to work, and therefore I consider it a cheat.

Personally I question Futuremark myself. If it wasn't for synthetic fixed rail benchmarks we'd probably not have this problem.

They take payment from companies like ATI and nvidia to be members of their program. Now what is the one and only reason that ATI and nvidia would ever need or want to pay this money to Futuremark? To win benchmark wars, of course.

dorbie
05-27-2003, 10:29 AM
Hook, line and sinker; how does NVIDIA's bait taste? :-) I would say having a developer like Futuremark keep you in the loop helps you plan future hardware and driver optimizations and get the jump on the competition. For example, perhaps ATI's relationship with Futuremark made the real-world precision requirements of game shaders clearer to them sooner, with the now obvious advantage (OK, it's a stretch, but so is NVIDIA's ludicrous attack). Futuremark don't sit in a bubble; they have a different focus from IHVs, and that can be valuable, and they try to track the things they think are relevant to them as software developers. You might see it as taking a load off your own marketing organization in tracking developer & DX trends. Mistakes and misjudgements are always possible and can have drastic results that last 2 or 3 years if you miss something; Futuremark may be seen as insurance against those very expensive slip-ups. Marketeers in a company tend to drink the local Kool-Aid and design options narrow early, so another external, objective source of input about what's important to software developers is valuable.

It is also advantageous to have cool demos that run on your hardware, and a software company that targets blue-sky advanced shading taking the lead where game companies will follow in future. It helps 'raise the bar' and sell more and better hardware sooner. That can be worth paying for if you think it may not happen as quickly otherwise; it could be make-or-break for a future generation of cards and is worth the spend. I think you'd be surprised at how much these companies spend on trade shows, launch events and other promotional stuff. The relative spend on Futuremark participation is not difficult to justify.

On the issue of shader substitution: if ATI scan for an entire shader then that is different; it's not clear to me that they do. Obviously optimizations should be more generic, but again there are degrees here, and we're comparing an orchestrated, concerted cheating effort to something a bit different IMHO, and that's just clouding the issue. Even a substitution with instruction reordering that's equivalent isn't as bad as a rewrite, although it should be generic, I agree.

Benchmarks need to be on rails because 1) people insist on real world scenarios, so you need game-like geometry and rendering, and 2) results and comparisons need to be fair and reproducible. All objective measurements have to compromise like this; come up with alternatives that make sense (I don't think they exist). Even game benchmarks are on rails; this is required for obvious reasons.



[This message has been edited by dorbie (edited 05-27-2003).]

Korval
05-27-2003, 11:51 AM
It relies on the shader staying fixed in order to work, and therefore I consider it a cheat.

Consider this: ATi is removing this cheat, regardless of how much of a real cheat it is. Did nVidia ever, ever even so much as admit that they had done something wrong, let alone promise to fix it? ATi is repentant about a borderline case; nVidia chooses to attack the veracity of the benchmark itself, rather than realize that their actions threaten the veracity of that benchmark.


Personally I question Futuremark myself. If it wasn't for synthetic fixed rail benchmarks we'd probably not have this problem.

I see. So, you're perfectly willing to excuse cheating simply because you don't agree with the validity of the test? That is, pretty much, nVidia's argument. And, considering that they were the ones caught grossly cheating here, why is it reasonable to attack the party being cheated on (the benchmark and, ultimately, us)?

[This message has been edited by Korval (edited 05-27-2003).]

Nutty
05-27-2003, 11:59 AM
I would say having a developer like Futuremark keep you in the loop helps you to plan future hardware and driver optimizations and get the jump on the competition.

So you're saying Futuremark helps IHVs understand what's needed from the graphics card in the future? Bull. I've not seen anyone praise Futuremark's algorithms as being the way to go. Why pay a company to tell you this when, for free, you can have JC tell you _AND_ create an engine that will undoubtedly be used by many games? I don't see many games being planned around Futuremark's engine.


It is also advantageous to have cool demos that run on your hardware, and a software company that targets blue-sky advanced shading taking the lead where game companies will follow in future. It helps 'raise the bar' and sell more and better hardware sooner. That can be worth paying for if you think it may not happen as quickly otherwise; it could be make-or-break for a future generation of cards and is worth the spend. I think you'd be surprised at how much these companies spend on trade shows, launch events and other promotional stuff. The relative spend on Futuremark participation is not difficult to justify.

True, but then why did nvidia pull out of the beta developer program?


Benchmarks need to be on rails because 1) people insist on real world scenarios

Real world games are not on rails.

I agree benchmarking hardware should use consistent data. But I'd prefer it if review sites made their own demos, which would stop hardware companies analyzing the path. It's not hard.

I'd like to see some sort of correctness test, i.e. a full-precision software renderer that can be used to compare the pixel outputs of the hardware against mathematically correct software outputs, something along the lines of the sketch below. That kind of thing would make nvidia's dodgy water shaders stand out like a sore thumb.
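Roughly what I have in mind (render_reference() stands in for the hypothetical full-precision software renderer, so this is only a sketch):

#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

unsigned char *render_reference(int w, int h);   /* hypothetical software renderer */

/* Read back the hardware framebuffer and count pixels whose RGB channels
   differ from the reference by more than 'tol' (0 would demand bit-exact
   output, which no two pieces of hardware will give you). */
int count_bad_pixels(int w, int h, int tol)
{
    unsigned char *hw  = malloc((size_t)w * h * 4);
    unsigned char *ref = render_reference(w, h);
    int i, bad = 0;

    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, hw);

    for (i = 0; i < w * h * 4; i += 4) {
        int c, maxd = 0;
        for (c = 0; c < 3; c++) {                /* compare RGB, ignore alpha */
            int d = abs(hw[i + c] - ref[i + c]);
            if (d > maxd) maxd = d;
        }
        if (maxd > tol) bad++;
    }

    printf("%d pixels outside tolerance %d\n", bad, tol);
    free(hw);
    free(ref);
    return bad;
}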

I'd prefer it if review sites just dropped 3dmark altogether. Both ATI and nvidia have shown it can be cheated rather easily. A little more thought, and these cheats would be very hard to find.

Nutty

Nutty
05-27-2003, 12:11 PM
Consider this: ATi is removing this cheat, regardless of how much of a real cheat it is. Did nVidia ever, ever even so much as admit that they had done something wrong, let alone promise to fix it? ATi is repentant about a borderline case; nVidia chooses to attack the veracity of the benchmark itself, rather than realize that their actions threaten the veracity of that benchmark.

Rumour has it that nvidia are due to make an official announcement on the Futuremark thing very soon.
http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=12452


I see. So, you're perfectly willing to excuse cheating simply because you don't agree with the validity of the test? That is, pretty much, nVidia's argument. And, considering that they were the ones caught grossly cheating here, why is it reasonable to attack the party being cheated on (the benchmark and, ultimately, us)?

Cheating is cheating. Whether it's 24% or 1.x%, it doesn't make a difference. The point is 3DMark is like a race with no marshals. Everyone in it is cutting corners, until every once in a while someone sticks their head in, looks at the person cutting a huge corner and has a go at them, while everyone else quietly moves back towards the regular running lanes to avoid attention.

Why don't they make a benchmark where clip planes won't make any difference? Where shader output is compared against what it should be? Where it uses randomly arranged instructions to gauge average shader speed?

IMO 3DMark is just a pretty demo for gaming junkies to have bragging rights over. Both main IHVs have just proved scores can be fudged; people should just stop taking it so seriously, and Futuremark's ego should shrink a bit. "Industry Standard Benchmark" indeed... yeah, right.

dorbie
05-27-2003, 12:56 PM
Nutty, it's up to NVIDIA where they want to invest their cash; I'm not in the business of explaining it, and not all businesses make the same decisions. Futuremark has obvious value beyond the implication of extortion. That some think it's worth the investment and others don't isn't surprising.

Yes, I agree a cheat is a cheat, and 8 cheats are 8 cheats. I'm less convinced about the ATI cheat, but I agree it does seem fishy; see, I can be persuaded. I need more details before I'd wholeheartedly call ATI a cheat on this, and I was prepared to cut NVIDIA the same slack; see my earliest comment on narrow-path optimization.

On NVIDIA's response to Futuremark, I joked recently that someone probably called up NVIDIA and got the intern on the phone instead of the VP of marketing, and it was the intern who decided to attack Futuremark instead of offering a more deliberate response. It was entertaining, but come on, it's time to put this to rest.

Unless this next response is a bit more apologetic than the last we should probably line up Jerry Springer to host a special.

How would it go... NVIDIA as the jilted former acquaintance of Futuremark who cheated on them, and ATI as the new interest. NVIDIA says Futuremark was plotting behind their back the whole time, just as Jerry reveals ATI cheated too. Is ATI the real father of Futuremark's ba5tard shaders?

[This message has been edited by dorbie (edited 05-27-2003).]

Korval
05-27-2003, 01:02 PM
Why pay a company to tell you this when, for free, you can have JC tell you _AND_ create an engine that will undoubtedly be used by many games?

That, of course, would bind us all to the opinions of one man. One man who, as far as I'm concerned, just had the graphics of his newest game beaten into submission by Valve.

Two different schools of thought on where to take graphics: full shadowing via stencil shadows vs. virtually everything else. These will require different methodologies in optimizing: stencil shadows need efficient multi-pass, while most of HL2 looks like it needs good per-fragment ops and render-to-texture. And that's just from two studios.

At least 3DMark tries to paint a broad spectrum of what is out there, in terms of graphical possibilities. Maybe they're right and maybe they're wrong. But, compared to most games out there, at least they're trying to push the new hardware. Note that there is no single, finished, game that uses advanced D3D 9.0 shaders. What else besides a synthetic benchmark are you going to use to benchmark performance? And, in order to learn something of value, you want it to look, at the very least, marginally similar to an in-game situation (as opposed to spewing random shaders at the hardware). It may not be the future, but it's a lot better than the alternatives.


I'd like to see some sort of correctness test. i.e. a full precision software renderer, that can be used to compare the pixel outputs of the hardware, to mathematically correct software outputs.

Unfortunately, you can't do that. To get pixel-perfect results, the internal hardware would have to match your CPU floating-point hardware. This explains why ATi and nVidia, for even non-shader circumstances, produce slightly different images, byte-wise. OpenGL, let alone D3D, isn't a strong enough spec to force output conformance to any real degree.


Rumour has it that nvidia are due to make an official announcement on the Futuremark thing very soon.

Oh, and their last announcement went so well http://192.48.159.181/discussion_boards/ubb/rolleyes.gif.


Cheating is cheating. Whether it's 24% or 1.x%, it doesn't make a difference.

Actually, it does. Consider that the difference between 24% and 1.9% is more than an order of magnitude. Consider that nVidia's cheating completely reverses the results of the 5900 vs 9800 tests (putting the 5900 unfairly on top), while ATi's gives them only a marginal improvement. It only doesn't make a difference if you're already predisposed to a particular outcome (i.e., wanting nVidia to win, or not liking 3DMark) to begin with.


Why don't they make a benchmark where clip planes won't make any difference?

Because such tests test the effective performance of z-tests. If you're going to test how well hardware culls out unseen pixels/objects, you have no choice but to create a situation where a well-placed/timed clip plane could work wonders.
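Just to show what the mechanism is: at the API level a "static" clip plane is nothing more than this (the plane equation below is arbitrary). The plane is interpreted relative to the modelview matrix in effect at the call, which is exactly why a plane tuned to one camera path falls apart the moment the viewpoint leaves the rail:

#include <GL/gl.h>

void set_static_clip_plane(void)
{
    /* Geometry on the negative side of ax + by + cz + d = 0 is discarded
       before rasterization. */
    GLdouble eqn[4] = { 0.0, 0.0, 1.0, 20.0 };
    glClipPlane(GL_CLIP_PLANE0, eqn);
    glEnable(GL_CLIP_PLANE0);
}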


Where shader output is compared against what it should be?

Why don't they make a crack-proof system while they're at it?

There are always going to be ways around cheat prevention. You can make cheating more difficult, but you can never make it impossible.


Where it uses randomly arranged instructions to gauge average shader speed?

That doesn't even come close to approximating an advanced game-world situation.

l33t
05-27-2003, 01:47 PM
w33 h4v3 m1nd c0ntr0l l4z3rz

Zengar
05-27-2003, 03:14 PM
Korval, after Kant, if you cheat you cheat, and it makes no difference how much you cheat. If 3DMark is supposed to produce correct results, no one is permitted to optimise its shaders; everything else is considered cheating. Nvidia's cheating is simply too offensive, and ATI's cheating is so "small" that it simply couldn't be detected by the human eye.

Coriolis
05-27-2003, 03:31 PM
Originally posted by Korval:
That, of course, would bind us all to the opinions of one man. One man whom, as far as I'm concerned, just had the graphics of his newest game beaten into submission by Valve.


I've seen both first-hand, and Half Life 2's graphics are not better than Doom 3's by any stretch of the imagination. Take any game that's come out since Quake3, then add a cool water shader and some slightly higher detail world textures, and what you end up with is a HL2 clone.

zeckensack
05-27-2003, 07:09 PM
Originally posted by Zengar:
Korval, after Kant, if you cheat you cheat, and it makes no difference how much you cheat. If 3DMark is supposed to produce correct results, no one is permitted to optimise its shaders; everything else is considered cheating. Nvidia's cheating is simply too offensive, and ATI's cheating is so "small" that it simply couldn't be detected by the human eye.

Optimizations are not cheats. NVIDIA didn't just optimize, they replaced the shaders with *entirely different* ones. Beyond 3D had the evidence. For the record, I define an optimization as a technique that produces the guaranteed same results as before, only faster. I don't care if the code gets turned upside down. As an extreme example, I certainly don't consider rendering on Kyro chips cheating, and that involves a lot of magic. It works as expected under all conceivable circumstances, so why should I care?

Korval
05-27-2003, 07:36 PM
Take any game that's come out since Quake3, then add a cool water shader and some slightly higher detail world textures, and what you end up with is a HL2 clone.

This is getting a little off-topic, but HL2 also happens to understand that, if you go outside in the daytime, it is actually bright. Doom3, and virtually every other PC game, neatly ducks this fact by making sure that the entire game takes place at night/dawn/dusk/overcast.

All Doom3 has are hi-res textures (just like HL2), bump maps, and shadows. HL2 simply lacks the shadows; it has everything else that you could want out of a core rendering library (plus a game that's actually likely to be worth playing).


NVIDIA didn't just optimize, they replaced the shaders with *entirely different* ones.

They did a lot more than that, as evidenced by Futuremark's internal audit. They use clip planes in various places to eliminate overdraw (because they know that their hardware can't compete with ATi's in this arena). And they didn't just replace shaders with different ones; they replaced shaders with worse-looking versions. They fundamentally changed algorithms, not just the hardware implementation of those algorithms. Not only that, they detected 3DMark itself in order to switch their fragment programs to 16-bit mode, something an application can't request on its own unless it's running under GL with NV_fragment_program.
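For the curious, this is roughly what that per-instruction precision choice looks like in NV_fragment_program (written from memory, so treat the exact syntax with suspicion; neither program is taken from a real application). The point is that the program author picks fp32 (R) or fp16 (H) explicitly; the driver isn't supposed to pick for you:

/* From-memory sketch of NV_fragment_program assembly embedded as C strings. */
static const char *fp32_version =
    "!!FP1.0\n"
    "TEX R0, f[TEX0], TEX0, 2D;\n"   /* R registers are fp32 */
    "MULR R0, R0, f[COL0];\n"
    "MOVR o[COLR], R0;\n"
    "END\n";

static const char *fp16_version =
    "!!FP1.0\n"
    "TEX H0, f[TEX0], TEX0, 2D;\n"   /* H registers are fp16 */
    "MULH H0, H0, f[COL0];\n"
    "MOVH o[COLH], H0;\n"
    "END\n";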

dorbie
05-27-2003, 07:47 PM
zeck*, the issue is the trigger that applied the optimization and how applicable it is to general OpenGL code. Yes, NVIDIA's is clearly a cheat, but if ATI just went and hand-coded the instruction reordering and did a substitution on an exact full-fragment-program match, then it's hardly indicative of any improved general optimization. Yep, it's better than a complete functional rewrite (or hosing the precision of a high-quality rendering test), but it's more of a fragment-program-specific hand-coded tune. I would prefer they spent the time tuning games, but when the discussion last came up (when Futuremark first came out) I had no idea this (and worse) was what these driver guys were calling "DRIVER optimization". What a joke.

Moving from fixed-function pipelines to developer-coded fragment programs, I expect optimizations in general will be less broadly applicable if this is the way they'll be conducted. Benchmarks of all stripes (including real games) seem set to be less relevant than ever. These guys should be working on automatic optimizers and optimizing compilers, not rewriting the vertex and fragment programs of every game that comes along. Maybe this will stop it; I doubt it, though.

If they're gonna tune for games like this I suppose they can tune benchmarks just the same, as long as they don't change the math or functionality. I instinctively want to say I'd rather they didn't do any of this but then I'm calling for lower fps in real games on real hardware and that's not really a tenable position. So in the end I'm drawn to the conclusion that what ATI did was really OK, IF they sustain a development effort that does the same for real games.

Yep, I'm surprising myself with that conclusion. I really don't want to side with ATI if they triggered a full shader substitution on the complete shader code, even if it was functionally identical, but if that's the way they're going to tune for real games going forward then it's valid (at least for gamers intending to buy 'major' titles or titles on 'major' engines). It stinks compared to any automatic global optimization process, or optimization on smaller code fragments. I'd say one is real driver optimization and the other is merely application-specific tuning trojan'd through the driver; there should be a distinction drawn between them, and there is still a third class that should still be called a cheat.

OTOH, how many games can you trojan shaders or shader snippets into your driver for? It gets ridiculous real fast. It just makes sense to try to make these more generic. Oh no, I'm changing my mind again :-)


[This message has been edited by dorbie (edited 05-27-2003).]

zeckensack
05-28-2003, 01:11 AM
dorbie,
I agree with everything you said in your last post.

Hand tuning is such a waste of energy; it should've been spent on the real thing instead. How many valid combinations does the pixel shader 2.0 interface yield (or ARB_fragment_program, for that matter)? Millions? Billions? How many of them will there be room for in drivers before end users need them shipped on several CDs?

Hand written replacement code is destined to fail.

Nutty
05-28-2003, 02:33 AM
I'm sure if Matt were here he'd say app-specific checks are bad. I don't agree with them either. I'd be interested to hear his views and thoughts on the whole thing.

Perhaps we need to move away from popular benchmarking tools, as they're exactly the kind of programs IHVs are hand-tuning for. Hell, allegedly NV have a whole team tuning the drivers and output for when Splinter Cell runs.

It's a real gray area, deciding what's okay and what's not. If the IHVs take it upon themselves to make sure games run okay on their hardware, does that make it okay for them to hand-tune and modify the driver's behaviour when the app runs? Is it acceptable for them to lower quality and improve performance for the majority of people who won't notice the lower quality?

At what level of pixel error across the scene does an optimization/approximation of what the developer wanted become unacceptable?

So many questions like these need to be addressed and upheld by IHVs.

I have an old 3Dlabs Oxygen VX1 or somat (picked up from work during a clearout) in my 2nd PC, and right-clicking on the taskbar icon for the driver lists a whole load of applications (including Quake3, hehe http://192.48.159.181/discussion_boards/ubb/smile.gif ) that you can put the driver into optimization mode for.

It's no different now, except NV and ATI hide these optimizations away from us and turn them on without our knowing.

Nutty

M/\dm/\n
05-28-2003, 04:15 AM
Dawn can be as fast as on the FX because it's running at 96-bit vs. 128-bit precision http://192.48.159.181/discussion_boards/ubb/frown.gif
But actually, what are we arguing about? Even the FX 5200 is enough for most of today's games maxed out. Do you folks really need 1000000x1000000 + 64xAA + 128xAF?
This is the 'first' generation of fully programmable GPUs and they'll be replaced in 6 months for sure; in the meantime I like the NV support for developers and the ease of use (especially for OGL). And don't forget one more thing: 3DMark renders back to front, creating huge overhead and eliminating the benefit of Z-occlusion culling! In a real-world scenario it could be a different story.

Yeah, talking about angelic ATI, how come their 2% optimization doesn't work in 3DMark build 330? I see only one answer: it's hand-made, just like NVIDIA's http://192.48.159.181/discussion_boards/ubb/biggrin.gif

[This message has been edited by M/\dm/\n (edited 05-28-2003).]

kehziah
05-28-2003, 05:21 AM
Comments from Carmack:
http://www.tomshardware.com/technews/20030528_044243.html

Only deals with shader 'optimizations' and precision.

Obviously, THG's journalist is not well aware of what's going on (see his statement at the end of the article).

dorbie
05-28-2003, 06:43 AM
Hey, another black mark on the THG copybook, add it to the list.

Carmack IMHO is pretty much saying what's been said here. The screenshots do indicate that NVIDIA fell short in maintaining visual equivalence (I wonder if he's seen them). He seems to agree that full-shader detection and substitution is "grungy" but borderline acceptable if it's functionally equivalent to the limit of what can be displayed (give or take); anything less is out of order, including straight rewrites, and automated optimizations are a good thing.

The clip planes tied to the fixed camera path and the selectively disabled screen clears are totally lost on THG. We've all been talking about shader subtleties so long that they've forgotten all about the other indefensible view-specific hacks added to the driver.

Korval & Carmack are about the only two who have actually mentioned the half vs. 32-bit vs. 24-bit precision issues and one reason for NVIDIA's dilemma here (shader *only*).

BTW, these shaders were D3D, not ARB_fragment_program. This is significant w.r.t. compliance with what Futuremark wrote.

[This message has been edited by dorbie (edited 05-28-2003).]

Ostsol
05-28-2003, 08:02 AM
THG is such a messed-up site... It's unbelievable that they can honestly say and believe that using a type of "optimization" that is absolutely impossible to use in any game is justifiable. Replacing shader programs can be done for any game (that uses them) and can even be considered acceptable as long as the result is indistinguishable from the original, but static clip planes? Um... no.

[This message has been edited by Ostsol (edited 05-28-2003).]

Ostsol
05-28-2003, 08:33 AM
Originally posted by M/\dm/\n:
Dawn can be as fast as on the FX because it's running at 96-bit vs. 128-bit precision http://192.48.159.181/discussion_boards/ubb/frown.gif
Actually, Dawn mostly uses FX12 and FP16; there's not a lot of FP32 in there.

Source: Beyond3D (http://www.beyond3d.com/forum/viewtopic.php?t=6072)


Yeah, talking about angelic ATI, how come their 2% optimization doesn't work in 3DMark build 330? I see only one answer: it's hand-made, just like NVIDIA's http://192.48.159.181/discussion_boards/ubb/biggrin.gif
ATI already admitted to that, so this is no revelation.

[This message has been edited by Ostsol (edited 05-28-2003).]