PDA

View Full Version : nVidia drivers & 3dmark2003



Pages : [1] 2

GT5
05-25-2003, 04:49 PM
seems like FutureMark confirmed the cheating
by nVidia's new drivers which increases performance by 24%...
http://www.theregister.co.uk/content/54/30860.html

Why do nVidia have to cheat for synthetic benchmarks and deceive the gaming community

SirKnight
05-25-2003, 05:00 PM
Well it's not like they are the only ones. Every other graphics card maker out there has proven to have done the same kind of thing. Nvidia was just the last company to do it, well as far as we know. But I could care less about 3dmark. What I look at is performance in games, which is where it really counts. I'm not going to sit there and "play" 3dmark. http://192.48.159.181/discussion_boards/ubb/smile.gif As long as they fix this cheat I'm ok.

-SirKnight

[This message has been edited by SirKnight (edited 05-25-2003).]

GT5
05-25-2003, 05:07 PM
Well FutureMark has filed this report:
http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf

and they have released a new patch as well.

jubei_GL
05-25-2003, 06:20 PM
why would you care about 3dmark scores, when every review clearly showed 5900Ultra ahead of 9800 in games .. What are you going to play more ? games or some Banch program ?

GT5
05-25-2003, 06:43 PM
jubei_GL,
I'm just pointing out that it is wrong to cheat just to increase the scores.

I dont care whether GeforceFX 5900 is the king or not.

and FYI, i dont play games

DopeFish
05-25-2003, 06:49 PM
ATI still have cheats in their drivers in one form or another, and this has been shown with the nVidia fairy demo being run on the Radeon cards. Renaming the executable of the demo to 3dmark's filename, or to quake3.exe, will produce different results to each other, as will they produce different results to using the demos normal filename.

jwatte
05-25-2003, 06:58 PM
Can we at least keep this discussion in ONE thread? It's been all over the NV35 thread for the last week.

harsman
05-26-2003, 01:38 AM
I hate to fuel this thread further but... Dopefish you haven't actually ran the fairy demo have you? Sure renaming the exe causes "different results" but not of the kind you're talking about...

Tom Nuydens
05-26-2003, 02:07 AM
Originally posted by harsman:
I hate to fuel this thread further but... Dopefish you haven't actually ran the fairy demo have you? Sure renaming the exe causes "different results" but not of the kind you're talking about...

No? They're ignoring certain rendering calls in order to improve performance, aren't they? http://192.48.159.181/discussion_boards/ubb/smile.gif

-- Tom

harsman
05-26-2003, 02:16 AM
Nah, I think it's meant to distract so you don't pay attention to fps at all http://192.48.159.181/discussion_boards/ubb/smile.gif hehe.

M/\dm/\n
05-26-2003, 06:15 AM
It's bad that because of s**t like 3DMark companies are trying to cheat & so driver sizes are getting bigger and real problems are forgot.
In 44.03 (WHQL!!!) GFFX5200 has texture releated bugs in Comanche4, IGI2 & polygon bugs in GTA3:ViceCity. And that's what matters, not freakin' 3D Marks (they could better hack the final score http://192.48.159.181/discussion_boards/ubb/biggrin.gif ). It's real pain that FX line drivers are optimized for benchs, because card is simply super http://192.48.159.181/discussion_boards/ubb/frown.gif
And things like Cg helps a lot.
So for now I'm waiting for the next 22MB driver pack :'(

Obli
05-26-2003, 09:31 AM
Originally posted by M/\dm/\n:
... companies are trying to cheat & so driver sizes are getting bigger and real problems are forgot.
This is THE problem of this!

Personally I wouldn't care about what bench is being used. Not too much at least.
The point is having companies cheating MAY actually make developer's work harder.
I have mixed feeling on the last generations of video cards and the fact the drivers were 'optimized' simply confuses me even more.

I personally don't like benchmarks however. Performance is not everything. I really would like customers to understand that. No matter what company you like. It applies to all.

EDIT: added a line about the quote.

[This message has been edited by Obli (edited 05-26-2003).]

davepermen
05-26-2003, 10:05 AM
i'm very dissapointed. a benchmark is like a sport-event at olympic games. and there, cheating would lead to discualification. fair sports!
same should happen with benchmarks, but no, companies like this time nvidia shown to be capitalistic thinking, so they need to cheat to look cool in the benches.

thats so primitive and dissapointing.


and btw, nvidia, your 22MB drivers simply SUCK. i mean, HEY. 22MB!! whats in there? cheats for about all programs, or what? this driver is bigger as some full server software. i don't get what you put in there, but its definitely not just sweet optimized driver code.

anyways, nvidia has shown to not be trustable, relyable, or actually "good" (in good or evil) at all anymore.

nvidia, you should really fokus on get what you get called actually: best drivers in business. you haven't provided them for a long long time, still, tons of peoples believe you do. now quickly get back on the standards you had.

btw, dawn on the radeon looks great http://192.48.159.181/discussion_boards/ubb/biggrin.gif

Zengar
05-26-2003, 11:41 AM
This is THE problem of this!

Personally I wouldn't care about what bench is being used. Not too much at least.
The point is having companies cheating MAY actually make developer's work harder.
[/B]

That's the problem: you don't care about benchmarks, I don't care about benchmarks, no developer would care much about benchmarks. GFFX for developers is simply süppi! But gamers, who actualy buy such cards do care about benchmarks and 1% more performance would be enought to make a desision for them. And Nvidia must sell some cards or ... hmm... the NV40 will never appear. What can they do but cheat?

Nutty
05-26-2003, 01:49 PM
and btw, nvidia, your 22MB drivers simply SUCK. i mean, HEY. 22MB!! whats in there? cheats for about all programs, or what? this driver is bigger as some full server software. i don't get what you put in there, but its definitely not just sweet optimized driver code.


The majority of the size comes from all the language pack files. Removing these generally brings the drivers down to about 9meg, as some sites release these cut down versions.

V-man
05-26-2003, 02:13 PM
It's funny, but I was just reading this article and the guy says :

If you're a 3DMark freak (it's ok, I am one too) and must have the absolute highest score among your circle of friends, .....


from
http://www.pcstats.com/articleview.cfm?articleID=1392

How very smart of him for beeing a "3Dmark freak".


As for 22MB of driver files. I doubt that the "language packs" double the file size. And even if it did, it wastes bandwidth. Actually, it was 18MB when I downloaded. Weird...

richardve
05-26-2003, 03:06 PM
Originally posted by davepermen:
and btw, nvidia, your 22MB drivers simply SUCK. i mean, HEY. 22MB!! whats in there? cheats for about all programs, or what? this driver is bigger as some full server software. i don't get what you put in there, but its definitely not just sweet optimized driver code.

Perhaps it's that nview thingy and all of that other extra stuff that's making the file so large?
NVIDIA Linux drivers are just 7 or 8 MB afaik and does not include the things mentioned before.
(I'm not sure because I don't download drivers anymore.. 'nivia-installer --update' rulez http://192.48.159.181/discussion_boards/ubb/wink.gif)

This is, once again, another reason for switching to Linux http://192.48.159.181/discussion_boards/ubb/wink.gif


anyways, nvidia has shown to not be trustable, relyable, or actually "good" (in good or evil) at all anymore.

Just like ATi and nearly every other company on this planet.

btw. If someone could make Dawn run on Linux, I'll buy a GeForceFX http://192.48.159.181/discussion_boards/ubb/wink.gif

Ostsol
05-26-2003, 09:11 PM
Originally posted by jubei_GL:
why would you care about 3dmark scores, when every review clearly showed 5900Ultra ahead of 9800 in games .. What are you going to play more ? games or some Banch program ?
Well. . . you'll have to note that every one of those games are DirectX 8 (or OpenGL equivalent) or older games. The result is that if those games use pixel shaders, it's integer precision shaders; if not, then it's hardware T&L. The GeforceFX has a dedicated T&L unit and performs integer instructions at at least twice the speed of floating point instructions. These conditions combined with the high clock speed makes it obvious that it'll win against the Radeon 9800 Pro in such games. If a game were to use floating point precision in shaders, however. . . the 9800 Pro would most likely be well in the lead.

Basically, the GeforceFX rocks for DirectX 8 generation and older games. ATI's cards and their very good performance with floating point precision are a bit more future-proof, though.

davepermen
05-26-2003, 09:31 PM
Originally posted by richardve:
Just like ATi and nearly every other company on this planet.

btw. If someone could make Dawn run on Linux, I'll buy a GeForceFX http://192.48.159.181/discussion_boards/ubb/wink.gif

dawn runs on the radeon as well, hehe http://192.48.159.181/discussion_boards/ubb/biggrin.gif (see the main page)

well, at least ati officially agrees we did those things and we'll remove those things. nvidia instead flames other companies for being evil and directly attacking them. futuremark has nothing to do that the gfFX is simply crap hw for dx9..

oh, and yes, seeing gfFX perform well in benches runable on my gf2mx as well doesn't show me much future proof http://192.48.159.181/discussion_boards/ubb/biggrin.gif same for gf3 benches.. where i can see if a card is suitable for the future is currently stuff like 3dmark, like shadermark, and others.

and, if nvidia cheats in 3dmark, who knows how they "optimize" for other benches. i mean, no 24% performance gain.. but a little "tweaking" if some ut-bench for example runs would not really be noteable but give some 5% advantage without bigger problems.

and those 22MB are not only language packs, are they? well.. even if they are.. they should get some workaround.. thats the benefit of unified drivers: all in one. even while you only need the part for your gpu for your language, you get it all..

and the nView and all that could be optionable, too. not everyone needs or wants that. i, for example, have no use for it..

well, anyways.. i have to install the radeon now at this work pc to test if its my mobo at home wich sucks.. wish me luck http://192.48.159.181/discussion_boards/ubb/biggrin.gif (at least humus did not had the bug i have..)

FXO
05-26-2003, 09:31 PM
Is there no way of enabling floating point precision for older games on those new cards?

I would really like to get the last bit of IQ out of games like Quake3.
With the newest cards there is some performance to spare for IQ, so it would be a good idea IMO.

Humus
05-27-2003, 12:21 AM
Originally posted by SirKnight:
What I look at is performance in games, which is where it really counts. I'm not going to sit there and "play" 3dmark. http://192.48.159.181/discussion_boards/ubb/smile.gif

If you're a gamer, then sure, games is what matters. For us developers though 3dmarks tests can be quite useful. Especially the individual tests, like shader performance etc.

richardve
05-27-2003, 12:54 AM
Originally posted by davepermen:
dawn runs on the radeon as well, hehe http://192.48.159.181/discussion_boards/ubb/biggrin.gif (see the main page)

But the Radeon doesn't work well on Linux, hehe http://192.48.159.181/discussion_boards/ubb/biggrin.gif
(afaik)

Humus
05-27-2003, 12:58 AM
Originally posted by Zengar:
And Nvidia must sell some cards or ... hmm... the NV40 will never appear. What can they do but cheat?

It's not like nVidia is in economical trouble or so ..

Humus
05-27-2003, 12:59 AM
Originally posted by richardve:
But the Radeon doesn't work well on Linux, hehe http://192.48.159.181/discussion_boards/ubb/biggrin.gif
(afaik)

It works just fine in Linux. Though I wish ATI would release Linux drivers more frequently.

davepermen
05-27-2003, 02:20 AM
Originally posted by Humus:
It's not like nVidia is in economical trouble or so ..

not as if that would legalice it anyways..

hm, humus. the radeon works well here at the work pc.. looks like i really need a new mobo http://192.48.159.181/discussion_boards/ubb/frown.gif money everywhere, but not in my pockets..

M/\dm/\n
05-27-2003, 04:17 AM
Yeap, the unified drivers s**k, both for ATI and NVIDIA. Why the hell I have to download all the TNT, TNT2, GF2/GTS/PRO/ULTRA/MX/Ti, GF3, GF4/Ti/MX, FX/5200/5600n5800/5900 stuff (to get 3D mark opt http://192.48.159.181/discussion_boards/ubb/biggrin.gif ) when I need only one of them? The same for ATI, but they don't have range that wide http://192.48.159.181/discussion_boards/ubb/wink.gif
I came to conclusion, that NVIDIA is working for developers & ATI for performance.
As for me I'll stick with NVIDIA, you can flame me, but I just love their HW.
Anyways, people who wants max possibilities for min cash will go for FX5200, & that's what sells best.
And actually that fight is pretty cool, you can get HOT, POWERFULL hw in few months for acceptable price http://192.48.159.181/discussion_boards/ubb/biggrin.gif Only if they wouldn't cheat http://192.48.159.181/discussion_boards/ubb/frown.gif

namespace
05-27-2003, 04:35 AM
Humus what drivers do you use for Linux?
Schneider or Ati?

Do they support the new extensions?
vertex/fragment programs, buffer objects...

I must buy a new gfx-card now (have geforce 2 ti, ever run vps in driver-emulation? http://192.48.159.181/discussion_boards/ubb/biggrin.gif) but the new fx is no longer my preferred choice.

Im using Linux and you here everywhere hat ati linux support is bad. I dont know what to do...

J_Kelley_at_OGP
05-27-2003, 05:55 AM
I wouldn't normally get involved in an argument like this but I did some checking just to follow up on someone's hypothesis about the size of the driver download from NVIDIA. If you look at the directory extracted from the single downloadable executable you'll find that, extracted it is approximately 44 meg. If you take out the help files and the language specific resource files the directory size drops down to 19 meg. The actual driver DLL's included in the package (that aren't UI for the control panel or NView or whatnot) are small enough. I personally find it refreshing that I don't have to go searching for the proper driver for my card every time a new version comes out. And when I finally dump my old GF4 MX I know that I'll have the current drivers downloaded already for my upgrade (if I go with NVIDIA).

M/\dm/\n
05-27-2003, 06:24 AM
Yes, you're right about that size, those 22MB's are playing on emotions http://192.48.159.181/discussion_boards/ubb/biggrin.gif
But is there a way to download only necessary files, without waiting 3DGuru or someone else to do 3rd party cutting?

Humus
05-27-2003, 06:58 AM
Originally posted by namespace:
Humus what drivers do you use for Linux?
Schneider or Ati?

Do they support the new extensions?
vertex/fragment programs, buffer objects...

Schneider currently, though as far as I understand things are just newer version of ati's driver that just aren't available from ati.com at the moment.
Yes, they support vertex and fragment programs. The old ATI drivers from like 6 months ago did too. There's no support for VBO yet though, that's a big minus, but it's not that hard to fall back on VAO on the other hand.
Admittedly, I'm not using Linux primarily, though my demos tend to always work whenever it should (that is, it's not using anything that I still haven't added support for in my Linux backend of my framework, such as rendering to texture).

Humus
05-27-2003, 07:00 AM
Originally posted by M/\dm/\n:
Yes, you're right about that size, those 22MB's are playing on emotions http://192.48.159.181/discussion_boards/ubb/biggrin.gif
But is there a way to download only necessary files, without waiting 3DGuru or someone else to do 3rd party cutting?

Bah! Get broadband http://192.48.159.181/discussion_boards/ubb/wink.gif

GPSnoopy
05-27-2003, 08:35 AM
I love the 3DMark serie 'cause of the nice demos. And now with all that nonsense about fair benchmarks 3DMark demos runs slower! http://192.48.159.181/discussion_boards/ubb/frown.gif

davepermen
05-27-2003, 08:45 PM
Originally posted by J_Kelley_at_OGP:
I wouldn't normally get involved in an argument like this but I did some checking just to follow up on someone's hypothesis about the size of the driver download from NVIDIA. If you look at the directory extracted from the single downloadable executable you'll find that, extracted it is approximately 44 meg. If you take out the help files and the language specific resource files the directory size drops down to 19 meg. The actual driver DLL's included in the package (that aren't UI for the control panel or NView or whatnot) are small enough. I personally find it refreshing that I don't have to go searching for the proper driver for my card every time a new version comes out. And when I finally dump my old GF4 MX I know that I'll have the current drivers downloaded already for my upgrade (if I go with NVIDIA).

well... i normally don't switch my languages all month, or gpu's all month (but drivers actually quite often anyways due updates http://192.48.159.181/discussion_boards/ubb/biggrin.gif).

so.. a simple downloadpage, with 2 simple dropdown boxes, one "language", one "gpu", and a click.. then an additional click "nview", one for "keystone", one for this, one for that additional feature (any use for keystone if you don't have a beamer?!?! any use for nview if you don't have multiple screens? well i haven't any..). those things are addons, NOT driver features..

and, for the ones that want, the "Detonator Combo", all the things in one.


anyways, i think the "unified driver" part (means the same for all cards) is not just different files for different cards in one package, but really one file for all of them.. so you can download once, and use all the time. even if its just the driver without extra****, in one language.

oh, and i do have 512kb adsl, and it takes ages to download 22MB.. compared to drivers i normally download wich make swoosh and done at least..

heath
05-27-2003, 09:34 PM
PBuffers on ATI are a complete and utter mess, with their roll it themselves approach and GLX 1.2 support. You have to hand it to NVIDIA, they have significantly better and more mature OpenGL drivers on both Windows and without a shadow of a doubt on Linux.

M/\dm/\n
05-27-2003, 10:20 PM
Saw a nice idea in net (www.guru3d.com) http://192.48.159.181/discussion_boards/ubb/biggrin.gif


For the bigger part of it you should not blame FutureMark for this though, but blame the parties that started cheating. These are both nVIDIA and ATI and I don't care wether its a 2% or a 25% difference, cheating is cheating. I actually applaud nVIDIA for the way they did it, if you do it then have the b@lls to do it well.

Korval
05-27-2003, 11:14 PM
a simple downloadpage, with 2 simple dropdown boxes, one "language", one "gpu", and a click.. then an additional click "nview", one for "keystone", one for this, one for that additional feature (any use for keystone if you don't have a beamer?!?! any use for nview if you don't have multiple screens? well i haven't any..).

Someone who is doesn't even really understand what a "driver" means doesn't want to sit there trying to understand 1001 different options. They want one file to download and fix the problem (and they only do this to fix probelms; they never get new drivers just because there are new ones).

As it turns out, these people don't care that it's a 22MB file, nor how long it takes to download. They just want the problem fixed, quickly without a lot of fuss.

tfpsly
05-27-2003, 11:36 PM
Originally posted by M/\dm/\n:
I actually applaud nVIDIA for the way they did it, if you do it then have the b@lls to do it well.

LOL! http://192.48.159.181/discussion_boards/ubb/biggrin.gif

Ysaneya
05-27-2003, 11:51 PM
They just want the problem fixed, quickly without a lot of fuss.


Quickly ? I wouldn't say that, as not everybody has broadband. If you've got a 56k, you'll care a lot if the file is only 8 Mb, or 22 Mb. At least i know i did, when i had a 56k before :)

Y.

Nutty
05-28-2003, 02:38 AM
http://www.3dchipset.com/index.php


This was released a couple of days ago, but as our new policy is, we aren't going to throw it up online until our FireKat fixes them up! Well he has just done that and now this package is down to 6.06Mb instead of 22Mb. Want to test out these 44.10 drivers? Check out the info below:



The drivers can only be installed via the Device Manager due to all the language files being taken out
All language and help files have been removed to ease Dial-up users
All nVidia cards are supported in this package
No word yet on performance or compatibility
Files are dated: May 5, 2003


[This message has been edited by Nutty (edited 05-28-2003).]

Humus
05-28-2003, 03:09 AM
I actually applaud nVIDIA for the way they did it, if you do it then have the b@lls to do it well.

http://192.48.159.181/discussion_boards/ubb/frown.gif
I'm losing my faith in humanity.

kehziah
05-28-2003, 03:14 AM
Originally posted by Humus:
http://192.48.159.181/discussion_boards/ubb/frown.gif
I'm losing my faith in humanity.

Dismaying indeed...

M/\dm/\n
05-28-2003, 03:48 AM
The fact is - ATI guys were unable to push those 24% http://192.48.159.181/discussion_boards/ubb/biggrin.gif
Anywayz, game test 4 is quite stupid, as it's rendering everything in back to front order, so eliminating all the Z-prerendering stuff, not exactly the best thing to show all the technology for FX. And I can't imagine, where you are going to render sky (fullscreen) in pixel shader, then occlude 75% of it with grass and so on, that also takes a lot of worthy ps/vs time.

namespace
05-28-2003, 04:18 AM
Thx Humus!

But I'll stay with Nvidia. They need my support/money right now http://192.48.159.181/discussion_boards/ubb/wink.gif

Just hope, that there will be a fx with passive cooling soon. I HATE fan-noise...

davepermen
05-28-2003, 05:14 AM
Originally posted by Korval:
Someone who is doesn't even really understand what a "driver" means doesn't want to sit there trying to understand 1001 different options. They want one file to download and fix the problem (and they only do this to fix probelms; they never get new drivers just because there are new ones).[B]
of course. but i just think such people that are able to download and install some driver are as well able to choose their language. and if thei're not sure, they click the "not sure" button and get simply everything http://192.48.159.181/discussion_boards/ubb/biggrin.gif


[B]
As it turns out, these people don't care that it's a 22MB file, nor how long it takes to download. They just want the problem fixed, quickly without a lot of fuss.
yeah, quickly. with a 22MB file, this is NOT quickly. quickly is a mather of maximal 2 minutes. including installation and rebooting.
IFF they don't want to know how to update, please make an auto-update funktion. 22 MB for a driver are stupid. final point. espencially as i can see they got hacked together to 6mb or so in the end.

anyways, nvidia lives in a broadband only world (unlike me..), and they love to put huge downloads online, from cg stuff, to sdk stuff, to demos, to whatever. huge files..

oh well.. i don't need to bother about that. i'm actually more interested why there is no law against how nvidia tries to cheat their customers all the way..

davepermen
05-28-2003, 05:23 AM
Originally posted by M/\dm/\n:
The fact is - ATI guys were unable to push those 24% http://192.48.159.181/discussion_boards/ubb/biggrin.gif
Anywayz, game test 4 is quite stupid, as it's rendering everything in back to front order, so eliminating all the Z-prerendering stuff, not exactly the best thing to show all the technology for FX. And I can't imagine, where you are going to render sky (fullscreen) in pixel shader, then occlude 75% of it with grass and so on, that also takes a lot of worthy ps/vs time.

if i remember right you're the nvidia fanboy 've seen somewhere else..

anyways. its not a mather if 3dmark is stupid or not. its a quest. and nvidia showed to not be able to solve the quest in a fair and simple way fast. they had to cheat to get a good result. this is poor.

like the 1337 counterstrike-cheaters who are bether than the others because they're so cool they can play against the rules given to them.

nVIDIA, thats NOT the way its ment to be played!

its btw a nice feeling to just overclock my radeon and voilà, i've beaten the fx5900 WITH cheating! and i run the patchversion..

even more fun, thats still a first-edition 9700pro. quite an old card now http://192.48.159.181/discussion_boards/ubb/biggrin.gif and quite cheap to find at some places..

oh, and humus. give up the believe in good. i've lost it the moment terrorism got more than a counterstrike group, the moment war is the way to solve peace, and the moment cheating is the way to solve bad hw problems.

fairness isn't cool in this world anymore. all that mathers is YOU stand on top of ALL THE OTHERS. no mather how, no mather if they're all dead in the end so there is no other. the main point is you have to be the highest.

fun is, nvidia does not give any real statement.. or did they? anyone seen nvidia dudes posting in here? noooo http://192.48.159.181/discussion_boards/ubb/biggrin.gif too bad.

the best move nvidia did (marketingstyle) is to release the funny video. showing hey, we can joke about ourselves, we're bether now.

while thei're not.. http://192.48.159.181/discussion_boards/ubb/biggrin.gif

M/\dm/\n
05-28-2003, 06:14 AM
Actually I'm turning into NVidiot in the eyes of others, because I've found some stupid things that are done through a$$ and I can't understand why.

I guess you still can't answer why 3DMark==tomorrows 'sunrise', is doing the stuff through a$$. The Game test 4 IS DEFINATELY STRESSING FP, AND THE CARD WITH LOWER PRECISION WILL WIN NO MATTER WHAT! ESPECIALLY WHEN THERE IS A LOT FP WORK, THAT IS ++75% HIDDEN, AND ALL THIS S**T WORK IN FAVOR FOR ATI. BRING Z CULLING IN AND THERE'S UNREAL STORY as unneeded fp computations will be dropped and main workload will lean toward vertex computation and then Nvidia with 500MHz or 425MHz core will go up. So I THINK 3D Mark IS shifted towards ATI.
And if FX5900 is capable of keeping up with ATI at fp with their higher precission fp then my sympathies goes to NVidia. Unfortunately, end users can't even imagine the difference between 96-bit computations an 128-bit computations & even worse they don't see it that easy (though you CAN fell it a bit) and you can't push all the backtofront frontoback stuff in their heads that easy, result => ATI almost wins http://192.48.159.181/discussion_boards/ubb/biggrin.gif

kehziah
05-28-2003, 06:34 AM
Originally posted by M/\dm/\n:
And if FX5900 is capable of keeping up with ATI at fp with their higher precission fp then my sympathies goes to NVidia.

And if 9800Pro can beat FX5900 with its lower clock speed, then my sympathies goes to ATI http://192.48.159.181/discussion_boards/ubb/rolleyes.gif

Strong arguments, really.

M/\dm/\n
05-28-2003, 07:09 AM
With doing things that you can do really fast, but that are obstructed later anywayz http://192.48.159.181/discussion_boards/ubb/biggrin.gif At least, cards that tries to do the same UNNEEDED stuff more precisely (as they are forced to skip Z-test) are kicked away http://192.48.159.181/discussion_boards/ubb/biggrin.gif
The situation is: you must walk 10 meters to get to the target (fast), but you are forced to do 20 spins at the beginning, ATI can spin werrrrry fast, but clumsy, Nvidia does the same slower & a bit cleaner, but in the end everything that matters is time you've spend in the way http://192.48.159.181/discussion_boards/ubb/biggrin.gif I prefer card that can walk fast, as I'm not going to watch it spinning on the place I'll (drop that step with Z-oclussion) http://192.48.159.181/discussion_boards/ubb/biggrin.gif

[This message has been edited by M/\dm/\n (edited 05-28-2003).]

DopeFish
05-28-2003, 07:54 AM
Originally posted by harsman:
I hate to fuel this thread further but... Dopefish you haven't actually ran the fairy demo have you? Sure renaming the exe causes "different results" but not of the kind you're talking about...

My gf4mx wont run it, so no, I havent done it personally. My radeon9500 crapped out, so Ive ordered a second one from ATI, and am still waiting for it after 2 months. They havent shipped it yet, and they keep giving me dates further and further away.

My personal experiences with ATI drivers when my old card worked.. crashing.. incorrect rendering results.. texture corruption... infact swapping 2 lines of code around that had absolutely nothing to do with rendering changed the results of how some things rendered. And this was all with the very latest drivers at the time.

As for the results of the dawn demo... search a little and youll find a lot of first-hand verifications of it.. whether they are true or not I dont know for sure, but the sheer number of them certainally seems to say so.

Ostsol
05-28-2003, 08:29 AM
M/\dm/\n:

1) Do you believe that 3dMark is meant to be a test of a video card's rendering capabilities or how well the IHVs "optimize" their drivers to render the scene?

2) Do you think it's appropriate to incorporate an "optimization" that can never be used in any game (I'm talking about the static clip planes)?

My own answers to these questions:

1) It is a test of rendering performance. Everyone has to render the scene the same way, so it's a fair comparison, regardless of how efficient or inefficient code is.

2) Any optimizations that are added to drivers should be those that can be used in games. Also, the end result of these optmizations should be indistinguishable from the output intended by the application's developers.

harsman
05-28-2003, 09:20 AM
Alright, I'll make it clear: If you change the name of the exe to quake3.exe or 3dmark03.exe the fairy loses her clothes. She gets nekkid. That's what all the juvenile giggling was about, that's what all those posts you've seen refer too. Lighten up http://192.48.159.181/discussion_boards/ubb/smile.gif

GPSnoopy
05-28-2003, 09:26 AM
Btw, where can you find a detailed description of 3DMark2003 tests?
I mean, how is it known what's the rendering order of 3DMark or the shadows methods used. I've often seen discussions here about how 3DMark algorithms work, but I wanted to find a reliable source.

I've found a whitepaper on Futuremark website but it's just marketting crap hidden by some technical terms.

matt_weird
05-28-2003, 09:50 AM
..i remember i saw some transcription of some TV-show with JC/nVidia/ATI/FutureMark involved (just don't remember where http://192.48.159.181/discussion_boards/ubb/frown.gif )... And isn't that was nVidia attacking ATI by the reason of supposed ATI cheating under that benchmark? ??? And isn't that was nVidia so mad about that synthetic game test in there?? ??? And isn't that was nVidia considering the actual game tests as a better way to go with the video-hardware testing? ???

DopeFish
05-29-2003, 01:47 AM
Originally posted by harsman:
Alright, I'll make it clear: If you change the name of the exe to quake3.exe or 3dmark03.exe the fairy loses her clothes. She gets nekkid. That's what all the juvenile giggling was about, that's what all those posts you've seen refer too. Lighten up http://192.48.159.181/discussion_boards/ubb/smile.gif

Its not what the change is that is the point tho, its the fact that there is change.

M/\dm/\n
05-29-2003, 02:29 AM
I can't remember exactly where, but one of Futuremark's beta members pointed out that in game test 4 rendering is made back to front, and that all buffer is filled with sky, and then occluded, so making this test real stresser for FP in other word for NV(AND THAT'S WHY NVIDIA IS TRYING TO CHEAT AS THEY CAN'T MATCH IN SPEED IN FP BECAUSE OF FP24 vs FP32)!
I can't accept when fp cycles are wasted on unneeded job to show how fast it is, if they can't write the shaders that shows real difference, but that are not overwritten anyway, then they are unneded it's something like:
for(int i=0;i<500;i++)
a=sin(i);
in fp, the results are skipped, but cycles wasted, and if card tries to calculate sin more precilesly it gets shot in leg MORE.
So in test 4 it works like
calculate sky in CARDS MAX PRECISION THAT IS >FP24 write to framebuffer, then calculate grass, ground trees in same vp/fp and overwrite 75% of the tuff jobe done, trash it!
But in realtime environment I wouldn't go this way, as Z-oclussion in such cases can make rendering A LOT faster, as you are doing a lot of unneeded job, and because of such approach ati clearly wins because of 96-bit format.

BTW, talking about cheating, I ran 3DMark on FX5200@1024x768+everything maxed out in both first build and 330 and the funniest thing was that with all settings equal, build 330 scored 1 3D Mark more http://192.48.159.181/discussion_boards/ubb/biggrin.gif

davepermen
05-29-2003, 04:02 AM
madman. THATS WHAT A BENCHMARK IS FOR! NOT TO BE INTELLIGENT CODED, BUT TO TEST OUT IF SOME HW CAN PERFORM WELL!!
what you think pc-benchmarks work like? possibly exaclty a simple for(i from 0 to 500000) x = sin(x);
3dmark looks nice, but in the end its just that: DRAWING DRAWING DRAWING. its NOT important if they draw useless stuff. they could draw white textures as well, and you would not see anything.

3dmark does draw some stuff. its the gpu's and drivers job to do that work, and to show they can do it fast. nvidia FAILS to show they're able to do it fast, so they cheated to look like thei're able to do it fast. they simply CAN NOT. and that is the fault of NVIDIA. NOT 3dmark. 3dmark gave a quest. nvidia cannot win that.

and the fp24 against fp32. thats a bad choise nvidia did. its known now for more than 1 year that the minimum requirement for next hw will be fp24. NVIDIA CHOOOSE to NOT support fp24 but only fp16 and fp32. everyone KNEW that fp16 will be too low to get accepted, its under the minimum requirements. and everyone KNEW nvidia hw will be slower on fp32 than ati on fp24. but it was THEIR choise!

nvidia did a lot wrong with the nv30++ hw. but it was their choise. they don't have to blame now 3dmark that THEY designed hw that is NOT able to solve todays requested tasks fast. any good gpu can handle 3dmark very well. just not nvidia cards.

and if you stop nvfanboying, but really look back, you would note that the same was even true for gf3 and co. those gpu's don't follow the dx8 line really, but the additional features are not usable in dx.

result: they are much worse performing than a real dx8.1 hw for example, namely radeon8500+.

the nvidia cards DO have power. but not where they need them.

same as not supporting general floating textures.. bull**** that is.

Humus
05-29-2003, 04:37 AM
Originally posted by DopeFish:
Its not what the change is that is the point tho, its the fact that there is change.

It's the wrapper that does the change!!! Not the driver.
Why don't you just read up on it instead of making baseless claims?

V-man
05-29-2003, 05:22 AM
>>>result: they are much worse performing than a real dx8.1 hw for example, namely radeon8500+.

the nvidia cards DO have power. but not where they need them.<<<

That's more or less true. The 8500 was a good competitor to the Gf3 line but I wouldn't put my money on the 8500. It's not as simple as "the 8500 beats the Gf3 on every point, therefore the 8500 is superior" or vice-versa.
For that geenration, the Nvidia cards had better driver support and more interesting extensions.
http://www6.tomshardware.com/graphic/20010814/radeon8500-14.html

and see the related pages there.
Bottom line : the 8500 you say? **** that!

It's only the R300 that caught my eye. Now ATI impresses me.

M/\dm/\n
05-29-2003, 05:29 AM
But this time Futuremark is doing TOTALLY synthetic bench in favor for ATI, they are testing ATI's strong sides, & undoubtfully NV's will be faster in DOOM3 and other upcoming games, though most likely OGL ones.
If there would be stress for ATIs hierarchial Z buffer and NVIDIAs Z oclussion culling, I fell there would be another story, but WE CAN'T DO THE TEST THE WAY IT HURTS OUR BETA MEMBERS, CAN WE? Especially knowing the fact that those marks influence sales a lot.

DopeFish
05-29-2003, 06:44 AM
Originally posted by Humus:
It's the wrapper that does the change!!! Not the driver.
Why don't you just read up on it instead of making baseless claims?


Originally posted by DopeFish:
As for the results of the dawn demo... search a little and youll find a lot of first-hand verifications of it.. whether they are true or not I dont know for sure, but the sheer number of them certainally seems to say so.

I should rest my case there, but shall go on:

And tell me, what reason would this wrapper have to change how it is rendered based on executable filename? The wrapper which was made to run the dawn demo on ATI hardware having special cases for quake3.exe and 3dmark.exe?

I have read around, and everything that Ive read has said that changing the executable filename results in different rendering results. Perhaps you should read my post instead of making baseless claims.

MZ
05-29-2003, 07:40 AM
THATS WHAT A BENCHMARK IS FOR! NOT TO BE INTELLIGENT CODED, BUT TO TEST OUT IF SOME HW CAN PERFORM WELL!!Let me remind you:
a) 3Dmark shows of "Simulating gaming environment" label
b) 3Dmark consists of:
. - Four "Game" Tests
. - "Theorethical" Tests
c) 3Dmark (by design!) does not include results of the "Theorethical" tests in final result.

The "stressing hardware" in Futuremark's way, means running "stupid" (naively inefficient) algorithms - just to give more (useless) workload for GPU.

This design is in obvious contradiction with the "Simulating gaming environment" slogan. Real games don't do this (at least not intentionally http://192.48.159.181/discussion_boards/ubb/smile.gif).

There is supposed to be excuse for it: the benchmark is said to try to give premise of performance also in *future* games. But this claim is obviously invalid - why should games start to use "stupid" algorithms in future? There will *always* be better uses for spare GPU power than wasting it.

Conclusion
===========
If there exists any place for "stupid" algorithms, then it is in "Theorethical" test. Not in "Game" tests. 3Dmark'03 is simply inconsistent with its own assumptions.


3dmark gave a quest. nvidia cannot win that
I think it would be easy to create benchmark which "proves" nv3x is faster than R3xx. it could use shaders with complex swizzles (this would increase ins. count) , or use screen space derivatives, or "require" >=33 constants, or "require" >=33 texture samples (each would force R3xx to multipass).

Finally, at the end of the frame, it could cover half of the screen with DX7/8 game scene http://192.48.159.181/discussion_boards/ubb/smile.gif

But this is not the point. Real game engine programer would not "give quest" to IHVs, but would do completely opposite: in his own interest he would do his best to design and optimize code for HW he is targeting.

If "Game" benchmark is meant to be valid and fair, it should follow this way. Which 3DMark's "Game" test doesn't.


[This message has been edited by MZ (edited 05-29-2003).]

NitroGL
05-29-2003, 07:49 AM
But think about this:
If an IHV could get the card to perform at high speeds with an inefficiently designed benchmark, just think how fast it would be with an efficient one!

Edit:
Before I get beat in the face with a trout, I know that this isn't the point of all this talk. I'm just saying.

[This message has been edited by NitroGL (edited 05-29-2003).]

Tom Nuydens
05-29-2003, 08:03 AM
Originally posted by DopeFish:
And tell me, what reason would this wrapper have to change how it is rendered based on executable filename? The wrapper which was made to run the dawn demo on ATI hardware having special cases for quake3.exe and 3dmark.exe?

Yes! Have you even seen what these "special cases" are?

Without renaming the executable, Dawn on a Radeon looks more or less normal, except for the issues reported earlier with the hair and the eyelashes. When you rename the executable to Quake3.exe, Dawn's leaves disappear. When you rename it to 3DMark03.exe, both the leaves and the wings disappear, leaving you with a naked human.

If you rename the executable when running on an NVIDIA, nothing happens. Ergo, it must be the wrapper that's doing the filename detection, and not the demo itself.

Get it now? The hair and eyelashes artefacts on ATI cards occur regardless of whether the executable has been renamed or not, and the whole renaming thing is nothing more than a nude patch built into the wrapper.

-- Tom

GPSnoopy
05-29-2003, 08:42 AM
What MZ just said is along the lines of what I was thinking.

Sometimes I even wonder whether Futuremark ever knew what they really wanted to do with 3DMark.

"Lets make a synthetic benchmarks that runs games tests and gives a synthetic result that will tell how well all other games run. Oh, and wouldn't it be nice if it could make your coffee in the morning too?"

Ostsol
05-29-2003, 08:59 AM
I still don't understand how GT4 is biased towards ATI. . . GeforceFXs have poor floating point performance, but complaints regarding the requirement of FP precision would be better directed at Microsoft -- or at NVidia for designing the GPUs that way. In any case, the scene must be rendered the exact same way regardless of what video card is used, so how is this not fair? Also, if GeforceFXs are horrible at FP precision, doesn't that just plain make them bad for PS2.0? It's not bias, simply observation of the obvious.

Korval
05-29-2003, 09:01 AM
undoubtfully NV's will be faster in DOOM3 and other upcoming games

JC doubts it. Indeed, he said that, under the ARB_vp/fp path, ATi wins, but under the NV-path, nVidia wins. Why? 24 vs 32-bit floats.


If there would be stress for ATIs hierarchial Z buffer and NVIDIAs Z oclussion culling, I fell there would be another story, but WE CAN'T DO THE TEST THE WAY IT HURTS OUR BETA MEMBERS, CAN WE?

The jury is still out about whether or not ATi's z-test method is better or worse than nVidia's. Indeed, some posts on this board have asserted that ATi's z-tests are better. Historically, in game benchs that have low overdraw (generally, Unreal-based games), ATi's cards tend to do much better than in low-overdraw circumstances.

I don't know where the rampant nVidia fanboy-ism is coming from. Quite frankly, the GeForceFX line before the 5900 was significantly weaker than the ATi equivalents. Only the 5900 gives ATi any real competition.

As for 3DMark not using benchmarks that hurt their beta members... this doesn't somehow justify nVidia's cheating. If the test is unfair, show that it is unfair. Keep harping on the idea that it is unfair. However, nVidia must consider it a legitimate test, since they devoted driver development resources to cheating on it.


If there exists any place for "stupid" algorithms, then it is in "Theorethical" test. Not in "Game" tests. 3Dmark'03 is simply inconsistent with its own assumptions.

So, how does this justify nVidia's cheating?

If nVidia is able to detect 3DMark, they should also be able to prevent rendering in the application, too. As such, they should just do that to prevent people from benchmarking 3DMark.

Ostsol
05-29-2003, 10:04 AM
However, nVidia must consider it a legitimate test, since they devoted driver development resources to cheating on it.
I wouldn't go that far. . . I'm guessing that NVidia just felt that 3dMark03 scores would have enough of an influence on sales as to justify spending time and resources developing ways to use 3dMark to their own advantage, despite their assertions against the benchmark's validity.

[This message has been edited by Ostsol (edited 05-29-2003).]

AndrewM
05-29-2003, 11:53 AM
Korval:

I Like how you assume (as are a lot of people) that it took a _whole_ lot of effort for nvidia to do these cheats. I doubt they spent more than 2 days on this.

davepermen
05-29-2003, 12:54 PM
Originally posted by MZ:
I think it would be easy to create benchmark which "proves" nv3x is faster than R3xx. it could use shaders with complex swizzles (this would increase ins. count) , or use screen space derivatives, or "require" >=33 constants, or "require" >=33 texture samples (each would force R3xx to multipass).
well. and then explain me how to fit that into a dx9 test. you could as well write a test wich uses floatingpoint textures, and voilà, nvidia could not run any. thats no benchmark for dx9 then. 3dmark is. and the test shows, nvidia sucks at it. last time ati did. this time nvidia does. its fun how last time ati got blamed to suck, but this time you all stand behind nvidia, and bitch over futuremark.

their test is valid, and their test is good. there is no problem with 3dmark at all. it does not need to be a supertweaked optimized till the last engine. it runs fine on my radeon the way its made. i don't except anything more. i buyed a dx9 card, i run a dx9 test, and it runs well. tell me any problem i shall have.


btw, the nude dawn is funny http://www.opengl.org/discussion_boards/ubb/biggrin.gif

*Aaron*
05-29-2003, 02:47 PM
OK, I have to put my two cent in.

I'm not defending nVidia's actions, but am I the only one who sees a conflict of interest in Futuremark's beta developers program? They take large amounts of money from hardware manufacturers whose products their software evaluates. Would it be appropriate if a hardware review website accepted hundreds of thousands of dollars directly from the companies whose hardware they were reviewing? And why do companies need a sneak peak at the 3dmark software (and the code!) before it is released? So they can tweak their drivers to suit the tests, that's why. IMO, 3dmark has exceeded its usefulness.

And shame on you, nVidia. Not so much for cheating, but for doing it so stupidly. How could they not have been caught? Perhaps not as easily spotted as executable filename identification, but still pretty bad. It's like writing the answers to a quiz on the top of your hand.

There, now I've posted my obvious and redundant comments, and all I have to show for it is $-0.02 http://www.opengl.org/discussion_boards/ubb/wink.gif

MZ
05-29-2003, 03:36 PM
Originally posted by davepermen:
well. and then explain me how to fit that into a dx9 test.
By using ps_2_x.

john
05-29-2003, 03:51 PM
3DMark is a toy benchmark; I don't pay too much attention to how a synthetic benchmark comes up with its own set of magic numbers. You can run any number of synthetic benchmarks and prove almost anything you want because it all comes down to how you collapse different results from different tests into the final number. Ever seen the Simpsons epsiode to see who's the best bar keeper? The chick won the first two tests but lost the third test; but because the third test was weighted 95%, Moe won in the end. It's the same story.

I think graphics vendors face an up-hill battle. I'm sure everyone would agree that graphics chips are complicated devices, but the tests are trying to enforce a uniform execution model across all hardware vendors, and that is, in my opinion, Not A Good Thing.

For example, CPUs are also complicated devices but CPU tests are ~targetted~ for a particular architecture. How so? Well, some the tests involve compiling code on a CPU and running it to see how well the code executes. The code that is being used to test the cpu has been ~scheduled for that cpu~ by the compiler. Instruction scheduling (+ cache line prefetching + a myriad of other optimisations that rely on knowing the architecture of a CPU) is ~not cheating~, but it IS hardware specific optimisations. I am not saying that this is a bad thing for CPUs. My point is that 3DMark's approach to trying to come up with the *same* execution flow to all hardware pipes is not a fair test.

At the end of the day, the only results a buyer should be interested in is how well the card performs for what they ask of it.

FYI, I use both an ATI and an nVidia card.

cheers
John

M/\dm/\n
05-29-2003, 09:46 PM
Finally we are close to conclusion!

One thing is for sure, the more complicated pipes we get, the harder becomes task to write unified tester, it's a bit like Intel Celeron vs Intel Pentium vs AMD MHz value (and matching FSB's values), but the pipe structure for CPU is a bit more clearer than one of GPU.

Talking about unsuccesfull FX line, I'd prefer FX5200 over Radeons entry lavel card, & FX 5600 isn't that bad either (especcialy with new drivers), check tech-report, for more info, reliable site based not only on 3DMark.

[This message has been edited by M/\dm/\n (edited 05-29-2003).]

Ostsol
05-29-2003, 10:21 PM
I say NVidia should have just kept it simple and done everything in floating point precision. Just stay with two precisions, as is supported in ARB-extended OpenGL and in DirectX. FP16 may not be quite as fast as FX12, but at least you have the advantage of a high dynamic range.

dorbie
05-29-2003, 10:46 PM
Geeze madman, try not to live up to your name.

This isn't about one vendor vs another.

ALL vendors are tainted (IMHO not by this episode but by others).

There's no need to act as an apologist for anyone, it doesn't matter what hardware your current www.opengl.org (http://www.opengl.org) html is being rendered on, screw it, you don't owe these guys a dime. Just hitch your wagon to the fastest tow and enjoy the ride. That's what these guys set out to create, take them up on the offer. Pony up $500 and get the bad assest graphics known to man, love it, REALLY love it but don't take sides, because any side would tear you a new one financially if they thought they could. Love the competition, that's your REAL friend.

If someone cheats then call them on it, yea there's some shady practices but I've been introspective and thoughtful in other threads already(don't laugh), why bother here, you know where I stand on this.

[This message has been edited by dorbie (edited 05-30-2003).]

Tom Nuydens
05-29-2003, 11:29 PM
Originally posted by john:
For example, CPUs are also complicated devices but CPU tests are ~targetted~ for a particular architecture. How so? Well, some the tests involve compiling code on a CPU and running it to see how well the code executes. The code that is being used to test the cpu has been ~scheduled for that cpu~ by the compiler. Instruction scheduling (+ cache line prefetching + a myriad of other optimisations that rely on knowing the architecture of a CPU) is ~not cheating~, but it IS hardware specific optimisations.

Is the D3D HLSL compiler built into D3D itself, or is it in the drivers? If it's in the drivers, using HLSL would go a long way towards this goal, no? Of course we don't know if 3DMark uses it or not...

-- Tom

knackered
05-30-2003, 01:30 AM
I like your new attitude, dorbie!
Pretty soon you will succumb, and join C++ and myself on the dark side. http://www.opengl.org/discussion_boards/ubb/smile.gif We're having jelly for tea.

harsman
05-30-2003, 01:32 AM
The Direct3d HLSL compiler lives inside the microsoft written D3D-runtime, the driver just gets the pixel/vertex shader assembler. One of the big differences between the Direct3d HLSL and glslang.

Humus
05-30-2003, 03:34 AM
When glslang comes around which directly targets the underlying hardware we have the perfect tool for fair comparisons. It's all up the the hardware vendors to make it run as fast as they can on their platform.

matt_weird
05-30-2003, 03:54 AM
knackered, jedi's light saber is looking for yer traitor's soul! http://www.opengl.org/discussion_boards/ubb/mad.gif (OT)

Coconut
05-30-2003, 03:59 AM
Oh no, not again. Sooner or later, knackered will claim he got a bigger and longer light saber.

dorbie
05-30-2003, 04:42 AM
Originally posted by knackered:
C++ and myself <snip> We're having jelly for tea.

KY Jelly?

Tom Nuydens
05-30-2003, 05:07 AM
Originally posted by Humus:
When glslang comes around which directly targets the underlying hardware we have the perfect tool for fair comparisons. It's all up the the hardware vendors to make it run as fast as they can on their platform.

Ah, but it won't allow them to sort the geometry front-to-back, so they'll still need to insert hardcoded clipping planes!

-- Tom

dorbie
05-30-2003, 06:54 AM
Ouch Tom, don't forget the intelliclear (tm) capability.

BTW I do have a possible cool idea for speeding up the screen clear. It would eliminate the need to 'optimize' the screen clear out of benchmarks and work on real world stuff too (a small bonus).

The basic idea is to defer the actual pixel color clear until the end of the frame and limit it to unwritten pixels.

One implementation would be to do a tag clear at the start with no color clear then at the end of the frame (whenever a swap or pixel read happens) you use 'improved' coarse z hardware to tell you which cells (a cell is what I'm calling a coarse z region) have unwritten color pixels since the last clear call, and you go in and clear the unwritten pixels in those cells. This would possibly assume zbuffer rendering and clear at the far plane or use the tag clear z information to only clear unwritten pixels. Certain types of rendering would force it to bail.

This would only work for opaque writes, any blended writes to a cell would trigger the clear in mid render but *only for that cell*. Either that or you build the clear color into the fragment rasterization stage hardware and just pipeline the "clearcolor blend on first write".

It would be a win for some apps, I don't know how much modification coarse z & screen clear hardware would need to support this, but the outline is there.

Another implementation; Depending on the hardware it may just make sense to clear an entire cell in hardware whenever any pixel it contains is written to for the first time, then clear unused cells at the end. Hmm.... I'm thinking that a region based fetch from the framebuffer could clear the fetch cache to the clear color on first fetch of any cell instead of actually fetching, all you have to do is have a flag for each cell, indicating written or unwritten, then at the end you only write clears to the unwritten cells. All you really clear when a clear is actually issued would be unwritten/written list. It should work for depth, stencil & alpha information too. Since the clear is on chip and you're saving a read & write on 'touched' pixels (plus a bit) it should be quite fast.

All very architecture dependent.


[This message has been edited by dorbie (edited 05-30-2003).]

Nutty
05-30-2003, 08:09 AM
Why even bother clearing the screen at all?? Most games dont. Unless you have parts of the screen that are not-rendered to at all, its pointless.

matt_weird
05-30-2003, 08:11 AM
http://www.opengl.org/discussion_boards/ubb/rolleyes.gif

matt_weird
05-30-2003, 08:21 AM
...someone was asking for a fast motion blur -- here ya go http://www.opengl.org/discussion_boards/ubb/tongue.gif

Humus
05-30-2003, 08:51 AM
Originally posted by Tom Nuydens:
Ah, but it won't allow them to sort the geometry front-to-back, so they'll still need to insert hardcoded clipping planes!

-- Tom

Well, fair under the assumption there's no cheats involved of course http://www.opengl.org/discussion_boards/ubb/smile.gif which may be a too wild assumption unfortunately http://www.opengl.org/discussion_boards/ubb/frown.gif

Ugh, possibly related, somebody reported that performance in my Mandelbrot demo is divided by a factor of three by simply uncommenting one of the instructions I had commented away in the end of the shader.
Edit: On the GFFX that is, R9x00 saw nearly no difference.

[This message has been edited by Humus (edited 05-30-2003).]

Humus
05-30-2003, 09:01 AM
dorbie,
I think something along that line is already implemented in the R300. There is/was a FastColorClear registry entry, though I have never really fooled around with it to see if it actually does something. Maybe someone with more direct info can fill in some details? (Yes Evan! I'm looking at you http://www.opengl.org/discussion_boards/ubb/smile.gif)

dorbie
05-30-2003, 10:43 AM
Humus, best start another screen clear thread, nobody from a card company is going to post to this thread :-). BTW I think Tom was being sarcastic.

SGI had a tag clear when you could simply invalidate the depth buffer where you could guarantee that you were going to touch every screen fragment. Software has obviously tried other approaches even on vanilla OpenGL like skybox depth writes instead of a clear.

As for making clear fast Nutty, sure when you touch every pixel you wouldn't need to clear anyway, you still need to clear depth though, and that might still be slow. A simple depth clear would still benefit from my approach, you wouldn't need a tag clear if you took the second option of generating the region buffer on chip on first region fetch. Making it more general and winning on color & stencil clear only makes sense.

[This message has been edited by dorbie (edited 05-30-2003).]

Coriolis
05-30-2003, 12:21 PM
Simple depth clears are already fast with hierarchical Z.

It seems like your fast color clear could very simply be done in unextended OpenGL by drawing a full-screen quad with glDepthRange(1, 1) and glDepthFunc(GL_EQUAL), so long as none of your screen geometry draws exactly at the far clip plane. Even if it does, a suitably tweaked depth range for normal geometry would make this work.

Nutty
05-30-2003, 12:39 PM
You should never clear your buffers with a big quad. If its faster, then the driver writers want a slap.

Adrian
05-30-2003, 01:08 PM
Originally posted by Nutty:
You should never clear your buffers with a big quad. If its faster, then the driver writers want a slap.

Funny you should say that... a quote from one of my programs

// glClear has a 10ms stall, whereas although drawing a quad takes 10ms, it returns immediately without stalling.

For clearing small buffers it can be quicker to use a quad.

When I say small I mean <256x256.


[This message has been edited by Adrian (edited 05-30-2003).]

Humus
05-30-2003, 01:39 PM
10ms stall? That would mean you can never go above 100fps.

Adrian
05-30-2003, 01:52 PM
Originally posted by Humus:
10ms stall? That would mean you can never go above 100fps.

Sorry, I meant 10us.

Nutty
05-30-2003, 02:53 PM
On what hardware?

dorbie
05-30-2003, 03:26 PM
Coriolis, I'd actually thought of that and you are correct, but it doesn't handle most fragment blending well, I also thought zfar would be an issue but I don't think so now that I think about it. Performance is the greater concern. Obviously for opaque depth written stuff it would work functionally. As a first blush attempt you could do this in the driver for some scenarios. My main concern over the depth tested polygon fill approach would be that in the absence of a real implementation (as described originally) it would be slower comapred to a real clear in some circumstances by the time you throw your other buffer clears in there anyway and where you don't have a lot of fill. It would depend on the existing implementation and how much non cleared screen coverage you had, but on the 3DMark 2003 space stuff a polygon implementation would be more of a loss than a win (absent the human heuristics added to the fixed path:-), a full hardware implementation would be a win (IMHO).

The other issue of course is what resources an implementation would take relative to existing screen clear hardware.

Maybe it's worth it maybe it isn't I just figured with someone hacking a screen clear disable in the driver you might want to try and get that kind of benefit fair & square without any artifacts or pitfalls.

For all I know they may already do something like this.

Nutty, to be fair to Coriolis, the buffer big quad clear suggestion would rely on coarse z to reject most of the fill in the scenario he's thinking of, but as I said it would be very tricky to see a win in the general sense. You need heuristics there that can only exist in the application and that's no use when you're trying to accelerate an implementation unless you're prepared to ahem.... well we've covered that already.

[This message has been edited by dorbie (edited 05-30-2003).]

Adrian
05-30-2003, 04:00 PM
Originally posted by Nutty:
On what hardware?



GF4600 + AMD XP2000.

knackered
05-31-2003, 09:43 AM
Originally posted by Nutty:
You should never clear your buffers with a big quad. If its faster, then the driver writers want a slap.

If it's faster, then the driver writers would do it that way in the first place.

It's incredible that the speed of a buffer clear still gets talked about on newsgroups.

dorbie
05-31-2003, 09:56 AM
When one manufacturer sees fit to hand code selectively disabling screen clear in a fixed path scenario then a discussion of how to effectively accomplish that optimization without cheating is obviously appropriate.

Adrian
05-31-2003, 04:27 PM
If it's faster, then the driver writers would do it that way in the first place.

It's incredible that the speed of a buffer clear still gets talked about on newsgroups.

A 10us advantage in clearing small buffers is probably not an issue for 99.9% of apps but when you need to do it 20,000 times per frame, for radiosity, it is.

[This message has been edited by Adrian (edited 05-31-2003).]

zeckensack
05-31-2003, 07:31 PM
You can't do it any faster than hardware logic ...

The NV2x and up apparently directly use their memory controllers to perform any buffer clears. That means that these chips easily exceed their fillrate limits on a clear, which you simply can't do with a quad. A Geforce4Ti4200 clears any buffer right at its 8GB/s bandwidth limit
Ie, in 16 bits mode, color clears operate at 4 Gpix/s. Clearing only an 8 bit stencil buffer goes up to 8 Gpix/s.

(this has some interesting implications that do not match the popular "Tiling is bad" statement too well ...)

R200's color clear operations are fillrate bound. Same thing if you clear only the stencil buffer.

A combined depth/stencil clear clocks in at an 'effective' 100GB/s, the same speed is reached with only a depth clear (though it's of course only 75GB/s then). That's 25 Gpix/s, if you prefer, on a card that has a fillrate of only 1 Gpix/s.

(the above measured with a Radeon8500LE; 250/250MHz)

Same thing basically holds true for R300 ... apart from a little driver strangeness ...

And also, color clears are vastly improved on R300. I've measured 157GB/s for a pure color clear on a Radeon 9500Pro. Not sure how they managed to do that.

So, looking forward, IMHO this is pointless.

[This message has been edited by zeckensack (edited 05-31-2003).]

[This message has been edited by zeckensack (edited 05-31-2003).]

Adrian
05-31-2003, 11:10 PM
I've never said the fill rate of drawing a quad is higher than that for glClear just that drawing a quad behaves more async than glclear. There is a higher initial fixed cost with using glclear which only shows itself when clearing small buffers. All I can say is that when switching between glclear and drawing quads to clear zbuffer/color, quads are faster for small buffer clears where fill rate is not the main issue.

dorbie
06-01-2003, 01:16 AM
Even if your memory controller performs the clear, the approach mentioned here could eliminate 2 framebuffer transfers to / from memory, I don't care how clever your memory controller is it ain't magic.

Yes looking forward this is pointless, but if it is pointless it will be because someone looked at the problem and dealt with it, not because they scoffed and did nothing about it.

[This message has been edited by dorbie (edited 06-01-2003).]

jwatte
06-01-2003, 09:32 AM
Dorbie,

Suppose the DRAM had a "clear entire page" strobe. Thus, the memory controller just had to open the correct page, strobe it ONCE (one cycle), and then open the next page. In fact, it probably already pipelined the open of the next page :-)

This would only work when clearing to 0, or whatever they managed to wire into the RAM chip, though. Perhaps someone could measure whether clearing to, say, 0x305070, is slower than clearing to 0?

dorbie
06-01-2003, 10:12 AM
Interesting, that's really getting beyond my technical ability.

However, even if you could black clear the screen instantly, my suggestion of generating an unwritten region's (or even fragment's?) clear on chip on a first write would be a win I think, and not just because it can support a colored clear. You don't have to read the cleared color back from the memory for the first fragment operation on it, and it could work for color. Several vanilla operations in OpenGL could set this scheme in motion according to obvious heuristics, one would be a swap, a clear could also 'invalidite' some regions as chip generable, a clear after a swap would simply set the generation color on chip and possibly invalidate z etc.

Various other operations would flush it, for example a readpixels, swap, or changing the clear color (actually just a clear issue after changing the clear color and then only if it doesn't clear the entire context's buffer).

Is it worth it? 50fps * 1600*1200 * 64 / 8 (approximately) comes out at .7 gigabytes/second one way and that's a fairly modest pixel format, padded (z + stencil + 8 bit per component color), just for that first read (plus anything you save on a clever clear implementation).

I'm sure there are clever tricks already done, maybe we're just rehashing earlier work, I don't know. We really don't know what levers the man behind the curtain is pulling on our cards.


[This message has been edited by dorbie (edited 06-01-2003).]

jwatte
06-01-2003, 11:49 AM
Another option is hierarchical Z. If you clear the top level in the right way, I suppose they don't need to clear more than that. I don't know how many levels they go, but even a single level on 8x8 blocks would go a LOOONG way. This is before taking into account Z compression.

V-man
06-01-2003, 12:41 PM
Originally posted by jwatte:
Suppose the DRAM had a "clear entire page" strobe. Thus, the memory controller just had to open the correct page, strobe it ONCE (one cycle), and then open the next page. In fact, it probably already pipelined the open of the next page :-)

How about NOT refreshing the page with a strobe, if that is possible. It might take a few cycles for the memory cells to drain.

It would be a nice way to clear huge quantities of RAM.

*Aaron*
06-01-2003, 05:18 PM
I remember an old technique for software depth buffers where, instead of clearing, you add an offset to the z value for each pixel before the depth test and depth write. The offset is decremented by the maximum possible z value (before the offset is added) each frame. If you allocate a 32-bit buffer, and use 24 bits of resolution, the buffer only needs to be cleared once every 256 frames. This would cause problems with depth buffer reads, so it would need to be an extension. Probably not worth the trouble, since is would only give a small frame rate boost, and it wastes memory.

Wow, this thread has really gotten off topic. It's supposed to be an ATI vs. nVidia flame war, remember?

Humus
06-02-2003, 01:01 AM
Originally posted by V-man:
How about NOT refreshing the page with a strobe, if that is possible. It might take a few cycles for the memory cells to drain.

It would be a nice way to clear huge quantities of RAM.

If I understand what you're proposing correctly, then I think what you're suggesting would not be a clear to zero but rather end up as a clear to random.

matt_weird
06-02-2003, 06:05 AM
Originally posted by Humus:
If I understand what you're proposing correctly, then I think what you're suggesting would not be a clear to zero but rather end up as a clear to random.

consider that clearing the memory to zero or either to any other number would take the same amount of time. rather consider the idea of switching OFF the power supply of the RAM chip for a moment to RESET it -- it looks like it would take less time than clearing the memory cells(not sure though how stable would that be to get zeroes at every cell by this; also of course there's no guarantee that this process will be going too fast, but the RAM vendors could work on this to get proper results, i think)

M/\dm/\n
06-03-2003, 02:32 AM
Ha, seems that NVIDIA didn't cheat after all http://www.opengl.org/discussion_boards/ubb/wink.gif www.tech-report.com (http://www.tech-report.com) www.nvnews.net (http://www.nvnews.net)
"Futuremark now states that NVIDIA's driver design is an application specific optimization and not a cheat."

dorbie
06-03-2003, 03:50 AM
ASCII is cheap.

The precision of shaders was reduced selectively and it impacted quality, now your opinion may differ from mine on that particular issue (the framebuffer precision is irrelevant when you're hosing the precision of an important *vector* in a shader IMHO this is a "big deal"), but there's a contradiction in claiming that quality is the important factor then turning around and saying it's fair game to application specifically reduce the precision of shaders. The only way I'd swallow this is if it was triggered by the selection of the desktop level user selectable quality setting.

There were apparently viewpoint dependent optimizations placed in the driver related to fixed path motion. If you're going to claim this was all legitimate optimization you need to explain what happened when the screen clear was selectively disabled and occluded geometry got 'clipped' when the eye went 'off the rails'.


[This message has been edited by dorbie (edited 06-03-2003).]

Humus
06-03-2003, 04:13 AM
Originally posted by M/\dm/\n:
Ha, seems that NVIDIA didn't cheat after all http://www.opengl.org/discussion_boards/ubb/wink.gif www.tech-report.com (http://www.tech-report.com) www.nvnews.net (http://www.nvnews.net)
"Futuremark now states that NVIDIA's driver design is an application specific optimization and not a cheat."

I'm quite amazed and annoyed at that we at this time and age, in the 21th century, still don't have a legal system where he who is right will win but rather he who has the most money. I'm not sure what threats nVidia sent to Futuremark, but it's remarkable that later the same days as rumors started to leak out about nVidia evaluating the possibility to sue FM, this crap is released. NVidia has no legal ground for suing FM, but would they have gone ahead with it it would have been too costly for FM and they could be forced off the market. FM had to choose survival over principles. It's quite disgusting really that such considerations must be taken. It's a ugly world.

[This message has been edited by Humus (edited 06-03-2003).]

MZ
06-03-2003, 04:17 AM
hierarchical Z, DRAM strobe... *yawn*.
Going back to the spirit of the thread: http://www.opengl.org/discussion_boards/ubb/smile.gif http://www.rage3d.com/board/showthread.p...20&pagenumber=1 (http://www.rage3d.com/board/showthread.php?s=6c2549e3f04bb820896c17227a71d175&threadid=33686668&perpage=20&pagenumber=1)

dorbie
06-03-2003, 04:44 AM
Humus, I have some serious concerns over this press release. It obviously ignores some of what we know to be the facts. I would like to know why Fururemark chose to issue a press release that so selectively deals with a subset of issues in the case and is devoid of any of the substance of their earlier detailed report.


[This message has been edited by dorbie (edited 06-03-2003).]

matt_weird
06-03-2003, 05:29 AM
Originally posted by MZ:
Going back to the spirit of the thread: http://www.opengl.org/discussion_boards/ubb/smile.gif

ha, that's really funny, MZ!!!!!! http://www.opengl.org/discussion_boards/ubb/biggrin.gif


[This message has been edited by matt_weird (edited 06-03-2003).]

M/\dm/\n
06-03-2003, 05:33 AM
YEAP, the fact that these 'optimizations' are not cheats is funny http://www.opengl.org/discussion_boards/ubb/frown.gif But in the same time those 3D Marks are equally funny, and Futuremark should be sued for their arguments of fair benchmarking http://www.opengl.org/discussion_boards/ubb/mad.gif .
Only thing I like (partially) in 3D Mark is PixelShader 2.0, VertexShader & Ragatroll tests, if we compare fps in each test between cards we can get approximate performance results. And still these tests are partially correct, as architectures are totally different! Writing shaders for ATI is not the same as writing them for NVIDIA ( instruction parity, swizzling, pixel formats, etc. etc. all are factors that are unique for each architecture).
And what we get, HUUUUGE SOAP OPERA & a lot of cards Ultra/Pro/Non-Ultra/Non-Pro * 9500/5600......... that dies when they must perform accurate raytracing in raltime even at 800x600 http://www.opengl.org/discussion_boards/ubb/mad.gif .
And if this war is going to stay, then soon we'll see games that run only on one platform EA/UBI/ID/... on NVIDIA VALVE/... on ATI http://www.opengl.org/discussion_boards/ubb/biggrin.gif .
NVIDIA had nice idea to have their own compiler to compile vp/fp for their hw in the way they run in fastest possible way. They also stated that other vendors are welcome to write their own compilers for their own hw, as only they knows all the circuits of their own board. Other vendors decided to stick with old style vp/fp and f**k the Cg idea http://www.opengl.org/discussion_boards/ubb/biggrin.gif
So we are in situation that companies have huge support from NVIDIA, when they write shaders for games/simulation progs, but tests 'strangely' are writen in old, good, straightforward way. As Carmack stated NV30 path seems to win ARB2/R(2/3)00 path. And thats possible only because of different shading approach.
So nexst step will be when DX shading language will clash with GL shading language, as shaders for first will be written in strightforward style (precompiled) & for second, driver will handle the optimization. So NV30/35 has no chance of beating ATI in DX HLSL, but with nice drivers NV30/35 will win in GLSLANG & I guess in games.
One thing is for sure I'm not going to spend my time on assembly, as that takes a lot of time & crack, the line count increases terribly fast. But if I leave everything for compiler HLSL/GLSLANG/CG, then I have no f**n difference how the code is optimized, only thing I care, is that code must be optimized (well, somtimes I can help a bit like in C) http://www.opengl.org/discussion_boards/ubb/smile.gif
So the question is do we need one ARB path & card that runs there fast OR we use GLSLANG & compilers/drivers takes the optimization stage & do the same suff on hw native path? I doubt we will be sitting on assembly for a long time, so compilers/drivers will take a huge role (future of GPU programming I guess => http://www.opengl.org/discussion_boards/ubb/biggrin.gif Future'mark' http://www.opengl.org/discussion_boards/ubb/biggrin.gif ) & if so it doesn't matter anymore which path is used http://www.opengl.org/discussion_boards/ubb/wink.gif

MZ
06-03-2003, 06:09 AM
There were apparently viewpoint dependent optimizations placed in the driver related to fixed path motion. If you're going to claim this was all legitimate optimization you need to explain what happened when the screen clear was selectively disabled and occluded geometry got 'clipped' when the eye went 'off the rails'.
These imperfections are result of reverse engineering of the scene, and thereby lack of control of the scene. If these clipping & clearing optimizations were done by 3DMark internally, they would look perfect, I suppose. And the point is that these optimizations *should* have been done by 3DMark, as they would be by any competent engine developer.

Quotes from 3DMark's whitepaper:
Futuremark’s latest benchmark, 3DMark03, continues this tradition by providing a state-of-the-art Microsoft® DirectX® 9 benchmark.
(...)
We hope to give the user a view into state-of-the-art 3D graphics not only today, but also up to one and half years into the future.Would anyone expect omiting such TRIVIAL optimizations in the "state-of-the-art" benchmark?

And now we read this (from the link above):
3DMark03 is designed as an un-optimized DirectX test and it provides performance comparisons accordingly
So, which company was first to bull$hit?

Tom Nuydens
06-03-2003, 06:20 AM
Originally posted by M/\dm/\n:
One thing is for sure I'm not going to spend my time on assembly, as that takes a lot of time & crack

Have you ever tried it without the crack? http://www.opengl.org/discussion_boards/ubb/smile.gif

-- Tom

dorbie
06-03-2003, 06:26 AM
MZ, I disagree, these cheats were view dependent and look like they added path specific knowledge. Such "optimizations" cannot simply be added to free moving eyepoint scenarios. It is utterly ludicrous to suggest that any benchmark, synthetic or real should exploit human inserted knowledge to optimize for a fixed path scenario. The fixed path scenario should run the same codepaths as the free roaming scenario, the intent is to measure the performance of the free roaming scenario under controlled conditions, not measure some completely irrelevant movie playback.

Doing anything less is a cheat, this is not a grey area, there's no wiggle room here, view dependent optimizations that rely on a fixed path in a benchmark are a flagrant cheat. Beyond this if you're going to trim hidden fill in a benchmark then the application should do this so that it applies to all cards equally.

As for shaders, it's been discussed to death here and in other threads. The screen shots speak for themselves, it's clear what effect this had on the rendered quality. It was a state of the art benchmark because it implemented high quality DX9 shaders, it ran the same shader on all hardware to be fair. If you could implement the same shader through optimized platform specific codepaths and have it be of equal quality that might be a reasonable alternative. BOTH approaches are valid and have merit because titles use both. The whole point of DX9 shaders is write once run anywhere, that's the essence of the DX9 and even ARB_fragment sales pitch. Now, if a vendor can optimize a shader this is a good thing and should be done by the app developer really, but not while completely hosing the quality, especially in a benchmark. If you're going to tune to specific platforms you have to at LEAST aim for visual equivalence, and I personally want functional and mathematical equivalence. Anything less is meaningless in terms of producing a numerical metric representing relative performance.

[This message has been edited by dorbie (edited 06-03-2003).]

matt_weird
06-03-2003, 06:28 AM
Originally posted by Tom Nuydens:
Have you ever tried it without the crack? http://www.opengl.org/discussion_boards/ubb/smile.gif


it doesn't go without it http://www.opengl.org/discussion_boards/ubb/tongue.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif

MZ
06-03-2003, 07:05 AM
It is utterly ludicrous to suggest that any benchmark, synthetic or real should exploit human inserted knowledge to optimize for a fixed path scenario. The fixed path scenario should run the same codepaths as the free roaming scenario, the intent is to measure the performance of the free roaming scenario under controlled conditions, not measure some completely irrelevant movie playback.
I didn't suggest what you wrote. I said I believe these view-dependant 'cheats' could be turned into valid, view-independant 'optimizations', when done on application side. I don't think that launching most advanced shader at areas known to be occluded later is really required for view-independence of the effect. I'd rather believe it is ill chosen way to "stress the HW". The 'clearing' issue is much less clear to me, I simply extrapolate this is an analogous scenario.

Of course, I didn't cover shader precision issues. But my intention wasn't defending cheating, but questioning validity of 3DMark'03 design.

dorbie
06-03-2003, 07:15 AM
OK, then we agree about more than I thought. Those optimizations are hard to implement for a free roaming view. Just because some unscrupulous driver adds view dependent fixed path driver cheats to a demo, does not mean that the application has done a bad job. Those cheats are much easier to add than genuine application level optimizations for the general cases an application must handle.

The primary purpose of a fixed path is testing free roaming rendering under controlled conditions, not rendering the contents of the fixed path as fast as possible. No optimizations should be applied that rely on knowledge of the path. This is clear, such hacks are cheats that make any comparrative and absolute measurement irrelevant.

How can this not be absolutely clear to all? This is about benchmarking graphics performance. i.e. drawing the same stuff on multiple cards and seeing which one draws it fastest. Even if you think an application draws too much stuff for example, or even redundant rendering of stuff. You should not put heuristics in there that exploit application specific knowledge to reduce what is drawn. It's the absolute antithesis of what benchmarking is about.

[This message has been edited by dorbie (edited 06-03-2003).]

Ostsol
06-03-2003, 08:41 AM
http://www.beyond3d.com/forum/viewtopic.php?t=6230

Patric Ojala - 3DMark Producer:

First I must admit that there is very little I can comment about the joint statement between Futuremark and Nvidia due to legal aspects. What I can do is answer some frequently asked questions about this and quote some parts of the statement.

Please read the statement well and do not post hasty conclusion after reading only the first two paragraphs of the statement.

Q: Does this mean what you called originally as "cheats" actually were acceptable "optimizations", and that you made a wrong decicion in releasing Patch 330 and the Audit Report?
A: By the definition of our benchmark and process, the optimizations are not acceptable. 3DMark scores are only comparable if drivers perform exactly the work 3DMark instructs them to do.

The statement also says:
Quote:
Because all modifications that change the workload in 3DMark03 are forbidden, we were obliged to update the product to eliminate the effect of optimizations identified in different drivers so that 3DMark03 continued to produce comparable results.


As earlier stated, we recommend using the latest build 330 of 3DMark03, with the 44.03 (or 43.51 WHQL) Nvidia drivers, or the Catalyst 3.4 ATI drivers. This way obtained 3DMark03 results are genuinely comparable as far as we know.

Q: What is the reasoning behind this statement?
A: Both companies want to end the public dispute that has been going on since we launched 3DMark03 in mid-February this year.

Q: Did NVIDIA pay you any money to make this statement?
A: No, they did not. Our companies had a mutual desire to end this dispute, and we are very pleased that we reached this agreement.

Q: Does this mean that in the future you will not make patches for 3DMark03 (or 3DMark2001) in order to reveal cheating?
A: We might release further patches to 3DMark03, if a need for preventing driver optimizations appear in the future.

The important section is highlighted in bold. Regardless of what you want to call what NVidia did, "cheat" or "valid application-specific optmization", FutureMark deams it unacceptable in 3dMark.

*Aaron*
06-03-2003, 11:46 AM
Allow me to translate some of this statement:


Q: What is the reasoning behind this statement?
A: Both companies want to end the public dispute that has been going on since we launched 3DMark03 in mid-February this year.
Translation: nVidia paid up.


Q: Did NVIDIA pay you any money to make this statement?
A: No, they did not. Our companies had a mutual desire to end this dispute, and we are very pleased that we reached this agreement.
Translation: Yes. Lots.


Q: Does this mean that in the future you will not make patches for 3DMark03 (or 3DMark2001) in order to reveal cheating?
A: We might release further patches to 3DMark03, if a need for preventing driver optimizations appear in the future.Translation: It depends on how fast the offending company pays us.

Disclaimer: The above statements are satire. And if Futuremark will just pay me $100000, I won't make fun of them anymore.

Elixer
06-03-2003, 12:16 PM
Originally posted by *Aaron*:
Allow me to translate some of this statement:


Q: What is the reasoning behind this statement?
A: Both companies want to end the public dispute that has been going on since we launched 3DMark03 in mid-February this year.
Translation: nVidia paid up.


Q: Did NVIDIA pay you any money to make this statement?
A: No, they did not. Our companies had a mutual desire to end this dispute, and we are very pleased that we reached this agreement.
Translation: Yes. Lots.


Q: Does this mean that in the future you will not make patches for 3DMark03 (or 3DMark2001) in order to reveal cheating?
A: We might release further patches to 3DMark03, if a need for preventing driver optimizations appear in the future.Translation: It depends on how fast the offending company pays us.

Disclaimer: The above statements are satire. And if Futuremark will just pay me $100000, I won't make fun of them anymore.

More like a Commando team of lawyers swept in and stomped some onions... http://www.opengl.org/discussion_boards/ubb/smile.gif

Humus
06-03-2003, 12:32 PM
Yup.
http://webpages.charter.net/tates/MO/Lawyermark.gif http://www.opengl.org/discussion_boards/ubb/wink.gif

FXO
06-26-2003, 05:47 AM
^^ - Good one :P

I'm a little confused by Futuremarks sudden
turn on the issue, they had a lot of arguments before..

davepermen
06-27-2003, 02:17 AM
Originally posted by Humus:
Yup.
http://webpages.charter.net/tates/MO/Lawyermark.gif http://www.opengl.org/discussion_boards/ubb/wink.gif

i'm interested how nvidia hd to cheat to get to those lawyermark results!!! (both in joke and in real... a good joke, a sad reality..)

jebus
06-27-2003, 02:56 AM
why is this even called cheating?!? because they traded image quality for framerate? i guess because 3DMark scores framerate ... personally i'm with the crowd that says who gives a fart! it's not worth selling my nVidia chipset video card. next thing you know people will be ditching their windows OS when they learn that Microsoft cuts corners! http://www.opengl.org/discussion_boards/ubb/biggrin.gif

jebus

M/\dm/\n
06-27-2003, 03:20 AM
HAAAAAAAAA, and smashing their CPU's because of cheated Sandra scores http://www.opengl.org/discussion_boards/ubb/biggrin.gif (that's allready happening with Apple's G5-> www.tech-report.com) (http://www.tech-report.com))

Funy, bu FX5900 looses R9800 http://www.opengl.org/discussion_boards/ubb/biggrin.gif, www.ocaddiction.com, (http://www.ocaddiction.com,) however I'd go with FX from theese two, only if those cards wouldn't be so f**n expensive http://www.opengl.org/discussion_boards/ubb/mad.gif

And one more thing, 3DMark produces different results with AA+AF even with build 330 if renamed to 3dsomething http://www.opengl.org/discussion_boards/ubb/biggrin.gif.

tfpsly
06-27-2003, 04:12 AM
Originally posted by jebus:
why is this even called cheating?!? because they traded image quality for framerate?

I guess you would play a game where you would not be able to turn and go wherever you want because the card manifacturer decided to clip most of the world because it was not seen in a given path :-P

M/\dm/\n
06-27-2003, 04:16 AM
But CRACK it, it's not a game & you are not runing around http://www.opengl.org/discussion_boards/ubb/wink.gif

Anyway, why this thread still lives?

dorbie
06-27-2003, 07:44 AM
Jebus, this is not just about sacrificing image quality. IMO it's about deceptive practices where they gain an unfair advantage in a benchmark through not performing all the rendering requested by the benchmark. They did not simply sacrifice quality, they effectively rewrote parts of the benchmark and trojan'd the rewrites in through their driver IMHO. This is so obviously underhanded in several ways and their cheats so extensive that you'd have to be a raving NVIDIA fanboy to approve of them all even if you consider one or two to be borderline (I don't). If you're even half way reasonable you'd agree that their conduct was bad. FWIW I know developers who have sworn off of NVIDIA hardware now, not because of their cheats but because of the way they apparently muscled Futuremark.

This started out bad and just got worse, finally it went off the deep end with the joint Futuremark press release.

Now we have imbeciles running around saying that it's all ambiguous and you can't really say what's fair and isn't and the same morons saying that benchmarks are useless now and we need to be careful in future. All they need to do is grow a spine, get a clue and call a cheat a cheat. If we can't do that then benchmarks of any stripe are utterly useless. They're made useless by these vacillating fools. We have NVIDIA actually having the gall to stand up and say what they did was OK (quite a change in position since writing quackifier), I guess that means they'll be cheating again in future.

It's completely open season now thanks to NVIDIA, and if they get caught cheating and committing consumer fraud (IMO) that'll be just fine by them. They can even rewrite your copyright software through driver trojans without a license and undermine your business and if you speak out maybe they'll sue your ass unless you kiss theirs. It's bloody brilliant, we need more of this in future, the boot isn't stomping on our faces hard enough yet.

[This message has been edited by dorbie (edited 06-27-2003).]

Nakoruru
06-27-2003, 09:48 AM
Wow dorbie, you accuse people of going of the deep end and here you go accusing nVidia of being a gestopo, grinding our faces into the ground with their shiny (bump-mapped) boots.

As long as nVidia has competition there is absolutely no threat that any boot is going to be anywhere near anybody's face anytime soon.

Reading your posts (for the past year or so), I can say that you could be accused of being an ATI fanboy (not that I'm doing that).

Picking sides in this fight is pointless. All three sides are less than trustworthy and all three of them would step all over each other to pick up a dollar laying on the sidewalk. All three of them will lie to protect their self interest.

I am not saying 'what is the big deal' or 'who cares', I am just saying that the one you pick today will be the one screwing you over tommorow so look out for yourself and yourself only and buy the best graphics card for what you are doing and be thankful that you can afford a graphics card with 125 million transistors without refinancing the house.

I am so sick of seeing hardware reviews declare something the 'winner' when it has a 10 percent lead over the competition. Give me a break.

The real story here is how FutureMark managed to get everyone to believe that how many 3D marks a graphics card scores is important. Their acheivement is an amazing marketing feat, but how many people would argue that their technology is that great?

matt_weird
06-27-2003, 10:00 AM
*starts a bloody war against nVidia; turns into Darth Vader and cuts all the nasty chips with a lightsaber!* http://www.opengl.org/discussion_boards/ubb/tongue.gif

MUHHAHAHAHAAA! http://www.opengl.org/discussion_boards/ubb/eek.gif *breathin'*

[This message has been edited by matt_weird (edited 06-27-2003).]

scurvyman
06-27-2003, 10:30 AM
Originally posted by Nakoruru:
The real story here is how FutureMark managed to get everyone to believe that how many 3D marks a graphics card scores is important. Their acheivement is an amazing marketing feat, but how many people would argue that their technology is that great?


Seriously. I wish that, years ago, I had come up with the idea to write what will be pointed to as the end-all 3D gaming performance benchmark. Because today, I could be writing software whose internal workings bears little resemblance to those of actual modern games, convincing millions of consumers that the performance of graphics cards in these arbitrary tests is a good measure of their worth, and demanding exorbitant fees of developers who want access to the source code so as to assure their hardware will get good ratings in my tests.

In all seriousness, what is the benefit of 3DMark? If you're worried about gaming performance, benchmarks in real games would give you a better idea about that. "But there aren't any games out yet that take advantage of DX9-level hardware..." True, but what about writing shaders that do, and measuring raw performance? Some foreign hardware site that somebody here linked not too long ago did just that.

Not to mention that time that hardware companies spend optimizing for 3DMark is time NOT spent optimizing for real game engines that will almost certainly proliferate within the next couple years, i.e. Doom3, HL2, next Unreal iteration. And for that matter, it's time spent not even just working on quality drivers, or creating more developer examples, etc.

NitroGL
06-27-2003, 10:46 AM
Originally posted by heath:
PBuffers on ATI are a complete and utter mess, with their roll it themselves approach and GLX 1.2 support. You have to hand it to NVIDIA, they have significantly better and more mature OpenGL drivers on both Windows and without a shadow of a doubt on Linux.

Do you know WHY nVidia's OpenGL ICD is "better"? It's because they have a lot of people that use to work at SGI (at least *I* think that's the reason).

Ostsol
06-27-2003, 11:34 AM
My big complaint about NVidia and the 3dMark optimizations/cheats/whatever isn't the pixel shader replacement, but the inclusion of the clipping planes. The shader replacement is something that can easily work in a game (though with potential quality differences), but hard clipping planes cannot.

V-man
06-27-2003, 06:13 PM
Originally posted by Ostsol:
The shader replacement is something that can easily work in a game (though with potential quality differences), but hard clipping planes cannot.

I think it won't because it will only replace the 3Dmarks shaders, thus some of us define this as a cheat.

The only kind of "replacement" that should be performed is optimization that gives equal visual result when compared to the original.

Looks like the confusion over this still persists. Or some people are playing dumb...

zeckensack
06-27-2003, 06:48 PM
Originally posted by scurvyman:
In all seriousness, what is the benefit of 3DMark?You aren't particularly on topic, but that's something many people have done wherever this has been discussed.

What does 3DMark's relevance or lack of relevance have to do with NVIDIA screwing the press?

If you wish to defend NVIDIA by dismissing 3DMark, maybe you can tell me why NVIDIA put so much effort into artificially inflating the 3DMark scores? They wouldn't do this if 3DMark was irrelevant, or would they?

Sheesh.

matt_weird
06-27-2003, 09:15 PM
Originally posted by NitroGL:
Do you know WHY nVidia's OpenGL ICD is "better"? It's because they have a lot of people that use to work at SGI (at least *I* think that's the reason).

Yep, i think right when Mark Kilgard &Co moved to nVidia, their drivers became way more better. But i think the reason of the better drivers' work is not only they used to work at SGI before, but also that those people *ARE* working hard to throw all the **** outta that HW indeed. Still, NV hardware seems to be more buggy than the big brother's one http://www.opengl.org/discussion_boards/ubb/wink.gif

davepermen
06-28-2003, 04:29 AM
Originally posted by jebus:
why is this even called cheating?!?

because they don't deliver the image quality they should deliver
because their hw is not able to deliver good image quality fast and they try to hide that

but mainly because their "optimisations" don't work in every application. even a shader replacement that results in the exactly same image quality is a cheat, if the shader replacement just happends because its "3dmark03.exe". it means that you don't get the same performance in the same situation, depending on how important you are on the world. if you're a big one like carmack, your games will perform bether on the hw than if you're a small one, like most of us in here are.

think of it: i would say "hm cool, the 3dmark water shader rocks" and try to recode it. it would happen to be identical. still, my version would run only at say 75% of the speed than in 3dmark. no mather what i do. till i rename my exe to 3dmark03.exe.. then i would gain the speed..

that is cheating. customers get fooled into false performance and quality, developers get fooled into false performance and quality.

and madman, explain me why you prefer a 5900FX over the 9800PRO while the ati card is faster, has bether image quality, and is cheaper? do you feel cool because of it? well i love my money, i don't feel cool in spending it for ****. and non of my friends would say eighter "woah, how cool you are that you can buy the 3th best card for more money than the best card would cost..". everyone would laught about me. only my pc not, it would cry because of that fat piece of **** (the 5900 is fat and hot, too.. while much bether than the 5800, agreed http://www.opengl.org/discussion_boards/ubb/biggrin.gif).

you're not cool, just stupid. i want to know why.. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

*Aaron*
06-28-2003, 11:31 AM
Originally posted by zeckensack:
You aren't particularly on topic, but that's something many people have done wherever this has been discussed.

What does 3DMark's relevance or lack of relevance have to do with NVIDIA screwing the press?

If you wish to defend NVIDIA by dismissing 3DMark, maybe you can tell me why NVIDIA put so much effort into artificially inflating the 3DMark scores? They wouldn't do this if 3DMark was irrelevant, or would they? Criticizing Futuremark is not the same as defending nVidia. This scandal has brought Futuremark's business practices into the spotlight, so its only natural that they will get some criticism for their morally questionable practices. Maybe you should be asking yourself why you feel the need to defend Futuremark in order to condemn nVidia.


Originally posted by Nakoruru:
Picking sides in this fight is pointless. All three sides are less than trustworthy and all three of them would step all over each other to pick up a dollar laying on the sidewalk. All three of them will lie to protect their self interest. I couldn't agree more. Remember the Quake/Quack scandal? Many people considered that cheating because ATI's drivers blatantly ignored the game's texture resolution settings and texture filtering mode, and because Quake3 was widely used as a benchmark. This "optimization" could not be disabled and was not publicized by ATI. It wasn't even necessary to achieve playable framerates. Therefore, it was a cheat to boost the Quake3 benchmark framerate. Granted, nVidia went _way_ beyond this in their cheat, but nobody's squeaky clean in this case.


Originally posted by davepermen:
the 5900 is fat and hot, too..Keep your wierd fetishes to yourself. This an OpenGL discussion board. http://www.opengl.org/discussion_boards/ubb/wink.gif

dorbie
06-28-2003, 03:14 PM
The boot is an Orwellian reference (an appropriate one when we have "newspeak" enforced to eliminate the word cheat), go read a book. As for "accusations", what I wrote just happened incase you missed it.

I agree picking sides is pointless, but being critical of atrocious conduct by one company or another is absolutely essential. I'd be critical of any graphics company that behaved as badly as NVIDIA just has.

This business of saying ATI is just as bad when it's NVIDIA caught red handed is a recipe for apathy. It looks like the only thing that might discourage these cheats in future is the consequences of getting caught but the fanboys and morons in the press can't wait to act as apologists and muddy an absolutely crystal clear issue.

Yep ATI behaved badly with the Quack thing, but NVIDIA was their biggest critic, leaking the story to publications and writing quackifier to make it easier to expose the difference. ATI were rightly criticised at the time, by me and just about everyone else with a clue.

Now NVIDIA has implemented a set of blatant cheats that are the most comprehensive attempt to undermine any graphics performance measurement that I'm aware of, and it's "Never mind ATI are just as bad". In this case ATI's conduct is NOT as bad, ATI has done nothing wrong here. One past transgression from several years ago does not give NVIDIA a free pass for their cynical deception today.

For the record this is not JUST about image quality. Hand coded clip planes & screen clear disables don't affect quality, but are totally inexcusable in a driver when triggered by a benchmark. The debate over shader rewrites vs optimization has obscured the other blatant cheats implemented by NVIDIA.

[This message has been edited by dorbie (edited 06-28-2003).]

zeckensack
06-28-2003, 06:59 PM
Originally posted by *Aaron*:
Criticizing Futuremark is not the same as defending nVidia.Criticizing FM *in this context* is defending NVIDIA. Interestingly enough, dismissing 3DMark03 is also NVIDIA's official company line.
How else should I interpret this?
"NVIDIA cheated in 3DMark!"
"Oh, who cares? 3DMark is irrelevant, move on."

How would you like that:
"NVIDIA killed Aaron!"
"Well, nobody liked Aaron anyway, let them go free."

It's no matter how popular the victim, justice is justice, period.


This scandal has brought Futuremark's business practices into the spotlight, so its only natural that they will get some criticism for their morally questionable practices.Morally questionable, huh?
I agree that FM's new business model is overboard, but that's not the problem here. You're just (like so many others) trying to pull interest away from the real issue, which is that NVIDIA screwed the press and everyone who relies on the press by forging competitive data.
You don't think that 3DMark is a viable benchmark, that's fine, I have my doubts too. But that's *no excuse*.

Maybe you should be asking yourself why you feel the need to defend Futuremark in order to condemn nVidia.Maybe you should ask yourself whether I really do, or if it's just easier for you to assume it. Maybe you should browse back a few months to the 3DMark03 release and see what I've said about it back then.

Nakoruru
06-28-2003, 09:44 PM
Ah, I find the Orwellian boot and the newspeak redefining of 'cheat' to be rather clever, its just that it sounds like a huge exaggeration and you were accusing people of overreacting, so it seemed a little odd.

BTW, I just read 1984 a couple of months ago and do not appreciate being told to 'read a book', Mr. Ad Homonim.

ANYWAY, There are very few people blind/hypocritical enough to say what nVidia did was not wrong. But, what should be done about it? About all we can do is make a stern face at nVidia and say 'Please don't do this again'.

Or, you can be silly and think nVidia cares if you don't buy their graphics card. One of my points is that you should buy the best graphics hardware and that making some moral stand, as if nVidia had sex with your wife, is kinda empty. Its the kind of easy moral stand you can make in a decadent modern society were all your basic needs are taken care of.

I mean, who cares if there are still contries in the world where real boots are stomping on real faces, nVidia cheated on a graphics benchmark!

I'll save my anger for people who run over homeless people and leave them embedded in the windshield for 3 days.

I'm more upset that Bush cheated when he ran WMDMark on Iraq ^_^.

Unless someone can come up with an effective way of holding nVidia accountable, then all this complaining is more theraputic than anything else.


[This message has been edited by Nakoruru (edited 06-29-2003).]

[This message has been edited by Nakoruru (edited 06-29-2003).]

dorbie
06-28-2003, 11:04 PM
You could compare anything to some unrelated and irrelevant outrage, there's no point. We're having a discussion about a specific topic with others who have an interest in it. If you want to discuss those other things there's no shortage of newsgroups to post to. When you're posting and reading here don't patronize me about the importance of this issue.

The boot is not an exaggeration, it's an appropriate reference when taken in context. Even in 1984 it was a metaphor.

[This message has been edited by dorbie (edited 06-29-2003).]

*Aaron*
06-29-2003, 06:24 AM
Originally posted by dorbie:
In this case ATI's conduct is NOT as bad, ATI has done nothing wrong here.It's not as bad, but ATI admitted to replacing one of 3dMark03's shaders with a functionally equivalent one. If this wasn't the least bit questionable, then why did they decide to remove this application-specific optimization in the future?


Originally posted by zeckensack:
Criticizing FM *in this context* is defending NVIDIA. Interestingly enough, dismissing 3DMark03 is also NVIDIA's official company line.So by that same argument, criticizing nVidia in this context is supporting ATI (who also got caught cheating). So you condone ATI's cheating and not nVidia's? And criticizing nVidia in this context is defending Futuremark? But I thought you said you don't agree with their business model. ...or maybe it's possible to criticize both nVidia *and* Futuremark.


Originally posted by zeckensack:
It's no matter how popular the victim, justice is justice, period.Futuremark is the victim, huh? nVidia cheated their customers. They didn't cheat Futuremark.


Originally posted by zeckensack:
You're just (like so many others) trying to pull interest away from the real issue, which is that NVIDIA screwed the press and everyone who relies on the press by forging competitive data.Damn, you caught me. This thread is just crawling with nVidia employees, ya know.

[This message has been edited by *Aaron* (edited 06-29-2003).]

dorbie
06-29-2003, 10:24 AM
It's known that ATI optimized a shader to reorder instructions but it remained mathematically and functionally equivalent. I discussed this in depth in another thread so you may know my detailed thoughts on this already. It is nowhere near equivalent to anything NVIDIA did, the most unfortunate aspect of this is it has been used as a smokescreen for NVIDIA's egregious cheating. Moreover, ATI's withdrawal of this optimization from future drivers has been interpreted as an admission of guilt, whereas NVIDIAs brazen attack on Futuremark seems to have obscured the whole issue for some.

I'll repeat, ATI has done nothing wrong, and certainly nothing that even approaches the cheating of NVIDIA. More importantly, their conduct when this was exposed has been the exact opposite of NVIDIA's.

[This message has been edited by dorbie (edited 06-29-2003).]

scurvyman
06-29-2003, 10:32 AM
Originally posted by zeckensack:
You aren't particularly on topic, but that's something many people have done wherever this has been discussed.


Are you seriously telling me there's a single topic within this thread? On this page alone the discussion ranges from Orwell, to the performance of Apple's newest computer, to the definition of the word "cheating," among other things. What, would you rather have me start an entire new thread to ask this question? Would you still ignore my question if I did?


Originally posted by zeckensack:
What does 3DMark's relevance or lack of relevance have to do with NVIDIA screwing the press?

Absolutely nothing. Which makes me wonder: Why are you ignoring my question and changing the topic back to NVIDIA "screwing the press?"

The irony is that you're implying I'm the one who's a rabid fanboy.


Originally posted by zeckensack:
If you wish to defend NVIDIA by dismissing 3DMark, maybe you can tell me why NVIDIA put so much effort into artificially inflating the 3DMark scores? They wouldn't do this if 3DMark was irrelevant, or would they?


I don't wish to defend NVIDIA. Somehow I'm not the least bit surprised that somebody assumed my criticism of Futuremark was tantamount to NVIDIA fanboyism. In reality, I turned in my GeForce256 for a Radeon 9700 some time back, and except for minor driver issues, I've been quite pleased.

However, I will tell you why NVIDIA put so much effort into artificially inflating their scores: 3DMark IS relevant. I never said that it wasn't. Futuremark has somehow MADE 3DMark relevant, and now it's the number that is pointed to on websites and that is printed on boxes as a measurement of the performance, and therefore value, of a 3D card.

If you'll refer yourself to my original question, the one that you even quoted in your post, I said:

In all seriousness, what is the benefit of 3DMark?

Now that I've answered all YOUR questions, do you finally have an answer for mine? You just said, yourself, that developers put lots of effort into inflating their 3DMark scores. Do you really consider this time well spent? Do you think this is the best thing they could be spending their time working on? There's nothing higher on your priority list than this?

Moreover, can you come up with a reason why performance, as measured by some benchmark that bears no resemblance to a game not only in its graphics code, but in the fact that there's no sound, AI, physics, etc. qualifies as a better benchmark for gaming performance than an actual game?

I don't see Futuremark's industry position as beneficial to anyone other than Futuremark itself: self-declared authority being paid money in exchange for levying semi-arbitrary judgments on various products.


Originally posted by zeckensack:
Sheesh.

You're telling me.

Forgive me for being testy, but the behavior of supporters of both sides of this argument makes me sick. If the positions of NVIDIA and ATI were reversed, would so many ATI supporters still hold Futuremark in such high esteem? Would 3DMark results still be above question? Would NVIDIA supporters be the ones defending arbitrary benchmarks to the death? Call me a cynic, but I think the real issues are being ignored in the midst of all this us vs. them jingoism.

dorbie
06-29-2003, 11:12 AM
There's not a lot of jingoism, there is a lot of denial though. We're seeing every excuse thrown up as a pretext for ignoring or excusing obviously bad conduct. This has ramification for the future of our industry, particularly the way Futuremark withdrew it's allegations of cheating, when there WAS clear evidence of flagrant cheating.

I'm criticising NVIDIA because of their conduct, not because I'm predisposed to like or dislike any given company.

Implying that everyone on either side of this issue is a fanboy is just more smoke to obscure the truth. Implying that we must be evenly critical of all sides, or critical of none to be 'fair' is misguided for obvious reasons. It would in fact be grossly unfair to do that, it's essential you make any judgement based on conduct, not some foolish notion of equivalence. All sides are not equal in this, NVIDIA is the miscreant here. This is not a grey area, or a close run thing, they are so obviously and egregiously wrong that I am lost for an explanation as to why anyone would defend them or pretend that their behaviour is mitigated by anything someone else did.

[This message has been edited by dorbie (edited 06-29-2003).]

zeckensack
06-29-2003, 11:47 AM
Futuremark

Futuremark (and in between madonion) built a following with earlier revs of 3DMark, that were free, idiotproof, and best of all, reduced all its testing to a single number that's just so easy to use by boys and younger men (in mind) to show off their virtual manhood (commonly referred to as 'PC'). Those were the jolly days where users got something for nothing and could even brag about it. Loyalty was further increased by the public forums, where apparently the greatest minds of the world share ideas freely *cough*.

The new and reborn Futuremark requires users to pay up to even be able to select which tests to run. They also 'offer' differing levels of membership, so that IHVs can tell them for a fee how to render things.

The problem is that 3DMark's basic features aren't worth a dime. Users will turn away (or towards keygens), a classic shot in the foot.

Another problem is that requiring money before soliciting IHV feedback is just plain backwards. If FM doesn't understand how to write a clean renderer alone (they dropped their engine license), they're beyond help, and they *shouldn't* try and write a benchmark.

It would at least be logical if they *paid* for IHV feedback.
______________

My loyalties

I own and use:
A Radeon 9500Pro, a Radeon 8500LE, a Radeon 7000, a Geforce 2MX, a Geforce 3, a Voodoo 3 PCI and a Kyro II.
I write stuff. I like having the luxury of being able to perform testing on a wide range of hardware.

I keep the Radeon 7000 around because of what turned out to be a hardware issue (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/005734.html) , so I can't rely on the backwards compatibility of my newer Radeons for testing. Otherwise I would've sold that thing a long time ago.

See? I have no loyalty. I talk about ATI hardware issues in public. I also talk about the Geforce 3 being ten times as fast as my Radeon 9500Pro in specific areas (http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/013054.html) .

I could also link a thread where I tell the public that the R200's AF is so bad that I never dare turn it on, but that's in German.

It's sickening to see how quickly one ends up being on one side.

_______

To sum this up for myself (now I really hate this thread), IMO Futuremark really is irrelevant now.
NVIDIA have lost all my respect, all of it. They can go to hell.
ATI have shamefully weasled out of their static instruction reordering. Hmm. Bad, bad ATI. They should've used these engineering resources to generally improve the *many* inefficient paths through their drivers. What a waste of energy.

Nakoruru
06-29-2003, 01:25 PM
I am not commiting the fallicy of 'fairness' by saying that we should judge all the parties as being the same.

I mentioned those other things because I was hoping you would see how ridiculous it is to use black and white moral language like 'miscreant' to discribe the mischief which nVidia has done.

You sound like some prosecutor trying someone for murder.

What nVidia did was wrong, shame on nVidia. Now, what do you plan to do about it? It really seems like all you can do is complain.

I dunno why you refuse to acknowledge that the relavence of 3DMark is an issue. I mean, cheating at Worldcup soccer match is much more important than cheating at a match between kids in a vacant lot. If we want to argue whether 3DMark is the 'Worldcup' or a 'kids game', it is relavent.

dorbie
06-29-2003, 02:21 PM
Your inability to read comments in context is not my problem, neither is your apathy.

You seem more interested in discussing trivial rubbish while claiming the main subjects are irrelevant. It begs the question, why do you condescend to post at all?

[This message has been edited by dorbie (edited 06-29-2003).]

M/\dm/\n
06-29-2003, 10:14 PM
Lets do a quick comparision, just what you like/don't like about:

NV30/35
+++++++
128 bit color, vp/fp calc precision;
Integers in vp/fp;
Increased instruction count;
Cg(DX+OGL) with compiler that tries to optimize code for NV;
Cheapest DX9 card that'll flood the 'users' market;
-------
Speed in avg gaming;
Recent drivers where buggy & are getting better slowly;
2 slots;
KNOWN CHEATERS => 3D MARK

R300/350
+++++++
Speed;
Beter AA;

-------
Multipassing when f-buffer;
96-bit precision;
I just don't like their drivers yet;
KNOWN CHEATERS => QUACK

3fx
+++++++
SIMPLY RULEZ http://www.opengl.org/discussion_boards/ubb/biggrin.gif
-------
NONE!

[This message has been edited by M/\dm/\n (edited 06-30-2003).]

mattc
06-30-2003, 12:43 AM
looooooool, madman, your faith must be strong http://www.opengl.org/discussion_boards/ubb/wink.gif

M/\dm/\n
06-30-2003, 01:04 AM
The question is not about faith, I just want to see what ppl like & don't like about HW, Right now I see that R's are better for gaming NV's for development (although could be real headache when you must think about precision/speed tradeup).

davepermen
06-30-2003, 01:12 AM
Originally posted by M/\dm/\n:
128 bit color, vp/fp calc precision;
and using it is so slow that nvidia needed to cheat and use 16bit with help of drivers everywhere to get ... ACCEPTABLE performance. so in the end, most the time, your 128bit precision is not guaranted to work that way, too.. (and difference IS negitible. do the math. its 0.00152587890625 PERCENT difference in storage-quality to the radeon!!

Integers in vp/fp;
fine. would be nice, yes.

Increased instruction count;
against infinite in 9800 cards? http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Cg(DX+OGL) with compiler that tries to optimize code for NV;
hm.. nvidia tries to cheat there as well? great. as about nobody uses cg..


Cheapest DX9 card that'll flood the 'users' market;

great.. heavy discussed if the cheap one actually is full dx9 compliant.. and the other ones are expensive as hell for what they deliver


Speed;
what speed? if they run dx9 compliant, they run HELL SLOW!! because then they have to run at 128bit float all the time, doing 32bits too much, and they start to stutter. the cards simply don't fit well into dx9. and thats sooooooooooo poor because they are made for it..


Recent drivers where buggy & are getting better slowly;
no real good driver out since half a year.. buggy, and full of cheats to gain not-available performance.


R300/350
+++++++
Speed;
speed even without cheating! you get what its written on the box, and it even is fast!


Beter AA;
ever since i've ran a radeon9700, i've moved back to gf cards quite some times. and it always made me shudder how ugly they render. really..

-------


Multipassing when f-buffer;

and?


96-bit precision;

the only visible differences between gf and radeon i've seen where actually worse for the gf. namely because of their cheating in the aa, in the af, and the overall worse aa and af. you can't even see the difference between 96 and 128.. in the end its just 32bits anyways..

I just don't like their drivers yet;
too bad, as they are rockstable in a good system, don't have tons of cheats in as only way to be fast, deliver tons of extensions including opengl2.0 extensions, and still, they follow opengl, and they follow dx. something i cannot say from nvidia. they drifted away long ago, and now drivers and compilers have to help that out.

nvidia hw is not bad (not good eighter..). the major problem is they are way off the marked. they have no dx9 card. just a less-than-dx9 (but good performing in that low quality), or way higher than dx9 card (but terrible performance, and as this is the only official dx9 way, they have a problem).

and their cg shows as well how they want to fit some holes they made with gf3/gf4.. same as cg should hide the fact that the card is simply **** in standard opengl ARB_fp..

the card has a lot of performance. but not for the standard opengl or dx9 games. so it will suck in them. and thats why nvidia cheats all the time and cries around.

then again. i would not want to put such a big card into my pc..

M/\dm/\n
06-30-2003, 01:35 AM
I mentioned speed of NV30 with "-" http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Cg isn't that bad if you don't have a team to write shaders for you in assembly, & while GLSLANG (as far as I know, driver will do the optimizations) isn't officialy out I'll stick with it, & look at the asm source, they arn't cheating there http://www.opengl.org/discussion_boards/ubb/biggrin.gif

FX5200, has enouh speed to make all games playable, though somtimes w/o AA/AF, so you can slowly move to ARB_F_P;

jebus
06-30-2003, 02:51 AM
well, i'm no "NVIDIA fanboy" :p but what i do know is that with my GeForce 4 i can run ALL my games at 1600x1200 with all effects cranked up. if that is because NVIDIA cheats, then i say ... cheat on, brothers!

jebus

tfpsly
06-30-2003, 03:37 AM
Originally posted by jebus:
what i do know is that with my GeForce 4 i can run ALL my games at 1600x1200 with all effects cranked up.

Even the Doom alpha ? :P

How about concluding this long thread, dudes ?
Note : the guy who said that most gamers will have fx5200 or equivalent cards soon is quite right, I think. So if you're creating PC games, maybe you should aim at such cards.

Nakoruru
06-30-2003, 04:15 AM
Maybe, dorbie, I refuse to 'read comments in context' because by 'context', you mean the context in which you are correct and I am wrong.

Which is funny, because I never really said that I did not agree with you. What nVidia did was stupid and wrong.

But, why are you preaching it from the mountain tops?

I'm glad you are hear to tell us what is right and wrong. Thank you.

matt_weird
06-30-2003, 04:22 AM
Originally posted by tfpsly:
How about concluding this long thread, dudes ?

NO WAY, DUDE!! http://www.opengl.org/discussion_boards/ubb/tongue.gif Let's continue th rant! http://www.opengl.org/discussion_boards/ubb/cool.gif

(uhh, otherwise it's getting a bit boring in here... http://www.opengl.org/discussion_boards/ubb/rolleyes.gif )



[This message has been edited by matt_weird (edited 06-30-2003).]

UT2003 announcer
06-30-2003, 04:52 AM
UNNNSTOPPABLE!

dorbie
06-30-2003, 05:20 AM
Nakoruru, by out of context I'm talking about your patronizing yet inane posts about world events. I'm referring to your obvious inability to interpret plain English in context without responding with irrelevant non sequiturs about my choice of my words. I wish I could stick to the subject, but the subject according to you is less important than your inaccurate linguistic lint picking.

On the right vs wrong, you're jumping to conclusions. I'm not disagreeing with you based on some assumption of what you're thinking. I'm responding to your posts and specifically disagreeing with you on your call for apathy (and related issues). If you think it's not important why on Earth would you think us thinking it is important is important enough to post about?


[This message has been edited by dorbie (edited 06-30-2003).]

tfpsly
06-30-2003, 06:06 AM
Originally posted by matt_weird:
NO WAY, DUDE!! http://www.opengl.org/discussion_boards/ubb/tongue.gif Let's continue th rant! http://www.opengl.org/discussion_boards/ubb/cool.gif

(uhh, otherwise it's getting a bit boring in here... http://www.opengl.org/discussion_boards/ubb/rolleyes.gif )

Ok, I'll take the popcorn then

V-man
06-30-2003, 06:47 AM
>>>too bad, as they are rockstable in a good system, don't have tons of cheats in as only way to be fast, deliver tons of extensions including opengl2.0 extensions, and still, they follow opengl, and they follow dx. something i cannot say from nvidia.<<<<

daveperman or davy as we like to call him is a big ATI fan. Some of what he says is true but some is exagerated.

From the benchmarks I see, Radeon 9700 vs FX5800, they come close in todays games. On the average, the 9700 may have a couple of FPS lead. It seems the situation for 9800 vs 5900 is similar.

I'm not convinced about them having rockstable drivers.

From the patched up 3dmark scores, it looks like the ATI has an advantage. I have to agree that ATI is offering some nice speed with features.

>>>and their cg shows as well how they want to fit some holes they made with gf3/gf4.. same as cg should hide the fact that the card is simply **** in standard opengl ARB_fp..<<<

That's way off. Cg for fitting some holes???
It's just a tool. You talk as though the FX gets 1% of the performance of the ATI equivalents.
That's just blowing things out of proportion.

>>>the card has a lot of performance. but not for the standard opengl or dx9 games. so it will suck in them. and thats why nvidia cheats all the time and cries around.<<

For standard games, there are plenty of benchmarks showing that sometimes ATI is in lead, sometimes NV. It doesn't make define a leader.

Cheating all the time? Pfff!
I would not shoot down NVidia.

*Aaron*
06-30-2003, 06:51 AM
Originally posted by davepermen:
what speed? if they run dx9 compliant, they run HELL SLOW!!"Hell slow", huh? That's like saying the fastest BMW is slow as molases because there is a Lamborghini that is faster. If you mean that the FX 5800 and FX 5900 are not a good value for the performance they deliver, I would agree. Even if the Raedon 9800 is faster in every situation, it doesn't matter because they're all a very poor value for their performance. For the price of one of these cards, a gamer could buy a shiny new console and a pile of games to play on it. Developing games for high-end video cards is, IMO, a waste of time. As is comparing their performance.

Quote of the day:

Originally posted by dorbie:
If you think it's not important why on Earth would you think us thinking it is important is important enough to post about?I wish we had signitures on this board, because this quote would go in mine. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

matt_weird
06-30-2003, 07:09 AM
Originally posted by tfpsly:
Ok, I'll take the popcorn then

Yep, don't forget the stereoglasses -- this gonna be exciting! http://www.opengl.org/discussion_boards/ubb/cool.gif ..i even managed to slap that chick that was trying to pass by to take a seat on the same row http://www.opengl.org/discussion_boards/ubb/wink.gif She said her name is..er.. Lidia or something like that, but with the -idia ending http://www.opengl.org/discussion_boards/ubb/tongue.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif


Originally posted by V-man:
daveperman or davy as we like to call him is a big ATI fan. Some of what he says is true but some is exagerated.

See, tfpsly, i told ya that'll be a good movie, way more better than matrix with those fancy stupid machines who enslaved all the people... like those videochips manufacturers http://www.opengl.org/discussion_boards/ubb/rolleyes.gif

dorbie
06-30-2003, 08:20 AM
They're both awesome cards, and each is at least an order of magnitude faster and immeasurably more functional than I ever dreamed I could have obtained in my PC. I often think of the capabilities of these systems and I'm just amazed, not just at the raw performance, but that it's all done on a die smaller than my pupil.

I'd hate to choose between them, we need them both to keep prices in check and motivate the frenetic development. There's a reason they're both about equivalent give or take some precision issues and design details. If either one didn't exist the other wouldn't be anywhere near as capable, or as affordable.

davepermen
06-30-2003, 12:22 PM
Originally posted by V-man:
[Bdaveperman or davy as we like to call him is a big ATI fan. Some of what he says is true but some is exagerated.[/B]
hm.. actually i just dislike nvidia. i like matrox' work, too.. its just so slow http://www.opengl.org/discussion_boards/ubb/frown.gif

actually most of my work is software rendering anyways. no cheating there except.... "optimisations" http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif

nvidia created cg because they realised their fragment program solution for gf3/gf4 sucked BIG TIME and about nobody could grasp them. and they created cg to hype fx. there is no other reason. don't think cg would be created to help us. they are a company that even cheats to gain money. you really think they care about something else in the marketing-part of nvidia? cg is a way to make developers dependend on them, using proprietary tools for proprietary hw.

i prefer to have hw that simply runs correct (remember GL_CLAMP? or the clipplane-issues? or the non-hw-texturematrix-row on gf2? different issues all the time, that we had to work around), and fits exactly opengl or dx specs. it does not have to provide more, because that more is not useful for the gamer.

nvidia now claims games should get developed for each and every hw individually. thats just their way to hide that they simply where not able to deliver hw that works good in standard gl or dx. that IS fact.

just like the p4 started veeeery bad on x87 instructions (the fpu mainly) and so developers got "motivated" to use sse extensively.

nvidia failed to deliver real dx9 hw that fits simply the dx9 specs and nothing more. we never needed more. now we have slow 32bit fpu and fast 16bit fpu, one not enough for dx, one too much for dx. what the gamer wants is one that just fits dx. not more, not less!!!

thats what nvidia never got. while i told it right from the beginning.

just like ati made that fault with the 8500.. the additional features never get used much (except futuremark http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif)

the 8500 is amazing if you USE its power. if not, its less than a gf4...

same for gfFX.. its amazing if you USE its power. else, it sucks (sorry, blows http://www.opengl.org/discussion_boards/ubb/biggrin.gif).

problem now, it means a lot of additional work to use the gfFX power. doubling the graphics-department work.. and that only helps nvidia.

and their cheating is just because of that. their hw does not fit any real specs. way too much or way too less. for me, its a simple statement from nvidia: "yes, we designed the nv30 completely wrong and way off any uses. still, thats our way, follow us!!!!". they put real amazing effort in making developers work with them.

this is not the future, this is a past i never wanted to see back again. proprietary hw...

so soon before gl2.. its yet in atis drivers, i can even TOUCH IT!!! http://www.opengl.org/discussion_boards/ubb/biggrin.gif

davepermen
06-30-2003, 12:24 PM
oh, and.. please hand me over the popcorn, too.. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

john
06-30-2003, 04:51 PM
nvidia now claims games should get developed for each and every hw individually. thats just their way to hide that they simply where not able to deliver hw that works good in standard gl or dx. that IS fact.
just like the p4 started veeeery bad on x87 instructions (the fpu mainly) and so developers got "motivated" to use sse extensively.

I disagree with this argument. Computer hardware such as CPUs and graphics processors are sufficiently complicated to require proper scheduling to take full advantage of their architecture. The time when CPUs became pipelined and able to store more than one instruction in the pipe at once was the same time that comparing CPUs by their cycle rate and MIPS score became meaingless. Graphics processors are different from CPUs have have the same issues.

Processor engineers can never solve every problem at once; furthermore, there will *always* be trade-offs according to design decisions. This isn't unique to processors, either; some examples include -

choosing a time quantum for process scheduling -- too large and the system loses interactivity but if it is too small then the system will take too long to reset the state between context switches than doing actual work choosing a pipeline length -- a longer pipeline means the clockrate can be faster (because less work is done) but missed branch predictions will hurt performance because more work has to be undone choosing a cache sizes -- more cache means more data can be stored in cache but it MAY take longer to clear the cache when it becomes incoherent and lots of others

My point is that it is physically and theoreticaly impossible for engineers to target architecture to be optium for ALL users. They are compelled to make design decisions and then write software to optimise a code steam FOR those decisions. Is this a bad thing? Arguably not: a computerised system has always been a combination of hardware and software (even to the point where some operating systems were stored in ROM, for instance).

Writing s/w for a CPU means coming up with a compiler that knows how to best schedule code for a particular chip. There are numerous examples where benchmarks perform noticably better depending on what compiler you use. gcc is invariably outperformed by compilers written by the hardware vendors (Intel, SGI, Digital/Compaq/HP).

Intel is in a different position than other hardware vendors because they need to maintain their legacy instruction set. Given the diminishing returns of increasing the size of the silicon base (it becomes prohibitively expensive for very small increases in core size) and a limited transistor count, then what is wrong with intel pushing for accelerating newer, different parts of the ISA? There are a lot of complicated instructions an x86 machine has to implement to remain bacwackwards compatible; if these become slower as a trade-off from making simpler or vectorised instructions faster, then what's the problem if the recompiled code (compiled FOR the new processor) is faster at the end of day? Intel is compelled to maintain binary compatibility with backwards processors; other, RISC h/w vendors are NOT so constrained. Instead, their approach is to largely forget about compatibility and release a new compiler.

My point is that hardware changes can be masked by appropriately written s/w (in the form of compilers for CPUs) to push hardware changes. Its the same as writing new drivers for new hardware: the interface remains the same, but the ~implementation~ is different but hidden. In the compiler's case, the language IS the interface but the code stream it emits is targetted for a processor.

Graphics processors are not so different to processors. They are sufficiently complicated that they need software support to target a code stream (in the form of opengl operarations) for a particular vendor. There is not yet an elegant and seamless way to do this, but Cg, I would argue, is so far a vendor-specific start.

Targetting fragement programs is not new and not avoided by everyone; this thread notes that ATI had a fragment program that was functionally equivilent but scheduled differerently to the 3DMark test. Software writers are in the position of targetting a graphics stream for a particular card because there is no uniform language to describe the effect they're after: they have to code their vision to a particular card's fast-path.

Cg, or gslang, or whatever incarnation you want to talk about is a Good Thing for graphics cards and programmers. It is a layer of abstraction over graphics hardware just like a compiler abstracts over the hardware complexities of a CPU. Yes, Cg it is only available for nvidia cards at the moment, but the ideology is a good start. If nVidia and ATI and 3Dlabs and whoever else can agree on a language for OpenGL that allow programmers to descibe high=level graphics operations so the video driver can schedule code for a particular implementation, then EVERYONE will be a winner:

graphics programmers will be able to concentrate on other, more important parts of their code than just figuring out the fast path for every card they want to target; hardware vendors will be able to make changes to their processors without being overly concerned about the paths of existing software. They'll also have a reasonable expectation that toy-benchmarks will run optimially on their system WITHOUT adding application specific modifcations in the drivers; and users will have a graphics system that remains relatively future-proof for existing s/w and will enjoy better performance.

jebus
06-30-2003, 06:33 PM
^^ what he said! ^^

jebus

matt_weird
06-30-2003, 08:02 PM
ah, jebus, if you don't get what he said -- just watch the movie anyway, that's it http://www.opengl.org/discussion_boards/ubb/tongue.gif

*hands the popcorn to davepermen an jebus* hey, hand me over some beer! http://www.opengl.org/discussion_boards/ubb/biggrin.gif

M/\dm/\n
06-30-2003, 10:45 PM
Which pill you'll take, green or red one?

CG is compatible for ARB_F_P so it works for ATI too! Unfortunately ATI is not going to write their compiler for Cg so trying to make it useless. Cg is beter than HLSL&GLSLANG from my point of view, because it is for both DX/GL. Moreover it optimizes code for HW at runtime, the same way as GLSLANG is ment to do that. DXs HLSL isn't supporting that.


[This message has been edited by M/\dm/\n (edited 07-01-2003).]

davepermen
07-01-2003, 01:29 AM
Originally posted by john:
I disagree with this argument.

nice for the movie..

but there is no point. gpu's are simple, they have simple, well defined tasks. just since the gfFX nvidia now CLAIMS that hw gets so difficult and so different all the time that we actually have to develop for each one individually.

THEY CLAIM THAT

and you know why!

without direct optimization for gfFX cards, MANUAL optimisation (eighter in drivers *cough cough* or by developers), their card are not able to run stuff good.

but thats the first time they don't deliver hw that can just rastericer the way apis mean it.

and thats THEIR problem.

gpu's are nothing complicated. the path, the design is actually well defined.

they should simply blame themselves and get quiet. i don't want to start lowlevel optimize again for all different kinds of hw. isn't that why opengl is there, or dx? because we DON'T NEED THAT?! why should it be now different. just now, that nvidia made a big design fault its not their fault but instead the whole world changed..

BLA.

they just cannot loose. bad loosers. primitive.

M/\dm/\n
07-01-2003, 04:33 AM
http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif
F**K it F**K it F**K it F**K it F**K it F**K it F**K it F**K it F**K it!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

I'm getting sick of it!!!!!!!!! When you have 400+ fps in avarage, non optimized game you say that the card can't run it well, you must be siting on supa dupa plasma nuclear CRT & must have uber blown eyeballs to see the difference. If you want to write program do it & it WILL run allmost everywhere with acceptable fps, IF YOU WANT TO INVENT SOMTHING HUGE & NEW then I guess you'll be glad that you have new hw paths even though they are not standart. Think about Cg plugins in 3DMax, when all you care is IQ & effects, throw all those 1024 instr's and look at the image.

NV is bul$hit when we compare fps in games & when we take in account diferences like 5%, but we are talking about 100+fps usually, so all current games are more than playable on NV's, they are even more playable on R's, but still you can't feel the difference because your LCD can handle 75Hz or your SUPA DUPA CRT can handle 100Hz at 1600x1280.
BTW, that is if you are not CPU bound =>P4~3GHz;

So who cares?

I now one thing about NV they had CG out before HLSL & GLSLANG, it helps a lot (at least for me) while I'm waiting for GLslang.

They are late with NV, it's not that revolutionary, drivers still stinks a bit, but it has a lot of b@lls to do stuff, it can do HUGE things if you want to spent time on them. So you can chose:

R-> Supadupa card runs at 5000fps in game A
NV-> almost supadupa card runs at 4999fps in game A (and has cheated to show you guys that it's faster by 1/500000 just to get sells high, because everybody is talking about that 1fps)

and then comes hw path bundles.
3vp paths NV, 2 ATI;
2fp paths NV, 1 ATI;
int + 2 float color formats NV, 1 float ATI;
huge instr count NV, UNLIMITED instr count in shaders if you are multipassing after ~96 instrs, each time writing and reading 96 bit floats to SUPER f-buffer (do 1024/96 passes & you'll see what crap comes out of your f-buffer, though of corse with such shader you'll be able to get 0.0000001fps on both cards (Film rendering case).

Let's stop this bull$hiting, they are both good cards & I see that NV fits for me better, if you wish you can stay with ATI, we have democracy!

http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif

V-man
07-01-2003, 06:58 AM
>>>hm.. actually i just dislike nvidia. i like matrox' work, too.. its just so slow<<<

Obviously you never owned a Matrox. They have always had and always will sheetty gl drivers.

I like NVidia because they are innovators and push a lot of technology onto us. NV has a huge list of extensions. They don't sit and wait for ARB to decide on something.
Heck, a lot of the questions posted here was about using NV extensions.

I really don't know why you are so negative about NVidia. Certainly, you can compile for ARB extensions (profiles) with Cg.

>>>you really think they care about something else in the marketing-part of nvidia?<<<

I know that Nvidia is pretty heavy in the R&D department. Cg doesn't offend me. I don't see why you hate it.

>>>the non-hw-texturematrix-row on gf2? <<<

I haven't heard of this. Maybe you are thinking of some other hw.

Do you rememeber the bug in ATI drivers that gave incorrect tex coords if you set your tex matrix something other than identity?

Nvidia has a nice extension that let's you use FLOAT16 as data.
They also have a nice extension called GL_NV_fragment_program
that allows you to compute at 12, 16 or 32 bit precision.
They also have GL_NV_vertex_program2 that allows, jumps, loops, conditional execution, subroutine execution.

Those are some of the things I appreciate. Other people appreciate other NV extensions (NV_RC, NV_TS, ...).

I do prefer vendor neutral functions but the other vendors except for ATI suck really bad.

Why don't you hate SIS, trident, intel, and the others who dont even have a 1.3 driver?

tfpsly
07-01-2003, 07:31 AM
Originally posted by davepermen:
without direct optimization for gfFX cards, MANUAL optimisation (eighter in drivers *cough cough* or by developers), their card are not able to run stuff good.

Well my fx5200 (less than 80 bucks in here) is still fast enough to play decently the doom 3 alpha (and the final version is supposed to be even faster =) and for my personnal coding, without using any special trick or extension.

Why would I pay much more for a hardware that would give me a worse performances/price ratio ? Self-masturbation ?
And I don't think I should speak about the Linux support, that (even if it is not that good in the case of nvidia) is highly random and do not give very good performances.

Nvidia cheated, ati did also before, who cares ?

davepermen
07-02-2003, 06:03 AM
Originally posted by V-man:

hm.. we have some at work.. never used them for gl, though http://www.opengl.org/discussion_boards/ubb/biggrin.gif

[B]
I like NVidia because they are innovators and push a lot of technology onto us. NV has a huge list of extensions. They don't sit and wait for ARB to decide on something.
Heck, a lot of the questions posted here was about using NV extensions.

because most extensions are just so proprietary designed that you don't understand on how to use them without reading all the papers and using all their tools..
yes they don't sit and wait for ARB. they actually don't really do ANYTHING for the ARB. oh, there is a gl2, so they spit out cg instead. they always have to go an own way, since ever i knew them.



I really don't know why you are so negative about NVidia. Certainly, you can compile for ARB extensions (profiles) with Cg.
because nvidia is always something "special" and i have to work in some "special" way to just get stuff to actually WORK.


I know that Nvidia is pretty heavy in the R&D department. Cg doesn't offend me. I don't see why you hate it.
it came out, and every dude downloaded the emulator for gfFX and feeling like "woah, gfFX can do so much thing no other card can ever do!!".
somehow due to the cg hype a lot of people missed that most the stuff can be done on a radeon9500 or bether, too.


I haven't heard of this. Maybe you are thinking of some other hw.
try perspective texturing the "normal" way in gl. you hit a software-emulation-part in the gf2 drivers, because they missed the 4th row in hw. you have to use instead those funny planes.. heck, forgot the name http://www.opengl.org/discussion_boards/ubb/biggrin.gif



Do you rememeber the bug in ATI drivers that gave incorrect tex coords if you set your tex matrix something other than identity?

actually, no. but a driver bug is solvable. hw bugs aren't



Nvidia has ...
and what if i don't just want to code for nvidia only? then i have to use the standard again. and then uh, oh, geforces drop in speed! (gfFX in ARB path .. ouch).

i want a card that can just do what the standard does, and does it fast. sure, innovations all the way, but FIRST GET THE STANDARD FAST. not make innovations the "only" way wich really works...

tfpsly:
Nvidia cheated, ati did also before, who cares ?
because since the first info on gfFX they had nothing but propaganda and cheats everywhere to make customers buy their card. when ever there is no cheating, their cards perform so much worse, it hurts.

oh, and the big difference over all differences is:
ATI: uhm.. yes.. we cheated.. our fault
NVIDIA: NO WE DID NOT THAT COMPANY JUST WANTS TO MAKE US LOOK BAD (calls lawyer-army till that little company wines the way they want).

that is marketing-crime going on there. i don't like to have stuff from companies i cannot trust in any way. i cannot trust nvidia anymore.
they have not shown to be able to deliver small, quiet, yet very good and fast hw
they have not shown to be able to follow any standards given to them
they have not shown to be more or less fair in the competition
they have not shown to be able to stand their faults

they suck. badly.

and thats bad, because they had quite some good stuff, too. and still have. its not that i just hate them. but they definitely made too much wrong currently to make them trusty sellers..

if my car would be based on nvidias politics.. i think i would be dead in some tree.. or exploded... hm no.. blown away from the engine-cooler, i bet..

Ysaneya
07-02-2003, 06:43 AM
My views on the subject:



I like NVidia because they are innovators and push a lot of technology onto us. NV has a huge list of extensions. They don't sit and wait for ARB to decide on something.


Can you truely say that from ATI ? EMBM was available on ATI cards before NVidia's, if i'm not mistaking. The Radeon 8500 was the first card to support "true" pixel shaders (and not combiner tricks). The Radeon 9000 family, released 6 months before the FX, was also the first to support ps2.0 functionnality. And now we see GL2 extensions appearing in ATI's drivers.

Both companies make different extensions available at different times, but i think it's not fair to say NVidia's more innovative.



Heck, a lot of the questions posted here was about using NV extensions.


Popularity has nothing to do with the amount of functionnality available. If 9 people out of 10 are using NVidia cards, regardless of extensions ATI is supporting, you'll find that 9 questions out of 10 here are about NVidia extensions. Note that, since 1-2 years, this has been changing.



Certainly, you can compile for ARB extensions (profiles) with Cg.


Instead of pushing Cg, they could have pushed GL2. This sounds like a pure political/marketing decision to me.



Do you rememeber the bug in ATI drivers that gave incorrect tex coords if you set your tex matrix something other than identity?


Everybody remembers the bugs, both in NVidia and ATI drivers. And don't say there was none in NVidia's. Yes, ATI's drivers used to be more buggued than NVidia's (and honnestly i think it's still true :p) but this is far from being that bad. NVidia also has its share of problems. Software clipping planes anybody ?

At the moment, Radeons are generally faster, cheaper, and have a better image quality than Geforces. They are also less stable IMO. From a pure technical standpoint, i found their extensions to be a lot more well designed, and simple than NVidias. If you prefer NVidia cards for reason X, Y or Z, fair. But don't defend NVidia because "they are used" to have the best cards. Judge the cards on their own merits, not on history.



Why don't you hate SIS, trident, intel, and the others who dont even have a 1.3 driver?


I know this was not directed to me but... i do hate 'em :)

I'm definately happy with NVidia and ATI though.

Y.

dorbie
07-02-2003, 08:55 AM
davepermen, can NVIDIA do anything right in your eyes?

OK, the Futuremark shenanigans were deplorable, but apart from that is there really so much to get so bitter about? (yup, the Futuremark stuff is enough in itself)

You can make all sorts of assumptions about their motivations but it's verging on irrational to to interpret their every move in such a negative light.

The FX 5900 Ultra doesn't look too shabby.

*Aaron*
07-02-2003, 04:49 PM
Originally posted by tfpsly:
Self-masturbation ?Is there any other kind?

dbugger
07-02-2003, 09:33 PM
^^ What Ysaneya and davepermen said ^^

M/\dm/\n
07-02-2003, 10:45 PM
daveperman: Look at my post with a lot of smilies in this page http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Next, CG in ARB path is nothing special, even syntax is allmost like for GLslang, moreover you are writing one program in Cg that can be used for DX/OGL/Maya/3Ds/Lightwave/&that FX thingy + that works on all cards: vp - starting from GF1 & ATI cards with ARB_v_p (forget about that sh*t from Intel, Sis & S3. I REALLYYYYYYY http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/mad.gif HATE THEM TOO) and for fp - If I remember correctly, gf3/gf4/gFX & all cards that support ARB_f_p. But if you want to use true loops and so on you'll have to compile shaders in NV_f_p & for vp NV_v_p2 (one change in compiles setup -fp30&#0124; &#0124;-arbvp&#0124; &#0124;-vp30 + your shader (AND YES YOU CAN DO THAT IN YOUR PROGRAM DYNAMICALY)).

Next thing is that Cg is out and working GLslang isn't, HLSL - I just dont like it http://www.opengl.org/discussion_boards/ubb/frown.gif & I havn't tried BUT I THINK IT WILL NOT WORK WITH OGL http://www.opengl.org/discussion_boards/ubb/mad.gif

And then, NV has a lot of their own extensions now, but (if I remember correctly) I saw some developer thoughts on tomshardware about 24/32 formats. And you now, most of them told that 32 float + int is the ONLY way for the next year, so 1 year & well be writing backward compatible shaders for ATI, so I don't know which path is better http://www.opengl.org/discussion_boards/ubb/biggrin.gif At least I hate all the backward stuff http://www.opengl.org/discussion_boards/ubb/biggrin.gif

And actually, that f-buffer gets me every time I think about it http://www.opengl.org/discussion_boards/ubb/mad.gif (ease of use, Phe)

Then look at demo on front page NV_oclussion_querry (not HP or ATI) + ARB_vertex_buffer_object, I think it's nice http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Thats all, I not offending ATI, but The facts are facts http://www.opengl.org/discussion_boards/ubb/biggrin.gif

tfpsly
07-02-2003, 11:20 PM
Originally posted by *Aaron*:
> Originally posted by tfpsly:
> Self-masturbation ?
Is there any other kind?

http://www.opengl.org/discussion_boards/ubb/smile.gif Ask your gf, dude!

[This message has been edited by tfpsly (edited 07-03-2003).]

masterpoi
07-03-2003, 01:19 AM
Originally posted by tfpsly:
http://www.opengl.org/discussion_boards/ubb/smile.gif Ask your gf, dude!


Why should he ask his geforce about self-masturbation ;-)

MtPOI

Nutty
07-03-2003, 04:47 AM
davepermen, can NVIDIA do anything right in your eyes?
OK, the Futuremark shenanigans were deplorable, but apart from that is there really so much to get so bitter about? (yup, the Futuremark stuff is enough in itself)

You can make all sorts of assumptions about their motivations but it's verging on irrational to to interpret their every move in such a negative light.

He's an ATI fanboy now, dont expect sanity! :P

Seriously though,


they have not shown to be able to follow any standards given to them

So you're saying NV3X doesn't support ARB_Vertex_Program ARB_Fragment_Program? Those are the standards, and NV3X _does_ support them. Okay the performance in ARB_FP may not be as high as the 9800, but its still fast, and has no basis for saying they dont follow the standards. They just happen to offer an alternative that allow their hardware to be better utilized.

As for cheating, well all IHV's do it. ATI and nvidia have been doing it ever since ppl looked at benchmarks...

What would you do, not "cheat", and watch as your stock price plummets while company X, says they've magically improved driver performance, and 3dmarkX scores on their card are double yours?! Yeah I'm sure you'd sit there, and preach from the rooftops that its becuase you're doing everything properly.

I'm not saying its right, but its like that, and thats the way it is! http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

V-man
07-03-2003, 05:00 AM
davy,

give me a link about this "non-hw-texturematrix-row on gf2" cause I can't find anything. I don't see why they wouldn't do it in hardware. All it needs is an identical circuitry to perform a matrix mult, just like it does for model-proj matrix for vertices, etc, etc.

I know of a Nvidia demo that does projected textures (the smiley face being projected) on a teapot. Once I remove the "slices", it runs smooth. It has a fillrate issue when you turn the slices on.

I wish they had an errata and bug fix list. I already asked that they put this up on the web.

V-man
07-03-2003, 05:15 AM
>>>What would you do, not "cheat", and watch as your stock price plummets while company X, says they've magically improved driver performance, and 3dmarkX scores on their card are double yours?! Yeah I'm sure you'd sit there, and preach from the rooftops that its becuase you're doing everything properly.<<<

It's not smart to put your reputation at risk.
All they had to do is say that 3dmark gives bad numbers for them cause it sucks or that they signed a deal with the competition to make them look bad.
Then they can wip out their own benchmark 1 week later and drop the bomb on 3dmark and ATI.

Well, it's not my company and it's not my problem. http://www.opengl.org/discussion_boards/ubb/smile.gif

dorbie
07-03-2003, 06:55 AM
Nutty,

so basically you're saying that consumer fraud is just fine if it's to save your stock price.

Unscrupulous behaviour is not justified by the motive for it. I'm sure NVIDIA thought they had a motive for doing what they did. I don't give a crap, they got caught and they shouldn't have done it.

I just don't buy this "oh well everyone does it so when someone get's caught we should just give them a free pass" argument. It's pathetic, and it's completely misleading, it's actually quite rare that someone is caught red handed and NVIDIA has even done the catching and finger pointing in the past. NVIDIA's cheats were extensive and to my knowledge unprecedented in their scale, this was clearly an orchestrated deliberate deception. Their performance was poor because of design tradeoffs they made w.r.t. DX9, not some conspiracy of Futuremark, who basically implemented vanilla DX9 shaders, these are exactly the kind of shaders NVIDIA claims supremacy at with FX. In addition to undeniable cheats like hidden clip planes and screen clears NVIDIA hosed their quality and conformity in these to boost their frame rate while boasting of superior quality in their products, then they compounded this by their reaction when caught.

This isn't run of the mill stuff. It's as bad is it's imaginable to get. At least we know where NVIDIA stands on this though. They've flat out said that their blatant cheating is OK, so I guess we can expect it in future from them.

The only reason they did this and thought they'd get away with it is the kind of moral equivocation that you're exhibiting, that typifies the unbelievable reaction to blatant benchmark cheating and worse.

Nutty
07-03-2003, 06:58 AM
No, I didn't say it was fine. I also said, I dont think its right.

You should be telling NV and ATI, not me, they're the ones doing it.

dorbie
07-03-2003, 07:53 AM
They're not the ones confusing the issues in this discussion. No, NVIDIA is the one doing it, not ATI. There's the same insistence that we just cannot look critically upon this but must resign ourselves to be the disempowered pawns because everyone does it (or nobody does it). All sides are not equal in this incident. (See extensive discussion above.) It's like people have inexplicably lost the ability to objectively measure the relative significance of events & actions or separate them in time.

Nobody likes this, but everyone is real sanguine over this as if that's not part of the problem. Nobody wants to call the miscreant caught red handed a cheat saying things "that's just the way it is". Thereby excusing the behaviour while not endorsing it.

There's only ONE company standing up right now saying blatant cheats are just fine, and that's NVIDIA. ATI have said their conservative optimization is getting pulled for fear of backlash.

You & others seem to think there's no difference between the two positions. There is a massive gulf between these positions. Only the liberal arts dumbasses in the media can be excused for their lapses. I expect better discernment from OpenGL programmers :-)

We can hang ATI when they're caught stealing a horse, unfortunately NVIDIA has been caught russling a whole herd but they've gotten away with it by shooting the Sheriff. (OK sorry for that, I'm in Texas these days).


[This message has been edited by dorbie (edited 07-03-2003).]

davepermen
07-03-2003, 09:00 AM
Originally posted by Nutty:
So you're saying NV3X doesn't support ARB_Vertex_Program ARB_Fragment_Program? Those are the standards, and NV3X _does_ support them. Okay the performance in ARB_FP may not be as high as the 9800, but its still fast, and has no basis for saying they dont follow the standards. They just happen to offer an alternative that allow their hardware to be better utilized.
if they just followed the standards and would have implemented a 24bit mode, all would have been happy. performance seekers, standarts seekers, quality seekers.

no, they don't follow any standard by creating 16 and 32 bit. one is too much => too slow, one is too less => too low quality.


they do follow standards, of course, they could not sell their hw else. but they only have some sort of wrappers for the standards.. as they always state: use our extensions for really using the card. and this gets more and more the way it is.

they created cg to fit all their proprietary standards under one proprietary new standard.

yes, its just the way it is. and berlusconi can take over italia, thats just the way it is. and all the other evils are just the way they are.

hell in what a great world we live, all the hope for good we have, not? and the feel that we can change so much!!

Nutty
07-03-2003, 02:04 PM
No, NVIDIA is the one doing it, not ATI.

I'm sorry, but ATI have cheated just as much as NV. In 3dmark03, and 3dmark01, and others.
http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=13789
http://www.digit-life.com/articles2/antidetect/index.html

You can also find others if you search around.


There's only ONE company standing up right now saying blatant cheats are just fine, and that's NVIDIA. ATI have said their conservative optimization is getting pulled for fear of backlash.

Some of the things nvidia did in benchmarks, are also some of the things they do for games. Optimizing shaders etc.. While it may or may not be ethical to do this in games, the fact is they do. So if 3dmark is meant to be a gaming benchmark, it should be subject to the same rules as games. And nvidia optimize games where they can see an ability to do so.

I'm not saying its right, but thats what they do.


You & others seem to think there's no difference between the two positions. There is a massive gulf between these positions. Only the liberal arts dumbasses in the media can be excused for their lapses. I expect better discernment from OpenGL programmers :-)

I dont see any difference in NV and ATI's practises. How they handle it when confronted is different. But I do honestly believe nv has a point. If a game is shipped that runs truly bad on nvidia hardware, they will modify it to run better, so doing the same to a gaming benchmark, _is_ indicative of real world game performance.



if they just followed the standards and would have implemented a 24bit mode, all would have been happy. performance seekers, standarts seekers, quality seekers.
no, they don't follow any standard by creating 16 and 32 bit. one is too much => too slow, one is too less => too low quality.

So you're saying NV is not allowed to make hardware that does something different? Well thats what I like to see dave, put a stamp on innovation eh?!


they created cg to fit all their proprietary standards under one proprietary new standard.

No, they created Cg, to make it easier for developers to make good stuff on their hardware. A fundamentally different perspective. Theres nothing, not a single thing! Stopping you using Cg, for ATI cards.


no, they don't follow any standard


they do follow standards, of course

Make your mind up, do they or dont they follow standards?

Nutty

V-man
07-03-2003, 06:14 PM
>>>if they just followed the standards and would have implemented a 24bit mode, all would have been happy. performance seekers, standarts seekers, quality seekers.
no, they don't follow any standard by creating 16 and 32 bit. one is too much => too slow, one is too less => too low quality.<<<

Where does it say that you need to support exactly 24 bit internal precision?

The spec requires about 17 bits at minimum which is an odd number (byte wise). So having 16 bit is not bad (it's your job as a programmer to decide to use it in the right places), 24 bit is better and 32 bit is traditional.

The 16 bit precision should help save some clock cycles giving NV an edge.

Q: when you use ARB_vp or ARB_fp, NV will be forced to use 32 bit precision, right?
I'm assuming this is why they recommend to program "specifically" for their hw.

And this angers Davey a lot. I see I see!

sqrt[-1]
07-03-2003, 07:32 PM
"Q: when you use ARB_vp or ARB_fp, NV will be forced to use 32 bit precision, right?
I'm assuming this is why they recommend to program "specifically" for their hw."

I believe there is a glHint(..) flag for fragment programs (NICEST and FASTEST) to allow you to select 16 or 32 bit precision in ARB_fp. (on Nvidia)

GT5
07-03-2003, 07:45 PM
wowww
neva knew my thread was soo popular http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif
and this is the 200th reply http://www.opengl.org/discussion_boards/ubb/tongue.gif

dorbie
07-04-2003, 12:46 AM
Nutty, there are a couple of serious problems with that article that are easily revealed by simply doing the comparrison on the images he provides, using autocontrast would show even rounding differences in virtually equivalent algorithms, I've looked at those shots in photoshop myself now and that's exactly what he's done. The NVIDIA shot differences show huge leaf rotations. The ATI difference shows single pixel rounding in the alpha component of the leaves, of the sort that might be introduced by innocent and genuinely equivalent optimization, download the raw images and do a flipbook between them, instead of taking some guy's eyeball of autocontrast post processed comparrison claims as gospel. (It's amazing the analogy between this guy's autocontrast image processing and the general debate claiming all sides are equal in this losing all sight of proportion or objectivity). The other big problem in the article you linked is the extrapolation from his analysis to 3DMark2003. These are both HUGE flaws in the article, it's complete garbage apart from the insight on rotation as the likely NVIDIA cheat. All it tells me is that NVIDIA cheat in 3DMark2001 and again ATI are tarred by the same brush with insufficient evidence.

Back to 2003, ATI & NVIDIA responses are the absolute opposite of what should be going on here, ATI have seemingly legitimate optimizations (and the screenshots bear this our) NVIDIA have cheats, yet ATI are removing theirs and NVIDIA are brazenly saying they've done nothing wrong, when they have. It's not just that I find the responses of ATI & NVIDIA to be different, I find them both to be the opposite of what they should be.

Many of NVIDIA's cheats were not about shaders, this is another deception that gets repeated, but the only reason that lie is pushed is to back up a second lie, and that's that there was somehow equivalence between what NVIDIA did and what ATI did to the shaders.

Some of NVIDIA's cheats are impossible for games (fixed path clip planes). So it doesn't matter how much 'driver' development time a game get's it has no bearing on these mods being cheats.

The modification of a shader to reorder instructions or use different registers etc while maintaining functional equivalence is borderline acceptable (see other long threads on this), modification of a shader to something that is significantly different is totally unacceptable for obvious reasons. I could modify any shader to be untextured flat shaded white and win all benchmarks, where do you draw the line? Where some marketing droid or driver 'engineer' finds the number they need and they think they can get away with it, is the inevitable answer (for NVIDIA).

Finally, NVIDIA can make any hardware they like, I never said otherwise, don't put words into my mouth just so you have some easy target, it's a cheap tactic. Hardware affects all software performance and they must live with their design tradeoffs. To modify applications through driver trojans to simulate the results had their design decisions been different is cheating. When I look at a benchmark I want to see how a piece of software (or at least functionality) works with their hardware, not how some piece of trojan code that does something completely different performs.

[This message has been edited by dorbie (edited 07-04-2003).]

billy
07-04-2003, 01:27 AM
Originally posted by namespace:
Thx Humus!

But I'll stay with Nvidia. They need my support/money right now http://192.48.159.181/discussion_boards/ubb/wink.gif

Just hope, that there will be a fx with passive cooling soon. I HATE fan-noise...

They grew 180%!! Why do you think they need more money?

Nutty
07-04-2003, 01:31 AM
Finally, NVIDIA can make any hardware they like, I never said otherwise, don't put words into my mouth just so you have some easy target, it's a cheap tactic.

I didn't say that you said otherwise, that was directed at dave, who seems to think, unless your hardware does anything except the standard ARB path, you should be shot for bringing out proprietry technology. Perhaps if you looked at the text I quoted, you would have known this.


using autocontrast would show even rounding differences in virtually equivalent algorithms

Its still app checking in a benchmark, regardless of how you look at it.


All it tells me is that NVIDIA cheat in 3DMark2001 and again ATI are tarred by the same brush with insufficient evidence.

Do you work for ATI or something? You seem to be completely blind to any evidence suggesting that ATI app check for benchmarks just like nv do.

They said they would remove those cheats in Cat 3.5, and low and behold, when cat3.5 appeared, huge increases in yet more benchmark scores..


I could modify any shader to be untextured flat shaded white and win all benchmarks, where do you draw the line?

Exactly where do you? No-one seems to know. Is it okay to draw the line at say 10% pixel error, or more? Who decides?

I dont think we'll ever agree, so I'll leave this thread here. If you want to believe ATI never does any benchmark manipulation, then thats your decision.

davepermen
07-04-2003, 02:44 AM
first of all, nutty: poor you run away..


Originally posted by Nutty:

I didn't say that you said otherwise, that was directed at dave, who seems to think, unless your hardware does anything except the standard ARB path, you should be shot for bringing out proprietry technology. Perhaps if you looked at the text I quoted, you would have known this.


you obviously don't get it. their hw cannot fullfill the standards, they have to sort of EMULATE it. like saying the gba can do floatingpoints. of course he can do the full IEEE floatingpoint math. but he's not made for it, and cannot do it fast. result, you're forced to not use floatingpoint on the gba.

first of all, i think EVERYONE in here wants standards. we're openglprogrammers, and we all know enough about the extensionmess to love and hate it.

and thats why i say, every hw vendor should, for its own best (we see how nvidia has troubles now), make hw that just fullfills the standards, making them run fast. they all knew what dx9 asks for, they all knew what opengl ARB exts will ask for (namely the same requirements +- that dx9 does..).

so nvidia knew right from the start their hw will never have a real dx9 mode, but rather has to emulate it. and the result IS visible for example in 3dmark03, where it eigher has to emulate ps1.4 wich its very slow at, or ps2.0 wich nvidia hw has to emulate as well!! the result was sucking speed, and still IS sucking speed. and the logical result was cheating.


first: build standard compatible hw, that does exactly what the standard asks for FAST. because games will first of all fit standards and gamers want to have them fast.

THEN: add additional features to your hw, as much as you want, as much as you can. and yes, the additional features of the gfFX are great, i don't say anything about that. its just that the standard parts of the gfFX lack speed, its obvious that they have to emulate all ARB_fp with their NV_fs in a way they don't like. they are NOT allowed to touch 16bit fp at any time in any standard compliant shader and that is HORRIBLE for them in terms of performance.
but they KNEW that!! WHY HAVE THEY DONE THAT THEN?!

i do fully support new stuff in hw, i want hw to evolve. but NEVER at the cost of slow standarts, and sorry, when ever you disable the cheatings in nvidia drivers, they fall back to very very very slow real benchmarks. ati doesn't.

and i do understand shaderoptimizing cheats. while they ARE wrong the way they are implemented, they are .. understandable.
but the clipplanes just suck, plain and simple http://www.opengl.org/discussion_boards/ubb/biggrin.gif

i hope you understand my point now, nutty..

i'm happy that i can draw with plain primitive opengl the way it works on all older cards with my ati and it works fast. that i can draw with plain opengl the way only newest cards can and it works fast.

thats why the ati cards are good designed cards. they just run fast where ever they need to. yes the gfFX is more advanced. but does the gfFX without developer or driver optimisation perform well in standard opengl? never ever done.. and thats simply sad.

no standard ever says something about maximum allowed performance, so please vendors make your hw fast as hell in standard mode. and then, shine with your hw in any benches, without (big..) cheating. shine in the developers hearts by adding additional, revolutionary features that may define future standards. but always design for todays or possible future standards. never design into some other direction, redefining everything to own standards. too much troubles for yourself, as well as for the developers.

Nutty
07-04-2003, 04:10 AM
first of all, nutty: poor you run away..

I aint running away, I just fail to see the point in dragging out an argument, that is never going to be concluded. I've got better things to do with my time than argue with you two.

V-man
07-04-2003, 09:22 PM
This must be the largest thread in the history of these boards. I'll drink to that!
And it looks like Nutty burned you guys. LoL

dorbie
07-04-2003, 11:06 PM
I didn't realize that comment was directed at Dave, I'm skimming where I can.

I'm not blind to ATI cheating, I looked at the *evidence*, did you before you posted your inaccurate claims? You insist there's cheating where there is none, why? Rounding differences are fundamentally different in character from blatant cheats (see extensive other threads on app specific issues). Shader optimization is fundamentally different from shader modification. View dependant clip planes are fundamentally different from all of the above.

You seem to ignore clear differences between the actions and response of these companies. If you have better things to do, then go do them. I have better things to do that discuss this issue with you but I feel the need to highlight your repeated misrepresentation of the facts.

V-man, shall I fetch you some pom-poms?

P.S. texture filter may be the cause of the ATI 'nature' diffs, more suspicious IMHO, depends on your view of filtering on billboards.

[This message has been edited by dorbie (edited 07-05-2003).]

davepermen
07-05-2003, 08:22 AM
Originally posted by V-man:
This must be the largest thread in the history of these boards. I'll drink to that!
i'll take a red bull on it http://www.opengl.org/discussion_boards/ubb/biggrin.gif


And it looks like Nutty burned you guys. LoL

uhm.. he ran away again as he realised he had no chance.. calling that get burned, dunno..

he always runs away in discussions with me.. not caring actually what i say.. he just asks are you with me or not? if not, he directly closes..

poor boy http://www.opengl.org/discussion_boards/ubb/biggrin.gif

he'll always love his geforces.. even to death..
so did a lot loved voodoo cards..

but if you're real, they are all just companies, and it does not mather wich one does what.

nvidia is currently way off. there is no love or hate helping there, its just plain simple.
they do unfair stuff
they do unuseful stuff
they don't work for the community

we want a good working united opengl
we want a good directx (yeah, not we on this board http://www.opengl.org/discussion_boards/ubb/biggrin.gif)
we don't want another glide.. and nvgl can get called glide2.. for a long time, only the first small window was actually not nvextension. and that was just to init the extensions.. http://www.opengl.org/discussion_boards/ubb/biggrin.gif
nvidia is out.. they have to do real work to get in again. i hope it for them, i have no problems with them..

dawn is nude.. *cough* cute i ment http://www.opengl.org/discussion_boards/ubb/biggrin.gif

dorbie
07-05-2003, 09:06 AM
Dave, I think you're on your own in your view of NVIDIA. Your post takes the mosts extreme anti-NVIDIA position on every issue.

Nutty
07-05-2003, 02:51 PM
I think your completely ignorant remarks, speak volumes about why I cant be bothered to continue.

matt_weird
07-05-2003, 09:59 PM
Originally posted by V-man:
This must be the largest thread in the history of these boards. I'll drink to that!


Originally posted by davepermen:
i'll take a red bull on it http://www.opengl.org/discussion_boards/ubb/biggrin.gif


LOL, good one! http://www.opengl.org/discussion_boards/ubb/biggrin.gif I'll have Efes too it! http://www.opengl.org/discussion_boards/ubb/cool.gif


Originally posted by davepermen:
he'll always love his geforces.. even to death..
so did a lot loved voodoo cards..

hehehe http://www.opengl.org/discussion_boards/ubb/wink.gif, voodoo cards -- i never owned a voodo card, and that was pity , specially when i was playing NFSII back in 1996 http://www.opengl.org/discussion_boards/ubb/frown.gif So i always hated 'em http://www.opengl.org/discussion_boards/ubb/mad.gif http://www.opengl.org/discussion_boards/ubb/tongue.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif
(besides, i think i have something personal against nVidia too http://www.opengl.org/discussion_boards/ubb/wink.gif http://www.opengl.org/discussion_boards/ubb/mad.gif )


Originally posted by davepermen:
but if you're real, they are all just companies, and it does not mather wich one does what.

and that's bloody right! http://www.opengl.org/discussion_boards/ubb/cool.gif and it does't matter wich one does what, unless they're manufacturing a nuclear bombs to sell it to a world's most aggressive countries! http://www.opengl.org/discussion_boards/ubb/biggrin.gif


Originally posted by davepermen:
they do unfair stuff
they do unuseful stuff
they don't work for the community

we want a good working united opengl
we want a good directx (yeah, not we on this board http://www.opengl.org/discussion_boards/ubb/biggrin.gif)
we don't want another glide.. and nvgl can get called glide2.. for a long time, only the first small window was actually not nvextension. and that was just to init the extensions.. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

viva la revolucion! http://www.opengl.org/discussion_boards/ubb/biggrin.gif


Originally posted by davepermen:

dawn is nude.. *cough* cute i ment http://www.opengl.org/discussion_boards/ubb/biggrin.gif

yeah, what a "nice" example of a silicon processing industry progress! http://www.opengl.org/discussion_boards/ubb/biggrin.gif (quite like another one -- P.Anderson' silicon boobs http://www.opengl.org/discussion_boards/ubb/rolleyes.gif http://www.opengl.org/discussion_boards/ubb/tongue.gif)

[This message has been edited by matt_weird (edited 07-06-2003).]

davepermen
07-06-2003, 04:11 AM
Originally posted by Nutty:
I think your completely ignorant remarks, speak volumes about why I cant be bothered to continue.

always the same.. if its something that could be against your view, run away as fast as you can. just don't let anything else interfer.. duck and cover, fight the terrorists, get rid of the evil other thinking..

you haven't read my stuff actually nutty, i KNOW that.

Dusk
07-06-2003, 04:15 AM
Dawn is tramp.

Dusk.

V-man
07-06-2003, 08:12 AM
Originally posted by Dusk:

Dawn is tramp.

Dusk.

What? You dont like fairies?

It's like beer commercials. http://www.opengl.org/discussion_boards/ubb/smile.gif

I have one of the screenshots as my desktop. The one where she is in leather.

*Aaron*
07-06-2003, 09:33 AM
Dawn is tramp.

Dusk.I'm not surprised you feel this way. After all, you two are exact opposites. http://www.opengl.org/discussion_boards/ubb/wink.gif

Dawn'
07-06-2003, 11:42 AM
Originally posted by Dusk:

Dawn is tramp.

Dusk.

Huh?! Jealous slut.

Dawn.

john
07-06-2003, 04:05 PM
I swore that I wouldn't post to this thread again because I had said all I wanted to say on this subject. I also thought better of posting a timely correction (because I thought my retorts would be obvious and there would be no need to write them down)... but I've changed my mind, mainly because of comments like


you haven't read my stuff actually nutty, i KNOW that.

which is ironic (for reasons described at the end).

NB. When I am talking about graphics cards and graphics vendors in this discussion, I am NOT talking specifically about nVidia.


but there is no point. gpu's are simple

If you truly believe this then there really is no point in arguing about it... oter than congratulating you on coming top in your country in your computer systems engineering course at university and for single handedly designing your own CPU and writing an optimisd compiler for it.

Obviously you are mistaken.

The Geforce and Radeon cards both are >> 100 million transistors. 100million is approximately twice that of a Pentium 4. The sheer number of transistors clearly suggests that these processors are insanely complicated devices. Not even comments like --


they have simple, well defined tasks.

will deny that they are sophistated devices. All hardware and computer systems have a simple, well defined tasks. They are state machines: their behaviour can be well defined, also.


just since the gfFX nvidia now CLAIMS that hw gets so difficult and so different all the time that we actually have to develop for each one individually.

and you know otherwise? You realise that the graphics processors are now effectively massively parallel systems, right? That their functionality aims for executing a programs in parallel: once per pixel, and once per fragment, while other vertex streams are coming in ABD while trying to implement some kind of system to minimise data-dependancy stalls from fetching data from the frame buffer? And you honestly believe that this is a simple task?

Do you realise that the supercomputers from www.top500.org (http://www.top500.org) have an architecture and associated problems not unlike a graphics card? They're machines with a well defined interface ("share this data, execute operations on them, share the results"), but no-one from the Distributed and High-Performance research groups around the world will claim that they are easy to program or design. I am not saying that GeForce/Radeon ARE the same as supercomputers: my argument is that they have the same kind of problem: massicely vectorised number crunchers sound easy if you don't think about the complexities behind their design.


but thats the first time they don't deliver hw that can just rastericer the way apis mean it.

I have no idea what this means. The API is the interface; therefore the hardware must, by definition, conform to the API.


gpu's are nothing complicated. the path, the design is actually well defined.

Their design is well defined? Have you seen a chip roll out of the fabrication plant from a blueprint that WASN'T well defined? All CPUS have a pipeline, or a path for an instruction to travel. Are you also saying that CPUs and compilers are easy to write, too? Since when has "well defined" translated to "not complicated"?


they should simply blame themselves and get quiet.

blame themselves for _what_, exactly? Coming up with hardware that is sufficiently complicated that it can be exploited or undermined by toy benchmarks? You can do that for ANY computerised system. Blame themselves for bad publicitity over being caught cheating? Maybe: but it highlights a valid point that toy benchmarks are meaningless and stupid.


i don't want to start lowlevel optimize again for all different kinds of hw. isn't that why opengl is there, or dx? because we DON'T NEED THAT?! why should it be now different. just now, that nvidia made a big design fault its not their fault but instead the whole world changed..

... you didn't read what I wrote. I was advocating a system AGAINST low-level optimisation. I'll even quote the relevent part from my earlier post:

Cg, or gslang, or whatever incarnation you want to talk about is a Good Thing for graphics cards and programmers. It is a layer of abstraction over graphics hardware just like a compiler abstracts over the hardware complexities of a CPU. Yes, Cg it is only available for nvidia cards at the moment, but the ideology is a good start. If nVidia and ATI and 3Dlabs and whoever else can agree on a language for OpenGL that allow programmers to descibe high=level graphics operations so the video driver can schedule code for a particular implementation, then EVERYONE will be a winner.

... but that only discusses optimisation for prograammable shaders, not render paths. There is yet no language to describe the fast path of a graphics processor, and until there is such a system, application programmers will be compelled to target one architecture over another.

Graphics processors are not simple. Arguing otherwise is... naive.

billy
07-06-2003, 11:16 PM
Originally posted by dorbie:
Dave, I think you're on your own in your view of NVIDIA. Your post takes the mosts extreme anti-NVIDIA position on every issue.

I think there are too many people that work for or would like to work for NVIDIA here!.

Not everyone is crazy about NVIDIA. They produce good graphic cards but a cheat is a cheat!

Morpheus
07-07-2003, 03:50 AM
Nvidia didn't cheat. These were glitches in Matrix.

tfpsly
07-07-2003, 04:47 AM
Originally posted by Morpheus:
Nvidia didn't cheat. These were glitches in Matrix.

LOL! If you say, then you're a NvMatrix agent! How comes? I thought Morpheus was on the AtiHuman side?

Ps : you saw those nice altered matrix screenshots too?

tfpsly
07-07-2003, 04:49 AM
Originally posted by billy:
I think there are too many people that work for or would like to work for NVIDIA here!.
Not everyone is crazy about NVIDIA. They produce good graphic cards but a cheat is a cheat!

I'm against Nv when they cheat like they did. I'm against Ati when they do such crappy drivers under Lnx like they do.

No one is perfect, god probably does not exist, and this is not a wonderfull world. Thanks for opening my eyes http://www.opengl.org/discussion_boards/ubb/smile.gif

Sorry, could not resist http://www.opengl.org/discussion_boards/ubb/wink.gif

dorbie
07-07-2003, 05:25 AM
Originally posted by billy:
I think there are too many people that work for or would like to work for NVIDIA here!.

Not everyone is crazy about NVIDIA. They produce good graphic cards but a cheat is a cheat!

Billy, read my earlier posts instead of jumping on the end of a thread. To say I'm excusing NVIDIA because I want to work there just demonstrates that you're an idiot who never read the thread. It is possible to criticize NVIDIA and call them cheats without demonizing their every move.

Daveperemen's summary of NVIDIA was extreme (and irrational IMHO), there's no two ways about it.

If I wanted to work for NVIDIA I'd be working for them by now, I've never applied nor do I intend to, they're a great company, so's ATI, I just don't want a dog in that fight, thanks.

Here's a quote from earlier in this thread to show how laughable your accusation is:


Originally posted by dorbie:

All they need to do is grow a spine, get a clue and call a cheat a cheat. If we can't do that then benchmarks of any stripe are utterly useless. They're made useless by these vacillating fools. We have NVIDIA actually having the gall to stand up and say what they did was OK (quite a change in position since writing quackifier), I guess that means they'll be cheating again in future.

It's completely open season now thanks to NVIDIA, and if they get caught cheating and committing consumer fraud (IMO) that'll be just fine by them. They can even rewrite your copyright software through driver trojans without a license and undermine your business and if you speak out maybe they'll sue your ass unless you kiss theirs. It's bloody brilliant, we need more of this in future, the boot isn't stomping on our faces hard enough yet.


[This message has been edited by dorbie (edited 07-07-2003).]

knackered
07-07-2003, 06:41 AM
So, to sum up...

t0y
07-07-2003, 07:36 AM
Originally posted by knackered:
So, to sum up...

Nvidia cheated, ATi cheated, and this thread is pointless...

This will only end when they document all the app-specific optimizations/fixes from their drivers, *and* give you the option to disable them all.

billy
07-07-2003, 07:59 AM
Originally posted by dorbie:
Billy, read my earlier posts instead of jumping on the end of a thread. To say I'm excusing NVIDIA because I want to work there just demonstrates that you're an idiot who never read the thread. It is possible to criticize NVIDIA and call them cheats without demonizing their every move.


I have better things to do than to read 200 posts and your an idiot to do so.

Nutty
07-07-2003, 08:01 AM
This will only end when they document all the app-specific optimizations/fixes from their drivers, *and* give you the option to disable them all.

Which aint ever gonna happen.

The End.

[This message has been edited by Nutty (edited 07-07-2003).]

jebus
07-07-2003, 08:08 AM
so, has anyone played The Frozen Throne yet? i spent all weekend playing it!

jebus

V-man
07-07-2003, 01:27 PM
Until the next benchmark, until the next cheat!

This thread will be back at the tops by next year. Maybe less.

dorbie
07-08-2003, 01:49 AM
Billy, I was participating in the thread for the 'fun' of it.

I see you work for CAE... considering your company's deals with ATI that makes your accusation of my favoring NVIDIA because of my employment, not only ridiculous but rather hypocritical doesn't it. I was told recently that when someone attacks you out of the blue it isn't about you, it's about them. That seems to apply here in spades.

billy
07-08-2003, 02:44 AM
Originally posted by dorbie:
Billy, I was participating in the thread for the 'fun' of it.

I see you work for CAE... considering your company's deals with ATI that makes your accusation of my favoring NVIDIA because of my employment, not only ridiculous but rather hypocritical doesn't it. I was told recently that when someone attacks you out of the blue it isn't about you, it's about them. That seems to apply here in spades.

How did you guess? http://www.opengl.org/discussion_boards/ubb/biggrin.gif
You must dedicate yourself to fortune telling.



[This message has been edited by billy (edited 07-08-2003).]

M/\dm/\n
07-08-2003, 02:44 AM
F**K http://www.opengl.org/discussion_boards/ubb/mad.gif Password entered incorrectly press back button, & F**K text is lost as allways, when we'll see update to this ubb?

OK to the offtopic http://www.opengl.org/discussion_boards/ubb/biggrin.gif

1) What is called poor ARB_f_p performance in fps (100-, 85-, 30-) & where you actually see it (shader, length, syntetic bench/game)?

2) Is 24 bit STANDART? And WILL it be THE STANDART next year? (look at developer interviews at tomshardware).

3) Is Ati faster in f_p when doing multipassing 3+ times using f-buffer & do the precision issues arise (writing+reading each pass) & is this F-buffer easier to use vs. NV exts?

4) If Futuremark wrote such OBJECTIVE test, why Ati, Matrox etc. didn't helped them in fight against NV (NV paying them too? NV($$$)->ATI)?

5) Someone mentioned that he doesn't like extensions, I allways thought that GL is about extensions http://www.opengl.org/discussion_boards/ubb/biggrin.gif and that's one of the strongest sides of it vs. dx.

6) When well see this thread as sticky? http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif http://www.opengl.org/discussion_boards/ubb/biggrin.gif

dorbie
07-08-2003, 03:03 AM
Originally posted by billy:
How did you guess? http://www.opengl.org/discussion_boards/ubb/biggrin.gif
You must dedicate yourself to fortune telling.
[This message has been edited by billy (edited 07-08-2003).]

Nothing so mysterious, your opengl.org profile lists your occupation as "CAE Analyst".

davepermen
07-08-2003, 03:20 AM
Originally posted by M/\dm/\n:
F**K http://www.opengl.org/discussion_boards/ubb/mad.gif Password entered incorrectly press back button, & F**K text is lost as allways, when we'll see update to this ubb?
i know that feeling....



1) What is called poor ARB_f_p performance in fps (100-, 85-, 30-) & where you actually see it (shader, length, syntetic bench/game)?

for example john carmack as he tests and compares, gfFX run much faster with extensions, and radeon run faster on ARB_fp than gfFX. this is not shown to be wrong in ANY program yet..



2) Is 24 bit STANDART? And WILL it be THE STANDART next year? (look at developer interviews at tomshardware).
that tomshw interview is crap. at least, you've read it wrong http://www.opengl.org/discussion_boards/ubb/biggrin.gif

and yes, it is standard. read the dx9 specs. and you don't think dx9 is going to be a standard for the next years? even while microsoft wants it as a base for his next os, where each window is a dx9 object?!



3) Is Ati faster in f_p when doing multipassing 3+ times using f-buffer & do the precision issues arise (writing+reading each pass) & is this F-buffer easier to use vs. NV exts?

the f-buffer is only on the 9800 and not officially supported yet. get that finally, as you babble around it all the time.

yes, it is. dawn rocks on my card, wich is now about 8 months old (and yes, the updated version wich is equivalent/bether than the nv original version. no normalizationcubemaps but full DP3,RSQ,MUL instead, and all).

the fbuffer at least looooks like it is to be used easily.

long fp on nv cards with no early out don't sound great anyways..



4) If Futuremark wrote such OBJECTIVE test, why Ati, Matrox etc. didn't helped them in fight against NV (NV paying them too? NV($$$)->ATI)?

because that would mean a lawsuite against nvidia, cost a lot of money, and all the struggles.



5) Someone mentioned that he doesn't like extensions, I allways thought that GL is about extensions http://www.opengl.org/discussion_boards/ubb/biggrin.gif and that's one of the strongest sides of it vs. dx.
you got it wrong again. extensions are GREAT. BUT THE STANDARD SHOULD BE THE FIRST OBJECTIVE AT ALL AND ANY AND EVER AND FOR EVER TIMES!

don't make extensions to replace the standard, because the standard works slow on your card/is not really supported and has to be hack-emulated.

you want to have to use different extensions on all cards, or bether just have opengl running fast by default, using some tiny exts here and there for some additinal features?

look at the mess we had with vertex_object_buffer.. that was stupid how it was before..

now we have similar situation, where you should use the nv fragment program on nv cards, as arb fp is slower. and THAT is stupid.

24bit IS standard and will be for a while (difference to 32bit is minimal, if you do the math yourself you'd know it). so support for it by nvidia would have been great. it would have simplified all the mess we have now, would have not made that huge 3dmark scandal, and all the **** we have now.


i don't know why you don't get that..

arb_fp asks for 24bit fpu, and nvidia knew when they designed their hw that this will be the minimum requested.

not supporting 24bit but only 16 and 32 WAS a stupid step for a united standardized opengl. that IS plain fact.

john
07-08-2003, 03:33 AM
and yes, it is standard. read the dx9 specs. and you don't think dx9 is going to be a standard for the next years? even while microsoft wants it as a base for his next os, where each window is a dx9 object?!

why do you care about DX9? Whereas a directx9 programmer might look at the 16/32bit difference as not conforming to a spec Microsoft designed, surely an OpenGL prorgammer should look at it and think that he gets a choice of TWO!
Why does a card that not properly supports some other standard have anything to do with opengl and ITS standards?


because that would mean a lawsuite against nvidia, cost a lot of money, and all the struggles.

yeah.. yeah, thats a good interpretation. But I've got a better one. How about *I* give you another interpretation, and you give me a better excuse? You can't scare me with this gestapo crap; I know my rights.

Remember Intel's famed pentium floating point bug? Want to guess why all the other processor manufacturers DIDN'T get on TV and advertise their chips as being bug free and perfect...? Because then Intel would point out their failures, too. Only those without failures (or cheats, in this case) should cast the first law-suit... which is why no one did.

tfpsly
07-08-2003, 03:39 AM
Originally posted by davepermen:
for example john carmack as he tests and compares, gfFX run much faster with extensions, and radeon run faster on ARB_fp than gfFX. this is not shown to be wrong in ANY program yet..

According to him, this is due to the precision. In 16 bits, NV is faster; while in 32 NV is slower than ATI.


BUT THE STANDARD SHOULD BE THE FIRST OBJECTIVE AT ALL AND ANY AND EVER AND FOR EVER TIMES!

I agree.

M/\dm/\n
07-08-2003, 03:42 AM
Yeap, but those fps are over 30 anyways and at 30 you don't feel that much difference anyway.

32 bit format is better and that's where I'll stay. And its huge step that NV implemented 3 formats in same silicon space. Int + 2 floats are great.
24 just works NOW, but the NV combo gets better results where needed, and ints are really handy sometimes. The question is 90 or 80 fps?

Concerning DOOM III, NV codepath is faster as it's single pass, things that can be done are done in int's that are faster and gives more precise results than fp32 or if ther's no difference in fp16 and other things are done in fp32 anyway. The same thing as int, float, double for C.

[This message has been edited by M/\dm/\n (edited 07-08-2003).]

m2
07-08-2003, 05:33 AM
The name of the game is to keep posting to this thread to figure out when will the system break, right?

(sorry, couldn't resist)

Nutty
07-08-2003, 05:59 AM
A tidbit of info I just saw somewhere;


First of all, nVidia is set to rejoin FutureMark's beta program. Amusing, isn't it?
And secondly, FutureMark agreed to do a demo ( I don't believe this is a benchmark demo, just something like Dawn ) for nVidia's NV40, to be launched at Comdex.

Go on dave, you know you want to... http://www.opengl.org/discussion_boards/ubb/smile.gif

V-man
07-08-2003, 06:09 AM
arb_fp asks for 24bit fpu, and nvidia knew when they designed their hw that this will be the minimum requested.


OK, again I'll say this,
Where does it say in the ARB spec that 24 bit FPU is required?

You have understood something totally different from reading that spec.

M/\dm/\n
07-11-2003, 01:35 AM
HEHEHEEEEEEEEEE...... http://www.theinquirer.net/?article=10426 & as far as I saw at http://www.tech-report.com/onearticle.x/5362 no fixed clipplanes?

HEHE, they're still cheating?

Mazy
07-11-2003, 02:08 AM
Originally posted by V-man:
OK, again I'll say this,
Where does it say in the ARB spec that 24 bit FPU is required?

You have understood something totally different from reading that spec.

oh, i'm hating myself for answer anything on this thread, but the information you are looking for sould be in this document : http://www.opengl.org/developers/documentation/Version1.2/OpenGL_spec_1.2.1.pdf at section 2.1.1, i havent check exacly how many bits you need to meet those requirement, thats for other people to do http://www.opengl.org/discussion_boards/ubb/smile.gif

davepermen
07-11-2003, 02:19 AM
nope, they're now paying futuremark to cheat for them.. http://www.opengl.org/discussion_boards/ubb/biggrin.gif

about the precicion issues.. okay, mixed it up a bit..

so thats the ARB_fp requirement:


To summarize section 2.1.1, the maximum representable magnitude of
colors must be at least 2^10, while the maximum representable
magnitude of other floating-point values must be at least 2^32.
The individual results of floating-point operations must be
accurate to about 1 part in 10^5.


i don't know if the nvidia half float fits in that spec..

yes, 16bit is faster than 24bit wich is faster than 32bit. thats exactly what carmack measured. it has nothing to do with collapsing passes as far as i remember, just with the precicion.


why do i care about dx9? because it is a defacto hardware standard nontheless. and its a standard wich nvidia can only emulate, no direct hw support is in. that leads to the resulted cheats, where they want to go below specs to gain performance. THATS what i said. thats why 3dmark03 got cheated like hell by nvidia. because it follows the dx9 specs to create some bench, and nvidia showed to really loose on it.

so.. if 16bit would be enough for ARBfp, why doesn't nvidia do it? they would be much faster in ARB modes then and rock doom3 with standard gl.


as i stated before: first make the cards work fast and good on the standards.

that means for me:
dx9 implementation that is fast
opengl implementation that is fast

it means further things (mainly for opengl): new hw features should be designed in a generic way that can fit future gl standards

examples: floating point textures and render targets. good done by ati, bad done by nvidia. yes, none of both made it perfect. but ati has only restrictions on filtering from, and blending to, the floating point textures. for the rest, you can integrate it directly as if it was standard in opengl since years..
accumulation buffer: it looks like its implemented in hw on radeon9500+ cards, have i read that right?!!? sounds great at least http://www.opengl.org/discussion_boards/ubb/biggrin.gif


ati offers quite some things wich are not pixel and vertex shader features wich help quite much as well. they had VAO since long time, much bether compared to VAR it was. they now have überBuffers in, not yet finalized (...i want it now...), but very cool once its here. they work hard on gl2 implementations.

as dx9 is a standard and will be, there will for quite some time hw get selled that has exactly what ARB_fragment_program can do (as its essencialy ps2.0). it should be the main goal of vendors to make hw that can do ARB_fragment_programs fast. not having a replacement ext because they cannot directly make ARB_fragment_program fast. that is BAD.

extensions are there to EXTEND opengl. not to REPLACE opengl. thats something nvidia has sometimes troubles with..


madman: would you not prefer to simply use opengl the way it works on all hw and have it rocking fast then? well.. i do. thats what i use opengl for.

M/\dm/\n
07-11-2003, 03:28 AM
What do you mean fast? Are you going to say that FX5800 or FX5900 is slow? <30 fps?

davepermen
07-11-2003, 03:44 AM
Originally posted by M/\dm/\n:
What do you mean fast? Are you going to say that FX5800 or FX5900 is slow? <30 fps?


i mean that you don't have to use extensions to get it fast by default.

if you use standard opengl, or standard dx9 (without cheats) then ati cards perform bether.

if you fully tweak and optimize (in gl through extensions, in dx9 through driver cheats http://www.opengl.org/discussion_boards/ubb/biggrin.gif), then the nvidia cards are faster..

but then again.. just staying with the ati card means fast by default, without a lot of extension-mess, without a lot of driver mess, without such big threads http://www.opengl.org/discussion_boards/ubb/biggrin.gif

if you only use ARB, then nvidia performs quite bad on it. and thats a shame.

i mean, a gfFX 5900 ultra, nearly not available yet, is nearly not faster than a radeon9700pro, wich now soon celebrates happy birthday!

i own such a card, and gl stuff by default is fast. i don't need to tweak tons of stuff for that, i don't need to rely on proprietary exts for that.

i think you agree that this is good.


i don't say extensions are bad. but cards performing much bether with extensions than without just show one thing: way off the standard. thats why they need extensions to perform well in standard situations. bether stay on standard and EXTEND with the extensions..


don't you agree?

i like the addons nvidia gives with nv_fragment_program, and the nv_vertex_program with the loops and that. i DO like that.

what i DONT like is that NVfp runs much bether than ARBfp by default. it should not run bether. ARBfp should be blazzingly fast on the nv30, to beat out the r300 in every modern opengl application. shouldn't it?

seeing dawn running realtime on my old card, with bether fps than on most nv30 cards, THAT is what i mean with bad cards/good cards..

and my dawn runs with standard opengl, too http://www.opengl.org/discussion_boards/ubb/biggrin.gif

dorbie
07-11-2003, 07:19 AM
Madman, I don't think you can say for sure if they're cheating, but considering they claimed they weren't cheating before it's a fair guess. NVIDIA appears to have an intellectually dishonest interpretation of what a reasonable optimization is.

w.r.t. fixed clip planes, again, you can't say for sure either way, I mean they might just disable the cheats when you're off the rail to avoid the earlier anomalies. The fact that it looks OK when you go interractive says nothing about what they do when you're on a fixed path.

A performance improvement does not mean a cheat and even a shader optimization does not mean a cheat, it depends on the details. This is one reason Futuremark's analysis was so informative, but I guess we won't be seeing a repeat of that.

So much for benchmarks.

V-man
07-12-2003, 04:50 AM
>>>so.. if 16bit would be enough for ARBfp, why doesn't nvidia do it? they would be much faster in ARB modes then and rock doom3 with standard gl.<<<

For GL (and ARB_fp), 17 bit is the minimum. NOT 24!

Now, ARB_fp has hints so you can ask for fastest or nicest, but I'm not sure if NV will use 16 bit for fastest.
I'm guessing ATI always uses 24 bit so basically they ignore the hint.

Anyway, NV is following the standards. 32 bit is within the standards limits.
And their FX cards are DX9 cards. What is it that they emulate in software?

Much of this thread is BS.

M/\dm/\n
07-15-2003, 02:31 AM
Hey, daveperman, there's somthing to drive you mad http://www.opengl.org/discussion_boards/ubb/biggrin.gif => http://www17.tomshardware.com/graphic/20030714/vga_card_guide-08.html

Currently, the FX 5900 Ultra can safely be called the fastest card on the market.

matt_weird
07-15-2003, 04:03 AM
hey, M/\dm/\n, this is what should make you less "mad" http://www.opengl.org/discussion_boards/ubb/biggrin.gif:
http://tomshardware.bizrate.com/buy/browse__cat_id--405.html

Look through "Top Sellers" section -- see? http://www.opengl.org/discussion_boards/ubb/confused.gif

Also note this:

Radeon 9800 PRO 128MB, $359
GeForceFX 5900 128 MB, $399

And this:

GeForceFX 5900 Ultra 256 MB 256-bit DDR - (450/850 MHz); official price: http://www.opengl.org/discussion_boards/ubb/eek.gif -> $499

The "fastest" doesn't mean yet "most stable".

john
07-15-2003, 05:10 AM
come on fellas! this is a non-technical argument and is going to end in tears :-/ not to mention being pointless.
besides, in our capitalist economy, price is determined by lots of factors: it's demand AND supply!