
View Full Version : Nvidia or ATI for OpenGL



soconne
10-08-2003, 09:30 AM
I know somebody will probably say, "research this yourself", but I would like to know if anybody has their own opinion on which card is better for OpenGL programming, Nvidia or ATI (their latest cards, that is). I know ATI is 'supposedly' better for DirectX 9 because of the Half-Life 2 news, but Nvidia seems to support OpenGL more, judging by the demos on their site compared to ATI's. Any opinions on this?

Coconut
10-08-2003, 10:19 AM
Have you at least searched this forum for discussions on both cards?

NitroGL
10-08-2003, 10:28 AM
nVidia tends to have a lot of their own proprietary extensions, whereas ATI has more ARB extensions. I think that makes ATI closer to the standard than nVidia.
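
In practice this mostly shows up when you probe the extension string at startup and decide which path to take. A minimal sketch (my own, assuming a current GL context; the extension names are just examples):

#include <GL/gl.h>
#include <string.h>

/* Returns 1 if 'name' appears as an exact token in GL_EXTENSIONS. */
int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = all;
    size_t len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL) {
        int starts = (p == all) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

/* e.g. prefer the ARB path, fall back to the vendor-specific one:
     if (has_extension("GL_ARB_fragment_program")) ...
     else if (has_extension("GL_NV_fragment_program")) ...           */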

nVidia does have more programmability on the FX part, but it doesn't really have the speed unless you want to toss precision out the window. ATI, on the other hand, has the speed, but you don't get the long shader programs (and the instruction set is slightly less powerful).

Personally I would wait for the next generation of chips to come out, and then decide which card/chip you want.

nukem
10-08-2003, 11:59 AM
I have an nVidia FX 5900 Ultra. The card works great, much better than the ATI card in my laptop (a Radeon 7500 32MB, which ran like **** compared to my old nVidia GeForce 2 MX). I like nVidia because they support OpenGL a lot and for their continuous *nix support. In all the reviews I saw, the FX was much faster than ATI in OpenGL; it was mixed for DX.

pkaler
10-08-2003, 12:26 PM
I have a Radeon 9000 at work. And a GFFX 5900 Ultra at home. Both work great. For Windows, it is a toss up. Purchase the best card you can afford. Try to get one of each for testing purposes.

nVidia's fan and extra power dongle might annoy you. I don't mind so much.

On Linux (and other *nix OSes), go with nVidia; the driver support is proven. ATI did just release Linux drivers, but I'd take a wait-and-see approach for six months to a year.

Nakoruru
10-08-2003, 12:52 PM
It is not true that ATI supports significantly more ARB extensions than nVidia. If you take the extensions from nVidia's OpenGL spec and from ATI's webpage, you see that nVidia supports 24 and ATI supports 25. Hardly a big enough advantage to make them 'more standard'.

At the same time, nVidia has 34 NV extensions and ATI only has 15. Since nVidia supports nearly as many ARB extensions as ATI, I would consider having twice as many vendor extensions from nVidia to be very generous (they are letting you get at the actual hardware more than ATI).

If the situation were different, i.e., nVidia did not support ARB extensions just as much as ATI, then I would believe the myth that nVidia is 'glide-like' and proprietary. But if all you base it on is the number of extensions, then that simply is not true.

Of course, things are a little more complicated than just how many extensions are supported. It is true that nVidia does not perform as well using ARB_fragment_program as ATI does. But I'm only pointing out that you cannot base your argument on the number of extensions, because there isn't really any difference there.

Ostsol
10-08-2003, 02:30 PM
Reviews' OpenGL game performance results are dependent on a number of factors, and I don't think that game benchmarks can really be used to determine what video card one should get for development. My own opinion is that one's choice must depend on what one expects to be doing with the video card. For that, the specific performance of individual features is far more relevant.

If one wants to do some heavy fragment shader work in real time, ATI is the one to go for, simply due to speed. For experimental or non-real-time applications, GeForce FXs certainly are very good, simply due to having fewer limitations as far as shader instructions and dependent texture reads go. Also, speed doesn't -really- matter in those instances, since you're already expecting frame rendering to take a long time.

JD
10-08-2003, 02:35 PM
Maybe he meant to say that ATI is more GL compliant. Witness the crossbar issue under NV, or the clamp-to-edge fiasco.

davepermen
10-08-2003, 02:39 PM
I prefer ATI because you can not only create nice fragment programs with floating point, but you can also feed in float values from textures, and render float outputs to textures..

This, with not many restrictions, allows for a great move over to full HDR lighting instead of 0..1.
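
As a rough illustration of what that buys you (my own sketch, nothing vendor-specific): an ARB_fragment_program reading float textures is free to produce values far outside 0..1, and only a final tone-mapping pass has to bring them back down. Something like:

#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

/* Sketch: scale an HDR (float) texture by an exposure factor. The result is
   not clamped to 0..1 as long as the render target is a float buffer too.
   Assumes the ARB_fragment_program entry points have been fetched. */
static const char *exposure_fp =
    "!!ARBfp1.0\n"
    "PARAM exposure = program.env[0];\n"
    "TEMP hdr;\n"
    "TEX hdr, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, hdr, exposure;\n"
    "END\n";

void load_exposure_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(exposure_fp), exposure_fp);
    /* exposure = 4.0: values up to 4x over white survive until tone mapping */
    glProgramEnvParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0, 4.0f, 4.0f, 4.0f, 1.0f);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}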

If you want to be future proof, I'd suggest an ATI. Why? Because even Carmack says so http://www.opengl.org/discussion_boards/ubb/biggrin.gif The FX is great for DX8, but not for much more..

About the glide thing.. well.. if nVidia got the ARB exts to run WELL and only had the NV exts as EXTENSIONS, then I would say no. Currently you actually need the NV exts if you want acceptable performance, and the ARB exts are just in there to claim GL 1.4 or whatever support; they perform very badly and are rather useless.

Then again, I could be ATI-biased because my 9700 still rocks after over a year.. and can still keep up with the newest FX cards and beat them in HDR situations, both in features and speed.

Korval
10-08-2003, 03:57 PM
I have an nVidia FX 5900 Ultra. The card works great, much better than the ATI card in my laptop (a Radeon 7500 32MB, which ran like **** compared to my old nVidia GeForce 2 MX).

I'm not quite sure what you're comparing here. If an FX 5900 Ultra can't take out a Radeon 7500, which is 2.5 years older, then nVidia never deserves to sell another card again; it's expected that cards released long after current cards run faster. As for the 7500 vs the 2 MX, that's an old argument, and it does not speak to the nature of current hardware.


In all the reviews I saw, the FX was much faster than ATI in OpenGL; it was mixed for DX.

On the other hand, for games that actually test DX9/ARB_fp functionality, the clear winner in every test was the ATi card.


At the same time, nVidia has 34 NV extensions and ATI only has 15. Since nVidia supports nearly as many ARB extensions as ATI, I would consider having twice as many vendor extensions from nVidia to be very generous (they are letting you get at the actual hardware more than ATI).

But ATi's extensions tend to be better.

Take ATI_texture_float vs. NV_float_buffer. Same basic functionality (floating-point textures), but the ATi extension offers more power. The ATi extension allows any kind of floating-point texture (1D, 2D, cube, etc.), while the NV version is limited to NV_texture_rectangle only. The ATi one offers all of the data formats that regular textures do (intensity, luminance, etc.), while nVidia's only offers RGB and RGBA.
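
To make that concrete, here's a rough sketch (mine, with tokens taken from the two extension specs via glext.h) of creating a 2D float texture on each path:

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch only: assumes the relevant extension is present; no error checking. */
void make_float_texture(int use_ati_path, int w, int h, const float *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);

    if (use_ati_path) {
        /* ATI_texture_float: any texture target, sized float internal formats */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                     w, h, 0, GL_RGBA, GL_FLOAT, pixels);
    } else {
        /* NV_float_buffer: float formats only on texture rectangles,
           so no mipmaps and unnormalized texture coordinates */
        glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
        glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                     w, h, 0, GL_RGBA, GL_FLOAT, pixels);
    }
}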

Also, a lot of nVidia's extensions relate to vertex programs (which are deprecated by ARB_vp, except for NV_vp2), VAR (deprecated by VBO), or older functionality that is superseded by newer functionality. The bulk of nVidia's extensions are for getting at older hardware.

Admittedly, most of ATi's extensions are deprecated by VBO too, but the rest are almost all new functionality for their current generation of graphics chips.
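
For reference, the vendor-neutral replacement being referred to is ARB_vertex_buffer_object; a bare-bones sketch of using it (assuming the entry points have been fetched and a context is current):

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: upload a static vertex array into a VBO and draw it. */
void draw_static_mesh(const GLfloat *verts, GLsizei num_verts)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                    num_verts * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW_ARB);

    /* With a buffer bound, the pointer argument is a byte offset into it. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, num_verts);
}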


i prefer ati because you cannot only create nice fragment programs with floatingpoint, but you can also have nice inputvalues as floats from textures, and render to outputs as floats to textures..

For the sake of fairness, I have to point out that an nVidia card can do much of the above. It doesn't handle float textures nearly as well (as I pointed out before), but it can do them. Just not quite as fast as an equivalent ATi card.


Because even Carmack says so

Oh, there's a great reason to do something. Because Carmack said so. http://www.opengl.org/discussion_boards/ubb/rolleyes.gif

if nVidia got the ARB exts to run WELL and only had the NV exts as EXTENSIONS, then I would say no.

Then say no. Benchmarks with the most recent versions of the Det50s show that the gap between the two has narrowed considerably. Granted, it hasn't gone away, but the high-end nVidia cards are performing respectably in what appear to be floating-point situations. And they do it without a dramatic loss in image quality (see http://www.anandtech.com for a comparison).

davepermen
10-08-2003, 04:30 PM
The statement about Carmack was there for one reason:

People buy the gfFX because it performs GREAT in Doom3, and they think Doom3 is THE FUTURE of games.

Carmack stated himself that, yes, the gfFX rocks in Doom3, but that's just because Doom3 does not need any advanced new features (DX9-style features, that is), nor the high quality of floating-point shaders. For any new-style game the gfFX performs very badly even in his own tests, and for him it is rather disappointing to see that.


Yes, you can do floating-point textures on the gfFX, but, as already mentioned, not nearly as well as on ATI cards. There, we can simply have float textures whenever we want, wherever we want (at least I've never had a problem yet with any sort of float texture..). On nVidia it can only be done with the proprietary NV_texture_rectangle, and I'm not even sure how well that works with ARB_fragment_program. It does work with the proprietary NV_fp, of course..

Oh, and I'm not sure about the Det50.. IMHO the image quality is rather bad now.. and there is still a gap.

I prefer to run at 24-bit floats rather than not knowing whether I'm running at 32-bit floats or 16-bit floats or whatever..
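
For what it's worth (my own sketch, not anything from the drivers), ARB_fragment_program does let the program itself state a precision preference, which is about all the control an application gets:

/* Sketch: the precision an ARB_fragment_program runs at is up to the
   implementation, but a program can hint with an OPTION line. On hardware
   with 16/32-bit modes this is the standard way to ask for the fast path. */
static const char *fast_fp =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"  /* or ARB_precision_hint_nicest */
    "TEMP c;\n"
    "TEX c, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, c, fragment.color;\n"
    "END\n";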

Well, I can't trust nVidia drivers anymore anyway.. too much went wrong in the last months, and they still haven't ever officially stated that they were wrong and that they will change it.



On the other hand, for games that actually test DX9/ARB_fp functionality, the clear winner in every test was the ATi card
I just remember the guy coming in here crying that his 5200 gets only a few fps in the Humus demos, while the one-year-old 9700 gets 50 (according to the page; I can confirm those numbers http://www.opengl.org/discussion_boards/ubb/biggrin.gif).
The nVidia marketing department is GREAT.. it would be better if the nVidia hardware department were that GREAT.

stefan
10-08-2003, 11:04 PM
This won't make your decision any easier 'cause it's the other way round, but anyway:

NVidia cards perform pretty much the same under OpenGL as under DirectX, while some people claim that ATI cards generally perform better under DirectX (I became one of them the last time I checked).
If someone can disprove that statement please let me know!

vincoof
10-08-2003, 11:15 PM
Originally posted by davepermen:
Carmack stated himself that, yes, the gfFX rocks in Doom3, but that's just because Doom3 does not need any advanced new features (DX9-style features, that is), nor the high quality of floating-point shaders. For any new-style game the gfFX performs very badly even in his own tests, and for him it is rather disappointing to see that.
And where do you get that info from? I haven't seen a Carmack .plan for a while, and when I read the last one I remember that the GFFX cards weren't even in stores yet.


I just remember the guy coming in here crying that his 5200 gets only a few fps in the Humus demos, while the one-year-old 9700 gets 50 (according to the page; I can confirm those numbers http://www.opengl.org/discussion_boards/ubb/biggrin.gif).
You compare an FX5200 with a 9700? http://www.opengl.org/discussion_boards/ubb/smile.gif
Fine, why don't you compare it with a card that is meant to be equivalent, say the 9000, which is not even capable of fragment programmability http://www.opengl.org/discussion_boards/ubb/wink.gif


The nVidia marketing department is GREAT.. it would be better if the nVidia hardware department were that GREAT.

Oops, I think that is not a line to post, as I would consider it flaming. You will be bann... huh, I mean nothing, there aren't even moderators on this board http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Korval
10-09-2003, 12:43 AM
Carmack stated himself that, yes, the gfFX rocks in Doom3, but that's just because Doom3 does not need any advanced new features (DX9-style features, that is), nor the high quality of floating-point shaders.

You are aware, of course, that the only advanced features in which the FX doesn't perform as fast as the 9500+ involve floating point. There are many other DX9 features besides floating point, and they are arguably more useful than floating point too.


and I'm not sure about the Det50.. IMHO the image quality is rather bad now

Your opinion is wrong and likely based on outdated information. The image comparison on AnandTech shows, at worst, a negligible difference between the ATi card and the Det50s on the nVidia one. There's some difference in anisotropic filtering quality, but that's to be expected when you're working with two different hardware implementations. And the real differences are almost all due to actual driver bugs (the Det50s aren't live yet).


and there is still a gap.

When you benchmark two cards, one of them wins and one of them loses. The "gap" you are referring to is no longer a full-fledged rout; it can be explained as simply a performance difference for certain kinds of applications.

A similar gap exists if you benchmark an nVidia card against an ATi card in a stencil-shadow-heavy game, except that it points in nVidia's favor.

davepermen
10-09-2003, 02:52 AM
What other useful things came with DX9 cards besides floating point over the whole pipeline? Better pixel shaders, which both have; better vertex programs, which could even be emulated reasonably well. The rest was all already there (not in DX8, but in hardware).

The only difference now is floating point everywhere, and there the FX lacks a lot of features (very restricted floating-point texture support) and a lot of performance. They run well, though, in DX8 apps.

I don't know of anything else special that DX9 brought to the hardware..


Yes, I compare the FX5200 with the 9700.. just for the fun of it. nVidia marketing gets people to believe that they can buy a cheap FX5200 and beat my old 9700. That's why I compare them http://www.opengl.org/discussion_boards/ubb/biggrin.gif
Oh, and the 9000 is quite capable of fragment programmability, just only PS1.4, but at least it does that rather well..


Well, I'll have to read up on AnandTech again then.. hm.. everything I've read and seen till now is VERY BAD for anything 50.xx and higher.. I'll recheck..

Oh, and Korval, weren't you the one who always argued for fixed point == useful in the other threads as well? I still can't see any use for this. It can be a nice speedup if you WANT TO CARE, but first of all the card should perform well in the general case, and there, in general, data should be floats.


About the Carmack statement: this was a mail, I think it got posted on Beyond3D, I'm not sure anymore where.. I'll look around for it. They asked about the HL2 fiasco and, conversely, why Doom3 runs so well while other DX9 games all don't (same for OpenGL games with ARB_fp..). His answer was that Doom3 runs badly with ARB_fp on the FX too, but he doesn't need ARB_fp (yes, Korval, he does not need floats for Doom3). But in general, more future-oriented apps will require floats, for HDR and similar. Doom3 is NOT a DX9/ARB_fp app; it's an old-style app, designed mainly for DX8-capable cards.

That's roughly his statement, and his explanation of why the FX performs so exceptionally well in Doom3 compared to all other new games.

And he stated as well that the precision difference is visible/measurable, but it doesn't hurt much in the case of Doom3.

zeckensack
10-09-2003, 04:29 AM
Anand is wrong. Det 52.14 forces 'brilinear' filtering in DirectX Graphics. You can't get trilinear filtering - at all.

For those who can read German, this (http://www.3dcenter.org/artikel/detonator_52.14/index2.php) is what I'm talking about. An English version is in the works.

I trust the guy who wrote that article - a lot more than I trust Anand these days.

edit: OpenGL texture filtering is okay.

[This message has been edited by zeckensack (edited 10-09-2003).]

DopeFish
10-09-2003, 04:43 AM
That sounds more like a bug, much like the infamous 16-bit texture OpenGL bug that was in the Radeon drivers for a few releases. That has been fixed in Cat 3.8 (released today), though.

Nakoruru
10-09-2003, 04:56 AM
For dev work I would have BOTH nVidia and ATI cards, and I do. Why do you have to make a choice? Unless you can force your customers to use one or the other.

If I had to choose one it would be nVidia, because even if 'you have to use NV extensions to get good performance', well, you HAVE to, because that's simply the way things are, and I would not want to release a product that half my customers are going to think sucks.

That goes both ways: the only way to get good performance on both is to write your product for both. Don't write it for ATI pretending that it is 'standard' and then complain about how it doesn't work well on nVidia cards, like Valve did.

If you are some college kid or hobbyist looking to get into OpenGL programming, then I do not think it really matters at all; get the one that will play the games you like better when you aren't programming http://www.opengl.org/discussion_boards/ubb/smile.gif. It's not like you are going to be taking each card to the limit. You are not John Carmack. By the time you run into any real issues with your card, the next generation of cards will be out and all the issues will be different.

zeckensack
10-09-2003, 05:02 AM
Now how on earth can this be a bug? I beg to differ ...

NVIDIA designed 'brilinear' filtering into the FX series. They've done so on purpose, clearly. NV2x can't do it, no other chip on the market can do it. And now they are using it.

It improves performance at the expense of quality. Nothing more, nothing less. No graphics API known to man wants this type of filtering, yet there it is, after some explicit silicon redesign effort. And again the apologetics come along and say it's a bug. Sheesh.

Regarding ATI's 16-bit textures, sure, it was a regression. You could still get 32-bit textures on Cat 3.7; you just had to explicitly request them. Default texture depth is specified very loosely in OpenGL, so it wasn't even a spec violation.
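
Concretely (just my own illustration), that's the difference between passing an unsized and a sized internal format:

#include <GL/gl.h>

/* Sketch: with the unsized GL_RGBA the driver picks the stored depth (which is
   where a 16-bit default can sneak in); GL_RGBA8 requests 8 bits per channel. */
void upload_rgba(int w, int h, const unsigned char *pixels, int force_32bit)
{
    GLint internal = force_32bit ? GL_RGBA8 : GL_RGBA;
    glTexImage2D(GL_TEXTURE_2D, 0, internal, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}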

[This message has been edited by zeckensack (edited 10-09-2003).]

Nakoruru
10-09-2003, 06:38 AM
Concerning nVidia's clamping behavior and crossbar: nVidia cards support texture crossbar, there is just a slight difference in how invalid texture stages are handled. Is it really such a big deal (serious question, not rhetorical)? Also, you can enable conformant clamping behavior in the control panel. It is like that because some games rely on the incorrect behavior to look correct.

If anyone was looking to call nVidia non-conformant, I think they need to look a little deeper than those two issues.

Also, as a developer, I do not think I am too worried about nVidia's optimizations for games like UT2003 (i.e., brilinear filtering), because it does not affect MY application. If I request trilinear on an FX card in my own program, I get trilinear filtering, right? (Again, a serious question.)
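
For the record, here's what's actually being requested in both cases (a quick sketch of mine); conformant behavior just means the driver honors these parameters as specified:

#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_CLAMP_TO_EDGE on older headers */

void set_texture_params(void)
{
    /* Trilinear filtering = linear minification filtered across mipmap levels */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* GL_CLAMP_TO_EDGE never samples the border color; plain GL_CLAMP may,
       which is the conformance difference being argued about. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}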

davepermen
10-09-2003, 07:16 AM
Nakoruru, I don't get you. If you had to choose between the cards, you would take nVidia? Why?

If you can choose between a card which runs standard OpenGL well (and ARB_fp IS standard OpenGL, even if new), so you can code for standard GL and it performs well, or a card which cannot run standard GL fast, so you always have to fall back to proprietary extensions which can and will die out today, tomorrow, or in some years, but definitely before standard OpenGL, why do you choose the proprietary one?

I've coded a lot on the GF2 with proprietary extensions, as the card was not nearly as usable for pixel shading with standard GL (similar to the FX now, just at a higher level). I cannot run any of my old apps now, thanks to all that proprietary code.

The apps I code for the 9700 now will work forever (well, yes http://www.opengl.org/discussion_boards/ubb/biggrin.gif), as OpenGL will last forever (.. again.. well, yes..). They will run on FX cards as well, just not that well. But that is not MY fault. First, I want to make the thing work for now and the future, fast on cards that perform fast in OpenGL. THEN I could still make proprietary optimisations, or try to map my stuff to lower-end hardware like the Radeon 8500+ and GF3+ (which would be more useful than rewriting for a super-optimised low-quality FX version, IMHO.. I know more people owning a GF3 or GF4 than a gfFX..)

Tell me one reason to support the FX first over standard OpenGL. And remember, you're in the OpenGL forum, not on cgshaders.org http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Tom Nuydens
10-09-2003, 07:31 AM
Originally posted by davepermen:
Tell me one reason to support the FX first over standard OpenGL.

You obviously don't write OpenGL apps for a living. If you did, you would realize that you need your app to run well on many different cards, and hence writing multiple code paths is almost inevitable. If you can only afford to buy one card, it therefore makes sense to get an NVidia, since they can run the most different code paths on a single card. They also still cover the largest share of your potential user base.
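
To give a concrete picture of what "multiple code paths" means in practice, a bare-bones sketch (mine, with made-up path names):

#include <GL/gl.h>
#include <string.h>

/* Hypothetical render paths an app might ship, chosen once at startup. */
typedef enum {
    PATH_ARB_FP,   /* ARB_fragment_program (R300, NV30) */
    PATH_ATI_FS,   /* ATI_fragment_shader (Radeon 8500/9000) */
    PATH_NV_RC,    /* NV_register_combiners (GeForce 1-4) */
    PATH_FIXED     /* plain fixed function */
} render_path;

render_path pick_path(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (!ext)
        return PATH_FIXED;
    /* strstr is good enough for a sketch; a real app should match exact tokens */
    if (strstr(ext, "GL_ARB_fragment_program"))  return PATH_ARB_FP;
    if (strstr(ext, "GL_ATI_fragment_shader"))   return PATH_ATI_FS;
    if (strstr(ext, "GL_NV_register_combiners")) return PATH_NV_RC;
    return PATH_FIXED;
}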

-- Tom

(Edit: spelling)

[This message has been edited by Tom Nuydens (edited 10-09-2003).]

Nakoruru
10-09-2003, 07:39 AM
Hey davepermen, I would say you are absolutely right if I were saying which card I would choose personally, or if I were writing something I wanted to keep for a long time, or if performance were the most important thing, but that is not the context.

I should have been clearer that if I were a commercial developer, and could only choose one card, it would be nVidia. Because if I want my game to perform well on all platforms that my customers run, then I HAVE TO use nVidia's proprietary extensions, and therefore I need an nVidia card. I do not see performing well on all the cards most of my customers are expected to have as optional, and if I think it's too hard, then I'm just being lazy.

It is all about context, there is no single card which wins in all situations.

I would want to use an ATI if I had to choose just one for personal use, but then again, there seem to be a couple of things I want to do privately that I can do using nVidia and that ATI does not support yet. I'll have to make sure before I can be more specific, however.

I was just reiterating what John Carmack said about using the nVidia card in his dev machine because it could run the game in more ways, and was therefore more efficient for development (even if the ARB path was slower, that was not as important for development as it simply working).

If the ARB path performed just as well as the NV path, then there would not even be any need to write the NV path, and all things would be equal again. I guess I'm flipping everything on its head and saying that as long as nVidia owns a large segment of the market, and as long as you have to use proprietary extensions to get the best performance, the developer needs to have an nVidia card in one of his machines ^_^

It is not developers that determine what card they need, it is the customers.

I guess it is sort of a messed up situation.

So, I hope you 'get me' now.

Korval
10-09-2003, 09:43 AM
I don't know of anything else special that DX9 brought to the hardware..

Nothing?

Off the top of my head, I see looping in vertex programs and real fragment programs that are beyond merely setting up some fixed-function state.

Note that everybody can use these features. Not everybody wants to use HDR and float-buffers; it isn't appropriate for all rendering. If I'm doing some non-photorealistic rendering, HDR does nothing for me. But conditional branching in vp's and good fp's are still useful to me.


Yes, I compare the FX5200 with the 9700.. just for the fun of it.

OK, to balance your anti-nVidia stance, let's compare the 9600Pro to the 5900FX. Oh, look, the 5900FX always wins. Let's do it again. The 5900FX wins again. http://www.opengl.org/discussion_boards/ubb/rolleyes.gif


nVidia marketing gets people to believe that they can buy a cheap FX5200 and beat my old 9700.

Don't blame marketing for the uninformed populace. Anybody who thinks that a $100 card can match a $400 card is clearly uninformed and deserves what they get.


but in general, more future-oriented apps will require floats, for HDR and similar.

Perhaps. However, the GeForceFX has a significant performance advantage in the case of rendering stencil shadow volumes. The FX's lead in Doom3 is more than just being faster at DX8 tasks.

Any app that uses stencil shadows will be able to run through the volume rendering steps faster on an FX than on an equivalent R300 card. This performance advantage could make up for the deficit in floating-point fragment programs.


Anand is wrong.

A picture is worth a thousand words. Consider that they have at least 10 image comparisons. The images don't lie; the image quality difference is negligible. Certainly, the questionable behavior of earlier Det50s (missing effects, improper lighting, etc.) has been removed.


Det 52.14 forces 'brilinear' filtering in DirectX Graphics.

What is "brilinear" filtering?


Now how on earth can this be a bug?

Easily. Someone accidentally passed the hardware the wrong value when the user said "trilinear". I'm sure we've all written similar bugs.


And again the apologetics come along and say it's a bug.

How do you know that nVidia didn't intend "brilinear" to be an image-quality replacement filtering technique for bilinear? Remember, the Det50's aren't final yet.


The apps I code for the 9700 now will work forever (well, yes), as OpenGL will last forever (.. again.. well, yes..).

You have a great deal of faith in the longevity of the ARB_fp extension. Once glslang comes online, the only reason for an implementation to support ARB_fp will be legacy support. Indeed, I would imagine that later implementations won't even bother to write a decent optimizing compiler for ARB_fp.

zeckensack
10-09-2003, 10:35 AM
Korval,
I have never by pure coincidence designed any hacks that can't be used for anything else besides unwanted performance/quality tradeoffs. Nope. If I design something new, that involves conscious work. A bug is an accident and it just doesn't add up this way.

The last time I saw something like this it was called mipmap dithering and clearly labelled as "faster while not as pretty as trilinear". Needless to say, the implementors correctly remembered that they had it in the silicon for that purpose, and offered application control.

Brilinear is the coined term for that somewhere-in-between mix of bilinear and trilinear. An accidental, unconscious design choice, if you will.

You think it's a benign quality enhancement to replace bilinear filtering? Give me a break. Bilinear is still bilinear on NVIDIA drivers.

Regarding Anand, I believe I've already addressed this. I have better sources at my disposal. People who know what they're doing.

davepermen
10-09-2003, 12:15 PM
I can't even find the AnandTech article.. the Anand search only finds info about Detonator 4, 5, and 6, and, uhm, yes, those are.. rather old http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Korval
10-09-2003, 01:56 PM
I have never by pure coincidence designed any hacks that can't be used for anything else besides unwanted performance/quality tradeoffs.

I'm not sure what this is in reference to. The existence of "Brilinear" filtering, or the potentially accidental use of it in D3D?


An accidental, unconscious design choice, if you will.

Why does it have to be "an accidental, unconscious design choice," rather than merely another filtering alternative?


You think it's a benign quality enhancement to replace bilinear filtering?

If that is what they do with it, rather than replacing trilinear with it as their current drivers do, yes.


Regarding Anand, I believe I've already addressed this. I have better sources at my disposal.

That you believe this source to be better does not make it so. I, for one, cannot judge the veracity of your statement, as it has been many years since I took German.

I've been using AnandTech's reviews for upwards of three years now, and they have never led me astray. I have found their reasoning on various subjects to agree with my own on many occasions. Granted, I think they could do better, but I don't have much to complain about.


I can't even find the AnandTech article

No need to search; it's on their main page. Their image quality comparisons are part of their benchmarking. Part 2 of their "Video Card Roundup".

Obli
10-11-2003, 05:39 AM
Despite what the internet and other sources of information say, I was on the FX side for some time. Then I took a look at the extensions again and figured out that the new FX extensions are simply bad compared to their ATI counterparts.
So, for ease of development, ATI wins in my opinion.

The other fact is that I spent some time on an FX5600 and the performance was very bad. So bad it was comparable to nvEmulate (only 4x faster, ack)! The Radeon 9600 runs the same app much faster.
Please take the above statement with a grain of salt, since the application used was actually a prototype of a component I am going to use. It was not meant for benchmarking purposes, so it may not reflect real performance; this is just my own experience.

I recommended an ATI to a friend just a few weeks ago and, after seeing it in action, I must say it's been performing well (there's a small driver issue, however). As for me, I am planning to buy an ATI in the near future.

Also, considering the price, ATI has a win (at least this applies in the region where I live) since it's somewhat less expensive (well, the FX5200 is really cheap, but I fear that card cannot really handle anything and I'm unable to find it in the stores anyway).

If installed base is the concern, then this becomes a very hard decision. I know some people who let me monitor sales in a bunch of stores here. They told me FXs are selling a lot (and now that the Athlon 64 is here, I guess they will sell even more). Anyway, I would get an ATI.

It has been a hard truth for me to discover that the whole FX generation is simply so bad, not only performance-wise but also feature-wise.