GeForce FX vs Radeon 9700 or higher

Hey everyone,

I’ve been looking forward to buying a GeForce FX for a while now. But recently I’ve started reading around the internet to see how it was going to turn out at release. Well, now I’m in a bit of a jam. I don’t know which card to buy.

The FX renders at a higher precision than the 9700 most of the time. That leads to better image quality, but slower performance. I also read the comments John Carmack made about the programmable pipeline capabilities of both cards: the NVIDIA card has a much higher maximum instruction limit, but again it runs slower than the ATI card.

ATI is supposed to come out with a new card soon anyway that exposes the vs2.0 and ps2.0 functionality in DX9. I’ve been an NVIDIA fan for as long as I can remember, but ATI has made a lot of high quality products that have caught my attention.

Which one would you guys go for and why?

Note: I’m not getting a new graphics card to play games. I want one so I can program a lot of the newer effects. I currently have a GeForce Ti 200 AGP 4x. It’s a darn good card, but it doesn’t support any of the newer DX9 shading languages.

Thanks to all those who reply!

  • Halcyon

-> I had originally posted this in the beginners forum, but decided to move it here instead. I tried deleting the post there, but it was still in the topics list. If you click on it…it doesn’t go anywhere.

I am really not sure that more bits mean better quality in this case.

Shouldn’t the colors (each channel) be normalized/clamped between 0.0 and 1.0 anyway?

Wouldn’t that lead to the same discussion as the one about a 32 bit Z-buffer offering no better results than a 24 bit one (due to IEEE floating point format compliance)?

I am not saying that I am right, I just had no chance to test it yet.

I am just curious. Anyone?

[This message has been edited by HS (edited 02-03-2003).]

I don’t remember exactly where, but I read that higher precision floating point buffers can remove artifacts from a lot of advanced rendering effects such as bump mapping. I don’t think it means there will be richer color, just better looking scenes as a result of more accurate data.

If anyone is interested in reading a good comparison, check this out:

Beyond3D - NV30 vs R300

  • Halcyon

You misunderstood my post.

The standard IEEE floating point format looks like this:

In Bits:
S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF

Where:
S is the sign (+/-)
E is the Exponent
F is the Fraction

So in order to represent a number between 0.0 and 1.0 you only need the 23 fraction bits and 1 exponent bit; the remaining bits could be saved.

And it doesn’t matter what kind of operation you are doing (alpha, dot3, …), it’s always the same.

So 24Bit per channel should suffice.

Maybe NVIDIA uses an internal format that differs from IEEE. I don’t know, but that’s what I am asking…
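For reference, here is a tiny C snippet that pulls those three fields out of an ordinary 32 bit float. It’s nothing NV30 specific, just plain IEEE 754 single precision on the CPU, so take it as a sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 0.75f;                 /* 0.75 = 1.5 * 2^-1 in binary */
    unsigned int bits, sign, exponent, fraction;

    /* Assumes float and unsigned int are both 32 bits on this platform. */
    memcpy(&bits, &f, sizeof bits);

    sign     = bits >> 31;           /* S: 1 bit                 */
    exponent = (bits >> 23) & 0xFF;  /* E: 8 bits, biased by 127 */
    fraction = bits & 0x7FFFFF;      /* F: 23 bits               */

    printf("%f -> sign %u, exponent %u (unbiased %d), fraction 0x%06X\n",
           f, sign, exponent, (int)exponent - 127, fraction);
    return 0;
}

For 0.75 it should print sign 0, exponent 126 (unbiased -1) and fraction 0x400000, i.e. 1.5 * 2^-1.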

[This message has been edited by HS (edited 02-03-2003).]

My apologies for misunderstanding the post. I’m really just a beginner at OpenGL and I only posted this here because it seemed like it fit in the advanced forum. So you’ll have to forgive the stupid things I say from time to time.

I read over my sources for this information again and I realized my mistake. The floating point precision is not for buffers, it’s for the fragment shaders. The NV30 apparently has 3 different precisions, whereas the R300 has a single precision that falls between 2 of the NV30 settings.

Again, I apologize for the misunderstanding. I’m just starting to learn about the actual hardware instead of just the software behind graphics.

  • Halcyon

There is no need to apologize, and it doesn’t matter if you are a beginner or not.

So far I really haven’t looked into the FX too much (if at all)…

But thanks to you, I got curious whether that claimed 32 bit precision per channel is real or just a marketing trick.

Thanks.

After playing around with a 9700 for a few months, I can recommend the card with all my heart. It’s the largest step forward in one product generation since the Voodoo2.

HS, intermediate results can benefit from increased precision.

The NV30’s 32 bit floating point format matches that description exactly. 1 bit for sign, 8 for exponent, and 23 for mantissa (fraction). For 16 bit it’s 1 for sign, 5 for exponent, and 10 for mantissa.

When values are clamped to 0.0 - 1.0 it is true that only one bit is needed for the exponent (and none for the sign), but intermediate values can still go far beyond that range. This is similar to one of the big selling points of DirectX 8.1’s PS1.4: while final results are clamped to 0.0 - 1.0, the internal precision allowed values within a range of -8.0 to 8.0. Thus, ATI’s 24 bit format clearly has less precision after the decimal point than the NV30’s 32 bit floating point format.
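To make the benefit for intermediate results a bit more concrete, here is a quick C sketch that rounds a running sum to a given number of significand bits after every add. It is only a CPU side emulation: the half and full float widths are the ones listed above, and for ATI’s 24 bit format I am assuming the commonly quoted 16 bit mantissa, so treat the middle column as an approximation.

#include <stdio.h>
#include <math.h>

/* Round x to 'bits' significand bits (counting the implicit leading 1). */
static double round_to_bits(double x, int bits)
{
    int e;
    double m = frexp(x, &e);          /* x = m * 2^e, with 0.5 <= |m| < 1 */
    double s = ldexp(1.0, bits);      /* 2^bits */
    return ldexp(floor(m * s + 0.5), e - bits);
}

int main(void)
{
    double half = 0.0, fp24 = 0.0, full = 0.0;
    int i;

    /* Accumulate 0.0001 ten thousand times; the exact answer is 1.0. */
    for (i = 0; i < 10000; ++i) {
        half = round_to_bits(half + 0.0001, 11); /* 10 + 1 bits (s10e5)         */
        fp24 = round_to_bits(fp24 + 0.0001, 17); /* 16 + 1 bits (assumed s16e7) */
        full = round_to_bits(full + 0.0001, 24); /* 23 + 1 bits (s23e8)         */
    }
    printf("16 bit: %f   24 bit: %f   32 bit: %f\n", half, fp24, full);
    return 0;
}

If I got the rounding right, the 16 bit sum should stall well short of 1.0 while the 24 and 32 bit sums land very close to it, which is the kind of error that builds up in longer fragment programs.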

[This message has been edited by Ostsol (edited 02-03-2003).]

Thanks for understanding, HS!

@Humus: Yeah, I’ve seen what that card can do, and it is amazing. Don’t get me wrong, I think the FX is a huge leap in technology too!

@Everyone: First of all, should we even be comparing the 9700 to the FX? The reason ‘or higher’ was in the subject is that I’m wondering whether the R350 (which is supposed to come out sometime in March, I think) is just going to blow the FX out of the water. The 9700 is only barely outperformed by the FX, yet it has been out for a lot longer and is older technology. It almost seems like NVIDIA is one step behind ATI.

Just for the record: ATI has a much nicer website than NVIDIA

  • Halcyon

Edit: I said the R350 is supposed to support VS/PS 3.0 in DX9. It’s actually the R400 chip that is supposed to expose that functionality.

[This message has been edited by HalcyonBlaze (edited 02-03-2003).]

The thing with the R350 is that ATI really hasn’t revealed anything about it in terms of features…

It’s true that ATI hasn’t released much information on the R350. However, I think they are making this chip in response to the FX. I’ve read that the R350 is supposed to maintain low power usage; since the FX is supposed to have high power consumption, it already has a disadvantage there. Also, if ATI is aiming to maintain their lead (which I’m sure they are), they will try to position the R350 above the NV30.

I’m not going to say that the R350 will definitely be better than the NV30 in terms of speed, but I think it will be a very solid competitor. It may just be an intermediate card to maintain their status in the market until the R400 is released (whenever that may be). It is possible that all the huge technology advances will be saved for the R400 release.

  • Halcyon

This is a good article I found that details some of the known information about the R350 and compares it a bit to the FX.

Click Here!

  • Halcyon

I’ve been looking at getting a new card as well. The card that NVIDIA is coming out with that no one talks about is the Quadro FX. That card seems to have much better performance than all the other cards, including the GeForce FX. The Quadro is aimed at workstations while the GeForce is for desktops. The Quadro is also supposed to be a lot quieter, and the drivers work better. The Quadro is built for both OpenGL and DX, while the GeForce leans more towards DX. I’m a Linux OpenGL programmer, so I only use OpenGL. I’m probably going to get the Quadro FX when it comes out at the end of this month.

Here’s an article on the Quadro FX: http://firingsquad.gamers.com/hardware/workstation/default.asp

So if I understood that correctly, the NV30 uses an alternate floating point format (for example, 1 bit of exponent and 31 bits of fraction) at the fragment level?

So if you have a fragment program with lots of instructions, that would probably pay off.

That would also explain all the transistors in the chip.

Interesting, I guess I have to look into this card some more.

Thanks for the input.

HalcyonBlaze,
I think you mixed up some of the chips.
Okay, I won’t claim factual knowledge, but here’s my take on 'em:
R350 - same as R300, higher clocks, higher power consumption

RV350 - R300 reduced to 4 pipes, smaller manufacturing rules, lower power, cheaper. Think Radeon 9500 done right.

I haven’t heard anything about improved shading capabilities whatsoever.

As for the NV30, it would sure be a nice toy, but a little too intrusive for my tastes. A 250MHz, cut down version with a sensible cooling system would easily win me over, but not a noisy monster like this.

I don’t care much about raw speed, I want features first, and I want to be able to still hear the phone when it rings …

@HS: Go to page 8 of this PDF file for an example of the results of having 32 bit floating point precision in the fragment shaders.

The Dawn of Cinematic Computing

I just saw the Quadro FX on NVIDIA’s site; I had mistaken it for an upgrade to the nForce. It looks pretty cool, and the FX 2000 looks really good. However, it still requires the PCI slot adjacent to the AGP slot it goes in. Do you know if the Quadro FX is also supposed to support VS/PS 2.0?

  • Halcyon

According to some benchmarks, ATI said, the GeForce FX 5800 Ultra performs 10% better than the Radeon 9700 PRO, but it is also true that NVIDIA’s new chip has higher power consumption. ATI’s new R350, however, will feature low power consumption, with which ATI hopes to target both the desktop and notebook markets at the same time.

I got my information from that section of this article: FX vs R350 in the next few weeks

  • Halcyon

Edit: Forgot to mention:

@nukem: I thought NVIDIA was backing OpenGL while ATI teamed up with Microsoft. You said the GeForce was leaning more towards the DirectX side; I think that’s because DX9 is simply offering more features for cards to support (such as floating point precision fragment shaders).

[This message has been edited by HalcyonBlaze (edited 02-03-2003).]

Radeon 9500 and 9700 already support ps2.0 (I think you caught that typo already).

The Radeon 9500 Pro sells for $170 street and supports all that programmability. Meanwhile, I haven’t heard of anything except the high-end GeForce FX being announced, so I’d expect that to be in the $380 street range.

If it’s programmability using standard APIs you are after (DX9 or ARB_fragment_program) and you’d rather save a few hundred, the 9500 Pro looks fair.

If the latest-and-greatest-at-any-price is more your thing, then the GeForce FX will certainly hold claim to that title once you can actually buy it. For how long, who can say? I’m sure ATI will respond. I’m sure nVIDIA is working hard on building the response to the response. And so it goes, to everyone’s benefit :)

I’m hoping for support beyond PS and VS 2.0 in the R350. The extended versions of 2.0 are quite impressive in their specs, and 3.0 is even more extreme. OpenGL will have to play catch-up, though: ARB_vertex_program is already behind in that it doesn’t support any form of flow control…