
GeForce FX vs Radeon 9700 or higher



HalcyonBlaze
02-03-2003, 04:21 PM
Hey everyone,

I've been looking forward to buying a GeForce FX for a while now. But recently I've started reading around the internet to see how it was going to turn out at release. Well, now I'm in a bit of a jam. I don't know which card to buy.

The FX renders at a higher precision than the 9700 most of the time. This leads to better graphics, but slower performance. And I read the comments John Carmack made about the programmable pipeline capabilities of both cards. The Nvidia card has a much higher maximum instruction limit, but again it runs slower than the ATI card.

ATI is supposed to come out with a new card soon anyway that exposes the VS 2.0 and PS 2.0 functionality in DX9. I've been an Nvidia fan for as long as I can remember, but ATI has made a lot of high quality products that have caught my attention.

Which one would you guys go for and why?

Note: I'm not getting a new graphics card to play games. I want one to be able to program a lot of the newer effects on. I currently have a GeForce Ti 200 AGP4x. It's a darn good card, but doesn't have support for any of the newer shading languages from dx9.

Thanks to all those who reply!

- Halcyon

-> I had originally posted this in the beginners forum, but decided to move it here instead. I tried deleting the post there, but it was still in the topics list. If you click on it...it doesn't go anywhere.

HS
02-03-2003, 04:29 PM
I am really not sure if in this case more bits mean better quality.

Shouldn't the colors (each channel) be normalized/clamped between 0.0 and 1.0 anyway?

Wouldn't that lead to the same discussion as the one about a 32-bit Z-buffer offering no better results than a 24-bit one (due to IEEE floating point format compliance)?

I am not saying that I am right, I just haven't had a chance to test it yet.

I am just curious. Anyone?


[This message has been edited by HS (edited 02-03-2003).]

HalcyonBlaze
02-03-2003, 04:47 PM
I don't remember exactly where, but I read about how higher precision floating point buffers can remove artifacts from a lot of advanced rendering effects such as bump mapping. I don't think it means there will be richer color, just better looking scenes as a result of more accurate data.

If anyone is interested in reading a good comparison, they should check this out:

Beyond3D - NV30 vs R300 (http://www.beyond3d.com/articles/nv30r300/)

- Halcyon

HS
02-03-2003, 05:23 PM
You misunderstood my post.

The standard IEEE floating point format looks like this:

In Bits:
S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF

Where:
S is the sign (+/-)
E is the Exponent
F is the Fraction

So in order to represent a number between 0.0 and 1.0 you only need the 23 fraction bits and one exponent bit; the remaining bits could be put to other use.

And it doesn't matter what kind of operation you are doing (alpha, dot3, ...); it's always the same.

So 24 bits per channel should suffice.
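
If you want to poke at that layout yourself, here is a minimal C sketch (plain IEEE singles, nothing vendor specific):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the sign, exponent and fraction bits of an IEEE-754 single. */
static void dump_float(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* safe type pun */
    unsigned sign     = bits >> 31;
    unsigned exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127 */
    unsigned fraction = bits & 0x7FFFFF;     /* 23 fraction bits */
    printf("%f -> S=%u E=%3u F=0x%06X\n", f, sign, exponent, fraction);
}

int main(void)
{
    dump_float(0.5f);   /* S=0 E=126 F=0x000000 */
    dump_float(0.75f);  /* S=0 E=126 F=0x400000 */
    dump_float(1.0f);   /* S=0 E=127 F=0x000000 */
    return 0;
}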

Maybe Nvidia uses an internal format that differs from IEEE. I don't know, but that's what I am asking...


[This message has been edited by HS (edited 02-03-2003).]

HalcyonBlaze
02-03-2003, 05:33 PM
My apologies for misunderstanding the post. I'm really just a beginner at OpenGL, and I only posted this here because it seemed like it fit in the advanced forum. So you'll have to forgive the stupid things I say from time to time :D

I read over my sources for this information again and I realized my mistake. The floating point precision is not for buffers, it's for the fragment shaders. The NV30 apparently has three different precisions, whereas the R300's single precision sits between two of the NV30 settings.

Again, I apologize for the misunderstanding. I'm just starting to learn about the actual hardware instead of just the software behind graphics.

- Halcyon

HS
02-03-2003, 05:41 PM
There is no need to apologize, and it doesn't matter if you are a beginner or not.

So far I really haven't looked into the FX too much (if at all)...

But thanks to you, I got curious whether that claimed 32-bit precision per channel is real or just a marketing trick.

Thanks.

Humus
02-03-2003, 05:44 PM
After playing around with a 9700 for a few months I can recommend the card with all my heart. It's the largest step forward in one product generation since the Voodoo2.

DFrey
02-03-2003, 05:47 PM
HS, intermediate results can benefit from increased precision.

Ostsol
02-03-2003, 05:47 PM
The NV30's 32 bit floating point format matches that description exactly. 1 bit for sign, 8 for exponent, and 23 for mantissa (fraction). For 16 bit it's 1 for sign, 5 for exponent, and 10 for mantissa.

When values are clamped to 0.0 - 1.0 it is true that only one bit is needed for the exponent (and none for the sign), but values are still able to go far beyond that. This is similar to one of the big selling points of DirectX 8.1's PS 1.4: while final results are clamped to 0.0 - 1.0, internal precision allowed values within a range of -8.0 to 8.0. Thus, ATI's 24-bit format clearly has less precision after the decimal point than the NV30's 32-bit floating point format.
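
To get a feel for what dropping from 23 to 10 fraction bits does, here is a quick C sketch. It only fakes the mantissa truncation and ignores the narrower fp16 exponent range, so treat it as an illustration rather than a real converter:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Fake fp16 precision by dropping the low 13 of the 23 fraction bits
 * (keeping 10, as in a 1/5/10 half float). The narrower fp16 exponent
 * range is ignored -- this only shows the fraction loss. */
static float keep_10_fraction_bits(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    bits &= 0xFFFFE000u;            /* sign + exponent + top 10 fraction bits */
    float r;
    memcpy(&r, &bits, sizeof r);
    return r;
}

int main(void)
{
    float v = 0.1234567f;
    printf("23-bit fraction: %.9f\n", v);                        /* ~0.123456702 */
    printf("10-bit fraction: %.9f\n", keep_10_fraction_bits(v)); /* ~0.123413086 */
    return 0;
}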


[This message has been edited by Ostsol (edited 02-03-2003).]

HalcyonBlaze
02-03-2003, 05:50 PM
Thanks for understanding HS!

@Humus: Yeah I've seen what that card can do, and it is amazing. I mean don't get me wrong. I think the FX is a huge leap in technology too!

@Everyone: First of all, should we even be comparing the 9700 to the FX? The reason the 'or higher' was in the subject is that I'm wondering if the R350 (which is supposed to come out sometime in March, I think) is just going to blow the FX out of the water. I mean, the 9700 is only barely outperformed by the FX, but it has been out for a lot longer and is older technology. It almost seems like NVIDIA is one step behind ATI.

Just for the record: ATI has a much nicer website than NVIDIA :D

- Halcyon

Edit: I said the R350 is supposed to support VS/PS 3.0 in DX9. It's actually the R400 chip that is supposed to expose that functionality.

[This message has been edited by HalcyonBlaze (edited 02-03-2003).]

Ostsol
02-03-2003, 05:52 PM
The thing with the R350 is that ATI really hasn't revealed anything about it in terms of features. . .

HalcyonBlaze
02-03-2003, 06:01 PM
It's true that ATI hasn't released much information on the R350. However, I think they are making this chip in response to the FX. I've read that the R350 is supposed to maintain low power usage. Since the FX is supposed to have high power consumption, it already has a disadvantage. Also, if they are aiming to maintain their lead (which I'm sure they are), they will try to position the R350 above the NV30.

I'm not going to say that the R350 will definitely be better than the NV30 in terms of speed, but I think it will be a very solid competitor. It may just be an intermediate card to maintain their status in the market until the R400 is released (whenever that may be). It is possible that all the huge technology advances will be kept for the R400 release.

- Halcyon

nukem
02-03-2003, 06:09 PM
I've been looking at getting a new card as well. The card that NVIDIA is coming out with that no one talks about is the Quadro FX. That card seems to have much better performance than all the other cards, including the GeForce FX. The Quadro is aimed at workstations while the GeForce is for desktops. The Quadro is also supposed to be a lot quieter, and the drivers work better. The Quadro is built for OGL and DX, while the GeForce leans more towards DX. I'm a Linux OGL programmer, so I only use OGL. I'm probably going to get the Quadro FX when it comes out at the end of this month.

Here's an article on the Quadro FX: http://firingsquad.gamers.com/hardware/workstation/default.asp

HalcyonBlaze
02-03-2003, 06:09 PM
This is a good article I found that details some known information about the R350 and compares it a bit to the FX.

Click Here! (http://www.beyond3d.com/index.php#news4015)

- Halcyon

HS
02-03-2003, 06:12 PM
So if I understood that correctly, the NV30 uses an alternate floating point format (for example, 1 exponent bit and 31 fraction bits) at the fragment level?

So if you have a fragment program with lots of instructions, that would probably pay off.

That would also explain all the transistors in the chip :D

Interesting, I guess I have to look more into this card.

Thanks for the input.

zeckensack
02-03-2003, 06:24 PM
HalcyonBlaze,
I think you mixed up some of the chips ;)
Okay, I won't claim factual knowledge, but here's my take on 'em:
R350 - same as R300, higher clocks, higher power consumption

RV350 - R300 reduced to 4 pipes, smaller manufacturing rules, lower power, cheaper. Think Radeon 9500 done right.

I haven't heard anything about improved shading capabilities whatsoever.

As for the NV30, it would sure be a nice toy, but a little too intrusive for my tastes. A 250MHz, cut down version with a sensible cooling system would easily win me over, but not a noisy monster like this.

I don't care much about raw speed, I want features first, and I want to be able to still hear the phone when it rings ...

HalcyonBlaze
02-03-2003, 06:25 PM
@HS: Go to page 8 of this PDF file for an example of the result of having 32-bit floating point precision in the fragment shaders.

The Dawn of Cinematic Computing (http://www.nvidia.com/docs/lo/2416/SUPP/Overview.pdf)

I just saw the Quadro FX on NVIDIA's site. I had mistaken it for an upgrade to the nForce. It looks pretty cool. The FX 2000 looks really good. However, it still requires the PCI slot adjacent to the AGP slot it goes in. Do you know if the Quadro FX is also supposed to support VS/PS 2.0?

- Halcyon

HalcyonBlaze
02-03-2003, 06:35 PM
According to some benchmarks, the GeForce FX 5800 Ultra performs 10% better than the Radeon 9700 PRO, ATI said, but it is also true that Nvidia’s new chip requires greater power consumption. Its new R350, however, will feature low power consumption, with which ATI hopes to target both the desktop and notebook markets at the same time.


I got my information from that section of this article: FX vs R350 in the next few weeks (http://www.digitimes.com/NewsShow/Article.asp?datePublish=2003/01/29&pages=03&seq=13)

- Halcyon

Edit: Forgot to mention:

@nukem: I thought NVIDIA was backing OpenGL, while ATI teamed up with Microsoft. You said the GeForce was leaning more towards the DirectX side; I think that's because DX9 is simply offering more features for cards to support (such as floating point precision fragment shaders).

[This message has been edited by HalcyonBlaze (edited 02-03-2003).]

jwatte
02-03-2003, 07:19 PM
Radeon 9500 and 9700 already support ps2.0 (I think you caught that typo already).

Radeon 9500 pro is sold for $170 street and supports all that programmability. Meanwhile, I haven't heard anything except the high-end for GeForceFX being announced, so I'd expect that to be in the $380 street range.

If it's programmability using standard APIs you are after (DX9 or ARB_fragment_program) and you'd rather save a few hundred, the 9500 Pro looks fair.

If the latest-and-greatest-at-any-price is more your bit, then the GeForceFX will certainly hold claim to that title once you can actually buy it. For how long, who can say? I'm sure ATI will respond. I'm sure nVIDIA are working hard on building the response to the response. And so it goes, to everyone's benefit :-)

Ostsol
02-03-2003, 07:44 PM
I'm hoping for support for beyond PS and VS 2.0 in the R350. The extended versions of 2.0 are quite impressive in their specs and 3.0 is even more extreme. OpenGL will have to play catch-up, though. ARB_vertex_program is already behind in that it doesn't support any form of flow control. . .

davepermen
02-03-2003, 11:24 PM
The image quality is actually better on the R300, and 24 bits for the pixel shader is more than enough. Shrek was done with 16-bit floats (according to NVIDIA).

Check www.anandtech.com (http://www.anandtech.com)

And I think the card is not really worth the money: it's not much faster after six months (NVIDIA claims that every six months hardware gets twice as fast), and despite much more advanced technology to make it fast (DDR2, 0.13 micron), the new features only help it catch up to, and get a tiny bit beyond, what ATI presented six months ago. And all of that only with extra overclocking and ultra-strong cooling. That thing blows 60°C out the back and hits 140°C on top, a few centimeters away from the processor. And 70 dB is not very quiet either... (well, there is currently talk of a 7 dB version; we'll see).
And the power consumption is enormous, too...

I hope NVIDIA can get much more out of these technically very powerful features. ATI is currently working on using the same technology to speed up their, well... "old" chip, which looks algorithmically far superior.

Anyway, I'm open for some fun duels during this year... at least all of these GPUs are built on full DX9 feature support, which is a great base. (I've heard rumors that the NV31 and NV34 don't support full DX9 in hardware... that sounds crappy; I hope it's not true.)

We'll see...

But currently, a Radeon 9500 with 128 MB RAM is the best buy. Mod it to a Radeon 9700 Pro and you're done :D

HalcyonBlaze
02-04-2003, 04:17 AM
Well, I'm looking to buy a video card, but it doesn't have to be right now. I was actually aiming at sometime around Christmas. I read somewhere that ATI is attempting to release the R400 in the fall. I don't know (or care) how much faster the R400 is than the R350 or R300, but it should have support for DirectX 9's PS/VS 3.0. I'll probably end up getting to that level of programming by Christmas anyway.

I've also heard nothing about the NV31 and NV34. After reading a few more articles, I found one line saying that they are the real focus of Nvidia and that the NV30 is just an intermediate 'keep up with the competition' card.

- Halcyon

KRONOS
02-04-2003, 05:14 AM
Of course the FX is a better and faster card than the current (current) 9700 PRO. Just look at the specs of the two...

If you want a card for development, choose the FX. It offers more features than any other card available (it surpasses DX9, and only GL exposes the entire hardware). The reason for 32-bit color is simple: more precision. But it isn't clamped to [0,1]. The only place where you can have a 24-bit float buffer without losing precision is the z-buffer, because it is meant to work that way, only between [0,1]. That's why you can have a 24-bit z-buffer. During a fragment program the values can go beyond 1 for the next calculation. If the values were always clamped, you could use integer math instead of floating point, because you would only have to worry about the fractional part. The FX offers 32-bit full precision math, instead of the 24 bits offered by the 9700. I don't know whether it is useless or not.

As for speed... The card is a monster. I wouldn't like to have to write drivers for it; its complexity is enormous. The drivers can't be mature yet, and it will be a long time until they are. But look at some tests made with it. For example, in the high polygon count test in 3DMark2001, the FX almost doubles the performance compared to the 9700. Doubles?! How can the card be so good at something and "suck" at other things?! If you look at the test, it is a simple one: throw a huge amount of triangles at the card with some hardware lights. The test is simple, so it may be that the drivers are working fine in one part while the other is still being worked on. If not, how can you explain the huge difference in values? And another example: Carmack says the card is faster when using its own features (NV30 code path) rather than the ARB2 code path. Why? The ARB2 features are less powerful than the NV30 features, so if it works faster in the NV30 code path it should work even faster in the ARB2 code path. It's just drivers...


And finally, I believe this card is very well suited for GL2. I believe NVidia looked a lot at the GL2 specs when building this card. I don't mean it is fully in accordance with the current specs, but they're close. I think it was Matt who said, in a post here at the beginning of the year, something like: start loving GL2...


Wow ;) I never wrote such a huge post before...
How is my English?! :P

davepermen
02-04-2003, 06:21 AM
Originally posted by KRONOS:
Of course the FX is a better and faster card than the current (current) 9700 PRO. Just look at the specs of the two...
Specs don't make a card fast. A P4 was always faster than any Athlon if you only looked at the specs; only tests showed the reality. At the same clock speed the P4 was much slower; today they win because of raw speed.
www.tomshardware.com (http://www.tomshardware.com) www.anandtech.com (http://www.anandtech.com)

Then you'll know how much faster the FX actually is.

The 8 extra bits in the floating point unit are surely not useless, but they're not proven to be useful either. It's like using doubles in your code instead of floats. Cinematic movies are done with 16-bit floats according to NVIDIA, so it at least looks like 32-bit floats are quite useless.

About the speed: it's not a monster. It's not much more advanced than the R300; more pixel shader instructions don't make a chip more advanced. It has some new features, yes, but they shouldn't make it such a monster to handle.

It's not just drivers. It's not even much faster in some tests where it is already at its own technical maximum.


At least ATI is working on compilers that compile the GL2 shading language down to ARB_fragment_program and ARB_vertex_program today.


Your English is nice.

Ostsol
02-04-2003, 06:23 AM
LOL! The NV30 is faster using the NV30 path because that way it isn't necessarily forced to use its 32 bit precision all the time. The developer can decide whether or not to use 32 bit precision for an instruction, or only use 16 bit.

I also heard that the NV30 has a dedicated T&L unit, rather than emulating T&L via vertex shaders. For that reason, it performs much closer to what it should in 3dMark's high polygon tests. In contrast, the R300 does not have a dedicated T&L unit, so performance in that test is comparatively worse.

KRONOS
02-04-2003, 06:38 AM
Specs don't make a card fast. A P4 was always faster than any Athlon if you only looked at the specs; only tests showed the reality. At the same clock speed the P4 was much slower; today they win because of raw speed.


Of course they do. If it isn't faster on paper, it won't be faster on the chip either. But being faster on paper doesn't mean full speed on the chip. The silicon must be good...



Cinematic movies are done with 16-bit floats according to NVIDIA, so it at least looks like 32-bit floats are quite useless.


So this means that they (NVidia) knew it was useless and did this on purpose so the card would be more expensive?! What other useless things did they put in the card?



About the speed: it's not a monster. It's not much more advanced than the R300; more pixel shader instructions don't make a chip more advanced. It has some new features, yes, but they shouldn't make it such a monster to handle.


But when the R300 came out its drivers were in worse shape than the FX's; it took almost six months for them to have a good set of drivers. And that had a huge impact on performance. The drivers are as important as the hardware: if the drivers aren't fast, the card can't be fast.



LOL! The NV30 is faster using the NV30 path because that way it isn't necessarily forced to use its 32 bit precision all the time. The developer can decide whether or not to use 32 bit precision for an instruction, or only use 16 bit.


I guess the test was done with the same precision; otherwise it wouldn't be possible to compare... ;)

Ostsol
02-04-2003, 06:52 AM
Originally posted by KRONOS:
I guess the test was done with the same precision; otherwise it wouldn't be possible to compare... ;)

It wasn't. The R300 uses 24 bit precision only, for fragment programs. The NV30 supports 12 bit (fixed point), 16 bit (floating point), and 32 bit (floating point), but not 24 bit. Thus, it is impossible for the NV30 and R300 to be compared to each other using the same precision.

KRONOS
02-04-2003, 06:53 AM
Ostsol: you're right... Maybe the FX used 32 bit precision all the time?

[This message has been edited by KRONOS (edited 02-04-2003).]

Tom Nuydens
02-04-2003, 07:02 AM
Don't get religious about it guys -- they're just video cards.

Get a Radeon 9500 or 9700 because:
- It's out now, and your patience is running out
- It's cheaper than the GeForceFX
- It has a more elegant cooling solution
- It will make Davepermen like you

Get a GeForceFX because:
- It has some extra flexibility in the shaders (precision, flow control, instruction count)
- NVIDIA has better OpenGL drivers than ATI
- You can get the Gainward card which will allegedly have a quiet cooler
- You can boast that your video card is more expensive than Davepermen's

How's that for a comparison?

-- Tom

P.S.: No offense, Dave ;)

Ostsol
02-04-2003, 07:25 AM
Originally posted by KRONOS:
Maybe the FX used 32 bit precision all the time?

For Doom 3's ARB2 path, that is confirmed by Carmack. There is a precision "hint" provided in ARB_fragment_program that can allow a lower precision to be used (if the card supports it), but it is applied globally to the fragment program, rather than per instruction as NV_fragment_program allows. I'm guessing that, done that way, the NV30 would certainly be at least as fast as the R300 using that path.
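
For reference, the ARB hint is a single OPTION line at the top of the program, while NV_fragment_program picks a precision per opcode. Here is a tiny sketch of both; the program bodies are made up just to show where the precision choice lives, and the NV register names are quoted from memory:

/* ARB_fragment_program: one hint covers the whole program. */
static const char *arb_fp =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;   # driver may drop below fp32 everywhere\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

/* NV_fragment_program: the opcode suffix picks the precision per
 * instruction (R = fp32, H = fp16, X = 12-bit fixed). */
static const char *nv_fp =
    "!!FP1.0\n"
    "MOVH o[COLH], f[COL0];               # this single move runs at fp16\n"
    "END\n";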

HalcyonBlaze
02-04-2003, 08:36 AM
I thought that the hint was to allow a higher precision. I thought the card was set to 16bit as a default and if you wanted you could bump it up to 32bit using a shader program. I probably got it backwards.

I think dave has a point. I mean, 24 bit is probably enough. The low precision picture in the "The Dawn of Cinematic Computing" article by NVIDIA could be either 16 bit or 12 bit. Since NVIDIA is generally an honest company, we'll assume 16 bit. Sure, the difference between a 16-bit and a 32-bit fragment shader is clearly visible in each of the pictures. But I don't see any comparison between 24 bit and 32 bit, so there is no way to tell how useful the extra byte is. I'm sure it makes a difference, but how much of a difference?

I also don't want a video card taking up two expansion slots. And 70 dB is a bit much for a cooling fan inside a computer. However, I read somewhere (it might even be in this thread) that NVIDIA might be making a 7 dB fan for the FX later. It is probably just a rumor, but if they did that, the FX would be a very, very nice card.

I'm leaning towards the R300 right now, but I don't want to rush into a purchase right away. I mean, there isn't a lot of info on the R350, and it may be an incredible card or just a sucky one. Either way, it'll probably bring down the price of the 9700 or the 9500 by a substantial amount.

Does anyone know the approximate maximum instruction count of the R300 chips? The NV30's is 65,536 (I got it from the PDF file in one of my previous posts in this thread). The FX is a HUGE improvement over the GF4 Ti in the shader department. And I have a GF3 Ti right now, so you can imagine how much of a difference it or the R300 would make.

- Halcyon

davepermen
02-04-2003, 09:00 AM
Originally posted by Tom Nuydens:
Don't get religous about it guys -- they're just video cards.

Get a Radeon 9500 or 9700 because:
- It's out now, and your patience is running out
- It's cheaper than the GeForceFX
- It has a more elegant cooling solution
- It will make Davepermen like you

Get a GeForceFX because:
- It has some extra flexibility in the shaders (precision, flow control, instruction count)
- NVIDIA has better OpenGL drivers than ATI
- You can get the Gainward card which will allegedly have a quiet cooler
- You can boast that your video card is more expensive than Davepermen's

How's that for a comparison?

-- Tom

P.S.: No offense, Dave ;)

If expensive counts for you, then yes, buy it. I prefer to smoke my money :D

And about the drivers... that's an old one. Old, but wrong today.

HalcyonBlaze
02-04-2003, 09:11 AM
I have a few questions for all you guys who do shader programming... I'm still way back around lighting :) Is the introduction of floating point fragment shaders very recent? I mean, DX9 is boasting that as one of its new features. I'm guessing GL had it accessible through its extension mechanism. Were all the shaders before in a 12-bit (fixed point) format?

Also, is a fragment shader in OpenGL the same as a pixel shader in DX? I mean a fragment is what a pixel is when all the calculations are being done on it.

- Halcyon

davepermen
02-04-2003, 09:22 AM
Another + for the Radeon: you can get a passively cooled one. While 7 dB isn't really audible, silent is even better :D


Another thing, about the "advancedness" of the GeForce FX: if I overclocked my Radeon, I would get a roughly linear speed increase (it can clock to 420 MHz with normal drivers).

At 500 MHz the card would be about 1.54 times faster. Except possibly in one test, it would beat the GeForce FX in every test, by up to about 54 percent, and by more in the tests where the FX is already slower now.

So the actual thing that makes a GeForce FX as fast as a Radeon, or a bit faster, is clock speed. And that very clock speed produces the enormous heat, even on a smaller process.

An R300 chip at 0.15 micron and 325 MHz can nearly catch an NV30 at 0.13 micron and 500 MHz, and the NV30 only wins because it is pushed to its limits. The fat old R300 isn't even at its limits.

Let's not even start talking about a 0.13 micron R300 and the clock speeds it could reach... blah


_THAT_ is what makes me sad. The GeForce FX does perform well, but it should do much more than it currently does with all the extra boosts they gave it.

And as far as I can see there are quite a few other DX9-compliant cards coming (with only PS 2.0 and VS 2.0 support). Then there will be PS 3.0 and VS 3.0... but I don't see many PS 2.x and VS 2.x cards. I don't think the additional features will get that much support; they are too little for 3.0 but too much for 2.0. Nothing really useful... proprietary, as usual.

And I think a dev team currently cannot overlook the R300 as a target audience, so it's easier to develop one path for the R300/GeForce FX than two special paths. The R300 path will be supported by about all hardware vendors in the future... I don't know about the GeForce FX path.

Anyway, enough ranting. I really want to see NVIDIA catch up; the GeForce FX is very promising hardware. What went wrong in this first test, what went wrong?...

davepermen
02-04-2003, 09:27 AM
Originally posted by HalcyonBlaze:
I have a few questions for all you guys who do shader programming... I'm still way back around lighting :) Is the introduction of floating point fragment shaders very recent? I mean, DX9 is boasting that as one of its new features. I'm guessing GL had it accessible through its extension mechanism. Were all the shaders before in a 12-bit (fixed point) format?

Also, is a fragment shader in OpenGL the same as a pixel shader in DX? I mean a fragment is what a pixel is when all the calculations are being done on it.

- Halcyon

Yep, very new. DX9 introduced it; the first implementation is on the Radeon 9500+ cards and the second is on the GeForce FX cards. No cards before had it. 12 bit? Actually most had 8 bits per component; GeForces got 9 bits (an additional sign bit, so their range is not 0..1 but -1..1). The Radeon 8500 had even higher precision, I think, but fixed point anyway.

And floating point, unlimited-range pixel shaders are a real advantage. They're amazingly easy to work with and very powerful: no need for texcoords or a vertex shader to send in lighting information, just calculate it in the fragment program. Sure, calculating it all in the fragment program is not the most efficient way, but it shows how powerful this is. It wasn't possible at all before.
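
Just to give an idea of what that looks like in practice, here is a minimal ARB_fragment_program sketch that renormalizes N and L per fragment and does a diffuse dot product. The texcoord assignments and the loader function are made up for the example; the entry points come from the usual extension-loading mechanism:

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_fragment_program tokens */

/* N arrives on texcoord0, L on texcoord1 (set up by the app or a vertex program). */
static const char *fp_src =
    "!!ARBfp1.0\n"
    "TEMP n, l, ndotl;\n"
    "DP3 n.w, fragment.texcoord[0], fragment.texcoord[0];\n"
    "RSQ n.w, n.w;\n"
    "MUL n, fragment.texcoord[0], n.w;    # renormalize N per fragment\n"
    "DP3 l.w, fragment.texcoord[1], fragment.texcoord[1];\n"
    "RSQ l.w, l.w;\n"
    "MUL l, fragment.texcoord[1], l.w;    # renormalize L per fragment\n"
    "DP3_SAT ndotl, n, l;\n"
    "MUL result.color, ndotl, fragment.color;\n"
    "END\n";

void load_fragment_program(void)
{
    GLuint id;
    glGenProgramsARB(1, &id);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, id);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    if (glGetError() != GL_NO_ERROR)
        fprintf(stderr, "fragment program error: %s\n",
                glGetString(GL_PROGRAM_ERROR_STRING_ARB));
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}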

PixelDuck
02-04-2003, 09:33 AM
Yeah, I think it was all fixed point calculations; I'm not sure of the precision, though. I'm sticking with the R300 for the moment, since the FX mostly adds support for the VS/PS 2.0 extensions. OK, it's a lot, at least when you're doing very high-level effects: I mean, the instruction count on the R300 is 512 (with loops, correct me if I'm wrong) and 256 without looping (again, correct me if...) versus 1024 on the NV30 (65,536 with looping); that's a huge difference. But those kinds of instruction counts are only reached with really complex programs.

Korval
02-04-2003, 09:52 AM
But when the R300 came out its drivers were in worse shape than the FX's; it took almost six months for them to have a good set of drivers. And that had a huge impact on performance. The drivers are as important as the hardware: if the drivers aren't fast, the card can't be fast.

That's not true at all.

The 9700's shipping drivers were good enough to beat a GeForce 4Ti 4600 by 50% on average (check the original Anandtech benchmarks).

Up until now, nVidia's sudden driver performance upgrades were strategically timed to hurt ATi. It was certainly no coincidence that, in the week the 8500s were released to benchmark sites, nVidia's drivers suddenly delivered a 20% performance boost. And nVidia knew months in advance that the 8500 was coming, so they could prepare for it.

nVidia has known, for the past 5 months, that the 9700 was a performance beast with excellent, fast-running drivers and very fast antialiasing and anisotropic filtering. Given that knowledge, they should have thrown every day of those 5 months into perfecting their drivers. If nVidia did, and this is the best they could do, the FX is an overhyped POS. If nVidia didn't, then they don't deserve to beat the 9700, and their FX line of cards will be crushed by the R350 core (which will likely be mostly a performance upgrade along with some modest improvements to instruction count and functionality).

Certainly, in the medium-end market, dominated by the all-powerful DX9-capable Radeon 9500Pro, nVidia will have its work cut out for it. The 9500Pro is a very fast card, capable of equalling the performance of a Ti4600.

However, if you're an OpenGL programmer, you can get greater benefit from the FX. First, NV_vertex_program2 supports looping/flow control, which ARB_vertex_program does not. Secondly, NV_fragment_program has more features than ARB_fragment_program or PS 2.0.

Humus
02-04-2003, 12:11 PM
Originally posted by PixelDuck:
Yeah, I think it was all fixed point calculations; I'm not sure of the precision, though. I'm sticking with the R300 for the moment, since the FX mostly adds support for the VS/PS 2.0 extensions. OK, it's a lot, at least when you're doing very high-level effects: I mean, the instruction count on the R300 is 512 (with loops, correct me if I'm wrong) and 256 without looping (again, correct me if...) versus 1024 on the NV30 (65,536 with looping); that's a huge difference. But those kinds of instruction counts are only reached with really complex programs.

In the vertex shader the R9700 can use 256 instructions, and with loops it can execute a total of 65026 (255*255 + 1), if I remember right. In the fragment shader it can do 64 ALU instructions. The GFFX can do 1024.

HalcyonBlaze
02-04-2003, 12:30 PM
Man... where do you guys learn all this stuff? I mean, if someone wanted info on each of the individual cards, they'd find pretty much all of it in this single thread!

Speaking of information, does anyone know much about the R350? All I know is that it is supposed to use less power than the GF FX and has support for VS/PS 2.0.

- Halcyon

DaveBaumann
02-04-2003, 01:17 PM
Originally posted by Humus:
In the fragment shader it can do 64 ALU instructions. The GFFX can do 1024.

I thought it was 160 for R300 (?).

NitroGL
02-04-2003, 01:31 PM
Originally posted by DaveBaumann:
I thought it was 160 for R300 (?).

It maxes out at 94.

davepermen
02-04-2003, 01:48 PM
And at least at the current GeForce FX speed, I wouldn't want to be forced to use 1024 instructions for realtime, as it is currently even slower than the R300. But I think the technical maximum should be 2x as fast, so there is hope. Still, those 1024 instructions, fullscreen, will surely only work at very low resolutions... too bad.

We'll see :D

Humus
02-04-2003, 04:24 PM
Originally posted by DaveBaumann:
I thought it was 160 for R300 (?).



Well, you only have 64 ALU instructions. But then you also have instructions like texld, dcl, dcl_2d etc. (DirectX terms). I'm sure if you max all these out and add them together you'll end up with 160 instructions. It's like how the Radeon 8500 was supposed to have 22 instructions, I think, but you only really had 16 ALU instructions, 8 for each phase; the rest was texture sampling.

ehart
02-04-2003, 04:59 PM
These shader numbers can get confusing real quick when the marketing fluff gets involved. The 9700/9500 can have 32 texture instructions, 64 RGB instructions, and 64 alpha instructions. This is where the 160 comes from. ARB_fragment_program and DX9 typically both specify RGB and alpha instructions together, so this is where you will see a 64 instruction limit. It is possible for something like the following to take only one RGB and one alpha instruction, though:




ADD res0.rgb, src0, src1;
RCP res1.a, src2.a;


ARB_fragment_program on the 9500/9700 will collapse stuff like this into one RGB and one alpha instruction.
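
If you want to see what the driver actually exposes, you can query the native limits directly. A minimal sketch, assuming ARB_fragment_program and the usual extension-loading boilerplate:

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Native limits can differ from the numbers quoted in marketing material. */
void print_fragment_program_limits(void)
{
    GLint alu, tex, total;
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_NATIVE_ALU_INSTRUCTIONS_ARB, &alu);
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_NATIVE_TEX_INSTRUCTIONS_ARB, &tex);
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_NATIVE_INSTRUCTIONS_ARB, &total);
    printf("native ALU %d / TEX %d / total %d instructions\n",
           (int)alu, (int)tex, (int)total);
}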

-Evan

Zeno
02-04-2003, 07:17 PM
Just thought I'd throw in my 2 cents about your original question.

I had been looking to upgrade from my GeForce3 for the past several months and was waiting for the GeforceFX to come out since I've always used and been happy with NVIDIA cards in the past. This time, though, I went with the Radeon 9700 Pro. Here are the reasons for my decision:

1) I don't like a lot of noise in my computer, and after hearing the FX, I knew it would annoy me.
2) There wasn't that big of a performance difference.
3) Both cards support ARB_Fragment_program, which is why I wanted to upgrade.
4) I was just tired of waiting. I'd been hearing from NVIDIA that the FX would be out "very soon" for at least the past 6 months, and it just keeps getting pushed back.
5) I'd heard that ATI was getting better at driver support and have seen them increase their participation on this board (Evan especially).
6) I wanted to make davepermen happy ;). He puts in so much work for ATI at all the forums I read... I hope he's getting some royalties or something ;)

Now that I have the 9700 and have had a chance to experiment a bit with it, here are my thoughts:

1) It's fast. Really fast. The fill rate is amazing.
2) It's nice and quiet.
3) No problems with any commercial apps so far.
4) Development on it is not quite as smooth as with my past NVIDIA cards. It tends not to handle programming errors as gracefully and is more likely to crash Windows or freeze up the app if I do something wrong. However, the problems are infrequent, so it's not a big deal.
5) I just found out that they have more general support for floating point texture formats, such as 1D, 2D, and 3D textures. I believe NVIDIA only supports texture_rectangle. This is very useful to me at work right now (a quick sketch follows below).
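
For the curious, this is roughly what creating an ordinary 2D float texture looks like with ATI_texture_float; the token name is from memory, so treat it as a sketch (on NV30-class hardware the equivalent goes through NV_float_buffer and texture_rectangle instead):

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_RGBA_FLOAT32_ATI */

/* A plain power-of-two 2D texture with 32-bit float storage per channel. */
void make_float_texture(const float *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Float textures are unfiltered on this generation of hardware. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                 w, h, 0, GL_RGBA, GL_FLOAT, pixels);
}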

Hope this helps,
Zeno

HalcyonBlaze
02-04-2003, 08:15 PM
Thanks Zeno! I've been leaning towards a 9700 or a 9500 too since starting this thread. The FX is kind of out of the question: I doubt I'd ever push it to the max, and I don't play computer games. I may occasionally download some demos to check out, but I don't buy them anymore. Programming-wise, I think I should be just fine.

What I might just wait for is the R350 chipset to come out. Then the 9700 might become cheaper!! Yeah, I definitely do not want to give up TWO expansion slots!!! Not unless the card is radically better than everything else on the market. Anyway, I'm not making the next Doom 3, so I should be fine with an R300 or an R350.

- Halcyon

Zeno
02-04-2003, 08:46 PM
In my opinion, the fact that the FX uses a PCI slot is not a big deal. I have five PCI slots and am using only 1 of them. Not only that, but modern graphics cards should be given a little space to breathe, whether they take that space for themselves or not. You don't want a card in the first PCI slot blocking airflow to your graphics card's heat sink/fan unit.

Here's an off-topic question: Why do all graphics cards have the chip and heat sink on the bottom of the card? Wouldn't it be better to put it on top so the heat can rise off of it? Is there some spec that states that a card can't take up any space above its slot?

-- Zeno

davepermen
02-04-2003, 09:09 PM
Originally posted by Zeno:
In my opinion, the fact that the FX uses a PCI slot is not a big deal. I have five PCI slots and am using only 1 of them. Not only that, but modern graphics cards should be given a little space to breathe, whether they take that space for themselves or not. You don't want a card in the first PCI slot blocking airflow to your graphics card's heat sink/fan unit.

Here's an off-topic question: Why do all graphics cards have the chip and heat sink on the bottom of the card? Wouldn't it be better to put it on top so the heat can rise off of it? Is there some spec that states that a card can't take up any space above its slot?

-- Zeno

To the first point: I plan on getting a Shuttle XPC soon. There I would only have one PCI slot, exactly the number I need for video editing :D
And losing one PCI slot is still losing one PCI slot (you paid for it but you cannot use it). And I think it's an evolution in the wrong direction. Sound systems move onboard, without cooling, with a lot of power today; networking moves onboard; RAID controllers move onboard; everything small, clean and quiet. That is the right direction to make a PC worth buying for an average family. No huge noisy server: small, quiet, but still powerful and fast. The GeForce FX is the wrong direction in this mentality (even though it was designed to be cool and quiet; remember the announcement about using 0.13 micron? :D)

Yes, I think the space behind the card is reserved... you could put a cooler on it yourself, much like the passively cooled Radeon (quiet... ahh :D), which has cooling on both sides. I think this is a major issue with the GeForce FX: very unbalanced cooling.

I would like to get something for my "advertising for the Radeon". It's a hard "job", as everyone got trained to love NVIDIA over the last years (not without reason; I was an NVIDIA-only fan for quite a while, too). But things have changed: ATI works much better now, and they have been the technical leader since the Radeon 8500, and the technical and speed leader by far since the Radeon 9700. People forget that very easily. All I want is to open people's eyes to realize that neither NVIDIA nor ATI is perfect, but neither NVIDIA nor ATI is bad.

Hm, by the way, I haven't had BSODs for quite a while on my Radeon... unlike you...

Anyway, I hope things will change for the GeForce FX; otherwise I'm waiting for the first "real" info on the R350. I hope I can get to CeBIT, where they should all be visible... and audible :D (Entering CeBIT... hm, I think nVIDIA is over there at the back left, don't you think? :D)

Anyway, I'm glad to see the people in here with a Radeon 9700 or similar happy with their card. That's what we all want to be in the end, no?

MarcusL
02-04-2003, 11:27 PM
Just a remark about 16-bit being used for cinematic movies.

I think what they meant was that the final image is stored using 16-bit floats. The actual raytracing (and whatnot) calculation is done on the CPU, and AFAIK no CPU that I've heard of can do 16-bit float calculations.

Having them would be nice though, more SIMD power :)

davepermen
02-04-2003, 11:42 PM
Originally posted by macke:
Just a remark about 16-bit being used for cinematic movies.

I think what they meant was that the final image is stored using 16-bit floats. The actual raytracing (and whatnot) calculation is done on the CPU, and AFAIK no CPU that I've heard of can do 16-bit float calculations.

Having them would be nice though, more SIMD power :)

They don't raytrace, they rasterize. And as far as I know they have dedicated hardware, and the pixel shading unit there is 16 bit, if I got that right from the NVIDIA documents.

Anyway, 24 bits is more than enough for the available instruction count. I don't know how much error you can accumulate on a GeForce FX if you use 16-bit floats; I've never had one to test.

roffe
02-05-2003, 01:03 AM
Originally posted by davepermen:
They don't raytrace, they rasterize...
When they need raytracing, some use BMRT (http://www.bmrt.org), which is RenderMan compliant. BMRT implements raytracing, radiosity and other GI algorithms not supported by PRMan.

davepermen
02-05-2003, 01:29 AM
Right... sorry, yes, they partially do; some extended RenderMan implementations support it.

That'll make it rather difficult for GPU vendors to really announce they can run cinematic effects in realtime, in the sense of "we can run Shrek in realtime".

Anyway, it first has to be proven that the three different modes on the GeForce FX (12-bit fixed point, 16- and 32-bit floating point) are really useful. Especially the 12-bit fixed point doesn't sound very useful to me. We'll see...

HalcyonBlaze
02-05-2003, 09:33 AM
Just a little FYI:

Nvidia has improved the FX since the model that was reviewed. The cooling system does not run in the card's 2D mode, so it is pretty much one of the most silent cards for 2D. The 3D mode still has the cooling system active. They claim that the newer cooling system is 5 dB quieter than the one that got reviewed a week or so ago.

- Halcyon

davepermen
02-05-2003, 10:10 AM
Whenever you post new facts like that, links would be great. That sounds very good. A short time ago they stated they weren't working on it anymore and would throw it onto the market the way it got tested... but it's a great idea not to do so :D

Adrian
02-05-2003, 10:13 AM
Originally posted by davepermen:
Whenever you post new facts like that, links would be great. That sounds very good. A short time ago they stated they weren't working on it anymore and would throw it onto the market the way it got tested... but it's a great idea not to do so :D

About half way down this page http://www.hardocp.com/index.html#6462-1

V-man
02-05-2003, 11:33 AM
Does this mean every time you create a GL or d3d window, the fan comes on?

This will sound disturbing if you are a developer and are debugging.

Anyway, my PC already sounds like a vacuum cleaner, and so did my older one. That's one reason I turn up the music.

HalcyonBlaze
02-05-2003, 12:44 PM
This is where I got my info (http://www.beyond3d.com/#news4183)

Sorry about not putting up the website. I was in a hurry to get to class and I just put this up at the last second.

- Halcyon

Korval
02-05-2003, 01:43 PM
And I think it's an evolution in the wrong direction. Sound systems move onboard, without cooling, with a lot of power today; networking moves onboard; RAID controllers move onboard; everything small, clean and quiet. That is the right direction to make a PC worth buying for an average family.

And therein lies the flaw in your logic.

The GeForce FX 5800 Ultra is not intended for family use; it's far too powerful for them. They could do just fine with a GeForce4 MX.

The 5800 Ultra is for gamers and developers. Since gamers and developers already have loud computers (high-end hard drives have fans nowadays), adding one more part to make it even louder isn't that bad.

Also, note that gamers don't tend to have a lot of PCI slots in use. Indeed, for maximum cooling, a game-quality computer only has a network card (if it isn't embedded in the motherboard) and a sound card of some kind. As such, the lack of a PCI slot is nothing to be concerned about.

Lastly, I would point out that the non-Ultra 5800 doesn't take up the extra slot (though I wouldn't suggest putting something there either). It probably doesn't make as much noise either. So, if you want nVidia's features without the noise, get the standard 5800.

Humus
02-05-2003, 02:09 PM
The 5800 Ultra is for gamers and developers. Since gamers and developers already have loud computers (high-end hard drives have fans nowadays), adding one more part to make it even louder isn't that bad.

I don't think developers' and gamers' annoyance at noise is any less than the average "family user's". Not every gamer is an overclocking fan, and certainly not every developer. Far from every developer values performance the most.
I would consider myself both a gamer and a developer. I have my hard drives in special silent-drive boxes. I bought 5400 rpm drives so they wouldn't get too hot inside. I bought a silent CPU fan with adjustable fan speed to get as little noise as possible.

About the 5800 Ultra, even Carmack expressed that it annoyed him, even though he specifically stated that he's not the kind of guy who tends to be annoyed by loud fans.

zeckensack
02-05-2003, 04:05 PM
Originally posted by Korval:
The 5800 Ultra is for gamers and developers. Since gamers and developers already have loud computers (high-end hard drives have fans nowadays), adding one more part to make it even louder isn't that bad.

This is where you're wrong. Gamers sure are concerned about noise. Look at how popular silent computing has become. I moderate a fairly large forum (5700 members) and there are never-ending requests for silent power supplies, CPU HSFs, case fans, hard disks, etc. I've seen it over and over: if all else ties, comparative noise levels are even more important than price.

I can imagine what's going on, though. Someone pictures a gamer as a teen sitting in a dark room in front of a 19" monitor with a 200 watt stereo+sub combo, playing Zombie Terror Chainsaw Menace at full volume with a sheepish grin, the pumping heavy metal soundtrack mixing with screams and explosions.
Not so.

Gamers occasionally do respect their spouses and neighbors. Compensating for fan noise with sheer audio volume is just stupid.

And it's also false to assume that every game always produces enough racket to 'mask' the PC noise. Heck, some games are almost completely silent, and the little noise they do make is important for gameplay, let alone atmosphere (e.g. Thief, Splinter Cell, probably Doom 3 too).

Sound is an important aspect of any game, not some unrelated utility to help you ignore your various cooling systems.

___________
That being said, the NV30 in itself looks pretty attractive. I'm just not going to tolerate anything that's noisier than my case fan setup.

Tom Nuydens
02-06-2003, 12:54 AM
I agree with Humus and Zeckensack. Before word got out about the loud cooler, I had every intention of buying the first GFFX I could get my grubby little hands on, without thinking twice. Now that the word about the cooler is out, I'm not that anxious to get one anymore. I'm going to wait and see what Gainward has up their sleeve, and if they can't deliver then I will look into the non-ultra 5800 instead. Performance is pretty much my last concern when buying a video card.

-- Tom

Robbo
02-06-2003, 01:13 AM
Well, my faith in NVIDIA has been smashed by the FX release and specs. Especially, though, by the MP3s I downloaded comparing the sound of the FX cooler to the 9700 Pro. It was like listening to an aircraft taking off!

PC noise doesn't bother me at work, but at home it's a big NO!

As for the specs, NVIDIA is barely ahead of the 9700 Pro, if at all. I'm not sure what the potential of this tech is, but I'm pretty sure ATI will release something even funkier in a few months, and then, for the first time since 3dfx was at its zenith, NVIDIA will be playing catch-up.

I'm going either for a pro or for the next ATI card!

Adrian
02-06-2003, 01:48 AM
In the video that compared the noise from both cards, not only were they demonstrated out of the case, but the microphone was moved right next to the blower, which unsurprisingly gave distorted wind noise. You have to wonder whether this was honest journalism or an attempt to create a big news story. However, I do agree that the FX is probably too loud.

Tom Nuydens
02-06-2003, 01:51 AM
I wouldn't say my faith in them has been smashed, although it has been somewhat damaged. This was primarily due to a short dip in driver stability around versions 40.xx and 41.xx, and to a lesser extent due to the NVLeafBlower fiasco. I'm running 42.xx drivers now and they are as rock solid as what I'm used to. Hence, if at least one manufacturer comes out with a quieter card (Ultra or not), I'm still getting one.

I've been using nothing but NVIDIA cards since the Riva128 (unless you count my Voodoo2, which I hated for its crappy OpenGL drivers and got rid of after three months). While ATI's drivers may have improved dramatically over the last months, I still see too many bug reports to trust them enough to switch to a Radeon. Maybe next year :)

-- Tom

DJSnow
02-06-2003, 02:26 AM
@daveperman:

>>
>> at least ATI is working on compilers that compile
>> the GL2 shading language down to ARB_fragment_program
>> and ARB_vertex_program today.
>>

WHHAAAT? Where do you get this information from? Tell me!
(this sounds *RRRROOOOOAAAAAR* -> great!!!!!)

BTW: the Cg compiler from NV does nearly the same, or not? I haven't looked at this proprietary API yet, but I have picked up this information while flying over some descriptions, forum threads and such things.
Is it possible to get the final compiled vertex program code? That would make it much easier to write complex shaders and use them with the standard GL_ARB_..._program functions. Do you know about this?

>> I prefer to smoke my money
Yes, this is no problem for people located in Switzerland - here in Germany it is not as easy to smoke your money as it is in your country. :( :( :(

HalcyonBlaze
02-06-2003, 04:15 AM
You guys are right that the FX really is too loud. I've heard those MP3s too, and it's a HUGE difference. But the thing is, the FX has more features than the 9700 PRO, and you have to consider the fact that the FX is just an intermediate card. Nvidia probably just put out the FX to keep the lead. NVIDIA is working on two chipsets just like ATI is, and I believe that's where all the true power, work and effort will be shown.

- Halcyon

Robbo
02-06-2003, 05:54 AM
To be honest with you Halcyon, the issue for me isn't necessarily the spec/sound of the card, but the difference between the OTT marketing strategy and the reality.

There are only so many times you can hype up a product and not deliver before your customers start to get fed up with you. I'm just about at that stage now. After reading all of the specs, fantastic "pre-reviews" and marketing information about this card, I held off buying a 9700 Pro for a long while. So I'm thinking I've wasted my time, because the nvidia marketing was mostly bull.

Humus
02-06-2003, 07:28 AM
Originally posted by DJSnow:
@daveperman:

>>
>> at least ATI is working on compilers that compile
>> the GL2 shading language down to ARB_fragment_program
>> and ARB_vertex_program today.
>>

WHHAAAT? Where do you get this information from? Tell me!
(this sounds *RRRROOOOOAAAAAR* -> great!!!!!)

BTW: the Cg compiler from NV does nearly the same, or not? I haven't looked at this proprietary API yet, but I have picked up this information while flying over some descriptions, forum threads and such things.
Is it possible to get the final compiled vertex program code? That would make it much easier to write complex shaders and use them with the standard GL_ARB_..._program functions. Do you know about this?

>> I prefer to smoke my money
Yes, this is no problem for people located in Switzerland - here in Germany it is not as easy to smoke your money as it is in your country. :( :( :(



The GL2 compiler doesn't work that way. It compiles directly to the underlying hardware. That's the way it should be, IMO; there's no need to take the detour through the existing extensions.

DJSnow
02-06-2003, 07:47 AM
@Humus:

Mmh, OK - I thought this could be a cool and nice way to program my shaders with the GL2 HLSL right now and then compile them "down" to "normal" GL_ARB_..._program code which can be used from now on, with today's graphics hardware (not with hardware from the day after tomorrow). People could use this to avoid today's assembly-code typing, and it would pave the way for future compatibility with the upcoming GL2 HLSL for the shaders you write today (I hope you know what I mean with my crappy English).


My second point, about the NVIDIA Cg compiler, came up because if that thing (the NVIDIA Cg compiler) works as I assume, then it should be possible to take small chunks of the final program code it generates for the hardware you are running on. That would let a programmer write his shader programs conveniently in the Cg language and then "copy/paste" parts of the compiled code into his GL_ARB_..._program code if he is working with OpenGL - and the same thing would work if you are a DX freak: copy/paste the output of the final compiled Cg program into your DirectX shader code. I thought of this because the GL2 HLSL is not available, or let me say there is no way to get those shaders running in my program - so I thought of the "assistance of the NVIDIA Cg compiler" for the next few weeks/months/years until the GL2 stuff and the "imaginary ATI compiler" mentioned by daveperman are ready. (And you might know: it could take a long while until that is ready one day.)


[This message has been edited by DJSnow (edited 02-06-2003).]

Zeno
02-06-2003, 10:01 AM
Originally posted by Robbo:

To be honest with you Halcyon, the issue for me isn't necessarily the spec/sound of the card, but the difference between the OTT marketing strategy and the reality.

There are only so many times you can hype up a product and not deliver before your customers start to get fed up with you. I'm just about at that stage now. After reading all of the specs, fantastic "pre-reviews" and marketing information about this card, I held off buying a 9700 Pro for a long while. So I'm thinking I've wasted my time, because the nvidia marketing was mostly bull.


Robbo - I feel exactly the same as you, and I too waited a long time for nothing. NVIDIA even went so far as to lie outright when hyping this product (e.g. "Free anti-aliasing", "it'll be out by Christmas for sure", "a 256-bit bus is unnecessary", "it is impossible to make a chip like this on .15 micron", etc.). The lesson I have taken away from this is to never listen to PR. You can never know when a card will be released. You can never trust how the company says it will perform. Buy the best card that's out when you need it.

As our president once tried to say - "Fool me once, shame on you, fool me twice, shame on me."

-- Zeno

Korval
02-06-2003, 10:45 AM
I feel exactly the same as you, and I too waited a long time for nothing.

The GeForceFX isn't a bad card for developers. Its NV_fragment_program extension is more powerful than the ARB's version (though not a lot more powerful). Granted, from an end-user perspective, if you don't care about image quality or the racket coming from your machine, the 5800 Ultra is the better buy. If you do care about either of those, the 9700 is the better buy.

Not to say that nVidia's hype machine wasn't working overtime to deceive the public. I try to avoid listening to any side's PR; I tend instead to put my faith in impartial judges like Anandtech and other review sites.

Zeno
02-06-2003, 10:57 AM
Originally posted by Korval:
The GeForceFX isn't a bad card for developers. Its NV_fragment_program extension is more powerful than the ARB's version (though not a lot more powerful).

Don't get me wrong, I do think the FX is an impressive piece of hardware and NVIDIA's cards have always been good for developers...very solid. I didn't mean to imply otherwise in my original post.

However, if they're going to be 8 months late, take one of my PCI slots, and make me listen to a high pitched turbofan, you'd think they could at least handily outfeature and outperform all older tech.

-- Zeno

Julien Cayzac
02-06-2003, 11:24 AM
Hi guys!

I've been reading this rather interesting discussion for a while, and something seems amazing to me: none of you ever mention the recent rumors about the GeForce FX being dead-born and aborted by NVidia (as stated on The Inquirer website and others)!?

Since there hasn't been an official announcement yet, I think neither Cass, Matt, nor Pat can disclose further info on that subject if it's true.

As an NVidia card owner and long-time NV fan (no pun intended), it would be *great* news to me, since the whole thing's getting a bit embarrassing (12-layer PCBs, Boeing fan, etc.). I hope NVidia will present a nice Quadro at CeBit and will quickly come out with a new card design.

Julien.

Elixer
02-06-2003, 12:12 PM
Originally posted by Korval:

Not to say that nVidia's hype machine wasn't working overtime to deceive the public. I try to avoid listening to any side's PR; I tend to instead put my faith in impartial judges like Anandtech and other review sites.

Whoa... you think Anandtech is impartial? That is like saying that Tom's Hardware is impartial.

The ONLY impartial reviews you will find are from people who actually bought the card/cpu/MB/whatever, and post their feelings about it.

You can almost bet that someone will do a water-cooled GF FX card, and/or one of those super-cooling nitrogen units. I also bet that Nvidia will not sell over 100K FX cards, and the next product out by them should sell well in the low to mid range.
ATI is still coming out with their next plans, which should also introduce some nice low to mid range products, so finally game developers will be able to use most of the features from high end cards without people crying that the game won't work on their voodoo 3, gf 2, gf2/4mx, ati 8500/9000 cards http://www.opengl.org/discussion_boards/ubb/smile.gif

Ostsol
02-06-2003, 12:52 PM
Originally posted by deepmind:
none of you ever mention the recent rumors about the GeForce FX being dead-born and aborted by NVidia (as stated on The Inquirer website and others)!?

The Inquirer? ROTFLMAO! http://www.opengl.org/discussion_boards/ubb/tongue.gif

Korval
02-06-2003, 02:06 PM
However, if they're going to be 8 months late, take one of my PCI slots, and make me listen to a high pitched turbofan, you'd think they could at least handily outfeature and outperform all older tech.

If you're going to criticize the card, it would be nice if the criticisms were valid.

Only the 5800 Ultra takes up an extra PCI slot and sounds like "a high pitched turbofan." The non-Ultra 5800 has a normal fan and doesn't take up the slot (not that you should populate the PCI slot next to the card anyway, just for general heat flow from the chip). If you want a 5800, you can't say that nVidia's stopping you because of the gigantic, loud fan, since they're offering you a (slower) alternative.


none of you ever mention the recent rumors about the GeForce FX being dead-born and aborted by NVidia (as stated on The Inquirer website and others)!?

Yeah, right.

nVidia has to ship something. ATi certainly isn't giving them time to screw around; the R350's coming out in a few months with undisclosed features and performance. nVidia has to have something on the shelves that is, at the very least, comparable to what ATi has.

The only thing that has been keeping ATi's 9500/9700 line from becoming dominant is the wait for the 5800. If nVidia doesn't actually release it, and instead goes back to the drawing board for 6 months or so, they can kiss any hope of market leadership goodbye. Such a move, further down the line, could even endanger nVidia's lucrative relationship with Microsoft's Xbox, putting ATi as the leading candidate for the Xbox 2.



Whoa... you think Anandtech is impartial?

Actually, yes. I haven't seen any significant partisanship on their part, even back when nVidia was on top. Though their reviews do tend to slightly favor whoever is on top at the time, they're still mostly impartial.


The ONLY impartial reviews you will find are from people who actually bought the card/cpu/MB/whatever, and post their feelings about it.

I don't consider the reviews of an individual person particularly impartial. You never know when you're talking to some rabid nVidia or ATi fanboy (unless he/she makes a particularly rabid comment).


ATI is still coming out with their next plans, which should also introduce some nice low to mid range products

The 9500Pro is already an excellent mid-range solution, selling for under $200, while still providing the DX9.0/GL1.4 feature set. ATi's already making strides in this area.

jwatte
02-06-2003, 02:23 PM
I think 12-bit fixed point is very useful when running good old fixed-function and register combiner stuff. There are some games and algorithms that expect more or less specific results from specific inputs, and those expectations assume fixed precision. Using floating point precision will not give the same results.
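
To illustrate with a toy model (only an illustration of the principle; real hardware rounding rules differ in the details): quantising every intermediate value to unsigned 12-bit fixed point, the way an old fixed-function/combiner pipeline effectively does, gives a slightly different answer than doing the same arithmetic in float, and code that was tuned against the fixed-point result will notice.

// Toy comparison of a combiner-style a*b + bias at 12-bit fixed point vs float.
#include <algorithm>
#include <cstdio>

// Clamp to [0,1] and round to one of 4096 levels, as a fixed pipeline might.
static float toFixed12(float x)
{
    x = std::min(std::max(x, 0.0f), 1.0f);
    return static_cast<float>(static_cast<int>(x * 4095.0f + 0.5f)) / 4095.0f;
}

int main()
{
    const float a = 0.3f, b = 0.7f, bias = 1.0f / 255.0f;

    const float fixedResult = toFixed12(toFixed12(a) * toFixed12(b) + bias);
    const float floatResult = a * b + bias;

    std::printf("12-bit fixed: %.6f   float: %.6f\n", fixedResult, floatResult);
    // Anything that expected the quantised value (exact comparisons, carefully
    // tuned bias tricks) sees a slightly different number on a float pipeline.
    return 0;
}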

dorbie
02-06-2003, 04:33 PM
The Inquirer is not the best source of GFX info, although there are some diamonds in the rough there. Some reviewers never seem to forgive the insult of their pet favourite graphics architecture getting slaughtered in the market, and take it out on the guys who give us faster, better, cheaper products with a different architecture that actually works.

It's entertaining to watch them switch sides as the underdog changes; I enjoy the race too. There seems to be a tendency to root for the underdog after a year or two on top, and the spin in the reviews reflects this. I just look at the numbers from several sources and make the features call for myself. There's a lot to be said for being on the sidelines, able to choose the card du jour without all the messy business of convincing people that one is better than the other.

Often reviewers' opinions are colored by personal slights from marketing/PR, and you never get to see that (not talking about the Inquirer). I heard a tale of one engineer at a GFX company punching a very famous reviewer - OK, that's more than a 'slight' - and the guy was totally anti that company from then on. Another incident with the same guy was that he was pissed off with another company because they didn't fly him first class, and he dissed them for ages after that.

Elixer
02-06-2003, 10:52 PM
Some review sites also get pre-production boards and claim that they have 'release' boards. Then when Joe Public gets one and finds out it is missing features (as in the case of a MB), or the speed is wrong (gfx cards), or something else, the reviewer just shrugs it off, since they want to continue to get "free" stuff from them. Both of the sites I mentioned in my other post are guilty of this.

This is nothing new btw, since I can tell you stories of the Savage 4 or the Savage 2000 or even the Rage 128, which all got glowing (paid) "reviews", and what do you know, they found nothing wrong with the drivers... and I could go on and on.

I am so way off topic now though, I will hop back on it for a bit.

I don't think that taking up 2 slots is a major issue; if you choose to go with the Ultra, then you've got cash to burn, and you most likely have a huge case with tons of fans and nothing in the 1st slot next to the AGP card, as Korval said.

What matters to most developers is support (drivers) and feature set; anything extra is a bonus. Just think, at 145°F you can almost cook your hotdogs on the card! http://www.opengl.org/discussion_boards/ubb/wink.gif

Adrian
02-07-2003, 04:32 AM
More rumours about the fx being dumped. http://www.theinquirer.net/?article=7658

dorbie
02-07-2003, 08:38 PM
Sounds great! The NV35 is an FX-derived product and will have better performance and more features (they claim); it has previously been planned for the second half of 2003. The sooner they bring it out the better. It seems like, when they factor in ATI's planned introduction of their next product, they don't expect to sell many of the current FX parts. They're only talking about a ~6 month window for this product against stiff competition from ATI, so this does seem credible. 100,000 units in 6 months, then on to the next big thing.

[This message has been edited by dorbie (edited 02-08-2003).]

V-man
02-08-2003, 07:47 AM
Originally posted by dorbie:
Often reviewers' opinions are colored by personal slights from marketing/PR, and you never get to see that (not talking about the Inquirer). I heard a tale of one engineer at a GFX company punching a very famous reviewer - OK, that's more than a 'slight' - and the guy was totally anti that company from then on. Another incident with the same guy was that he was pissed off with another company because they didn't fly him first class, and he dissed them for ages after that.

Sounds like a cool soap opera.

All reviewers are pretty much influenced by companies. I'm sure they get free offers, perhaps a nice new car, and who knows what else.

It's your job to filter out the crap.

About what the Inquirer said, I was thinking that myself when I read that ATI will be releasing the R350 at the end of Feb 2003, since NV said they would release the NV30 at the beginning of Feb 2003.

I just looked at the prices. The R300 still sells for $600 CAN + ?? Not much of a price drop since its release. $800 would be the price of a brand new tower + 17 inch screen (with a low end video card).
That's a local store. Pricewatch shows much cheaper stuff.

jwatte
02-08-2003, 05:58 PM
All reviewers are pretty much influenced by companies. I'm sure they get free offers, perhaps a nice new car, and who knows what else.


Where do I become a reviewer? Sign me up! :-)

Seriously, I think reviewers get free hardware (to review), and perhaps promo material like T-shirts and party invitations and stuff. However, to get a car you'd probably have to be reviewing something like hedge funds, AND have the ear of half of all the billionaires in the States or something...

As far as I know, pricewatch.com is the official measurement for "street price."

PixelDuck
02-09-2003, 05:12 AM
nVidia looks like a loser if/when it does so, but it is still a good move, at least in the context of their current situation. I don't know about the new features of the R350, but surely if the NV35 adds more capabilities to the original NV30 and a quieter fan, AND is released at most shortly after the R350 goes into production, nVidia might just have a chance of recapturing the market. But I doubt nVidia will be able to be that quick. Personally, I think this whole thing is a big fuss circus, and I'm actually looking forward to what S3 can offer.

Lurking
02-09-2003, 11:34 AM
I would have to say that I'm upset with this "no Ultra" version of the NV30! I have been waiting for it for some time now. I have the first version of the GeForce3 and have been waiting to upgrade. I waited for this card assuming that it would come out soon after the 9700 Pro. I have no idea why nvidia would release the NV30 to reviewers when they knew about the sound issue and such. Now I hear that the NV35 is coming, and I hear about the possible features. Is this another lie? I would prefer not to wait another 4 months to get my hands on a new card, even after the R350 comes out. They had better hurry up and release the specs soon if they want me to put any money down for an NV35!

gib
02-09-2003, 01:02 PM
I don't know much about video cards (I have a GF4 Ti 4400), but this thread has piqued my interest. When I bought the GF4 it seemed to be generally agreed that nVidia support for Linux was the best. How good are the ATi Linux drivers?

Gib

ToolChest
02-10-2003, 09:48 AM
Originally posted by Lurking:
I would have to say that I'm upset with this "no Ultra" version of the NV30! I have been waiting for it for some time now. I have the first version of the GeForce3 and have been waiting to upgrade. I waited for this card assuming that it would come out soon after the 9700 Pro. I have no idea why nvidia would release the NV30 to reviewers when they knew about the sound issue and such. Now I hear that the NV35 is coming, and I hear about the possible features. Is this another lie? I would prefer not to wait another 4 months to get my hands on a new card, even after the R350 comes out. They had better hurry up and release the specs soon if they want me to put any money down for an NV35!

Good point, how do we know this isn't just more bs propaganda from nv to take the steam out of the ati 350 release? The nv35 will probably just end up being released this time next year... meanwhile all the nv owners (self included) have held out waiting.

Lurking
02-10-2003, 02:29 PM
Now it's only a couple of weeks until we find out where my money is going. I love nvidia for its developer and driver support, but the way they are saying "we have the best, but we can't tell you when it's coming out" - or that they cancelled it - has me thinking. So I'm hoping that 2 weeks after the R350 is released, nvidia comes up with the specs for this NV35. I buy a new graphics card every year to year and a half, and I want to make sure that my roughly 400 dollars is going to be well spent. So nvidia, please help us out!

dorbie
02-10-2003, 03:49 PM
Geeze, get over yourselves.

If it weren't for a nice effort from ATI, you'd be mightily impressed by the NV30. If you like it, get it; the reviews are in and it ain't going to be **** canned instantly. It looks to be slightly faster than the current ATI for some interesting stuff, with some really great features. The info (RUMOUR) says there will be 100,000 of the chips made by TSMC. Gainward seems to have a card sans blower on the way.

Are the current cards so awful that you don't want to splurge for them? BFD - wait for the next ATI, or the next NVIDIA. The NV35 has been on the roadmap for a long time; it's not a short-term effort to hurt ATI.
It's strange that you invite NVIDIA to leak their next gen before their current product is on the shelves. I doubt they'd do that just yet. Heck, you're getting your panties in a wad because of a rumoured ATI release, and you want MORE futures?

You just want these guys to make your life easy by giving you a clear leader, and that ain't happening right now. This is a GOOD thing for us, not a BAD thing. Get over it, spin a bottle, flip a coin, buy a card... or wait, who cares. Just don't blame the companies for playing exactly the game we want them to play...leapfrog.

You're able to buy the fastest most featureful graphics systems there are or have ever been for a few hundred bucks and you're still not happy?

Korval
02-10-2003, 06:04 PM
Often reviewers' opinions are colored by personal slights from marketing/PR, and you never get to see that (not talking about the Inquirer). I heard a tale of one engineer at a GFX company punching a very famous reviewer - OK, that's more than a 'slight' - and the guy was totally anti that company from then on. Another incident with the same guy was that he was pissed off with another company because they didn't fly him first class, and he dissed them for ages after that.

I don't really consider that a valid reason for believing or disbelieving a review. All a good review, and a good reviewer, needs to do is present an argument for a particular position. Having read the reviews on Anandtech, I would say that they put up very good arguments. Occasionally they leave out certain facts, or get some things wrong, but that's the exception.

How good a review is is simply based on how good the argument is. If a reviewer is out to get a particular company's products, it's usually easy to spot and disregard; the argument will be full of holes.



I would prefer not to wait another 4 months to get my hands on a new card, even after the R350 comes out. They had better hurry up and release the specs soon if they want me to put any money down for an NV35!


Good point, how do we know this isn't just more bs propaganda from nv to take the steam out of the ati 350 release? The nv35 will probably just end up being released this time next year... meanwhile all the nv owners (self included) have held out waiting.


So I'm hoping that 2 weeks after the R350 is released, nvidia comes up with the specs for this NV35. I buy a new graphics card every year to year and a half, and I want to make sure that my roughly 400 dollars is going to be well spent. So nvidia, please help us out!

The anti-ATi sentiment is strong in this thread. I mean, you have a product in the 9700 Pro that is approximately equal to the competing nVidia product released 5 months later. And yet, here are three intelligent, capable programmers, and none of you even consider picking up a 9700 or an R350 card. It's always about nVidia giving you what you want when you want it, when ATi provided this service to some people months ago. Why is that?

dorbie
02-10-2003, 06:12 PM
Anandtech are one of the better review sites IMHO too.

Yes you can spot the holes in biased reviews if you're equipped to.

I have a 9700, it's the most incredible graphics system I've ever owned. Simply amazing, I highly recommend it, the next version due real soon now should be even better.

FWIW I think the issue for the others is that the GeForce FX merely catches the 9700 Pro, and perhaps noses ahead, instead of spanking it. This gives everyone pause because of the timing. If they're buying NOW, they are thinking they might as well wait on the next ATI, due soon, which they expect to take a decisive lead. Compounding their dilemma is the NV30 'dead in the water' bull5hit/hysteria, because NVIDIA's NV35 is still on the roadmap for later this year.

NV30 may be the shortest lived GFX product from NVIDIA simply because they need to stay on track with NV35 to compete with ATI.

All reasonable, all sensible. But it's all GREAT for us dumb programmers freeloading on their hardware battle. Sit back & enjoy the show, and remember: whatever you buy will be obsolete in short order.


[This message has been edited by dorbie (edited 02-10-2003).]

Ostsol
02-10-2003, 06:51 PM
Actually, if the GeforceFX weren't so late, we'd probably be quite impressed by it.

Adrian
02-10-2003, 06:52 PM
Originally posted by Korval:
none of you even consider picking up a 9700 or an R350 card. Why is that?

The main reason I'm sticking with NVidia ( for the time being) is that they seem to be first with the extensions I find most useful. Point Sprites, occlusion query, VAR(VAO), PDR.

Another reason is that since there are more NV people out there(I think) and I can only test on one card it makes sense to have the type of card most people have.

[This message has been edited by Adrian (edited 02-10-2003).]

Ysaneya
02-11-2003, 12:52 AM
The main reason I'm sticking with NVidia ( for the time being) is that they seem to be first with the extensions I find most useful. Point Sprites, occlusion query, VAR(VAO), PDR.


I guess it depends on the extensions you find most useful, but for one, I loved the 1.4 pixel shader extension on the 8500, and now the 9700 has ARB_fragment_program, which is not available on the GF3/GF4. My point being, there are also some good extensions on ATI, you know...
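
And supporting both vendors' paths isn't a big deal in practice: you just probe the extension string at runtime and branch. A standard GL 1.x style check, sketched from memory (the per-card comments are just examples):

// Token-exact search of the GL extension string (needs a current GL context).
#include <cstring>
#include <GL/gl.h>

bool hasExtension(const char *name)
{
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    if (ext == NULL)
        return false;

    const std::size_t len = std::strlen(name);
    for (const char *p = ext; (p = std::strstr(p, name)) != NULL; p += len) {
        // Avoid substring hits, e.g. one extension name being a prefix of another.
        if ((p == ext || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    }
    return false;
}

// e.g. pick a fragment path per generation:
//   if      (hasExtension("GL_ARB_fragment_program"))   { /* R300 / NV30 */ }
//   else if (hasExtension("GL_ATI_fragment_shader"))    { /* Radeon 8500 */ }
//   else if (hasExtension("GL_NV_register_combiners"))  { /* GF3 / GF4 */ }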



Another reason is that since there are more NV people out there(I think) and I can only test on one card it makes sense to have the type of card most people have.


Probably true, but I'd be interested to see a poll about which video card everybody has... it seems like a lot of programmers have moved to ATI this last year (with good reasons, I have to add).

Y.


[This message has been edited by Ysaneya (edited 02-11-2003).]

PixelDuck
02-13-2003, 09:27 AM
I too have an R9700; I bought it as soon as I knew that the GFFX would only be released at the beginning of 2003 (and as soon as I could get my hands on one). It supplied the minimum requirements of the DX9 specs (for which I mostly code), and it was already available. So why not? It's a good card, with a lot of power and functionality that will last for about another ½-1 years =) And with a copper cooler (or whatever) on the chip, it purrs like a kitten - not that 70 dB noise, the like of which you might otherwise only hear from the heavy-music-listening neighbour who has bought a one-size-too-big sound system http://www.opengl.org/discussion_boards/ubb/wink.gif

Cheers!

ramonr
02-13-2003, 10:02 AM
Originally posted by Ysaneya:
Probably true, but I'd be interested to see a poll about which video card everybody has... it seems like a lot of programmers have moved to ATI this last year (with good reasons, I have to add).


In fact, I have been at nVidia's Dawn To Dusk conference this week, where a GeForce FX card was promised for every attendee, and there they told us they didn't have them available. So we have to wait some weeks and they will send them to us.
Ramon.

ToolChest
02-13-2003, 10:28 AM
Korval

My GF2 is the only video card I've owned that I haven't had any problems with, and yes, I have owned an ATI card a long time ago (All-in-Wonder Rage Pro). I'm sure their stuff is better now, but if you feel you've paid for sh*t before, it stays with you. As for the R300, I might have picked one up last summer if I had been ready to upgrade. Getting one now is pointless; in a few months the R350 will be out, and either the R300 will drop in price or the R350 will be worth the extra money. My concern is that nv will claim that the NV35 will be out at any moment, causing people that prefer nv to wait, and that 'any moment' will probably turn into 12 months.

dorbie
02-13-2003, 12:52 PM
Bummer, I never knew about the Dawn to Dusk thing. The training at these things is more valuable than the card IMHO.

Ah never mind, I see now it was in London.

[This message has been edited by dorbie (edited 02-13-2003).]

pkaler
02-13-2003, 01:18 PM
Originally posted by gib:
I don't know much about video cards (I have a GF4 Ti 4400), but this thread has piqued my interest. When I bought the GF4 it seemed to be generally agreed that nVidia support for Linux was the best. How good are the ATi Linux drivers?


Yeah, I have to laugh about all the driver instability talk. The nVidia drivers for Linux are absolutely solid. I can't remember the last time an app locked up due to a bad driver. It's usually me doing something stupid, and a bunch of glGetError calls usually solves the problem. I can just kill the app anyway. I don't have to worry about BSODs or reboots. I haven't even had to reboot the last x number of times I installed new nVidia drivers. Part of this has to do with the fact that loadable kernel modules and XFree86 are quite stable.
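
For the curious, the "bunch of glGetError calls" is nothing fancy - just a little helper (the name is made up here, and there's nothing Linux- or nVidia-specific about it) dropped after whichever calls look suspicious:

#include <cstdio>
#include <GL/gl.h>

// Drain every pending GL error and report where it was noticed.
// Returns how many errors were seen so the caller can assert on it.
int checkGLErrors(const char *where)
{
    int count = 0;
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        std::fprintf(stderr, "GL error 0x%04X after %s\n", (unsigned)err, where);
        ++count;
    }
    return count;
}

// Typical use while hunting down "something stupid":
//   glBindTexture(GL_TEXTURE_2D, tex);
//   checkGLErrors("glBindTexture");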

I am spoiled now. I get annoyed with XP for having to reboot every time I install or uninstall an app or driver.

Right now I'm doing almost all my coding in Linux. I spend an hour or two at the end of the week to port to Windows. Which is still a win for me in productivity.

That's why I am hesitant to purchase an ATI card. nVidia have made a commitment to Linux and IMHO the jury is still out for ATI's drivers.

IMHO hardware vs hardware is a pretty moot argument for nVidia vs ATI. What I look for is solid drivers and extensions with ease of programming.

Yeah, nVidia has a poop load of proprietary extensions, but their dev support makes up for it.

Nutty
02-13-2003, 01:37 PM
In fact, I have been at nVidia's Dawn To Dusk conference this week, where a GeForce FX card was promised for every attendee, and there they told us they didn't have them available. So we have to wait some weeks and they will send them to us.
Ramon.



hehe! Some things never change. http://www.opengl.org/discussion_boards/ubb/smile.gif It was exactly the same at the Gathering event, where we were supposed to pick up our GF3's. They didn't arrive till months later. Mine was duff too, but it got exchanged.

Nutty

Adrian
02-13-2003, 01:52 PM
Originally posted by dorbie:

Ah never mind, I see now it was in London.


I'm gutted, this is the first I heard of it and I live close to the MOS.



[This message has been edited by Adrian (edited 02-13-2003).]

pocketmoon
02-13-2003, 02:06 PM
Originally posted by ramonr:
In fact, I have been at nVidia's Dawn To Dusk conference this week, where a GeForce FX card was promised for every attendee, and there they told us they didn't have them available. So we have to wait some weeks and they will send them to us.
Ramon.


Yes, but that was more than compensated by the dancing girls http://www.opengl.org/discussion_boards/ubb/smile.gif)

Lots of confusion over FP16 vs FP32 in pixel shaders...

For a taste of the evening... http://homepage.ntlworld.com/pocketmoon/d2d4.jpg

The Nvidia UK guys know how to PARTY!

[This message has been edited by pocketmoon (edited 02-13-2003).]

Adrian
02-13-2003, 02:09 PM
Originally posted by pocketmoon:

Yes, but that was more than compensated by the dancing girls http://www.opengl.org/discussion_boards/ubb/smile.gif)



Damnit, why wasn't it announced here. I'm within walking distance of that place.

Cab
02-14-2003, 01:47 AM
Originally posted by pocketmoon:

Yes, but that was more than compensated by the dancing girls http://www.opengl.org/discussion_boards/ubb/smile.gif)


Are you referring to these girls? http://www.cocodrile.com/~ca66297/album1.html

pocketmoon
02-14-2003, 02:55 AM
Originally posted by Cab:
Are you referring to these girls? http://www.cocodrile.com/~ca66297/album1.html

Yes http://www.opengl.org/discussion_boards/ubb/smile.gif But I see you didn't post the *hot* pics! Let's just say that even fewer clothes were being worn later in the evening...

Cab
02-14-2003, 04:36 AM
Originally posted by pocketmoon:
Yes http://www.opengl.org/discussion_boards/ubb/smile.gif But I see you didn't post the *hot* pics! Let's just say that even fewer clothes were being worn later in the evening...




Ok, ok. I didn't want to hurt anybody http://www.opengl.org/discussion_boards/ubb/rolleyes.gif
There is another one... Not too hard http://www.opengl.org/discussion_boards/ubb/wink.gif http://www.cocodrile.com/~ca66297/DSC00050.JPG



[This message has been edited by Cab (edited 02-14-2003).]

Eric
02-14-2003, 04:46 AM
Originally posted by Adrian:
Damnit, why wasn't it announced here. I'm within walking distance of that place.

I usually talk about these events on these forums when I hear they are organized. But I couldn't attend this one so I didn't bother discussing it.

Funny to see the cards were not available at the event... As Nutty said, this was the case for The Gathering 1. But at the Gathering 2 the GF4 was there waiting for us (mind you it was a GF4 Ti4200 with only 64Mb RAM...).

As dorbie said, although it's nice to get the card at such a low price, the most interesting is certainly the talks. On top of that, I met Nutty at the first event and Tom, MattS, Cass, John (Spitzer) and lots of other people at the second one. Always nice to put a face on a name!

Now as far as the ATI vs NVIDIA battle is concerned, I still have a slight preference for the latter: it may be argued that NVIDIA developer relations are better but that's not the main reason why I am sticking to NVIDIA. The (very stupid) reason is that I have very bad memories of trying to make some of my programs run on ATI-based gfx cards. I am talking of pre-Radeons here so you might say (and you'd be dead right) that I should grow up and look again... I'll probably do that...

Regards,

Eric

Matt Halpin
02-14-2003, 07:51 AM
It's a shame I couldn't get to the 'Dawn to Dusk' event, but it did cost a bit (I believe?). The ATI Mojo day in August was free to developers and they had 9700s there in the lecture hall for us to take away. Very nice. http://www.opengl.org/discussion_boards/ubb/smile.gif

Eric
02-14-2003, 08:05 AM
If ATI still have a spare one I'll take it! http://www.opengl.org/discussion_boards/ubb/wink.gif

I can't remember the price for the NVIDIA event but IIRC it was a third of the announced price for a GeForceFX (and you get the card plus 2 days of technical talks for this ridiculous amount!).

Regards,

Eric

Nutty
02-14-2003, 09:33 AM
The Dawn To Dusk event was well cheap. Only 150 quid!.. If I wasn't soo busy, I'd have definitly gone.

Judging by those photo's I missed an awesome event!.. bah http://www.opengl.org/discussion_boards/ubb/biggrin.gif

dorbie
02-14-2003, 10:33 AM
Hmmm.... really valuable training going on there. I assume it has something to do with using Cg to correctly render flesh tones.

[This message has been edited by dorbie (edited 02-14-2003).]

zed
02-14-2003, 10:49 AM
http://www.cocodrile.com/~ca66297/DSC00015.JPG
Obviously too much for the guy in the middle - what's his hand up to?!

Bugger the girls, I wanna know: was the booze free?
150 quid for all the beer you can drink for 2 days http://www.opengl.org/discussion_boards/ubb/smile.gif

btw, message to nvidia - next time don't hire English girls http://www.opengl.org/discussion_boards/ubb/frown.gif

[This message has been edited by zed (edited 02-14-2003).]

Elixer
02-14-2003, 11:59 AM
Hmm, I can't tell without more pics, do the girls have PCI or AGP slots? http://www.opengl.org/discussion_boards/ubb/wink.gif

PixelDuck
02-14-2003, 12:17 PM
Man, that show must have been a blast http://www.opengl.org/discussion_boards/ubb/wink.gif But I wonder one thing... shouldn't it have been about gfx cards http://www.opengl.org/discussion_boards/ubb/wink.gif

Cheers!

josip
02-14-2003, 01:08 PM
Originally posted by zed:
btw, message to nvidia - next time don't hire English girls http://www.opengl.org/discussion_boards/ubb/frown.gif

lol!