X800 versus GF6800 (and side question of noiseX())

I’m currently borrowing a GF FX5200 card to bootstrap my foray into developing GLSL shaders, but the card is (obviously) not a very good platform, so I’m looking to buy either an X800 Pro or a GF6800 card.

I have browsed old messages on this board, but have come to little conclusion about which card has better support for GLSL features. Some posts I read here say that only the GF6800 can do proper branching (and also proper non-unrolled loops), which even the X800 can’t. However, from reading ATI’s site, it seems that the X800 does have hardware capable of proper flow control and branching (in a DX context; there’s no mention of OpenGL or GLSL).

Is this a driver issue, or is the X800 truly lacking in hardware? Are there any other caveats for either card? What’s the current state of the driver support?

On noiseX(): I’ve been going through the Orange Book and wanted to try implementing the turbulence Sun Surface shader using the built-in noise1() function, but found out that on my setup (FX5200, Detonator 61.77) the function returned 0.0 and nothing but 0.0 for all inputs. Not implemented? Is it likely to be implemented in the future, or should I look into calculating noise myself? (I don’t want to use texture lookups, as I need to surface a large area with animation, so a repeating texture base would be too obvious…)
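If the built-in turns out to be a dead end and I do calculate noise myself, I’m imagining something roughly along these lines: a hash-based value noise with a few octaves summed into turbulence. This is just a sketch of my own; the hash constants are the usual “magic numbers” floating around, nothing from the spec.

```glsl
// Hash-based 2D value noise as a stand-in for the built-in noise1()/
// noise2(), which return 0.0 on my setup.
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

float vnoise(vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);      // smoothstep-style fade

    // Blend the random values at the four surrounding lattice corners.
    float a = hash(i);
    float b = hash(i + vec2(1.0, 0.0));
    float c = hash(i + vec2(0.0, 1.0));
    float d = hash(i + vec2(1.0, 1.0));
    return mix(mix(a, b, u.x), mix(c, d, u.x), u.y);
}

// Turbulence: a few octaves of noise folded around 0.5, unrolled by
// hand since loops are shaky on this hardware anyway.
float turbulence(vec2 p)
{
    float t = 0.0;
    t += abs(vnoise(p)       - 0.5);
    t += abs(vnoise(p * 2.0) - 0.5) * 0.5;
    t += abs(vnoise(p * 4.0) - 0.5) * 0.25;
    t += abs(vnoise(p * 8.0) - 0.5) * 0.125;
    return t;
}
```

For animation I figure I can feed time in as one of the coordinates, or extend the same hash to a vec3. But that’s more instructions again, which brings me to the next question.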

Also, what kind of instruction counts are usually acceptable? One of my fragment shaders is 41 instructions long (according to an internal compiler error dump I get if I stick “discard;” in as the first thing after main(); it does Phong shading, multipass texturing, and a couple of other tricks) and runs very slowly on this card (not really surprising). But what kind of performance can I expect from relatively good cards in this regard, with large areas shaded by the fragment shader?

The R420 architecture (that powers the X800) is incapable of performing branching operations in the fragment program, unless you write specialized code for it. This is not a driver issue; this is a hardware issue.

The 6800 is capable of branching in hardware, but current nVidia drivers do not yet expose this through glslang.
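For concreteness, the kind of fragment code in question is a data-dependent branch like the following (a trivial sketch of my own, not something from either vendor’s documentation):

```glsl
varying vec3 normal;
varying vec3 lightDir;
varying vec3 viewDir;

void main()
{
    float nDotL = dot(normalize(normal), normalize(lightDir));
    vec3 color = vec3(0.05);                    // ambient term

    // Data-dependent branch: on hardware without real fragment
    // branching, the compiler evaluates both paths and selects a
    // result, so the lit path costs you on every fragment anyway.
    if (nDotL > 0.0)
    {
        vec3 h = normalize(normalize(lightDir) + normalize(viewDir));
        float spec = pow(max(dot(normalize(normal), h), 0.0), 32.0);
        color += vec3(nDotL) + vec3(spec);
    }

    gl_FragColor = vec4(color, 1.0);
}
```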

As for instruction counts, the FX offered something like 512 or 1024 fragment opcodes, so the 6800 will offer at least that. The R420 can offer something like 1536 opcodes.

How many opcodes you can reasonably use depends greatly on your rendering. If you’re not rendering a lot of pixels (i.e. low resolution and no anti-aliasing), then you can use plenty of opcodes. However, if you’re trying to run at 1600x1200 or something, you might want to consider a “depth-first” pass where you render everything but write only to the depth buffer (using a null fragment program). After the depth-first pass, you render everything as normal. This prevents you from wasting potentially hundreds of opcodes on pixels that will be culled, and hardware tends to be pretty fast at rendering only to the depth buffer.
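As a rough sketch of that setup, assuming a GLSL path: the “null” program’s fragment stage is just an empty main(), and the host-side state changes for the two passes are noted in the comments (the exact depth-func choice is up to you).

```glsl
// "Null" fragment shader for the depth-only pass.  Writing no color is
// fine because color writes are masked off on the host side:
//
//   Pass 1 (depth only):
//     glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
//     glDepthMask(GL_TRUE);   glDepthFunc(GL_LESS);
//     draw the scene with this trivial program bound
//
//   Pass 2 (normal shading):
//     glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
//     glDepthMask(GL_FALSE);  glDepthFunc(GL_EQUAL);   // or GL_LEQUAL
//     draw the scene again with the real, expensive shaders
//
void main()
{
    // Depth comes from rasterization; nothing else is needed here.
}
```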

To a developer I would definitely advise buying a GeForce 6, because it simply has a lot of nice features that the X800 doesn’t.

A gamer might be more satisfied with an X800, because it is quieter, needs less power, and is smaller and lighter (and maybe faster, but that depends on the review you read :wink: ).

Jan.

I would get the 6800 because NV drivers are much better at OpenGL right now. ATI is supposedly rewriting their OpenGL drivers, but I think they have only rewritten the parts that Doom 3 uses, to compete with NV hardware. Also, if you are getting a plain 6800, check how many vertex units it has; I think by default it should have 12 pixel pipes and 6 vertex units (I’m not sure of the exact pixel-unit count, and the vertex-unit count may apply to the GT as well). The GT and Ultra have 16 pixel pipes. The only reason I won’t buy an ATI card right now is the constant (almost never-ending) end-user complaints about their OpenGL and Linux drivers. It has been posted at Rage3D that ATI doesn’t have enough resources right now, so they’re skimping on GL in favor of D3D.

Thank you for the responses. I’ll get a GF 6800 model then.

Originally posted by Toni Ylisirniö:
Thank you for the responses. I’ll get a GF 6800 model then.
I believe that’s a wise decision, but then again I also believe you can’t go wrong with either model, since they both perform wickedly well.
I got myself a GeForce 6800 because of the incredible range of extensions it supports.
I’m more than happy with my purchase now that I can run demos dating back to the GF2 era that I had thought only possible through today’s modern shaders.
I remember talking to some engineers at the Nvidia booth at a past SIGGRAPH about their register combiners; needless to say, the boys were pretty proud of what they had brought forth to the industry at that time :smiley:

Before the release of the 6800, I would have recommended the ATI cards over the NVIDIA cards. With the 6800 out, however, I’d currently recommend the 6800 GT. Why not the Ultra? Well, the Ultra is hot, large, heavy, takes two slots, and is usually quite loud. The GT… isn’t, and it’s often only 10% or so slower than the Ultra.

You’re not going to go entirely wrong with an X800 either, but it’s not the reigning king that the 9700 and 9800 were for the longest time. (The Mobility 9700/9800 still seem to outpace the Go 5200 and 5600, though, if you’re going mobile.)
