GeForce4 extensions

Hi Guys!

Does anybody know what new OpenGL extensions are available on the GeForce4?

Besides performance, what is the real difference between the GeForce3 and 4?

The same extensions as the GeForce3, as long as the drivers do not expose new ones…

Julien.

There’s a list in this review:
http://www.digit-life.com/articles/gf4/index1.html

Personally I’m a little disappointed with the GF4; it doesn’t seem to offer anything new to be excited about.
A decent speed bump and slightly better shaders, but still not as flexible as the Radeon 8500. It feels like I would be downgrading if I swapped my Radeon 8500 for a GF4.

Originally posted by Humus:
It feels like I would be downgrading if I swapped my Radeon 8500 for a GF4.

Correct me if I am wrong, but you’d get a large speed increase (I quickly read Tom’s Hardware review).

The shaders on the GF4 may be less flexible than the Radeon’s, but apps hardly use the power of the GF3 anyway, so I am wondering whether ATI didn’t jump too far ahead.

Well, we’ll see what happens !

Regards.

Eric

Sure, there aren’t many apps taking advantage of shaders, but after working for a while with fragment shaders, everything else feels painfully limited.
There was also some talk that 64-bit rendering might see the light of day with the GF4, but that doesn’t seem to be the case. All in all it seems like a small step, like GF to GF2, but I thought the Ti 200/500 series were supposed to be the refresh part and the GF4 the new revolutionary part. It’s been a year since the GF3 was released; I was hoping for more progress in that time.

I see what you mean. Perhaps they’re hiding something that will be exposed in later drivers ! Or perhaps they ran out of ideas ! (can’t believe this one though…).

Regards.

Eric

P.S.: I have to take a closer look at the specs of the new beast !

[This message has been edited by Eric (edited 02-07-2002).]

From what I read, the GeForce4 MX is actually less powerful feature-wise than the GeForce3: no vertex shaders and no pixel shaders.
It’s more like a GeForce2 with more fill rate.

Originally posted by Pentagram:
From what I read, the GeForce4 MX is actually less powerful feature-wise than the GeForce3: no vertex shaders and no pixel shaders.
It’s more like a GeForce2 with more fill rate.

That’s right.

But I think Humus was talking about the Ti series (the real new one !).

Regards.

Eric

Yeah, I was talking about the Ti 4x00 series. Regarding the GF4 MX series, well, while the price/performance ratio is all nice and there’s nothing wrong with the card, especially not with the low price tag, I do think the name is badly chosen. The average newbie will walk into the store, think he’s found an a$$-cheap GF4, take it home and launch 3DMark just to find the Nature demo “not supported by hardware” along with the shader tests etc. It’s more like a GF2.5, but the average Joe will think it has more features than a GF3.

Humus,

IMO the GF4 has enough features to compete with the Radeon 8500 in terms of shaders. Of course you might sometimes need 2-3 passes on the GF3/4 instead of 1 pass on the Radeon 8500.
There are not a lot of effects that can’t be done on the GF4 but can on a R8500.

NVIDIA understood quite well that speed doesn’t depend on the texturing rate, but on the memory bandwidth.
Of course two passes use more bandwidth than one, but not by much. NVIDIA went for a “lighter” architecture that is more efficient in its memory bandwidth usage.

I’m sure the GF4 is way faster than the R8500 even when it has to do more passes to achieve the same effect.
Carmack already pointed out this problem when he went from the GF2 to the GF3: decreasing the number of passes didn’t affect the speed much, everything being limited by memory bandwidth.

NVIDIA explained this in one interview. It seems like a logical choice to me. (but could be total BS)
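
For reference, the multipass fallback being discussed is just the usual “draw the geometry again and add it in” pattern, which also shows where the extra bandwidth goes: the geometry is resubmitted and the framebuffer gets an extra read-modify-write per pixel. A minimal sketch (drawScene() is a hypothetical function that submits the geometry with whatever texture setup each pass needs):

[code]
#include <GL/gl.h>

extern void drawScene(void);   /* hypothetical: submits the geometry for one pass */

void drawTwoPass(void)
{
    /* Pass 1: base term, also fills the depth buffer. */
    drawScene();

    /* Pass 2: the extra term that did not fit into the first pass. */
    glDepthFunc(GL_EQUAL);        /* only touch the pixels laid down in pass 1 */
    glDepthMask(GL_FALSE);        /* depth is already correct, don't rewrite it */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  /* add this pass's contribution to the framebuffer */
    drawScene();

    /* Restore state. */
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
[/code]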

Has anybody heard anything about NV_texture_shader3 and NV_vertex_program1_1?

Seems like there are in fact new elements for the GeForce4.
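
The easiest way to find out for sure is to dump the extension string once the new drivers are installed and look for the tokens. A quick sketch in plain C (needs a current GL context; the careful substring check avoids matching a prefix of a longer extension name):

[code]
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Returns 1 if 'name' appears as a complete token in GL_EXTENSIONS. */
static int has_extension(const char *name)
{
    const char *all = (const char *) glGetString(GL_EXTENSIONS);
    const char *p = all;
    size_t len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL) {
        if ((p == all || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return 1;             /* matched a whole token */
        p += len;
    }
    return 0;
}

void check_gf4_extensions(void)
{
    printf("GL_NV_texture_shader3:   %s\n",
           has_extension("GL_NV_texture_shader3") ? "yes" : "no");
    printf("GL_NV_vertex_program1_1: %s\n",
           has_extension("GL_NV_vertex_program1_1") ? "yes" : "no");
}
[/code]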

In DX8, the GF4 has several new pixel shader ops (in ps 1.2 and 1.3), which all seem to mirror what one can already do with a GF3 under GL with texture shaders.

Seems strange that NV chose not to expose important pixel features (e.g. dot-depth-replace) in DX8, but did expose them in GL.
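
For those who haven’t played with it, the GL side of dot-product depth replace is just a short texture shader stage chain. A rough sketch of my own (not a complete demo; it assumes a texture on unit 0 whose RGBA encodes the vector to dot with the (s,t,r) texcoords of units 1 and 2, and glActiveTextureARB has to be fetched through the usual extension-loading mechanism on Windows):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* NV_texture_shader tokens */

void setup_depth_replace(void)
{
    /* Stage 0: plain 2D fetch that provides the vector for the dot products. */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    /* Stage 1: dot product of this stage's (s,t,r) with the stage-0 result. */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    /* Stage 2: second dot product; the fragment's window-space depth gets
       replaced from the two dot product results (see the NV_texture_shader
       spec for the exact formula). */
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
              GL_DOT_PRODUCT_DEPTH_REPLACE_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);
}
[/code]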

Hope the people who raise the “DX_is_more_advanced_than_GL / GL_is_doomed_until_v2.0” argument notice these sorts of things.

Did anybody understand how exactly the new GF4 FSAA is superior to the one in the GF3? To me it still looks like two samples per pixel, offset from each other by (0.5, 0.5). OK, so the texture sampling point was moved by (0.25, 0.25), but that doesn’t seem like an enormous improvement to me. For example, for untextured polys the result is the same. Did I miss something here?
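
One concrete thing you can check is what the driver actually reports through ARB_multisample for each AA setting. It only tells you the stored sample count, not where the samples sit or how the downfilter combines them (which is exactly the part in question), but it rules out some guesses. A minimal sketch, assuming a context created with a multisample pixel format:

[code]
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_multisample tokens */

/* Prints how many multisample buffers and samples the current mode really uses. */
void report_multisample(void)
{
    GLint buffers = 0, samples = 0;
    glGetIntegerv(GL_SAMPLE_BUFFERS_ARB, &buffers);
    glGetIntegerv(GL_SAMPLES_ARB, &samples);
    printf("sample buffers: %d, samples per pixel: %d\n", buffers, samples);
}
[/code]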

Originally posted by GPSnoopy:
[b]Humus,

IMO the GF4 has enough features to compete with the Radeon 8500 in terms of shaders. Of course you might sometimes need 2-3 passes on the GF3/4 instead of 1 pass on the Radeon 8500.
There are not a lot of effects that can’t be done on the GF4 but can on a R8500.

NVIDIA understood quite well that speed doesn’t depend on the texturing rate, but on the memory bandwidth.
Of course two passes use more bandwidth than one, but not by much. NVIDIA went for a “lighter” architecture that is more efficient in its memory bandwidth usage.

I’m sure the GF4 is way faster than the R8500 even when it has to do more passes to achieve the same effect.
Carmack already pointed out this problem when he went from the GF2 to the GF3: decreasing the number of passes didn’t affect the speed much, everything being limited by memory bandwidth.

NVIDIA explained this in one interview. It seems like a logical choice to me. (but could be total BS)[/b]

Well, I’m not especially concerned about speed. The Radeon 8500 gives me all the performance I need right now; I could live with GF3 Ti 200 speed or below too. However, the flexibility the fragment shaders give me is invaluable. Arbitrary dependent texture reads are very useful and allow you to do some really cool stuff, like varying the specular exponent across a surface. I’ll be putting out a demo later tonight on my site using DTR to create a hot-air effect around a fire.

Another thing: the Radeon 8500 has a range of [-8, 8] in the shaders, which can really enhance the output, as illustrated by the two screenshots in the middle of this page from an engine I’m working on: http://hem.passagen.se/emiper/3d/GameEngine.html
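
For anyone who hasn’t looked at ATI_fragment_shader yet, a dependent read is literally just sampling with a register as the coordinate source in the second pass. Here’s a rough sketch of my own (not the shader from the demo; the names are from the GL_ATI_fragment_shader spec, and the gl*ATI entry points have to be fetched through the usual extension-loading mechanism):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* ATI_fragment_shader tokens */

/* Sketch: use the texture on unit 0 as coordinates into a 2D lookup table on
   unit 1 -- e.g. a per-texel specular exponent run through a power table. */
void setup_dependent_read_shader(void)
{
    GLuint shader = glGenFragmentShadersATI(1);
    glBindFragmentShaderATI(shader);
    glBeginFragmentShaderATI();

    /* First pass: fetch the coordinate-providing texture with its own texcoords. */
    glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
    /* A dummy move, so the next SampleMap starts the second (dependent) pass. */
    glColorFragmentOp1ATI(GL_MOV_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                          GL_REG_0_ATI, GL_NONE, GL_NONE);

    /* Second pass: the dependent read -- REG_0 supplies the texcoords for unit 1. */
    glSampleMapATI(GL_REG_1_ATI, GL_REG_0_ATI, GL_SWIZZLE_STR_ATI);
    /* Modulate the lookup result with the interpolated color; REG_0 is the output. */
    glColorFragmentOp2ATI(GL_MUL_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                          GL_REG_1_ATI, GL_NONE, GL_NONE,
                          GL_PRIMARY_COLOR_ARB, GL_NONE, GL_NONE);

    glEndFragmentShaderATI();
    glEnable(GL_FRAGMENT_SHADER_ATI);
}
[/code]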

>>I do think the name is badly chosen. The average newbie will walk into the store, think he’s found an a$$-cheap GF4, take it home and launch 3DMark just to find the Nature demo “not supported by hardware” along with the shader tests etc. It’s more like a GF2.5, but the average Joe will think it has more features than a GF3.<<

True, that’s my reaction as well; you would expect it to support at least all the features of a GF3 (though maybe slower).
I had a similar reaction with my new GF2 MX200. I brought it home and got a huge shock: it performs glReadPixels slower than my old budget Vanta. glDrawPixels, though, is about 60x quicker (though that’s of no use to me, since I never use it, unlike glReadPixels). Still, I’m not complaining too much; for everything except glReadPixels it slaughters the Vanta.

I am really disappointed with the release of the GeForce4 Go and the GeForce4 MX. They already announced the GeForce4 Go once, under the name “NV17”, and now they’re just renaming it.

nVIDIA have been pushing vertex shaders for a long time, and I had HIGH hopes that we’d see hardware vertex shaders become mainstream by Christmas this year. However, with the nForce not having them, and 7/8 of all GeForce4s not having them, it’ll remain an esoteric feature and I’ll keep designing for the GeForce2 feature set.

No, I do not like the software emulation; my CPU is busy doing other things, thank you very much.

Yeah, same here. I think it’s the first NV release I can remember that was actually a disappointment. And yes, the naming scheme is terrible.

It’s also a bad omen for GL 2.0. It looks as though top-end hardware won’t have full GL2 support until probably the generation after next at best; figure another year and a half. (Though 3Dlabs may be sooner.) And if the GF4 MX is an indicator of their segmentation plan, it won’t reach the mass market in new machines for another 2-3 generations after that. By that point most people will have at least one foot off the upgrade treadmill, so user-base penetration will have slowed down a lot.

How long before GL2 apps have any chance of reaching a mainstream audience?

It’s just a speed bump, OK, but it’s still very nice, and should be very good for anything using 4 textures, for example.

Remember that the MX is an entry-level part with 64 MB of RAM, and the mid-to-high end now has 128 MB of graphics memory as standard. That’s a very nice development.

Seems to me like we’ve all become a bit spoiled lately. Good things are still on the way, all we have to do is have fun programming these things while we watch NVIDIA and ATI try to knock the stuffing out of each other. It’s great sport.

I wouldn’t assume that the next product rev is a whole year away. I expect ATI and NVIDIA to have a new generation of products on the shelves before Christmas.

I have a LOT of respect for nVIDIA’s hardware, quality, and support. Moving people towards 128 MB cards is definitely a step in the right direction. I am always running out of texture memory.

However, pushing a card without vertex/pixel shader capability as the new low-end hardware is a step in the wrong direction. It will definitely have the effect of holding up development of DirectX 8-level apps for the next year.

The new naming convention is also terribly misleading. It destroys the product name and confuses people who aren’t in the know. I can no longer give system requirements as “this program requires a GeForce4 or better card” because some of the GeForce4s are really just GeForce2.5s. It would seem much more logical to me if the GeForce number referred to a core set of capabilities and the MX or Ti label referred to clock speed.

Why not just introduce the Ti series of the GeForce4 and drop prices on the GeForce3? What void are these new MX cards filling that was not already covered by the GeForce2 or 3?

So anyway, how can I get my hands on this wolf-man demo?

– Zeno

Originally posted by MikeC:
[b]Yeah, same here. I think it’s the first NV release I can remember that was actually a disappointment. And yes, the naming scheme is terrible.

It’s also a bad omen for GL 2.0. It looks as though top-end hardware won’t have full GL2 support until probably the generation after next at best; figure another year and a half. (Though 3Dlabs may be sooner.) And if the GF4 MX is an indicator of their segmentation plan, it won’t reach the mass market in new machines for another 2-3 generations after that. By that point most people will have at least one foot off the upgrade treadmill, so user-base penetration will have slowed down a lot.

How long before GL2 apps have any chance of reaching a mainstream audience?
[/b]

It’s not a bad omen for OGL2.0 because OGL2.0 is not ready yet.

NVIDIA have to make that low-end board for around $30 (my guess), so what you suggest may not be possible. On top of this they need to justify their overall price structure. They may think that performance alone does not do that, or they may have no choice. You guys talk about this stuff as if it were easy to just do what you are asking here. To drive the MX pricing you can bet NVIDIA are forecasting huge volumes for it and going out on a limb as it is. Before you complain too bitterly, take a look around and see if any other product in that category is even close to it.

[This message has been edited by dorbie (edited 02-07-2002).]