GForce4 extensions



mardn
02-06-2002, 11:14 PM
Hi Guys!

Does anybody know what new OpenGL extensions
are available on the GeForce4?

Besides performance, what is the real difference between the GeForce3 and the GeForce4?

Julien Cayzac
02-06-2002, 11:24 PM
The same extensions as the GeForce3, as long as the drivers do not expose new ones...
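
For what it's worth, the only reliable way to tell is to ask the driver at runtime. A minimal sketch of checking the extension string (the name tested at the end is just an example, not a confirmed GeForce4 addition):

#include <GL/gl.h>
#include <string.h>

/* Returns 1 if 'name' appears as a complete token in the GL_EXTENSIONS
   string. Requires a current GL context. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);

    while (ext && *ext) {
        const char *hit = strstr(ext, name);
        if (!hit)
            return 0;
        /* match whole tokens only, not prefixes of longer names */
        if ((hit == ext || hit[-1] == ' ') &&
            (hit[len] == ' ' || hit[len] == '\0'))
            return 1;
        ext = hit + len;
    }
    return 0;
}

/* usage, once a context is current:
   if (has_extension("GL_NV_texture_shader3")) ...  */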

Julien.

Humus
02-06-2002, 11:39 PM
There's a list in this review:
http://www.digit-life.com/articles/gf4/index1.html

Personally I'm a little disappointed with the GF4, it doesn't seem to offer anything new to be excited about http://www.opengl.org/discussion_boards/ubb/frown.gif
A decent speed bump and slightly better shaders, but still not as flexible as the Radeon 8500. It feels like I would be downgrading if I changed my Radeon 8500 to a GF4.

Eric
02-07-2002, 12:54 AM
Originally posted by Humus:
It feels like I would be downgrading if I changed my Radeon 8500 to a GF4.

Correct me if I am wrong, but you'd get a large speed increase (I quickly read Tom's Hardware review).

The shaders on the GF4 may be less flexible than the Radeon ones, but apps hardly use the power of the GF3 anyway, so I am wondering whether ATI didn't jump too far ahead.

Well, we'll see what happens!

Regards.

Eric

Humus
02-07-2002, 03:01 AM
Sure, there aren't many apps taking advantage of shaders, but I am http://www.opengl.org/discussion_boards/ubb/smile.gif After working for a while with fragment shaders, everything else feels painfully limited.
There was also some talk that 64-bit rendering might see the light of day with the GF4, but that doesn't seem to be the case. All in all it seems like a small step, like GF to GF2, but I thought the Ti 200/500 series was supposed to be the refresh part and the GF4 a new, revolutionary part. It's been a year since the GF3 was released; I was hoping for more progress in that time.

Eric
02-07-2002, 03:27 AM
I see what you mean. Perhaps they're hiding something that will be exposed in later drivers! Or perhaps they ran out of ideas http://www.opengl.org/discussion_boards/ubb/wink.gif! (can't believe this one though...).

Regards.

Eric

P.S.: I have to take a closer look at the specs of the new beast!


[This message has been edited by Eric (edited 02-07-2002).]

Pentagram
02-07-2002, 04:18 AM
From what I read, the GeForce4 MX is actually less powerful feature-wise than the GeForce3: no vertex shaders and no pixel shaders.
It's more like a GeForce2 with more fill rate.

Eric
02-07-2002, 04:28 AM
Originally posted by Pentagram:
From what I read, the GeForce4 MX is actually less powerful feature-wise than the GeForce3: no vertex shaders and no pixel shaders.
It's more like a GeForce2 with more fill rate.

That's right.

But I think Humus was talking about the Ti series (the real new one!).

Regards.

Eric

Humus
02-07-2002, 07:00 AM
Yeah, I was talking about the Ti 4x00 series. Regarding the GF4 MX series, well, while the price/performance ratio is nice and there's nothing wrong with the card, especially not at that low price tag, I do think the name is badly chosen. The average newbie will walk into the store, think he's found an a$$-cheap GF4, take it home and launch 3DMark, only to find the Nature demo "not supported by hardware" along with the shader tests, etc. It's more like a GF2.5, but the average Joe will think it has more features than a GF3.

GPSnoopy
02-07-2002, 07:32 AM
Humus,

IMO the GF4 has enough features to compete with the Radeon 8500 in terms of shaders. Of course, you might sometimes need to do things in 2-3 passes on the GF3/4 instead of 1 pass on the Radeon 8500.
There are not a lot of effects that can't be done on the GF4 (but can be on an R8500).

NVIDIA understood quite well that speed doesn't depend on the texturing rate, but on the memory bandwidth.
Of course two passes use more bandwidth than one, but not by much. NVIDIA went for a "lighter" architecture that is more efficient in its memory bandwidth usage.

I'm sure the GF4 is way faster than the R8500 even when it has to do more passes to achieve the same effect.
Carmack already pointed out this problem when he went from the GF2 to the GF3: decreasing the number of passes didn't affect the speed much, since everything was limited by memory bandwidth.

NVIDIA explained this in an interview. It seems like a logical choice to me (but it could be total BS).
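
Just to put rough numbers on it (all the figures below are assumptions for illustration, not measurements): an extra blended pass mostly re-reads and re-writes the colour and depth buffers for the pixels it touches, so its cost can be ballparked like this:

#include <stdio.h>

/* Very rough framebuffer-traffic estimate for one extra blended pass.
   Assumed numbers only; real hardware adds texture fetches, caching
   and bandwidth-saving tricks on top of this. */
int main(void)
{
    const double width = 1024.0, height = 768.0, fps = 60.0;
    const double color_bytes = 4.0;   /* RGBA8 */
    const double depth_bytes = 4.0;   /* 24-bit Z + 8-bit stencil */

    /* one blended pass: read Z, read colour, write colour */
    double per_pixel  = depth_bytes + 2.0 * color_bytes;
    double per_second = width * height * per_pixel * fps;

    printf("extra pass: roughly %.2f GB/s of framebuffer traffic\n",
           per_second / 1e9);
    return 0;
}

Against the several GB/s these cards have to play with, that is noticeable but not crippling, which is the point being made above.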

Mahjii
02-07-2002, 08:04 AM
Has anybody heard anything about NV_texture_shader3 and NV_vertex_program1_1?

Seems like there are in fact new elements for the GeForce4.

In DX8, the GF4 has several new pixel-shader ops (ps 1.2, 1.3), which all seem to mirror what one can already do with a GF3 under GL with texture shaders.

Seems strange that NV chose not to expose important pixel features (e.g. dot-depth-replace) in DX8, but did expose them in GL.

Hope the people who raise the "DX_is_more_advanced_than_GL/GL_is_doomed_until_v2.0" argument notice these sorts of things.

Moshe Nissim
02-07-2002, 11:52 AM
Did anybody understand how exactly the new GF4 FSAA is superior to the one in the GF3? To me it still looks like two samples per pixel, offset from each other by (0.5, 0.5). OK, so the texture sampling was moved by (0.25, 0.25), but that doesn't seem like an enormous improvement to me. For example, for untextured polys the result is the same. Did I miss something here?

Humus
02-07-2002, 11:54 AM
Originally posted by GPSnoopy:
Humus,

IMO the GF4 has enough features to compete with the Radeon 8500 in terms of shaders. Of course, you might sometimes need to do things in 2-3 passes on the GF3/4 instead of 1 pass on the Radeon 8500.
There are not a lot of effects that can't be done on the GF4 (but can be on an R8500).

NVIDIA understood quite well that speed doesn't depend on the texturing rate, but on the memory bandwidth.
Of course two passes use more bandwidth than one, but not by much. NVIDIA went for a "lighter" architecture that is more efficient in its memory bandwidth usage.

I'm sure the GF4 is way faster than the R8500 even when it has to do more passes to achieve the same effect.
Carmack already pointed out this problem when he went from the GF2 to the GF3: decreasing the number of passes didn't affect the speed much, since everything was limited by memory bandwidth.

NVIDIA explained this in an interview. It seems like a logical choice to me (but it could be total BS).

Well, I'm not especially concerned about speed. The Radeon 8500 gives me all the performance I need right now; I could live with GF3 Ti 200 speed or below, too. However, the flexibility the fragment shaders give me is invaluable. Arbitrary dependent texture reads are very useful and let you do some really cool stuff, like varying the specular exponent across a surface. I'll be putting out a demo later tonight on my site using DTR to create a hot air effect around a fire. Another thing: the Radeon 8500 has a range of [-8, 8] in the shaders, which can really enhance the output, as illustrated by the two screenshots in the middle of this page from an engine I'm working on: http://hem.passagen.se/emiper/3d/GameEngine.html
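
For anyone who hasn't played with it: a dependent texture read just means that a value fetched from one texture becomes the lookup coordinate for another. In plain C the varying-specular-exponent idea boils down to something like this (a conceptual sketch only; the actual Radeon 8500 path goes through the ATI fragment shader extension, which isn't shown here):

#include <math.h>

/* Conceptual per-pixel shading with a spatially varying specular exponent.
   'gloss_map' plays the role of the first texture; the pow() evaluation is
   what the hardware implements as a dependent read into a second texture.
   Assumes a square gloss map of side 'tex_w'. */
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

float specular_term(vec3 normal, vec3 half_vec,
                    const float *gloss_map, int tex_w, float u, float v)
{
    /* first fetch: the specular exponent varies across the surface */
    int tx = (int)(u * (tex_w - 1));
    int ty = (int)(v * (tex_w - 1));
    float exponent = gloss_map[ty * tex_w + tx];   /* e.g. 2..64 */

    /* "dependent" lookup: (N.H, exponent) would index a pow() texture */
    float ndoth = dot3(normal, half_vec);
    if (ndoth < 0.0f) ndoth = 0.0f;

    return powf(ndoth, exponent);
}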

zed
02-07-2002, 12:14 PM
>>I do think the name is badly chosen. The average newbie will walk into the store, think he's found an a$$-cheap GF4, take it home and launch 3DMark, only to find the Nature demo "not supported by hardware" along with the shader tests, etc. It's more like a GF2.5, but the average Joe will think it has more features than a GF3.<<

True, that's my reaction as well; you would expect it to support at least all the features of a GF3 (though maybe slower).
I had a similar reaction with my new GF2 MX200. I brought it home and got a huge shock: it performs glReadPixels slower than my old budget Vanta. WTF? glDrawPixels, though, is about 60x quicker (not that that's of any use to me, since I never use it, unlike glReadPixels). Still, I'm not complaining too much; for everything except glReadPixels it slaughters the Vanta.
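
If anyone wants to reproduce that comparison, here is a rough way to time a full-frame readback (assumes a current GL context; clock() is coarse, so average over many frames for real numbers):

#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Rough timing of a full-frame glReadPixels; w and h are the window size. */
void time_readback(int w, int h)
{
    unsigned char *buf = malloc((size_t)w * h * 4);
    if (!buf)
        return;

    glFinish();                 /* don't measure previously queued rendering */
    clock_t t0 = clock();
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    clock_t t1 = clock();

    printf("glReadPixels(%dx%d): %.1f ms\n", w, h,
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC);
    free(buf);
}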

jwatte
02-07-2002, 12:43 PM
I am really disappointed with the release of the GeForce4 Go and the GeForce4 MX. They already announced the GeForce4 Go once, under the name "NV17", and now they're just renaming it.

nVIDIA have been pushing vertex shaders for a long time, and I had HIGH hopes that we'd see hardware vertex shaders become mainstream by Christmas this year. However, with the nFORCE not having them, and 7/8 of all GeForce4s not having them, it'll remain an esoteric feature and I'll keep designing for the GeForce2 feature set.

No, I do not like the software emulation; my CPU is busy doing other things, thank you very much.

MikeC
02-07-2002, 05:14 PM
Yeah, same here. I think it's the first NV release I can remember that was actually a disappointment. And yes, the naming scheme is terrible.

It's also a bad omen for GL 2.0. It looks as though top-end hardware won't have full GL2 support until probably the generation after next at best; figure another year and a half. (Though 3Dlabs may be sooner.) And if the GF4 MX is an indicator of their segmentation plan, it won't reach the mass market in *new* machines for another 2-3 generations after that. By that point most people will have at least one foot off the upgrade treadmill, so user-base penetration will have slowed down a lot.

How long before GL2 apps have any chance of reaching a mainstream audience?

dorbie
02-07-2002, 05:40 PM
It's just a speed bump, OK, but it's still very nice, and it should be very good for anything using 4 textures, for example.

Remember that the MX is an entry-level system with 64 MB of RAM, and the mid to high end now has 128 MB of graphics memory as standard. That's a very nice development.

Seems to me like we've all become a bit spoiled lately. Good things are still on the way, all we have to do is have fun programming these things while we watch NVIDIA and ATI try to knock the stuffing out of each other. It's great sport.

I wouldn't assume that the next product rev is a whole year away. I expect ATI and NVIDIA to have a new generation of products on the shelves before Christmas.

Zeno
02-07-2002, 06:54 PM
I have a LOT of respect for nVIDIA's hardware, quality, and support. Moving people towards 128 MB cards is definitely a step in the right direction. I am always running out of texture memory.

However, pushing a card with no vertex or pixel shaders as the new low-end hardware is a step in the wrong direction. It will definitely have the effect of holding up development of DirectX 8-level apps for the next year.

The new naming convention is also terribly misleading. It destroys the product name and confuses people who aren't in the know. I can no longer give system requirements as "this program requires a GeForce4 or better card" because some of the GeForce4s are really just GeForce2.5s. It would seem much more logical to me if the GeForce number referred to a core set of capabilities and the MX or Ti label referred to clock speed.

Why not just introduce the Ti series of the GeForce4 and drop prices on the GeForce3? What void are these new MX cards filling that was not already covered by the GeForce2 or 3?

So anyway, how can I get my hands on this wolf-man demo? http://www.opengl.org/discussion_boards/ubb/smile.gif

-- Zeno

Gorg
02-07-2002, 07:15 PM
Originally posted by MikeC:
Yeah, same here. I think it's the first NV release I can remember that was actually a disappointment. And yes, the naming scheme is terrible.

It's also a bad omen for GL 2.0. It looks as though top-end hardware won't have full GL2 support until probably the generation after next at best; figure another year and a half. (Though 3Dlabs may be sooner.) And if the GF4 MX is an indicator of their segmentation plan, it won't reach the mass market in *new* machines for another 2-3 generations after that. By that point most people will have at least one foot off the upgrade treadmill, so user-base penetration will have slowed down a lot.

How long before GL2 apps have any chance of reaching a mainstream audience?


It's not a bad omen for OGL2.0 because OGL2.0 is not ready yet.

dorbie
02-07-2002, 07:43 PM
NVIDIA have to make that low-end board for around $30 (my guess), so what you suggest may not be possible. On top of this they need to justify their overall price structure. They may think that performance alone does not do that, or they may have no choice. You guys talk about this stuff as if it were easy just to do what you are asking here. To drive the MX pricing you can bet NVIDIA are forecasting huge volumes for it and going out on a limb as it is. Before you complain too bitterly, take a look around and see if any other product in that category is even close to it.

[This message has been edited by dorbie (edited 02-07-2002).]

jwatte
02-07-2002, 09:32 PM
> I wouldn't assume that the next product
> rev is a whole year away. I expect ATI
> and NVIDIA to have a new generation of
> products on the shelves before Christmas.

On the shelves by Christmas? I'm sure there will be an NV30, probably released in August or something.

However, when I say "mainstream by Christmas" I mean that people have been buying machines (or upgrades) with hardware vertex shading technology all year, and thus putting a product out by then will sell into a sizeable market.

As it is, if it's going to happen, it will happen with ATI hardware, as the 8500LE seems to be only somewhat more expensive than the MX, but packs a bigger punch. Too bad their cards have been easier to use under D3D so far.

JackM
02-07-2002, 10:44 PM
The naming is very confusing, but the pricing is very competitive.

Ti 4400 - $299
Ti 4200 - $199

Also, there is the old and forgotten GeForce3 Ti 200 series, which you can find for less than $140.

Pixel/vertex shader capable cards are already mainstream and included in many OEM configurations. Unfortunately, the MX series is a step backward.

Unrelated question: are there any good programming tutorials for creating realistic fur effects? That wolf demo looks very cool http://www.opengl.org/discussion_boards/ubb/smile.gif

Thanks, JackM




[This message has been edited by JackM (edited 02-07-2002).]

dorbie
02-07-2002, 11:18 PM
The feature-rich mainstream is going to happen eventually, but you aren't going to change the installed base between now and Christmas no matter what you do.

It's a bit too hopeful to expect that the R300 will quickly penetrate the value segment, or that it would force the current Radeon down there either. You're talking about $60 graphics cards. Even when it happens, you'll be developing for the next big thing on the high end.

This looks like a cycle; even in two years' time we'll probably be discussing the same issues with different features (like better framebuffer precision). The only thing which might help is abstracting the shader support, but that has thorny quality vs. performance tradeoffs.

Julien Cayzac
02-07-2002, 11:30 PM
Originally posted by Mahjii:
Seems strange that NV chose not to expose important pixel features (e.g. dot-depth-replace) in DX8, but did expose them in GL.

Hope the people who raise the "DX_is_more_advanced_than_GL/GL_is_doomed_until_v2.0" argument notice these sorts of things.


IMHO, this is a logical political decision from a company which actively supports OGL.

I won't start a flame war here, but D3D = MS and OGL = ARB. ARB members have decision power over OGL, whereas it's MS who decides everything in D3D. A 100% OGL market is in every IHV's interest, I think...

Julien

davepermen
02-08-2002, 04:50 AM
NVIDIA designed DX8. They worked together with Microsoft to create DX8 and the GeForce3. I never understood how they ended up with a GPU that is designed differently from the DX8 specs, so that it can't expose the whole DX8 feature set http://www.opengl.org/discussion_boards/ubb/smile.gif

Humus
02-08-2002, 06:59 AM
Originally posted by davepermen:
NVIDIA designed DX8. They worked together with Microsoft to create DX8 and the GeForce3. I never understood how they ended up with a GPU that is designed differently from the DX8 specs, so that it can't expose the whole DX8 feature set http://www.opengl.org/discussion_boards/ubb/smile.gif

Nvidia didn't design DX8, M$ did. Nvidia did give some input, but so did other vendors. Then there is talk about DX9 being designed around the R300, but that's just as much BS; nVidia will have their share of input too, as well as other vendors. It's very much in M$'s interest not to let any single vendor's influence over the API become too strong. It's better for M$ if vendors need to beg them to include certain features in the API, instead of M$ having to license technology from someone in a strong position.

Humus
02-08-2002, 07:07 AM
Originally posted by Humus:
I'll be putting out a demo later tonight on my site using DTR to create a hot air effect around a fire.

Quoting myself here http://www.opengl.org/discussion_boards/ubb/smile.gif ... the demo in question can be found here: http://hem.passagen.se/emiper/3d.html

LoLJester
02-08-2002, 03:07 PM
So, are you guys saying that nVidia's GeForce4 Go will not even have the features that GeForce3 cards have (i.e. pixel shaders and such)?

If true, that's almost like false advertising, and sad at the same time. I mean, it's got the #4 attached to the name; shouldn't that mean it's at least somewhat better than any GeForce3?

I've always hated computers (upgrades and such), but now I hate them even more.
I mean, it's "OK" if you upgrade and get something for your money, but to upgrade and get the same or just a little better, that's just plain ridiculous. It's like throwing your money out the window.
They should at least tell people (write it on the box, or something) that the hardware they already paid for and own is still as good as what's coming out, and that, unless you're buying a GeForce4 Ti500, they should stick with what they have.

Anyway, I'm just plain mad and blowing off some steam. But please, someone tell me that this isn't true... Pleeeaaase... http://www.opengl.org/discussion_boards/ubb/smile.gif

Regards.

Elixer
02-08-2002, 03:40 PM
Well, there is always the BitBoys! http://www.opengl.org/discussion_boards/ubb/biggrin.gif

I was going to say Kyro also, but it appears they have left the PC market. http://www.opengl.org/discussion_boards/ubb/frown.gif

I really wish the marketing guys at NVIDIA would drop the games. Sure, it works on impulse buyers, but for most people who actually go out and read the box/reviews it doesn't help at all.

Let's all hope they don't name the next product GF5!

I would like to see a 64-bit Z and a 32-bit stencil, plus 64-bit (or 128-bit) precision for the frame buffer. Of course, this card would need 512 MB of eDRAM minimum, so the price just went to $4500, err, give or take a few more zeroes http://www.opengl.org/discussion_boards/ubb/wink.gif

dorbie
02-08-2002, 04:16 PM
This is not about impulse buyers. I agree that card naming in general is confusing, but that's not new; no more Pro vs Ultra nonsense at least. Maybe they are taking a lesson from Intel. The whole 'MX' vs some other value name is perhaps confusing, but what else can you do? It's a fine line between devaluing your own products and competing with ATI in the mainstream. The GF4 MX name may yet prove to be a mistake, but the high-end buyers tend to know what they want, and you want to convince the low-end buyers that they are getting the latest technology even though they are cheapskates.

Zeno
02-08-2002, 04:30 PM
Originally posted by Elixer:
Well, there is always the BitBoys! http://www.opengl.org/discussion_boards/ubb/biggrin.gif

Ha ha. And I have a card of my own design coming out too. Should be 20 times as fast as these GeForce4s. Bring on my VC money!



I was going to say Kyro also, but it appears they have left the PC market. http://www.opengl.org/discussion_boards/ubb/frown.gif


Yeah, that sucks. Hopefully whoever buys their ideas will put them to better use than they did. Their design had merit, but they put it on really weak hardware, which resulted in mediocre performance and features.



I would like to see a 64-bit Z


Geez, I'm all for improvements, but I can't really imagine ever needing more than a 32-bit depth buffer.

A quick calculation tells me that, if your far clip plane were ten thousand meters away, you'd be able to resolve things as small as 5e-16 m with a 64-bit depth buffer... that's about the size of a quark!
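
The arithmetic, for the idealized case of a linear depth buffer (real perspective Z-buffers spread their precision non-linearly, so treat these as best-case numbers):

#include <math.h>
#include <stdio.h>

/* Smallest resolvable depth step for an idealized *linear* depth buffer
   with the far plane at 10 km. */
int main(void)
{
    const double far_plane = 10000.0;            /* metres */
    const int    bits[]    = { 16, 24, 32, 64 };

    for (int i = 0; i < 4; i++)
        printf("%2d-bit: smallest step ~ %.3g m\n",
               bits[i], far_plane / pow(2.0, bits[i]));
    return 0;
}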

I'm not modelling at that resolution just yet http://www.opengl.org/discussion_boards/ubb/wink.gif

-- Zeno

Humus
02-09-2002, 01:55 AM
"32bit ought to be enough for anyone". http://www.opengl.org/discussion_boards/ubb/tongue.gif

Yeah, really, for Z I don't see a reason to go beyond 32-bit, but for the framebuffer I'd love to see 16-bit/channel floating point coming through.

Zeno
02-09-2002, 11:26 AM
Originally posted by Humus:
"32bit ought to be enough for anyone". http://www.opengl.org/discussion_boards/ubb/tongue.gif


That did sound a lot like Gates' famous blunder-quote, didn't it? http://www.opengl.org/discussion_boards/ubb/smile.gif


Yeah, really, for Z I don't see a reason to go beyond 32-bit, but for the framebuffer I'd love to see 16-bit/channel floating point coming through.

YES! I'm hoping that might be a "secret" ability of the GeForce4 to be announced with later drivers... perhaps part of the reason for stepping up the RAM. I have three or four effects that would really benefit from 64-bit color.

-- Zeno

jwatte
02-09-2002, 03:55 PM
The GeForce4 MX/Go still appears better than the GeForce2 MX/Go, because it has a faster memory subsystem and also some GF3 things like fast Z clears. I won't turn my nose up at a doubling of my fill rate. However, I'm still disappointed that there are no hardware vertex shaders in most of the cards they're going to sell.

Of course, if they'd hurry up with those DDR 3 GHz Pentium IVs, and Dell started selling Athlons (hah! they appear to be 0wnz0r3d by Intel), then maybe software vertex shaders wouldn't seem quite as annoying. But still annoying enough.

dorbie
02-09-2002, 08:18 PM
Zeno, I've seen wishful thinking, but you really take the biscuit. I'm sure much better fragment precision is on the way, but it's not part of the GF4. NVIDIA would have been screaming about it if it were.

Elixer
02-09-2002, 09:33 PM
You know, I was trying to build a galaxy to scale; that is why I needed the 64-bit Z! http://www.opengl.org/discussion_boards/ubb/smile.gif

I was also thinking that the GF4s would offer something along the lines of ATI's TruForm to get rid of square elbows and such. Guess not.
At this rate the R300 should really kick some arse! That is, if their drivers don't kick back... http://www.opengl.org/discussion_boards/ubb/wink.gif


Side note:
I was making my way through Usenet, and it seems that the latest conspiracy theory is that MS told Nvidia not to make too much of an advance in the low end, since that is where the Xbox lies.

jwatte
02-10-2002, 08:15 AM
Elixer,

It's much more likely that they didn't want to advance the qualitative position of the low end, because they don't want to re-design their nFORCE chip set just yet.

It would make sense to keep going with a GeForce2 in the north bridge as long as GF4 isn't a qualitative improvement over GF3.

Dollars to donuts this summer's update is something naff, like 64-bit framebuffers, or an accumulation buffer, or some procedural geometry interface in front of the vertex shaders (which is where you'd stick something TruForm-like), or infinite passes with conditionals in the pixel shaders, or something along those lines. Or all of it :-)

Then they'd want to roll THAT into the chip set for the on-board low end, and we'll all be slobbering about a potentially vastly advanced low end at this time next year.

Well, I can dream, can't I? (The nice thing about these happy pills is that the above plan seems to make business sense. :-)

davepermen
02-10-2002, 08:51 AM
If we want to use 64-bit Z, all our data, including matrices, vertex shaders, models etc., would have to be stored in 64-bit doubles... much, much memory.
But I can't wait for the moment vertex arrays and pixel arrays (namely textures) become the same thing: float arrays.

Zeno
02-10-2002, 12:05 PM
Originally posted by jwatte:
Elixer,

Dollars to donuts this summer's update is something naff, like 64-bit framebuffers, or an accumulation buffer, or some procedural geometry interface in front of the vertex shaders (which is where you'd stick something TruForm-like), or infinite passes with conditionals in the pixel shaders, or something along those lines. Or all of it :-)


Hey Dorbie, I think Jwatte just stole my biscuit http://www.opengl.org/discussion_boards/ubb/wink.gif

-- Zeno

Mr_Smith
02-10-2002, 12:36 PM
A fur demo with source code can, I believe, be found on PowerVR's website www.powervr.com (http://www.powervr.com) for the Kyro cards (it would probably work with any card).

And I can't wait for the new ATI card, which supposedly has DX9 support, unlike the GF4, which apparently has problems with DX9 (don't blame me if I'm wrong! http://www.opengl.org/discussion_boards/ubb/smile.gif)

dorbie
02-10-2002, 07:01 PM
Zeno, the biscuit is still yours :-). I think jwatte is correct on some points, though I don't think it's as conspiratorial. I don't see how the low end would be held back waiting on next-generation freeD in nForce2; that is wishful, but not as wishful as hoping it's already here waiting to be turned on in the GF4 :-). NVIDIA trying to maintain differentiation between high and low end is plausible (they will always try to do this), and the high-precision futures rumours have been circulating for a while. I just doubt that when extended range & precision framebuffers arrive they'll be available as entry-level products. I hope they are, but I doubt it.

OK, how about some fun: what will NVIDIA's next-generation graphics product be called? GeForce is ruled out, says the CEO, so will it be some other *Force name?

Anyone care to guess?

Zeno
02-10-2002, 08:12 PM
Originally posted by dorbie:
Zeno, the biscuit is still yours :-)

So you're saying that it is more likely that the next-generation NVIDIA card will have an accumulation buffer, a pre-T&L geometry generator, a 64-bit framebuffer AND conditional pixel shaders than it is that the GeForce4 has a hidden feature like 64-bit color? Get real.

It's not a conspiracy theory either, just a possibility. After all, how many important GeForce3 features were not enabled or announced at release? 3D texture maps? Occlusion query? Shadow maps? Others?

I was just making a guess based on past behavior. I don't actually think that there IS 64-bit color support (and now that I think about it, an accumulation buffer is more likely), but if there is, I wouldn't be too surprised that they didn't mention it right away.

-- Zeno

dorbie
02-11-2002, 01:20 AM
One is yet to be determined, the other has been determined and is impossible. You shouldn't take it so seriously.

Jwatte presented a grocery list but didn't insist on all of it, and some of the features he mentioned flow from the others or are potentially redundant.

Discussion about better precision has nothing directly to do with an accumulation buffer. It has to do with _significantly_ better than 8 bits per component, with a real sign bit and signed arithmetic, and some kind of extended range support (i.e. no clamping at 1.0, either directly or through some other shenanigans), perhaps through a programmable video LUT. When you have that, it doesn't really matter that you have to output your result to the framebuffer, and you can do many passes (or do it transparently) without a significant negative quality impact.

It also makes sense to put a decent front-end strategy together for this, because you really need a real shading language to support it (see the NVIDIA/Stanford work). Branches will probably be supported in such a language even if the hardware doesn't support them the way you intuit it; after all, even today you have stencil buffer and alpha buffer support, so if you need a branch, take your pick. Do you even need an accumulation buffer with such a system? Perhaps not, but it becomes very easy to support in hardware when you have already designed in the features above. Again, copying to other buffers or doing fast stencil-tested or blended image copies could make it even easier/faster to compile shaders for.
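
To make the stencil-as-branch point concrete, the classic multipass pattern looks roughly like this (the draw_* functions are placeholders for whatever passes an application actually renders):

#include <GL/gl.h>

void draw_condition_pass(void);   /* writes the "condition", e.g. via alpha test */
void draw_path_a(void);           /* the "if" shading path */
void draw_path_b(void);           /* the "else" shading path */

/* Multipass "branching": pixels that pass a condition get one shading
   path, the rest get another. */
void branch_via_stencil(void)
{
    glEnable(GL_STENCIL_TEST);

    /* pass 1: tag qualifying pixels with stencil = 1, no colour writes */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    draw_condition_pass();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* pass 2: "if" side, only where stencil == 1 */
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_path_a();

    /* pass 3: "else" side, everywhere else */
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    draw_path_b();

    glDisable(GL_STENCIL_TEST);
}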

This precision feature is widely anticipated (you even expect it in the GF4!!). I expect it will be a high-end thing, but as you can see, many things naturally flow from a feature like that. IMHO the chance of it arriving across the product range (since NVIDIA is rev'ing the whole product line each release now and being creative to do it) is marginally greater than zero; the probability of it arriving in the next high-end PC card is only slightly less than 1.

On the feature-anticipation level jwatte is spot on IMHO; even if I don't think the full grocery list is possible, most of it is certainly feasible. His theory on NVIDIA's marketing strategy is just a bit too 'black helicopters' for me.


[This message has been edited by dorbie (edited 02-11-2002).]