NV35 extensions

Anyone know if NV35 is planning on bringing any new extensions to the table? Cass? I appreciate that you probably can't say what they are, but a yes or no would be nice.

ta.

Hi Nutty! Long time no see…

I'm just curious here: do you realize your question sounds a little strange? I mean, almost any new card brings new extensions…

What do you have in mind in asking this? (Surely you have a precise idea of what you'd like to see?)

Regards,

Eric

ATI has that F-buffer, so I imagine NV will match it. Other than that I have no idea, but it's an interesting question. I would like to see an ARB vp2, like D3D9 has.

We’ll find out soon enough – NV35 is rumoured to be launched at E3, isn’t it?

My wishlist would include:

  • generalized floating-point texture support
  • alpha blending on float buffers
  • “multiple rendertargets” à la D3D

I hope it won’t be too big an improvement over NV30, though – it would be quite frustrating to find out that I upgraded two months early

– Tom

Originally posted by Eric:

I'm just curious here: do you realize your question sounds a little strange? I mean, almost any new card brings new extensions…

Most refresh products don't seem to bring that much new stuff.

What I'd like to see:

a) Better support for floating point textures/framebuffers.

I saw, in some specs floating around, something mentioning shadow volume acceleration. Dunno what that is; maybe it's just a cunning way to describe double-sided stencilling.

Nutty

Originally posted by Nutty:

a) Better support for floating point textures/framebuffers.

Personally, I think the floating-point texture/framebuffer support on the FX cards is really good (at least for OpenGL). What did you have in mind, besides FP speed? An easier-to-use RTT environment?

From what I gather, the only floating-point texture target you can have is 2D, and you have to use NV_texture_rectangle as well. I don't see why you need to do this.

Standard 1D, 2D, and cubemap texture targets would’ve been much nicer.
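For the record, here's roughly what getting a float texture on NV30 involves at the moment. A minimal sketch, assuming NV_float_buffer and NV_texture_rectangle are both exported, and that width, height and pixels stand in for your own data:

  /* Float texture on NV30: only the rectangle target works, which means
   * no mipmaps, non-normalized texcoords and nearest filtering only.
   * GL_FLOAT_RGBA32_NV comes from NV_float_buffer. */
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
  glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
               width, height, 0, GL_RGBA, GL_FLOAT, pixels);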

My vote would be to extend the floating-point frame buffer from being pbuffer-only to being available as a chooseable visual.

Right now the Nvidia FX cards only offer 8-bit/channel visuals. Upgrading this to 12-bit/channel RGBA would be so nice!
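For comparison, this is roughly the pbuffer-only route you're stuck with on Windows today. A sketch only, assuming WGL_ARB_pixel_format, WGL_ARB_pbuffer and WGL_NV_float_buffer are all present and the wglChoosePixelFormatARB entry point has already been fetched via wglGetProcAddress:

  /* Pick a 32-bit-per-channel float pixel format; it can only be used
   * for a pbuffer, not for an ordinary window visual. */
  int attribs[] = {
      WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,   /* offscreen only            */
      WGL_FLOAT_COMPONENTS_NV, GL_TRUE,   /* from WGL_NV_float_buffer  */
      WGL_RED_BITS_ARB,   32,
      WGL_GREEN_BITS_ARB, 32,
      WGL_BLUE_BITS_ARB,  32,
      WGL_ALPHA_BITS_ARB, 32,
      0
  };
  int format;
  UINT count;
  wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);
  /* ...then wglCreatePbufferARB(hdc, format, width, height, ...) as usual. */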

I don’t think nVIDIA will support something like ARB_vp2 until it’s actually specified and ratified. They have their own NV vertex programming model which is substantially more powerful than ARB_vp, and more powerful than DX9 vp2.0, too, if memory serves. That’s enough, right?

i don't think there's anything new coming, though i hope they get floating-point textures right; nv30 hw really sucks in this area.

but comparing gf4 to gf3, there wasn't much new on the software side either.

my opinions:
they should drop the 16/32-bit float support and just go for 24-bit; that's the minimum spec for dx9 and gl, so it would be best to support only that, at least for nvidia.
they should get real floating-point support for 1d, 2d, 3d and cubemap textures.

some nice-to-have things for future hw:

  • floating-point textures with bilinear filtering and all
  • rgbe texture formats with bilinear, too
  • bicubic filtering
  • a TEXLD instruction taking 2 texcoords that define the anisotropic line to filter along

the last is especially useful if you can additionally set the sample count directly…
this, combined with the shadow modes (sample, compare, then filter), would allow things like raycasting through volumes. or so.
other things that come to mind?

Cool - you guys must have done some great stuff with nv30 (especially you, dave - you've run up against the limits pretty quickly). Where are the demos?

i don't have an nv30 and never wanted one. i knew the limits before the card even came out, long before. the card has design flaws i knew about right from the start, and they made it easier for me to choose the radeon 9700 pro, a much “rounder” product to me.

and that's not even from looking at the hw/performance specs. now that they're out, and the performance is poor, and the fast one even sucks air, it looks like i didn't choose the wrong side, for now.

i really hope the best for nvidia with the nv35. they've done a lot wrong over the last year…

Nnn, I just think we’re spoilt at the moment. We should try to squeeze cool stuff out of what we’ve got - the masses won’t have nv30/9700’s in their machines for at least 3 years (optimistic view).
Look at the xbox for examples of what can be achieved using geforce4 technology.

The biggest pain is that GF3/4 can't have ARB_fp, and thinking about one more codepath is really annoying; it's 2003.
The other problem is that those old cards aren't dying off that fast, TNT2/GF2MX/GF4MX for instance (non-graphics people usually buy a P4 3.06 HT + GF2 MX440, or stick with integrated Intel graphics without VP, and then ask stupid questions: “why can't I run Splinter Cell?”, “I spent over $1000 on my PC but can't get games running well”). Maybe the super-cheap FX5200 will change the situation.


Originally posted by knackered:
Nnn, I just think we’re spoilt at the moment. We should try to squeeze cool stuff out of what we’ve got - the masses won’t have nv30/9700’s in their machines for at least 3 years (optimistic view).
Look at the xbox for examples of what can be achieved using geforce4 technology.

the problem with current technology is that you really have to hack around to get anything working. there's no way for artists to live and design how they want to…

Because the GeForce 4 MX was the big seller in the “4” generation, I believe two texture units will be the default fallback path for quite some time to come.

Whether you do separate paths for Radeon (3 units), GF3 (4 units), R200 (6 units) before you get to ARB_f_p is up to your needs and available time, I suppose. I'd be OK with going for ARB_f_p and then falling back to a 2-unit path on cards that don't have it.
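For what it's worth, a minimal sketch of that kind of path selection (the path names and the crude extension check are just illustrative):

  #include <string.h>
  #include <GL/gl.h>

  typedef enum { PATH_ARB_FP, PATH_TWO_UNIT } RenderPath;

  /* Crude substring test against the extension string; fine for a sketch. */
  static int has_extension(const char *name)
  {
      const char *ext = (const char *)glGetString(GL_EXTENSIONS);
      return ext != NULL && strstr(ext, name) != NULL;
  }

  RenderPath choose_path(void)
  {
      if (has_extension("GL_ARB_fragment_program"))
          return PATH_ARB_FP;
      return PATH_TWO_UNIT;   /* e.g. GeForce 4 MX class hardware */
  }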

nVidia history:

TNT2 == speeded TNT
GeForce2 == speeded GeForce
GeForce4 == speeded GeForce3

…what does that tell you?

There was hardly any additional HW functionality in any of these “even numbers”. The only changes were additional pipelines, better memory interfaces and new drivers. Don't expect too much from NV35…

There was hardly any additional HW functionality in any of these “even numbers”. The only changes were additional pipelines, better memory interfaces and new drivers. Don't expect too much from NV35…

That’s not entirely true.

The GeForce 4 added some non-trivial functionality to the texture shaders (hence texture_shader_3, which is not supported on GeForce 3 cards).

It is entirely possible that NV35 added a bit to the hardware capabilities.

mark, *cough* shaders, register combiners, cube textures *cough*.

I would also start with an ARB code path, then write IHV-specific paths. I'm doing this now. I wrote tex-env code and now I'm playing with register combiners, and all I can say is WOW! I think I can shave off one pass and have my specular look much better than the N^2 it is in my tex-env code. Well worth the effort of learning register combiners. Then again, I hear ARB_vertex_program is better designed than the IHV extensions, so it's probably not worth spending time on an IHV code path there. So you kind of have to decide, case by case, whether something is worth it or not.
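To give a flavour of what I mean (not my actual code), here's a hedged register combiner sketch for a sharper specular term. It assumes texture unit 0 carries a normal map and texture unit 1 a normalized half vector (e.g. from a normalization cube map), and it needs at least three general combiners (GeForce 3 and up). Combiner 0 computes N·H, the next two stages square it twice, so you get an (N·H)^4 falloff in one pass:

  glEnable(GL_REGISTER_COMBINERS_NV);
  glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 3);

  /* combiner 0: spare0.rgb = expand(tex0) dot expand(tex1) = N.H */
  glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                    GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
  glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                    GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
  glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                     GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

  /* combiners 1 and 2: square spare0 twice -> (N.H)^4.
   * UNSIGNED_IDENTITY also clamps a negative N.H to zero on the way in. */
  for (GLenum stage = GL_COMBINER1_NV; stage <= GL_COMBINER2_NV; stage++) {
      glCombinerInputNV(stage, GL_RGB, GL_VARIABLE_A_NV,
                        GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
      glCombinerInputNV(stage, GL_RGB, GL_VARIABLE_B_NV,
                        GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
      glCombinerOutputNV(stage, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                         GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);
  }

  /* final combiner: output = spare0 * 1 + (1 - spare0) * 0 + 0 = spare0 */
  glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
  glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,      GL_UNSIGNED_INVERT_NV,   GL_RGB);
  glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,      GL_UNSIGNED_IDENTITY_NV, GL_RGB);
  glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,      GL_UNSIGNED_IDENTITY_NV, GL_RGB);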

Originally posted by Korval:
That's not entirely true.

The GeForce 4 added some non-trivial functionality to the texture shaders (hence texture_shader_3, which is not supported on GeForce 3 cards).

True. But still, those interesting “quantum leaps” in technology only happen every second major release.

TNT: multitexturing + stencil buffer + 32 BPP rendering
GeForce: HW T&L + nice combiners
GeForce3: vertex & fragment programmability
GeForceFX: floating point at fragment level + better programmability

Plus some other very useful stuff, of course, but these were the fundamental changes to the pipeline (I may be missing some).

Ok, of course some “minor” additional HW features might be interesting, but I think at present the standardization at the driver level is more interesting (e.g. VBO and shading languages) and worth spending time on (from an OpenGL developer point of view, I mean). But that’s my view of things.

OT: In terms of HW, I would like to see things going more in the direction of the 3DLabs VP architecture (e.g. a proper memory hierarchy and HW support for context switching).