
NV35 extensions



Nutty
05-05-2003, 05:11 AM
Anyone know if NV35 is planning on bringing any new extensions to the table? Cass? I appreciate that you probably can't say what they are, but a yes or no would be nice. :)

ta.

Eric
05-05-2003, 07:21 AM
Hi Nutty! Long time no see...

I'm just curious: do you realize your question sounds strange? I mean, almost any new card brings new extensions...

What do you have in mind asking this? Surely you have a precise idea of what you'd like to see?

Regards,

Eric

JD
05-05-2003, 08:53 AM
ATI has that F-buffer, so I imagine NV will match it. Other than that I have no idea, but it's an interesting question. I'd like to see an ARB vp2.0 like D3D9 has.

Tom Nuydens
05-05-2003, 11:02 AM
We'll find out soon enough -- NV35 is rumoured to be launched at E3, isn't it?

My wishlist would include:
- generalized floating-point texture support
- alpha blending on float buffers
- "multiple rendertargets" la D3D

I hope it won't be too big an improvement over NV30, though -- it would be quite frustrating to find out that I upgraded two months early :)

-- Tom

Nutty
05-05-2003, 11:25 AM
Originally posted by Eric:
I'm just curious: do you realize your question sounds strange? I mean, almost any new card brings new extensions...

Most refresh products don't seem to bring that much new stuff.

What I'd like to see:

a) Better support for floating point textures/framebuffers.

I saw, in some specs floating around, something mentioning shadow volume acceleration. Dunno what that is; maybe it's just a cunning way of describing double-sided stenciling.
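For reference, double-sided stenciling as already exposed on NV30-class hardware looks roughly like this - a minimal sketch assuming GL_EXT_stencil_two_side and GL_EXT_stencil_wrap, with extension-function loading omitted and drawShadowVolume() as a placeholder:

glEnable(GL_STENCIL_TEST);
glEnable(GL_STENCIL_TEST_TWO_SIDE_EXT);

/* Back faces: decrement stencil where the depth test fails (z-fail). */
glActiveStencilFaceEXT(GL_BACK);
glStencilOp(GL_KEEP, GL_DECR_WRAP_EXT, GL_KEEP);
glStencilFunc(GL_ALWAYS, 0, ~0u);

/* Front faces: increment stencil where the depth test fails. */
glActiveStencilFaceEXT(GL_FRONT);
glStencilOp(GL_KEEP, GL_INCR_WRAP_EXT, GL_KEEP);
glStencilFunc(GL_ALWAYS, 0, ~0u);

/* One pass over the volume geometry, with culling disabled. */
glDisable(GL_CULL_FACE);
drawShadowVolume();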

Nutty

roffe
05-05-2003, 12:06 PM
Originally posted by Nutty:

a) Better support for floating point textures/framebuffers.


Personally I think the floating-point texture/framebuffer support on the FX cards is really good (at least for OpenGL). What did you have in mind, besides FP speed? An easier-to-use RTT environment?

Nutty
05-05-2003, 01:30 PM
From what I gather, the _only_ floating-point texture you can have is 2D, and you have to use the NV_texture_rectangle target as well. I don't see why this should be necessary.

Standard 1D, 2D, and cubemap texture targets would've been much nicer.
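For the curious, this is roughly what the restriction looks like in code - a minimal sketch assuming GL_NV_float_buffer and GL_NV_texture_rectangle are present, with width, height and pixels as placeholders:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
/* Float textures are unfiltered, so plain GL_NEAREST is required. */
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* The float internal formats are only accepted on the rectangle target;
   GL_TEXTURE_1D, GL_TEXTURE_2D and cubemaps reject them. */
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
             width, height, 0, GL_RGBA, GL_FLOAT, pixels);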

tbfx
05-05-2003, 02:55 PM
My vote would be to extend the floating-point frame buffer from being pbuffer-only to being available as a choosable visual.

Right now the NVIDIA FX cards only offer 8-bit/channel visuals. Upgrading this to 12-bit/channel RGBA would be so nice!

jwatte
05-05-2003, 03:30 PM
I don't think nVIDIA will support something like ARB_vp2 until it's actually specified and ratified. They have their own NV vertex programming model which is substantially more powerful than ARB_vp, and more powerful than DX9 vp2.0, too, if memory serves. That's enough, right?

davepermen
05-06-2003, 01:14 AM
I don't think there's anything new coming, though I hope they get floating-point textures right; the NV30 hardware really sucks in this area.

But comparing GF4 to GF3, there wasn't much new on the software side either.

My opinions:
They should drop the 16/32-bit float split and just go for 24-bit; that's the minimum spec for DX9 and GL, so it would be best to support only that, at least for NVIDIA.
They should get real floating-point support for 1D, 2D, 3D, and cubemap textures.

Some nice-to-have things for future hardware:
Floating-point with bilinear filtering and all.
RGBE texture formats with bilinear, too.
Bicubic filtering.
A TEXLD instruction taking two texcoords that define the anisotropic line to filter along.
The last one would be especially useful if you could also set the sample count directly. That, combined with shadow modes that sample, compare, and _then_ filter, would allow raycasting through volumes, for example. Or so. :D
Other things that come to mind?

knackered
05-06-2003, 01:19 AM
Cool - you guys must have done some great stuff with NV30 (especially you, dave - you've run up against the limits pretty quickly). Where are the demos?

davepermen
05-06-2003, 03:17 AM
I don't have an NV30; I never wanted one. I knew the limits before the card even came out, long before. The card has design flaws I knew about right from the start, which made it easy for me to choose the Radeon 9700 Pro instead - a much "rounder" product to me.

And that's without even looking at the hardware/performance specs. Now that they're out, the performance disappoints, and the fast one even sucks air, I've been proved right in my choice so far.

I really hope the best for NV35, for NVIDIA's sake. They've done a lot wrong in the last year...

knackered
05-06-2003, 04:44 AM
Nnn, I just think we're spoilt at the moment. We should try to squeeze cool stuff out of what we've got - the masses won't have NV30s/9700s in their machines for at least 3 years (an optimistic view).
Look at the Xbox for examples of what can be achieved using GeForce4 technology.

M/\dm/\n
05-06-2003, 05:02 AM
The biggest s**t is that GF3/GF4 can't have ARB_fp, and thinking about one more codepath is really annoying; it's 2003. :(
But the s**t is also in the fact that those old cards aren't dying off that fast - TNT2/GF2MX/GF4MX, for instance. Non-graphics people usually buy a P4 3.06HT + GF2MX440 :mad: or stick with integrated Intel graphics without vp, and then ask stupid questions: "Why can't I run Splinter Cell?", "I spent >$1000 on my PC but can't get games running well" :mad: :mad:. Maybe the supercheap FX5200 will change the situation.


davepermen
05-06-2003, 05:27 AM
Originally posted by knackered:
Nnn, I just think we're spoilt at the moment. We should try to squeeze cool stuff out of what we've got - the masses won't have NV30s/9700s in their machines for at least 3 years (an optimistic view).
Look at the Xbox for examples of what can be achieved using GeForce4 technology.

The problem with current technology is that you really need to hack around to get anything working. There's no way for artists to live and design how they want to...

jwatte
05-06-2003, 10:43 AM
Because the GeForce 4 MX was the big seller in the "4" generation, I believe two texture units will be the default fallback path for quite some time to come.

Whether you do separate paths for Radeon (3 units), GF3 (4 units), and R200 (6 units) before you get to ARB_fragment_program is up to your needs and available time, I suppose. I'd be OK with going for ARB_fragment_program and then falling back to a 2-unit path on cards that don't have it.
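A minimal sketch of that kind of path selection - the PATH_* names are hypothetical; the queries are standard GL plus the ARB_multitexture enum from glext.h:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

enum { PATH_MULTITEX_2, PATH_MULTITEX_4, PATH_ARB_FP };

int pick_render_path(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    GLint units = 1;
    glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &units);

    /* A robust check should match whole tokens, but any card exposing a
       GL_ARB_fragment_program_* variant also exposes the base extension. */
    if (ext && strstr(ext, "GL_ARB_fragment_program"))
        return PATH_ARB_FP;     /* full fragment-program path */
    if (units >= 4)
        return PATH_MULTITEX_4; /* GF3/R200-class multitexture path */
    return PATH_MULTITEX_2;     /* two-unit fallback for the GF4MX crowd */
}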

marcus256
05-06-2003, 12:28 PM
nVidia history:

TNT2 == a faster TNT
GeForce2 == a faster GeForce
GeForce4 == a faster GeForce3

...what does that tell you?

There was nearly no additional HW functionality in any of these "even numbers". The only changes were additional pipelines, better memory interfaces, and new drivers. Don't expect too much from NV35...

Korval
05-06-2003, 03:19 PM
Originally posted by marcus256:
There was nearly no additional HW functionality in any of these "even numbers". The only changes were additional pipelines, better memory interfaces, and new drivers. Don't expect too much from NV35...

That's not entirely true.

The GeForce 4 added some non-trivial functionality to the texture shaders (hence texture_shader_3, which is not supported on GeForce 3 cards).

It is entirely possible that NV35 adds a bit to the hardware capabilities.

JD
05-06-2003, 03:37 PM
mark, cough *shaders, reg.combiners, cube textures* cough.

I would also start with an ARB code path, then write IHV-specific paths. I'm doing this now: I wrote the tex-env code, and now I'm playing with register combiners, and all I can say is WOW! I think I can shave off one pass and make my specular look much better than the N^2 I get in my tex-env code. Well worth the effort of learning register combiners. Then again, I hear ARB_vertex_program is better designed than the IHV extensions, so it's probably not worth spending time on an IHV code path there. So you kind of have to decide case by case whether something is worth it.
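To illustrate the kind of thing register combiners buy you, here's a minimal sketch of squaring a specular term in a single pass. It assumes GL_NV_register_combiners with a normal map in texture unit 0 and the half-angle vector H in unit 1 (e.g. from a normalization cube map); extension loading is omitted, and each further general combiner stage could square the result again.

glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

/* Stage 0: spare0 = expand(tex0) . expand(tex1) = N.H */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                   GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

/* Stage 1: spare0 = spare0 * spare0 = (N.H)^2 */
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                   GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* Final combiner: out = A*B + (1-A)*C + D; here just spare0 * 1. */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);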

marcus256
05-08-2003, 01:03 AM
Originally posted by Korval:
That's not entirely true.

The GeForce 4 added some non-trivial functionality to the texture shaders (hence texture_shader_3, which is not supported on GeForce 3 cards).

True. But still, those interesting "quantum leaps" in technology happen every second major release.

TNT: multitexturing + stencil buffer + 32 BPP rendering
GeForce: HW T&L + nice combiners
GeForce3: vertex & fragment programmability
GeForceFX: floating point at fragment level + better programmability

+ some other very useful stuff of course, but these were the fundamental changes to the pipeline (I may be missing some).

OK, of course some "minor" additional HW features might be interesting, but I think that at present, standardization at the driver level (e.g. VBO and shading languages) is more important and worth spending time on (from an OpenGL developer's point of view, I mean). But that's my view of things.

OT: In terms of HW, I would like to see things going more in the direction of the 3DLabs VP architecture (e.g. a proper memory hierarchy and HW support for context switching).

Eric Lengyel
05-08-2003, 01:31 AM
Originally posted by Nutty:
Anyone know if NV35 is planning on bringing any new extensions to the table?

Yes, NV35 supports the GL_NV_depth_bounds_test extension. (It might end up being an EXT extension.) It lets you test the frame-buffer depth value (not the fragment depth value) against a specified range and reject fragments where the test fails. Very useful for stencil shadows, but not supported on NV30. This extension is discussed in the forthcoming book The OpenGL Extensions Guide:

http://www.terathon.com/books/extensions.html
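Usage would presumably look something like this - a sketch using the names from the eventual EXT form of the spec, where zmin/zmax are the window-space depth range a given light's shadow volume can actually affect:

/* Reject fragments whose stored depth lies outside the light's range. */
glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
glDepthBoundsEXT(zmin, zmax);
/* ... render the shadow volume, updating stencil ... */
glDisable(GL_DEPTH_BOUNDS_TEST_EXT);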

V-man
05-08-2003, 01:56 AM
>>>Maybe supercheap FX5200 will change the situation.<<<

I don't know what will happen in the market, but the 5200 has some serious performance problems.
Some benchmarks using pixel shaders (DX9) were peaking at 5 FPS on a moderately complex scene.

I don't know if the 5600 is any better, but you can safely say you can't use FP on the 5200, or else you'll get lots of complaints.

I agree, a fallback to 2 tex units is necessary.

In fact, let me ask you guys this: what games actually make use of register_combiners?
NOT combiner2! Or how about ATI_fragment_shader?
I know that some old engines out there were using ARB_texture_env_combine, but I've never heard of a game using vendor-specific stuff.

It seems to me that the capabilities of the GF1, GF2, GF3, and GF4 were wasted (on the PC, anyway).

PS: I'm talking about games from 2002 and earlier.

Julien Cayzac
05-08-2003, 02:00 AM
Originally posted by Eric Lengyel:
This extension is discussed in the forthcoming book The OpenGL Extensions Guide


Hi Eric,
I really liked your previous book, since it works as an aide-mémoire for quick reference. Is the new book written in the same spirit?

BTW, your C4 engine looks great. Maybe you should include a portfolio on your site, with C4's licensees. :)

Julien.

tfpsly
05-08-2003, 02:04 AM
Yep, I agree. Today, aiming at something like a GF2/GF4MX doesn't sound that stupid.
People aiming at a pure FP path will find that their program won't run on a good part of the market.

On the other hand, even if the 5200 is a very slow product at FP, I would still choose this card if I were to upgrade today:
* cheap
* all the required features (unlike the GF4MX when it came out)
* good Linux drivers (I code exclusively under Linux and then port my code to Windows - "port" meaning test & debug)

The last point means ATI is a definite NO! Unless they learn how to write drivers that are neither slow as hell nor unstable. :)

M/\dm/\n
05-08-2003, 02:14 AM
I have an FX5200 on my table, and I'll test it with the 43.51 WHQL Dets when I get home. Then we'll see whether that FP is worth something. Though I'm sure it's faster than the NV30 emulation. :D