PDA

View Full Version : To use extensions



KRONOS
09-17-2002, 05:04 AM
This question goes to the NVidia guys (cass, etc.)... Are NV_element_array and EXT_stencil_two_side, emulated in the 40.41 drivers, accelerated in hardware on a GeForce 2, 3 or 4? The drivers could expose this, right? I'm not talking about NV_float_buffer or NV_vertex_program2; those need dedicated hardware. But the first two could easily be exposed. Or not?
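(Whether a driver exposes these at all has to be checked at runtime, against the string returned by glGetString(GL_EXTENSIONS). Here's a minimal sketch of a token-wise check in plain C; `has_extension` is a hypothetical helper name, and the boundary checks matter because one extension name can be a prefix of another:)

```c
#include <string.h>

/* Return 1 if `name` appears as a complete, space-delimited token in
   `ext_list` (the string returned by glGetString(GL_EXTENSIONS)).
   A plain strstr() is not enough: the match must start at the
   beginning of a token and end at a space or at the end of the list. */
int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        if ((p == ext_list || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;   /* found as a whole token */
        p += len;       /* partial match; keep scanning */
    }
    return 0;
}
```

(Then something like `has_extension(exts, "GL_EXT_stencil_two_side")` picks the single-pass path, with a two-pass fallback otherwise.)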

And something else. In the paper "NV30 OpenGL Driver Essentials", it says: "Working for OpenGL 1.X & 2.X inclusion". Who are they kidding?! With all their capabilities and ideas, they could indeed help deliver an even better GL2. But has NVidia taken a position regarding GL2?! Or joined the party?! No...

Programming with NVidia hardware has become a pain! They have so many extensions going that, if you write something to work with NVidia hardware, it won't work anywhere else. Better call it "NVidia's OpenGL". Example: create the very cool VAR. Oops!! Forgot something!! Let's create VAR2. Vertex programs?! Let's make two versions, 1.0 and 1.1. It seems to me that what NVidia has in intelligence, it lacks in wisdom. Don't get me wrong, I love NVidia, and I believe all the technological wonders they create/implement are indeed great, just not properly used. For example: why wasn't NV_element_array included immediately in NV_vertex_array_range? God, it is sooooo obvious. Now I have to write two code paths, one for the GeForce 2, 3 and 4, and one for the upcoming NV30. Well, that is great, isn't it?

davepermen
09-17-2002, 05:25 AM
stencil_two_side is not necessarily easy, as the driver would have to switch stencil mode per triangle then..

think of switching textures per triangle.. that can't be done automagically either :D

element_array, possibly.. haven't read it, but i think that can come..

hehe, nvidia and opengl2. it's like microsoft and opengl.. :D forget it :D

hehe, about the extensions.. hehe. you get now why i hate the way nvidia works? a) they have extensions for everything (it's not a bug, it's a feature.. of the next ext :D), b) their exts are close to the hardware but terrible to use (partially because of a crazy hw design, so we still don't have fragment shaders with ps1.1 capabilities in gl.. somehow there is no real fit without dropping a lot of features..), c) new gpu, fully new extension set, and everything goes in a sweet circle: code once for each gpu..

you get now why i'm moving over to ati asap? just waiting for an r300.. :D
i'm quite gl2-proof then, with a small set of well-designed extensions, and they are the same on all ATI cards. most of the exts are even ARBs, or at least EXTs, used and provided by others like matrox (or sometimes nvidia!) as well..

fresh
09-17-2002, 12:32 PM
Why would it have to switch stencil ops for every triangle? It could emulate it internally by rendering the mesh twice: once for front-facing polys, and once for back-facing polys. Essentially what all of us are already doing.
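(For reference, the two-pass emulation described here looks roughly like this in GL. A sketch only: `draw_shadow_volume()` is a hypothetical stand-in for whatever submits the volume geometry, and stencil wrap-around is ignored:)

```c
/* Common state: stencil test always passes, writes on depth-fail/pass
   are configured per pass below. */
glEnable(GL_CULL_FACE);
glStencilFunc(GL_ALWAYS, 0, ~0u);

/* Pass 1: cull back faces, so only front-facing triangles reach the
   stencil stage, and let them increment. */
glCullFace(GL_BACK);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
draw_shadow_volume();

/* Pass 2: cull front faces; back-facing triangles decrement. */
glCullFace(GL_FRONT);
glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
draw_shadow_volume();
```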

The R300 does support this extension, of course :D

Zeno
09-17-2002, 02:21 PM
Why would it have to switch stencil op for every triangle?

Because if you didn't, you would have to "remember" all the triangles that the user has thrown at you, or at least remember them in groups of a preset size limit. There may not be a mechanism to pull geometry out of the pipeline and store it, and even if there were, it wouldn't be very OpenGL-like to not do the rendering immediately.

Cards with double sided stencil built in can just store two stencil states and do things on the fly.

-- Zeno

fresh
09-17-2002, 03:35 PM
Originally posted by Zeno:
Because if you didn't, you would have to "remember" all the triangles that the user has thrown at you, or at least remember them in groups of a preset size limit. There may not be a mechanism to pull geometry out of the pipeline and store it, and even if there were, it wouldn't be very OpenGL-like to not do the rendering immediately.

Cards with double sided stencil built in can just store two stencil states and do things on the fly.

-- Zeno


What I meant was that it'd be easy for the driver to emulate two-sided stencil tests by submitting the same geometry twice with different stencil ops. It's exactly the same as what we're already doing in software.

flo
09-17-2002, 11:23 PM
fresh:

It is not possible for the driver to emulate two-sided stenciling by rendering the mesh twice with different stencil ops. Color or depth buffer contents could have changed after the first pass over the mesh, so the result of the second pass might not be what you expect. For the result, it does matter whether the rendering order is ABCABC or AABBCC.

The driver does indeed need to change stencil settings for every triangle.

flo

[This message has been edited by flo (edited 09-18-2002).]

KRONOS
09-18-2002, 04:06 AM
What about this: if the triangle isn't back facing, then it has to be front facing. It's just a matter of setting the stencil to increment for back-facing triangles, and to decrement otherwise. Just change the state and it will do the opposite: increment when front facing, and decrement otherwise. Just have to check the triangle one. The trick is having the stencil perform an "if-else".

KRONOS
09-18-2002, 04:11 AM
In my previous post I meant "Just have to check the triangle ONCE". And to make things clearer, the stencil would have to do:

if (front facing) increment
else decrement

or

if (back facing) increment
else decrement
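(That if/else is exactly what EXT_stencil_two_side provides in a single pass. A sketch, assuming the entry points have been loaded through the usual extension mechanism; the WRAP ops come from EXT_stencil_wrap, and `draw_shadow_volume()` is a hypothetical geometry-submission helper:)

```c
glEnable(GL_STENCIL_TEST_TWO_SIDE_EXT);
glDisable(GL_CULL_FACE);            /* both faces must rasterize */

/* if (front facing) increment */
glActiveStencilFaceEXT(GL_FRONT);
glStencilFunc(GL_ALWAYS, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR_WRAP_EXT);

/* else (back facing) decrement */
glActiveStencilFaceEXT(GL_BACK);
glStencilFunc(GL_ALWAYS, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_DECR_WRAP_EXT);

draw_shadow_volume();               /* one submission covers both */
```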

Lars
09-18-2002, 06:17 AM
You seem to be focused on stencil volumes, but with two-sided stencil you can define totally different stencil state for each side, including reference value, mask, function and operations.

That is a bit more that has to be done per triangle.

Lars

(But it would be nice if it worked on pre-NV30/R300 hardware ;-))