OpenGL & NVidia extensions

Hi all,
My team is now starting to use many of the NVIDIA extensions, such as register combiners (which are really powerful indeed…), vertex array range and shader customisation, and we are looking forward to the GeForce3 capabilities…

The problem is that our applications are becoming more and more dependent on NVIDIA graphics cards (their extensions are really useful, and we want those cutting-edge effects in our applications) and are no longer good old portable OpenGL…

Here comes my question:
Will the technical directions and improvements chosen by NVIDIA soon become OpenGL standards or not?
If not, does it mean that all those brain efforts to get into NVIDIA’s extensions are useless, because OpenGL will choose completely different technical solutions for the same results?
Are we wrong to spend nights understanding those F@%?!!g combiners, even though the results are wonderful…

Waiting for your critiques and feelings…

David

Well I’d rather stay in the standard part of OpenGL.
Even if NVIDIA’s extensions become standard, the functions will at least be renamed, maybe worse: NVIDIA doesn’t follow the EXT_ naming convention.
There might be worse alterations to the extension APIs before they get accepted by the ARB.

And I don’t like seeing the whole market in the hands of only one 3D card developer… But that’s another story.

Originally posted by Sancho:

Here comes my question:
Will the technical directions and improvements chosen by NVIDIA soon become OpenGL standards or not?

The first step has already been made: the official OpenGL extension registry now contains GL_NV_register_combiners and other cool NV extensions…


If not, does it mean that all those brain efforts to get into NVIDIA’s extensions are useless, because OpenGL will choose completely different technical solutions for the same results?

OpenGL is an open library, so the extension mechanism is an inherent part of GL. If you want to use the latest (admittedly vendor-specific) 3D innovations, you will probably use OpenGL extensions. If you need hardware portability more, you will use the core OpenGL functionality. You have the choice, and that is the main thing; that is why we like OpenGL.
Alexei.

P.S. And I don’t think “those brain efforts” are useless, because NV cards are very popular (at least in Russia).

[This message has been edited by Alexei_Z (edited 05-10-2001).]

Sly, I feel you’re being unfair to nVidia. I am glad that they (and ATI and SGI) develop extensions that allow us to use more powerful facilities of our graphics cards. I think extensions are great and a necessary step in evolving the API. If you look at the extension registry, nVidia isn’t the developer with the most extensions, SGI is.

There is a certain process to developing an extension (I don’t know it in detail), but it involves one company proposing the extension (like GL_NV_register_combiners); then, once support from other companies is introduced, it may become an ARB extension (I think). I’m pretty certain nVidia is following the rules and respects all the naming conventions, including EXT (as would ATI and SGI).

It may not be the case that all extensions will become standard, but some will. Which? I can’t say - probably no one knows. But just think what would happen if we didn’t have extensions. OpenGL 1.0 didn’t have texturing. Where would we be if no one had introduced that as a new feature? I use quite a lot of OpenGL 1.2 functionality and I’d be lost if nVidia didn’t provide support for it - Microsoft sure as hell doesn’t want to.

I’d go for it with the extensions, even commercially. You could always program with #ifdefs for lower-spec’d cards. You’d also give us good reason for buying the newer cards. At worst, if extensions change names to ARB or EXT, you’d have to change one or two function pointers and recompile - the function behaviour shouldn’t change. As you probably know, David, the price you pay for added speed and functionality is loss of portability - true no matter what the API - but you can make your code portable anyway with ugly #ifdefs.
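
Something like this, for example (just a sketch; USE_NV_COMBINERS is a made-up build switch and the real combiner setup is left out):

#include <GL/gl.h>
#include <GL/glext.h>   // for GL_REGISTER_COMBINERS_NV

// Hypothetical compile-time switch: define USE_NV_COMBINERS only in the
// build aimed at GeForce-class cards, leave it undefined elsewhere.
void setupFragmentColoring()
{
#ifdef USE_NV_COMBINERS
    glEnable(GL_REGISTER_COMBINERS_NV);
    // ... glCombinerInputNV / glCombinerOutputNV setup would go here ...
#else
    // Portable fallback: plain modulate texture environment.
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
#endif
}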

Hopefully Matt or Cass will step in and explain the extension naming convention. In the meantime, David, check out the extension registry at http://oss.sgi.com/projects/ogl-sample/registry/ - it’s got some information on the naming process and multi-vendor support.

Sorry for the rant. Just my two cents

Sancho, it’s likely that most of these extensions (at least, the more powerful ones) won’t become “part of OpenGL” (in the form of EXT or ARB extensions). For example, ATI has suggested a completely different interface for vertex shaders, and that interface could become the ARB extension instead. NVIDIA extensions like combiners are pretty “to the metal” and won’t fit other chips. That’s one thing NVIDIA seems to be proud of (that its OpenGL extensions expose the hardware better than Direct3D does).

What I’d do is, if possible, use extensions that are supported by both NVIDIA and ATI (these being the major players in the consumer 3D market). For example, I’d try to use EXT_texture_env_dot3 and EXT_texture_env_combine instead of NV_register_combiners, even though the combiners are more powerful.

This can be a little difficult, but I think that it’s better to address more than one chip family. If combiners prove a lot more efficient for your needs, then you can decide to still implement ATI compatible code for these features, when possible.
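
For instance, the dot3 part can be set up through the portable texture environment extensions roughly like this (only a sketch, assuming the EXT_texture_env_combine / EXT_texture_env_dot3 tokens from glext.h and that both extensions were found in the extension string):

#include <GL/gl.h>
#include <GL/glext.h>   // EXT_texture_env_combine and EXT_texture_env_dot3 tokens

// Compute dot3(texture RGB, primary color RGB) on the current texture unit.
// Assumes the normal map is bound here and the light vector is encoded in
// the per-vertex (primary) color.
void setupDot3TexEnv()
{
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT,  GL_DOT3_RGB_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT,  GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT,  GL_PRIMARY_COLOR_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);
}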

Sly, since extension functions are pointers within your code, it should be easy to make them point to a function with a different name taken from an extension with a different name, as long as the function interface wasn’t changed when moving the extension to an ARB one.
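
For example (purely hypothetical names, just to show the idea):

#include <windows.h>
#include <GL/gl.h>

// Made-up extension function that exists under a vendor name and, later,
// under a promoted ARB name with an unchanged signature.
typedef void (APIENTRY *PFNGLWHIZBANGPROC)(GLenum mode);

PFNGLWHIZBANGPROC pglWhizBang = 0;

void loadWhizBang()
{
    // Prefer the ARB name, fall back to the vendor name; the rest of the
    // code only ever calls through pglWhizBang, so nothing else changes.
    pglWhizBang = (PFNGLWHIZBANGPROC) wglGetProcAddress("glWhizBangARB");
    if (!pglWhizBang)
        pglWhizBang = (PFNGLWHIZBANGPROC) wglGetProcAddress("glWhizBangNV");
}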

ffish, OpenGL 1.0 didn’t have an extension mechanism, either. And AFAIK it did have texturing, but not texture binding. And I don’t think that #ifdefs are good for choosing a rendering pass, since you have to do that at runtime, not compile time. “You’d also give us good reason for buying the newer cards” - I don’t think that this is high on a developer’s wish list. If I have to buy a new card to run someone’s program effectively, I might choose not to buy it (especially if I just recently bought a well-featured 3D card like a Radeon). It’s better that the developer addresses all cards as well as possible, IMO (i.e., provides the best image quality and speed for every accelerator). A tough task indeed…

> If you need hardware portability more, you will use the core OpenGL functionality. You have the choice, and that is the main thing; that is why we like OpenGL.

Exactly. And that’s why I stick to code compatible with at least 80% of the cards.
The happy few who have a GeForce3 will want the full power of their card to be used by the 3D engine, of course.
But you can’t release a video game saying “GeForce only, sorry”.
The solution would be to code several low-level modules: one for the GeForce3, one for the GeForce 1/2, one for cards without multitexturing, one for the other cards… But I don’t have enough time for this now.

And of course, when an extension gets officially included in standard OpenGL and/or is supported by most of the cards, I use it. I’m very happy with ARB_multitexture.
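
For what it’s worth, the multitexturing path boils down to something like this (a sketch; the lightmapped quad and the pgl* pointer names are just for illustration):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // PFNGL...ARBPROC typedefs and the GL_TEXTUREi_ARB tokens

PFNGLACTIVETEXTUREARBPROC   pglActiveTextureARB   = 0;
PFNGLMULTITEXCOORD2FARBPROC pglMultiTexCoord2fARB = 0;

// Called once at startup, after checking for GL_ARB_multitexture.
void initMultitexture()
{
    pglActiveTextureARB   = (PFNGLACTIVETEXTUREARBPROC)   wglGetProcAddress("glActiveTextureARB");
    pglMultiTexCoord2fARB = (PFNGLMULTITEXCOORD2FARBPROC) wglGetProcAddress("glMultiTexCoord2fARB");
}

// Draw one quad with a base texture on unit 0 and a lightmap on unit 1.
void drawLitQuad(GLuint baseMap, GLuint lightMap)
{
    pglActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseMap);

    pglActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightMap);

    glBegin(GL_QUADS);
        pglMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
        pglMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);

        pglMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        pglMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);

        pglMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
        pglMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
        glVertex3f(1.0f, 1.0f, 0.0f);

        pglMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
        pglMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, 0.0f);
    glEnd();
}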

Looks like this issue has been covered pretty well. I would add that hardware innovation is a speculative task: some things you know will get used, other things you hope will. Register combiners expose “everything” GeForce-class hardware can do in fragment coloring in one (maybe not-so-simple) extension. If you want to do something with a GeForce, you can quickly determine whether it’s possible at all using register combiners.

We hope that there are enough GeForce users out there that using the NV extensions is attractive.

You can still write an app that has only one code path, but it will necessarily be least-common-denominator. Even if you targeted only NVIDIA hardware, you’d need to use different extensions across TNT/GeForce(1|2)/GeForce3 to get to all the functionality available. It is better to find software mechanisms to reduce the burden of supporting diverse hardware than to simply settle for the LCD.

There are a number of groups working on software solutions that strive to insulate apps from the hardware (or even graphics API) specifics. The idea is that you get the best possible representation on whatever hardware is available. This is a hard problem, but there are some really smart people working on it.

Hope this rambling helps…

Cass

ET3D, comments noted.

Re OpenGL 1.0: we have some ancient SGI Indys at uni with 1.0, and I was just upset when a simple program I wrote using texturing wouldn’t compile, since it didn’t recognize some of my glTex* calls. Apart from those computers, 1.0 is before my time, so I was talking out of ignorance.

Re #ifdefs: I agree that too many are a bad thing, but sometimes they are a necessary evil, like using gotos (very occasionally). I like to see developers push the hardware, and I for one am grateful to nVidia for their extensions, so I’ll use #ifdefs if I have to. I have access to an Onyx2, so I do use #ifdefs to distinguish between my nVidia card (at home) and the Onyx (at uni).

Re buying new cards: I don’t think a program should have a high baseline, but it should take advantage of higher-end features if the card supports them. I agree developers shouldn’t require a hardware upgrade to run their program, but I am happy to have my GeForce2 when I play a game that uses extension features.

BTW, #ifdefs aren’t visible at runtime, or even at compile time, since they are handled by the preprocessor. This does mean a developer would have to recompile for every target platform, true, but there is no runtime cost, unlike using global conditional flags in if or switch statements. I hope that’s right, or all my programming knowledge will be turned upside down!

I’m not trying to argue against your and Sly’s point that portability is good - I agree - but I’m a technophile and I like features that expose the hardware and advance hardware development. Personally, I’d say program to the lowest common denominator, but have code that enables higher-end features as an option.

Cass, keep up the good work. This board is evidence of how popular extensions are:

We hope that there are enough GeForce users out there that using the NV extensions is attractive.

I guess I’m up to four cents now

P.S. Good luck with the GeForce3, ET3D!!

Thanks for all your answers and comments!

David.

I think the best way around the problem, rather than using if statements all over the place, is simply to use function pointers.

During startup, work out what extensions are available for rendering etc. and set up a list of rendering-related function pointers. Then, instead of calling render functions directly, call them through the appropriate function pointer. This way, you get portability and no runtime cost, without recompiling for every platform.

Either that, or separate out your graphics calls into a .dll like most game developers do.
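
In code, it could look something like this (just a sketch; the drawPass_* functions and hasExtension() are made-up names):

#include <cstring>
#include <GL/gl.h>

// Crude substring check of the extension string; good enough for a sketch
// (a token-by-token comparison is stricter).
bool hasExtension(const char* name)
{
    const char* ext = (const char*) glGetString(GL_EXTENSIONS);
    return ext && std::strstr(ext, name) != 0;
}

// One function per render path; the real bodies would live elsewhere.
void drawPass_RegisterCombiners() { /* NV_register_combiners setup + draw */ }
void drawPass_Dot3()              { /* EXT_texture_env_dot3 setup + draw */ }
void drawPass_Plain()             { /* single-texture fallback */ }

// The engine always calls through this pointer.
void (*drawPass)() = drawPass_Plain;

// Called once at startup, after the GL context exists.
void initRenderPath()
{
    if (hasExtension("GL_NV_register_combiners"))
        drawPass = drawPass_RegisterCombiners;
    else if (hasExtension("GL_EXT_texture_env_dot3"))
        drawPass = drawPass_Dot3;
    // else keep the plain fallback
}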

Originally posted by Benjy:
[b]I think the best way around the problem, rather than using if statements all over the place, is simply to use function pointers.

During startup, work out what extensions are available for rendering etc. and set up a list of rendering-related function pointers. Then, instead of calling render functions directly, call them through the appropriate function pointer. This way, you get portability and no runtime cost, without recompiling for every platform.

Either that, or separate out your graphics calls into a .dll like most game developers do.[/b]

It’s exactly what I do!

[This message has been edited by Sancho (edited 05-11-2001).]

Err… that’s what I do too! Dunno why I got carried away talking about #ifdefs. Must be all the assignments I’ve got due that are messing with my head.

How about this: use different techniques depending on the board you have… for example:

if it supports register_combiners => use them
else if it supports texture_dot3 => use it!
else if it supports texture_add/combine etc. => use it
else don’t use it

It takes a bit of work, but it’s worth it, because then you have an engine which looks great on a GeForce (and on a GeForce3 with texture_shader, if it has it, so use it), but which works on other boards too without looking too bad.

And if you are very, very good, you will find someone who uses an ATI Radeon at home and who can possibly set up:

if texture3d => use it

Why not? It’s great… in the end it’s just the setup for the rendering; the rendering itself is the same (if you organised your code well)… Hope that gave you an idea… and yes, it’s more work, but yes, it’s worth it.
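
The “does it support X” test itself can be written like this (just a sketch; supportsExtension is a made-up helper, and it compares whole space-separated names so a short extension name can’t accidentally match the start of a longer one):

#include <cstring>
#include <GL/gl.h>

// Token-by-token search of the extension string: only a whole,
// space-separated name counts as a match.
bool supportsExtension(const char* name)
{
    const char* ext = (const char*) glGetString(GL_EXTENSIONS);
    if (!ext || !name || !*name)
        return false;

    const std::size_t len = std::strlen(name);
    const char* pos = ext;
    while ((pos = std::strstr(pos, name)) != 0)
    {
        bool startsToken = (pos == ext) || (pos[-1] == ' ');
        bool endsToken   = (pos[len] == ' ') || (pos[len] == '\0');
        if (startsToken && endsToken)
            return true;    // matched a complete extension name
        ++pos;              // partial match only, keep searching
    }
    return false;
}

// e.g.  if (supportsExtension("GL_NV_register_combiners")) { /* use them */ }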