View Full Version : OpenGL 2.0 too cutting edge?



bismuti@boeing
01-26-2005, 11:37 AM
Here at Boeing we are developing software for our engineers and customers that is due to be released in roughly 6 months.

We would like to come up with a standard that we require our customers to support, one that is as flexible as possible (hardware- and OS-independent).

One proposal is to draw the line at OpenGL 2.0; in other words, require our users to have on their machine an implementation of OpenGL that is compliant with version 2.0, and then design our software carefully based on this criterion.

There are two concerns:

1. Version 2.0 is too cutting edge. It may be too much of a burden to require users to have this version running, as opposed to, say, version 1.5. (Mesa, for example, is still at 1.5.)

2. We are interested in using the OpenGL Shading Language, and in particular wish to use a non-proprietary version of the shading language such as Cg. This may or may not be a problem: we are currently using NVidia cards, and it is not clear to me whether I can develop using a shading language other than Cg.

Any feedback would be appreciated.

Peter Bismuti
Boeing
Renton, WA

PkK
01-26-2005, 11:54 AM
The OpenGL shading language is available through the GL_ARB_vertex_shader and GL_ARB_fragment_program extensions on both Nvidia and ATI cards. Requiring OpenGL 1.5 plus these extensions would give you the features you need and still be supported on more platforms: Nvidia GeForce FX and above, ATI Radeon 9500 and above, 3Dlabs Wildcat Realizm.

PkK
01-26-2005, 12:01 PM
Sorry, I meant GL_ARB_vertex_shader and GL_ARB_fragment_shader.

Jan
01-26-2005, 01:39 PM
If you are working for Boeing, that probably means you want to show off your airplane models.

In this case you
a) certainly have a very complex model (many vertices...)
b) certainly want it to look impressive

That means: go with the latest stuff. If you are working with high-res models, you need a powerful graphics card, meaning a GeForce FX or better, or a Radeon 9500 or better.

And those have all you need anyway, including GLSL.

And look at the prices: a GeForce 6 (6600) costs 180 euros. Is that really such a problem for your users?

Considering that it has to be ready in 6 months, which is a long time in the field of computer graphics, I would really not bother about GeForce 4 and Radeon 9200 cards.

Hope that helps,
Jan.

al_bob
01-26-2005, 02:17 PM
Originally posted by bismuti@boeing:
2. We are interested in using the OpenGL Shading language, and in particular wish to use a non-proprietary version of the shading language such as Cg.

Cg will compile down to ARB_fragment_program and ARB_vertex_program, which (AFAIK) are supported by the ATI Radeon 9500 and later GPUs.

Cg won't generate the most optimal code for ATI hardware, though. It would be up to ATI to write a proper back-end for their chips.

Korval
01-26-2005, 04:08 PM
Originally posted by bismuti@boeing:
Version 2.0 is too cutting edge. It may be too much of a burden to require users to have this version running, as opposed to say Version 1.5. (Mesa, for example, is still at 1.5)

Technically, right now, nobody (except maybe 3D Labs) has an implementation of GL 2.0.

The real question is more of what functionality you're looking for. Do you really need serious shader power? If so, requiring some of the ARB shading extensions (ARB_vertex/fragment_shader or ARB_vertex/fragment_program) is reasonable. Otherwise, specifying 1.5 functionality is all you need. None of the shading extensions are part of any pre-2.0 GL version, so you have to ask for them via the extension mechanism.


Originally posted by bismuti@boeing:
We are interested in using the OpenGL Shading language, and in particular wish to use a non-proprietary version of the shading language such as Cg.

There are several things wrong with that statement.

1: Cg is owned by nVidia. It has limited support on non-nVidia hardware and is not recommended for use on such hardware.

2: The OpenGL Shading Language is owned by the ARB as a whole, which is composed of a number of companies, of which nVidia is one.

As such, I would consider Cg more proprietary than glslang. In general, using Cg binds you to some degree to nVidia hardware. While you can compile Cg for ARB shading extensions, the results are optimized for nVidia hardware and run slower than necessary on ATi hardware.

In short: if you aren't going to use ARB_vertex/fragment_program directly, just use glslang. It is supported by both nVidia and ATi (and others), though neither has an implementation that could reasonably be called "final". nVidia's tends to be more stable than ATi's, though.

WyZ
01-27-2005, 09:32 AM
Originally posted by Korval:
While you can compile Cg for ARB shading extensions, the results are optimized for nVidia hardware and run slower than necessary on ATi hardware.

That's also what I thought before I had to convert some of my shaders to Cg in order to add different shading algorithms on an FX5200 (no support for GLSL on those cards).

I was quite amazed to see most of my converted shaders run faster on my ATI X800XT PE (Catalyst 4.12) than their GLSL versions. The conversion consisted mostly of replacing vec3 with float3, so I cannot yet explain (maybe nVidia's compiler does a better job) what caused performance gains as high as 100 fps (from 250 to 350 fps) without any apparent visual differences.
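As a rough illustration of the kind of conversion described above (a hypothetical diffuse shader for illustration only, not one of the shaders mentioned in this thread):

```glsl
// GLSL fragment shader:
uniform vec3 lightColor;
varying vec3 normal;
void main()
{
    float d = max(dot(normalize(normal), vec3(0.0, 0.0, 1.0)), 0.0);
    gl_FragColor = vec4(lightColor * d, 1.0);
}

// Cg equivalent: largely the same code, with vec3/vec4 renamed to
// float3/float4 and inputs/outputs passed explicitly via semantics:
float4 main(float3 normal : TEXCOORD0,
            uniform float3 lightColor) : COLOR
{
    float d = max(dot(normalize(normal), float3(0.0, 0.0, 1.0)), 0.0);
    return float4(lightColor * d, 1.0);
}
```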

I still prefer to code shaders in GLSL, but I would say that, depending on what you intend to do with your shaders, converting from Cg to GLSL or the other way around should take you about 20 minutes per shader (unless they are quite complex). You could even convert only part of your shaders so that you can choose between Cg and GLSL at run time, depending on what hardware is present in the machine.

Having said that, choosing between Cg and GLSL does not seem such a critical decision in your case at this time, and starting with either might allow you to begin experimenting with shaders. You might even try both shading languages and choose which one you prefer in a month or two, knowing that whichever language you decide to stick with, you can convert all your shaders to that language without too much effort.

Hope it helps!

jra101
01-27-2005, 09:49 AM
Originally posted by WyZ:
That's also what I thought before I had to convert some of my shaders to Cg in order to add different shading algorithms on a FX5200 (no support for GLSL on those cards).

GeForce FX 5200 GPUs do support all the GLSL extensions (ARB_fragment_shader, ARB_vertex_shader, ARB_shader_objects, ARB_shading_language_100); perhaps you were running an older driver?

V-man
01-27-2005, 11:51 AM
Originally posted by bismuti@boeing:
2. We are interested in using the OpenGL Shading language, and in particular wish to use a non-proprietary version of the shading language such as Cg. This may or may not be a problem as we are currently using NVidia cards and it is not clear to me whether or not I can develop using a shading language other than Cg.

I suggest you go for GLSL, because drivers are getting better on a monthly basis.
From what I understand, the GLSL compiler on Nvidia is based on Cg.
Still, keep in mind that you may run into bugs.
The hardware should be an FX 5200 or above; best to go with an FX 5900 or above.

Good luck.