glslang... when?

Hi,

Does anyone have any idea when I, an NVIDIA GeForce user, will be able to work with the OpenGL Shading Language?

Thanks

GL_NEVER ???

glEnable(GL_REALLY_REALLY_HOPE_NOT)

When you say “GeForce”, what do you mean? GeForce FX? If so, then you can expect a “glslang” extension for it. If you mean GeForce 1-4, then no. (If you are just developing on a tight budget, you can’t go past the cheap GeForce FX 5200, ~75 USD.)

What do you mean…? glslang won’t be enabled on a GeForce 4??? NOOOOOOOOOOOOOOO

A GF4 doesn’t have the fragment pipeline to support it… Maybe (since the language is split in two) you could have glslang’s vertex part?

I also can’t wait to work with glslang, but until NVIDIA comes out with drivers that support it I guess we just have to use asm.

Originally posted by Lurking:
I also can’t wait to work with glslang, but until NVIDIA comes out with drivers that support it I guess we just have to use asm.

What’s wrong with using Cg? Cg is a pretty nice tool to use atm. Sure beats having to write shaders in asm IMHO.

-SirKnight

To be honest, I really don’t have anything against Cg. The thing is, as much as I would like one standard shading setup for OpenGL, I’d want it to be run by the ARB and not just one company (NVIDIA). I love NVIDIA, but when the ARB backs something you know both NVIDIA and ATI will support those features all the way; when NVIDIA alone runs the shading language, you hand all the decision-making power to NVIDIA. This is one of the reasons I don’t work with DirectX. If NVIDIA keeps going the way it is, ATI could take its place, and then we’d be left with a stagnant Cg. I also like the idea of bringing glslang into the core of OpenGL, which Cg can’t do because it has to support both DirectX and OpenGL.

Originally posted by Lurking:
I also can’t wait to work with glslang, but until NVIDIA comes out with drivers that support it I guess we just have to use asm.

You can use Cg as an offline tool and have it compile shaders into ARB_fragment_program and ARB_vertex_program code. That way you keep your runtime clean of single-vendor APIs without hand-writing low-level program strings.
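Something along these lines (just a sketch; it assumes the ARB_vertex_program entry points have already been fetched from the driver, and that the program text is the output of something like "cgc -profile arbvp1 shader.cg"):

[code]
/* Rough sketch: load an offline-compiled ARB_vertex_program string.
 * Assumes the ARB_vertex_program entry points and tokens are available
 * (glext.h plus wglGetProcAddress/glXGetProcAddress). */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>
#include <string.h>

GLuint load_arb_vertex_program(const char *source)
{
    GLuint prog = 0;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(source), source);

    if (glGetError() != GL_NO_ERROR) {
        GLint pos = -1;
        glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &pos);
        fprintf(stderr, "vertex program error at %d: %s\n", pos,
                (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        return 0;
    }
    return prog;   /* bind with glBindProgramARB + glEnable(GL_VERTEX_PROGRAM_ARB) */
}
[/code]

The fragment side works the same way with GL_FRAGMENT_PROGRAM_ARB and the arbfp1 profile.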

Once GL2 is widely available you’ll be able to clean up your workflow. I certainly look forward to that day, and hope that NVIDIA supports it as well as they currently support GL 1.4.

Robert.

I certainly look forward to that day, and hope that NVIDIA supports it as well as they currently support GL 1.4.

Amen

Reading between the lines of NVIDIA’s FAQs, I’ve gathered that they are feverishly working on GL2. I think when the NV40/R400 cards arrive we’ll have some GL2 functionality in their drivers.

I guess my question is: why wouldn’t we have that functionality on the current NV35s, seeing that these cards have the power to handle fragment shaders well? I just got my 5900 Ultra and am very happy with it, but I don’t see why you wouldn’t be able to use glslang on any NV3x card.

ATI already has glslang support in their latest drivers (Catalyst 3.5), and it works.
Somebody here already posted something along those lines; you can get a Delphi version of it from DelphiGL.

At the moment I prefer asm shaders, because the glslang part of the Catalyst driver is not official and all the entry points changed when updating from Cat 3.4 to 3.5.
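For reference, this is roughly what the shader-object setup looks like with the ARB-style entry points (just a sketch; the names below are the ARB_shader_objects / ARB_vertex_shader ones, and the unofficial Catalyst interface may spell them differently, which is exactly the problem):

[code]
/* Sketch only: compile and link a glslang vertex/fragment pair through the
 * ARB_shader_objects-style interface.  Entry points are assumed to have been
 * fetched from the driver; the unofficial Catalyst names may differ. */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>

GLhandleARB build_program(const char *vs_src, const char *fs_src)
{
    GLhandleARB vs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    GLhandleARB fs = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(vs, 1, &vs_src, NULL);
    glShaderSourceARB(fs, 1, &fs_src, NULL);
    glCompileShaderARB(vs);
    glCompileShaderARB(fs);

    GLhandleARB prog = glCreateProgramObjectARB();
    glAttachObjectARB(prog, vs);
    glAttachObjectARB(prog, fs);
    glLinkProgramARB(prog);

    GLint linked = 0;
    glGetObjectParameterivARB(prog, GL_OBJECT_LINK_STATUS_ARB, &linked);
    if (!linked) {
        char log[1024];
        glGetInfoLogARB(prog, sizeof(log), NULL, log);
        fprintf(stderr, "link failed: %s\n", log);
        return 0;
    }
    return prog;   /* activate with glUseProgramObjectARB(prog) */
}
[/code]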

see you,
Azrael.

Cg is NOT NV-only; it targets ARB_fragment_program too, which is the only fragment-program profile that ATI supports. Moreover, all you get in the end is asm, and as far as I know it should also work on GF3/GF4 under the fp20 profile.
But I think glslang could be layered on top of any card that has register combiners, although that’s a tough task.
As stated, Cg is NVIDIA’s gift to the world, so why shouldn’t we use it?

Another thing: Doom3 is using ARB_vertex & fragment programs, right?

pedrosl:
Another thing: Doom3 is using ARB_vertex & fragment programs, right?

Here’s what JC wrote in his .plan file:

[b] The R300 can run Doom in three different modes: ARB (minimum extensions, no specular highlights, no vertex programs), R200 (full featured, almost always single pass interaction rendering), ARB2 (floating point fragment shaders, minor quality improvements, always single pass).

The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five rendering passes, no vertex programs), NV20 (full featured, two or three rendering passes), NV30 (full featured, single pass), and ARB2. [/b]

Some “funny” facts about ARB vertex/fragment programs, from the specs:

[b]
ARB_vertex_program

Contact Pat Brown, NVIDIA Corporation

IP Status: NVIDIA claims to own intellectual property related to this extension, and has signed an ARB Contributor License agreement licensing this intellectual property.

Microsoft claims to own intellectual property related to this extension. [/b]

[b] ARB_fragment_program

Contact Benj Lipchak, ATI Research

IP Status: Microsoft claims to own intellectual property related to this extension. [/b]

M/\dm/:

As stated, Cg is NVIDIA’s gift to the world, so why shouldn’t we use it?

Thinking that way, Direct3D is Microsoft’s gift to the world too, so why shouldn’t we use it?

Because it ties you down to a single operating platform and forces you to wait for the next version of D3D before new hardware features are revealed.

Cg has neither of these limitations… Cg is a tool, not an API like OpenGL or D3D.

Nutty, that was sarcastic, mind you, and the point was about the “gift” reasoning, not about the tool itself. Should I put some more winks next time to prove it?

That’s a lot of winking faces!

-SirKnight