Any ideas when Nvidia will release their OpenGL drivers that include GLSL?

I have been waiting for over a month now… I am not in the mood to learn any sort of assembler-like language, since I already did that with Intel asm and I don't use it anymore. I am not fond of using Cg either, since it doesn't work well on anything but Nvidia. I want a clean start with GLSL, but it seems Nvidia is not very devoted to supporting this new ARB extension…

Any ideas how long it will take for them to release their new drivers with GLSL support?

I also don't like it, but I find it very wise of Nvidia to delay glSlang support. Maybe we'll have it in the NV40 launch driver.

Originally posted by Zengar:
I also don't like it, but I find it very wise of Nvidia to delay glSlang support. Maybe we'll have it in the NV40 launch driver.

What's so wise about that? The only field where NV was still ahead of ATI was developer support, and now they're falling behind ATI even in that. So after designing not-so-top-notch hardware, losing one exclusive partner after another (Abit was the latest one) and cheating their customers, now they're even struggling with developer support.

The way ATI handled their beta glSlang support is the best way a hardware vendor could go (NV did this with NV30 emulation, for example), so I totally can't understand why it takes NV so long to implement it in their oh-so-great drivers.
Sure, there are glSlang features you can't use on current ATI hardware, but at least you can get started with glSlang.

I won't say that NV will go the same way as 3dfx, because they don't rely on graphics hardware alone. But they aren't cutting a good figure right now.

P.S.: Sorry for my OT rant, but I really don't understand what NV is trying to do… self-destruction is surely not the way to success.

P.P.S.: I read somewhere that NVidia intended to release drivers with glSlang support by the end of the year. But as you may have noticed, it just didn't happen.

[This message has been edited by PanzerSchreck (edited 01-10-2004).]

and they keep waiting…

The way ATI handled their beta glSlang support is the best way a hardware vendor could go (NV did this with NV30 emulation, for example), so I totally can't understand why it takes NV so long to implement it in their oh-so-great drivers.
Sure, there are glSlang features you can't use on current ATI hardware, but at least you can get started with glSlang.

Well, there's something to be said for being able to trust your glSlang compiler. I would rather wait a few weeks or months for a final version of glSlang with a mostly bug-free compiler than have to deal with one that is moderately buggy.

Granted, ATi’s glslang support, not including features that require emulation, is pretty decent.

[This message has been edited by Korval (edited 01-20-2004).]

Given that many errors will only be detected by users trying to do unconventional things, I think it's better to release a beta version of an extension, but tell people that the implementation is still faulty.
If I know that the glSlang support isn't finished, I won't use it in a commercial application, but at least I can play with it.

I could be wrong, but Mesa 6 has support for OpenGL 1.5, so this means glSlang is available to everyone.

Originally posted by imr1984:
I could be wrong, but Mesa 6 has support for OpenGL 1.5, so this means glSlang is available to everyone.

That’s wrong. glSlang is not an OpenGL 1.5 feature, so it’s not included with Mesa 6.

I stand corrected. glSlang is part of GL2 then?

Yes, GLslang is a core part of GL2, but for now it is available through extensions.
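
For reference, here is roughly what that extension route looks like in code - a minimal sketch only, assuming an extension loader such as GLEW and a current GL context, with error handling trimmed down. The entry points and enums are the ones defined by GL_ARB_shader_objects / GL_ARB_fragment_shader:

/* Minimal sketch: compile and link a GLSL fragment shader through the
   ARB extensions (GL_ARB_shader_objects). Assumes GLEW and a valid context. */
#include <GL/glew.h>
#include <stdio.h>

GLhandleARB load_fragment_shader(const char *src)
{
    GLint ok = 0;
    GLhandleARB shader  = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    GLhandleARB program = glCreateProgramObjectARB();

    glShaderSourceARB(shader, 1, &src, NULL);
    glCompileShaderARB(shader);
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    if (!ok) {
        char log[4096];
        glGetInfoLogARB(shader, sizeof(log), NULL, log);
        fprintf(stderr, "compile failed:\n%s\n", log);
        return 0;
    }

    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);
    glGetObjectParameterivARB(program, GL_OBJECT_LINK_STATUS_ARB, &ok);
    return ok ? program : 0;
}

/* Later, when rendering: glUseProgramObjectARB(program); */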

The reason Nvidia is delaying glSlang may be that it isn't compatible with Nvidia's hardware. NV40, if we can believe the rumours, will also contain fixed-point math units (16-bit precision), and glSlang doesn't reflect that. It's the same as with DirectX 9.0: originally it was supposed to contain fixed point, then it was dropped, and Nvidia, which had already designed NV30, had no time to change it. I think fixed-point math is a very powerful feature: all normal calculations can be carried out with it at very high quality. 16-bit fixed point gives about the same precision as ATI's 24-bit floats.
No wonder Nvidia isn't quick to support a feature that doesn't match its hardware.
glSlang has several serious issues - that's my opinion - and I don't understand why everyone is so fond of it. The idea is very nice, but the language is a bit shabby.

FX cards do support the features of GLSlang.
The GLSlang implementation is really just a compiler that compiles GLSlang code into GPU assembly.
In theory you should be able to run GLSlang shaders on a GF3 (although not as blazingly fast as you would like).
I think the reason Nvidia hasn't released it yet is that they want to release a fully functional version instead of a beta like ATI did.
I don't think they will release the new drivers at the launch of NV40, since that's too far off.
My guess is that Nvidia will release a GLSlang-supporting driver within a month.
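
To illustrate the "it's only a compiler" point: below is a trivial GLSL fragment shader (as a C string) and, in the comment underneath, hand-written ARB_fragment_program code of the kind a driver's GLSL front-end could plausibly lower it to. The assembly is purely illustrative, not actual driver output.

/* A one-line GLSL fragment shader as the application would feed it to the
   driver; the driver's compiler turns it into GPU-level code. */
const char *glsl_fs =
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(tex, gl_TexCoord[0].xy) * gl_Color;\n"
    "}\n";

/* Roughly equivalent ARB_fragment_program 1.0 (hand-written, illustrative):
   !!ARBfp1.0
   TEMP c;
   TEX  c, fragment.texcoord[0], texture[0], 2D;
   MUL  result.color, c, fragment.color;
   END
*/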

I mean that glSlang is not compatible with Nvidia's hardware, not the other way round. I just recalled all the PACK/UNPACK instructions. They ARE useful, but I don't see support for them in glSlang. Don't take me too seriously, but:
Why should Nvidia support a shading language that doesn't match its hardware? It will be the same situation as with DX.
Since different programmable hardware has different features, it is far too big a simplification to create the one SL. It will need another extension mechanism if it wants to be "the one", and we will end up writing different paths for different cards anyway. I don't consider that bad, quite the opposite: different versions of the same SL shader are not as annoying as different versions of assembly programs (written in different assembly dialects). I liked Cg's profile functions.
Why not something like:

#ifdef GeForceFX
    // GeForce FX specific path
#else
#ifdef Radeon
    // Radeon specific path
#endif
#endif

?
Wouldn't that be best? I don't see any form of API that would unify the hardware. Nvidia will never encourage glSlang in its current form - and not because they are bad, selfish and arrogant (well, maybe later), but because they have a different approach.
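
A rough sketch of how an application could approximate those per-vendor #ifdef paths with plain glSlang today: glShaderSourceARB takes an array of strings that get concatenated, so the application can prepend its own #define based on the vendor it detects. The GEFORCE_FX / RADEON macro names are made up for this example:

/* Sketch: vendor-specific paths inside one glSlang shader by prepending an
   application-chosen #define. GEFORCE_FX / RADEON are made-up macro names;
   glShaderSourceARB simply concatenates the string array. */
#include <GL/glew.h>
#include <string.h>

void set_shader_source(GLhandleARB shader, const char *body)
{
    const char *vendor = (const char *) glGetString(GL_VENDOR);
    const char *define = strstr(vendor, "NVIDIA") ? "#define GEFORCE_FX 1\n"
                                                  : "#define RADEON 1\n";
    const char *parts[2];
    parts[0] = define; /* prepended before the shader body */
    parts[1] = body;   /* body can then use #ifdef GEFORCE_FX ... #endif */
    glShaderSourceARB(shader, 2, parts, NULL);
}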

@NVidia :
I can think of two reasons why NV hasn't published a driver set with glSlang support. The first is that they're slightly upset that the ARB decided against Cg as the GL shading language, so NVidia wants to hinder glSlang's success. The second reason (maybe more likely than the first) is that they kind of "panicked". They haven't had really competitive hardware for two generations (if you count the R9700 as one generation and the R9800 as the other), and rumor has it that ATI's R420 is one month ahead of NVidia's NV40. And since the R420 is said to have already taped out, NVidia surely has better things to do than implement glSlang support. So I think they won't have it in their drivers until the NV40 is out.

@ATI's beta glSlang implementation :
As already stated: it's better to have a beta implementation you can play with than to have nothing. Since ATI released their first driver set with glSlang, I've been dabbling with it for many hours, and I've enjoyed using it thanks to its clear high-level syntax (although I don't really like C++ )

R300 and R350 are in the same generation.
I don't know the codenames for the 9600XT and 5700, but it looks like Nvidia did something right with the 5700.

There are cases where the 5700 is faster and vice versa, instead of it always being behind.
Example: http://www.hardocp.com/article.html?art=NTU5LDE=

As for GLSL… I don't know. Why don't you get in touch with one of the developers?

Originally posted by Zengar:
[b] I mean that glSlang is not compatible with Nvidia's hardware, not the other way round. I just recalled all the PACK/UNPACK instructions. They ARE useful, but I don't see support for them in glSlang. Don't take me too seriously, but:
Why should Nvidia support a shading language that doesn't match its hardware? It will be the same situation as with DX.
Since different programmable hardware has different features, it is far too big a simplification to create the one SL. It will need another extension mechanism if it wants to be "the one", and we will end up writing different paths for different cards anyway. I don't consider that bad, quite the opposite: different versions of the same SL shader are not as annoying as different versions of assembly programs (written in different assembly dialects). I liked Cg's profile functions.
Why not something like:

#ifdef GeForceFX
    // GeForce FX specific path
#else
#ifdef Radeon
    // Radeon specific path
#endif
#endif

?
Wouldn't that be best? I don't see any form of API that would unify the hardware. Nvidia will never encourage glSlang in its current form - and not because they are bad, selfish and arrogant (well, maybe later), but because they have a different approach.[/b]

There's nothing preventing nVidia from adding fixed-point math, pack/unpack and other features to GLSL. They just need a regular extension.
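
For completeness, glSlang already has a hook for exactly that: the #extension directive in the shader source. The extension name below is purely hypothetical (no such extension exists); it only shows the mechanism a vendor would use to expose extra features.

/* Hypothetical example: GL_NV_fixed_point is NOT a real extension; it only
   illustrates glSlang's existing #extension mechanism. With "require" instead
   of "enable", compilation would fail on drivers that don't know the name. */
const char *fs_with_vendor_extension =
    "// hypothetical extension name, for illustration only\n"
    "#extension GL_NV_fixed_point : enable\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0);\n"
    "}\n";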

I think the main reason nVidia doesn't support GLSL yet is that they are behind schedule, not that they don't intend to support it. Remember that adding GLSL support is a large project. It can easily take half a year to get to beta status, depending on how many people you put on the project, of course. If nVidia started implementing it around the time the spec was ratified, they may still be a couple of months away from beta status.

Humus, about a week ago I started writing my own HLSL compiler. It's a Pascal-like language, rather simple (but no less powerful than glSlang). I haven't spent a lot of time on it so far, but it can already produce some useful output. It converts the shader to ARB_vp, ARB_fp or NV_vp2, NV_fp. If I continue at the same pace, it'll be ready within a month. Now look: if I, a person with no experience with compilers, can do it this fast, Nvidia can do it much faster. Don't forget: they have their unified compiler and Cg; they could simply combine the two. It took me about two hours to write a full-featured lexical scanner! So I assume it's not so difficult to make the Cg compiler support glSlang.
So: after my own experience with compilers, I don't think they are that difficult to write. They don't have to cover complicated memory management, stacks, OOP, etc.
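
Just to back up the "a lexer is quick to write" claim, here is roughly what the core of such a scanner boils down to - a made-up, minimal sketch for illustration, not related to any compiler mentioned in this thread:

/* Minimal tokenizer sketch for a shader-like language: identifiers, numbers
   and single-character symbols. Illustration only. */
#include <ctype.h>
#include <stdio.h>

typedef enum { TOK_IDENT, TOK_NUMBER, TOK_SYMBOL, TOK_END } TokenKind;

typedef struct {
    TokenKind kind;
    char text[64];
} Token;

/* Reads one token starting at *src and advances the pointer past it. */
static Token next_token(const char **src)
{
    Token t = { TOK_END, "" };
    const char *p = *src;
    size_t n = 0;

    while (isspace((unsigned char) *p)) p++;               /* skip whitespace */
    if (*p == '\0') { *src = p; return t; }

    if (isalpha((unsigned char) *p) || *p == '_') {         /* identifier */
        t.kind = TOK_IDENT;
        while (n < sizeof(t.text) - 1 &&
               (isalnum((unsigned char) *p) || *p == '_')) t.text[n++] = *p++;
    } else if (isdigit((unsigned char) *p)) {               /* number literal */
        t.kind = TOK_NUMBER;
        while (n < sizeof(t.text) - 1 &&
               (isdigit((unsigned char) *p) || *p == '.')) t.text[n++] = *p++;
    } else {                                                /* one-char symbol */
        t.kind = TOK_SYMBOL;
        t.text[n++] = *p++;
    }
    t.text[n] = '\0';
    *src = p;
    return t;
}

int main(void)
{
    const char *src = "color = tex0 * 0.5;";
    Token t;
    while ((t = next_token(&src)).kind != TOK_END)
        printf("%d '%s'\n", (int) t.kind, t.text);
    return 0;
}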

nVidia WILL support glSlang; they already have glSlang-enabled drivers available in a limited beta. It's just a question of time.

@Zengar : That has already been done, but only for ARB_vp/ARB_fp: http://www.sourceforge.net/projects/fxpascal
I used it before glSlang and it's a great alternative.

So: after my own experience with compilers, I don't think they are that difficult to write. They don't have to cover complicated memory management, stacks, OOP, etc.

And what about the features that nVidia (and ATi) cards don't support, like looping in the fragment shader, texture accesses (ugh) in the vertex shader, etc.? The latter can be emulated on the CPU relatively easily, but the former requires a full software rasterizer. That is not trivial to write.
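
As a footnote on the vertex-shader texture access point: ARB_vertex_shader at least lets an application detect the situation, since GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB may legally be reported as 0, and the app can then fall back to its own CPU path. A small sketch, again assuming GLEW:

/* Sketch: check whether vertex shaders can sample textures at all.
   ARB_vertex_shader allows GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB to be 0,
   in which case the application must fall back (e.g. to a CPU path). */
#include <GL/glew.h>
#include <stdio.h>

int vertex_texturing_available(void)
{
    GLint units = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, &units);
    printf("vertex texture image units: %d\n", units);
    return units > 0;
}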