Please improve the glSlang spec!

I've been working with glSlang ever since ATI released their first drivers with support for it.

All in all, I think it's quite a nice language, but then any high-level language is "quite nice" compared to an assembly language.

But now I am trying to do somewhat more complex stuff than just bump mapping, and I quickly stumbled across annoying limitations of glSlang.

One thing is that I cannot get the inverse of a matrix (for example, the modelview matrix).

In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of??

BTW: I tried to compute that inverse myself, but I get horrible calculation errors due to floating-point errors (or "double"-point errors :P ).

I know this has been said before, but seeing how slowly the ARB works sometimes, they might have forgotten about it already.

And since I am going to buy a GeForce 6 someday anyway, I wouldn't mind if nVidia added such things on their own.

Do I have to go back to ARB_fp now to get my app running? It seems so. That's really sad.

Jan.

In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of??
That’s not a hardware feature.

BTW: I tried to compute that inverse myself, but I get horrible calculation errors due to floating-point errors
Then you’re probably doing it wrong.

Yeah, I have the same thought about that, Jan.

I want GLSL to provide built-in inverse matrix computation too, for example for projective texturing or shadow mapping.
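Just to make that concrete, here is a rough sketch (not anybody's actual code) of the eye-space shadow-mapping setup: the application has to load a texture matrix with bias * lightProjection * lightView * inverse(cameraView), and that inverse is exactly the kind of thing we currently have to compute on the CPU ourselves.

// Sketch of eye-space shadow mapping. Assumes the application has
// loaded texture matrix 1 with bias * lightProjection * lightView *
// inverse(cameraView) -- the inverse is computed on the CPU today.
varying vec4 shadowCoord;

void main()
{
    gl_Position = ftransform();

    // Vertex position in eye space, then into the light's biased
    // clip space for the projective shadow-map lookup.
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    shadowCoord = gl_TextureMatrix[1] * eyePos;
}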

Hello, I think it is very easy to compute the inverse or the transpose of any matrix in the host application and load it as a uniform. This took me about 15 minutes, because I didn't have an invert routine in my source yet, and I get no rounding errors caused by floating point (at least, no noticeable errors). It would be nice if GLSL provided built-in uniforms that do the inverse/transpose for us, but it isn't a hard task to do it by hand :)
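For what it's worth, the GLSL side of that workaround is just a user-defined uniform. A minimal sketch (the uniform name is made up; the application computes the inverse on the CPU and uploads it with glUniformMatrix4fvARB whenever the modelview matrix changes):

// Sketch only: u_ModelViewInverse is a made-up name; the application
// computes the inverse of the modelview matrix on the CPU and uploads
// it with glUniformMatrix4fvARB whenever the modelview changes.
uniform mat4 u_ModelViewInverse;

varying vec3 objViewDir;

void main()
{
    gl_Position = ftransform();

    // Example use: the eye position (the origin of eye space) expressed
    // in object space, which gives a per-vertex view direction.
    vec4 eyeInObj = u_ModelViewInverse * vec4(0.0, 0.0, 0.0, 1.0);
    objViewDir = eyeInObj.xyz - gl_Vertex.xyz;
}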

P.S: I’m almost sure that the inverse/transpose calculation isn’t done in the gpu.

Originally posted by Korval:
In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of??
That's not a hardware feature.

Does that change anything?

Then you’re probably doing it wrong.

The only way I know is Gaussian elimination. I'm doing it correctly, that's for sure, but I know that algorithm is not the most numerically stable one.

Wait, I just found something using Cramer's rule; I'll have to try that.

Anyway, wouldn't it be perfectly logical to give the programmer all the data that is available anyway?

The glSlang spec definitely has to be improved. There are other threads which address several other issues. I only wanted to make clear that there is a need for certain functionality and that people are not satisfied with its current state.

I hope the ARB doesn't see this completely differently.

Jan.

You can use gl_NormalMatrix.

gl_NormalMatrix is a mat3; it is the transpose of the inverse of the upper 3x3 of the modelview matrix.

When you're wrestling with the math in a vertex shader, you'll find gl_NormalMatrix very useful in computations.
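For example, the usual eye-space normal setup:

// Typical use of gl_NormalMatrix: bring the normal into eye space
// so it can be compared against eye-space light directions.
varying vec3 eyeNormal;

void main()
{
    gl_Position = ftransform();
    eyeNormal   = normalize(gl_NormalMatrix * gl_Normal);
}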

yooyo

OK, with Cramer's rule I get perfect inverse matrices. One problem solved.
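For anyone curious, Cramer's rule here just means adjugate divided by determinant. A sketch of the 3x3 case, written as a GLSL helper purely for illustration (inverse3 is not a built-in; Jan is doing the equivalent 4x4 cofactor expansion on the CPU):

// Cramer's rule for a 3x3 matrix: inverse = adjugate / determinant.
// For 3x3 the cofactors are simply cross products of the columns.
mat3 inverse3(mat3 m)
{
    vec3 a = m[0];                     // GLSL matrices are column-major
    vec3 b = m[1];
    vec3 c = m[2];

    float det = dot(a, cross(b, c));   // scalar triple product

    // The rows of the inverse are the scaled cofactor vectors.
    vec3 r0 = cross(b, c) / det;
    vec3 r1 = cross(c, a) / det;
    vec3 r2 = cross(a, b) / det;

    // The mat3 constructor takes columns, so transpose the rows here.
    return mat3(vec3(r0.x, r1.x, r2.x),
                vec3(r0.y, r1.y, r2.y),
                vec3(r0.z, r1.z, r2.z));
}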

yooyo, thanks, but I cannot use that matrix; I actually need the inverse of the modelview-projection matrix, which is not available in glSlang.

Anyway, this is not about one specific problem, but about glSlang in general. It's not perfect (although it's a very good start already), and everyone should agree on that and try to make it perfect.

Jan.

Does that change anything?
I was correcting a factual error in your post.

Anyway, wouldn't it be perfectly logical to give the programmer all the data that is available anyway?
Is it available? How do you know that specific code didn’t have to be written for ARB_vp in each implementation to allow for inverse matrices and so forth to be exposed to the user?

The glSlang spec definitely has to be improved.
This is certainly true. Glslang is not a good shading language. It is adequate, but not good. Some bad choices were made.

Though I doubt any of them are going to be fixed in the near future.

The driver would most certainly need some logic to detect relevant changes in the various matrices, so it could produce timely inverses, and so on. I doubt that this is a trivial task for the driver to perform.

Originally posted by Jan:

yooyo, thanks, but I cannot use that matrix; I actually need the inverse of the modelview-projection matrix, which is not available in glSlang.

I'm not sure, but I tried dumping nvoglnt.dll and I found the string gl_ModelViewProjectionMatrixInverse. It seems that the nVidia driver exposes this matrix in GLSL. I can't check whether this is true, because I don't have an FX-based board right now :( .

Here are more strings:
gl_ModelViewMatrixInverse
gl_ProjectionMatrixInverse
gl_ModelViewProjectionMatrixTranspose
gl_ModelViewProjectionMatrixInverse
gl_TextureMatrixInverse

yooyo

If these matrices can be created by the hardware without any cost, then I think it is OK, but I would rather create these matrices myself when I need them instead of having the driver create them every time.

However, if the driver could detect that the shader requests a matrix and only then calculate it, it could be a good feature to have them in the GLSL API.

I find it more important to have a standard error-reporting syntax and an API to find out what is hardware-accelerated and what is not!

Glslang is not a good shading language. It is adequate, but not good. Some bad choices were made.

Though I doubt any of them are going to be fixed in the near future.
What issues with GLSL do you think make it not good enough? Would you rate Cg or HLSL as good?

I have checked nVidia's GLSL matrix support, and all of these matrices can be used in GLSL shaders. I'm using FW 61.12 on an FX 5900. Here's a GLSL example:

  
------------- nvtest.vert -------------
varying vec4 col;

void main()
{
	gl_Position = ftransform();
	
	// Uncomment any of following lines
	col = gl_ModelViewMatrixInverse * gl_Vertex;
	//col = gl_ProjectionMatrixInverse * gl_Vertex;
	//col = gl_ModelViewProjectionMatrixTranspose * gl_Vertex;
	//col = gl_ModelViewProjectionMatrixInverse * gl_Vertex;
	
}
 
------------- nvtest.frag -------------
varying vec4 col;

void main()
{
	gl_FragColor = col;
}
 

I know these matrices are not in the GLSL spec, but since GLSL is not "officially" supported by NV and ATI yet, maybe we can request support for these matrices? Could someone check my shader on ATI and 3DLabs hardware, so we can be sure about its usage.

yooyo

yooyo:

I just checked your example shaders on an ATI Radeon 9800 Pro.
But there's no support in Catalyst 4.3!
(I haven't checked with Catalyst 4.4 or 4.5, because they are too slow!)

Originally posted by S&W:
yooyo:

I just checked your example shaders on an ATI Radeon 9800 Pro.
But there's no support in Catalyst 4.3!
(I haven't checked with Catalyst 4.4 or 4.5, because they are too slow!)

Can you send an email to ATI dev. rel. and ask them about it? I'm sure it is a trivial task for the driver developers.

It is better to extend GLSL right now so developers are free to use such features, instead of having to detect whether the hardware supports them and write different codepaths.

yooyo

Maybe (I hope) many of these features will be included in the first GLSL update. I hope it comes soon!

Actually, if the driver were able to store a dirty bit with each matrix, set when a matrix is modified, then it would only need to check this bit for each matrix on program bind, recalculate all the dirty matrices, and then clear the dirty bits. If the driver can do something to this effect, the overhead should be minimal compared to what one would do on the CPU.

But there is still the uncertainty of what constitutes a dirty matrix in the context of a specific program; if the program being bound doesn’t use the inverse(mvp), for example, the driver would need persistent knowledge of this fact in order to avoid unnecessary updates, which probably means some extra steps in the semantic phase of the compiler, and some extra state for the driver.

I’m sure no one would want the driver blindly computing inverses all day. I suspect this added complexity is the reason the official introduction has been delayed. But who knows, it could be that they can’t agree on the matrix names. I’m in favor of shorter names. :slight_smile:

FYI, the following matrices have been added to an interim version of the shading language spec. Hopefully you'll see them in reality sometime soon.

uniform mat4 gl_ModelViewMatrixInverse;
uniform mat4 gl_ProjectionMatrixInverse;
uniform mat4 gl_ModelViewProjectionMatrixInverse;
uniform mat4 gl_TextureMatrixInverse[gl_MaxTextureCoords];

uniform mat4 gl_ModelViewMatrixTranspose;
uniform mat4 gl_ProjectionMatrixTranspose;
uniform mat4 gl_ModelViewProjectionMatrixTranspose;
uniform mat4 gl_TextureMatrixTranspose[gl_MaxTextureCoords];

uniform mat4 gl_ModelViewMatrixInverseTranspose;
uniform mat4 gl_ProjectionMatrixInverseTranspose;
uniform mat4 gl_ModelViewProjectionMatrixInverseTranspose;
uniform mat4 gl_TextureMatrixInverseTranspose[gl_MaxTextureCoords];
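If those make it in, Jan's case from earlier in the thread becomes trivial. A sketch assuming the interim names survive into the final spec:

// Sketch assuming gl_ModelViewProjectionMatrixInverse makes it into
// the final spec: map a clip-space point straight back to object space
// without uploading a hand-computed inverse as a uniform.
varying vec4 objPos;

void main()
{
    gl_Position = ftransform();

    // Round-trips the clip-space position back to (roughly) gl_Vertex;
    // in real use you would transform some other clip-space point.
    objPos = gl_ModelViewProjectionMatrixInverse * gl_Position;
}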

What issues with GLSL do you think make it not good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia’s glslang compiler was playing fast-and-loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.

Originally posted by Korval:
What issues with GLSL do you think make it not good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia's glslang compiler was playing fast-and-loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.

Call me crazy, but those two reasons are things I actually like about GLSL over Cg.
Besides, I think these are just "personal taste" issues, and putting an extra .0 after float constants and a float(…) around casts is no reason to think that a language is not "good".

(Also consider that since GLSL is compiled by the driver, we want as few ambiguities as possible in the code; i.e., if I had a sin(float x) and a sin(int x), I would prefer not to let the driver choose which one is called.)
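To illustrate what we are both talking about, a small sketch of what the current spec demands (u_LightCount is a made-up uniform, purely for illustration):

// Under the current spec there is no implicit int -> float promotion
// and no C-style casts; conversions use constructor syntax instead.
uniform int u_LightCount;   // made-up uniform, purely for illustration

void main()
{
    gl_Position = ftransform();

    // float(u_LightCount) is required; writing u_LightCount * 0.25
    // would be a type error (even though nVidia's compiler accepts it).
    float weight = float(u_LightCount) * 0.25;
    gl_FrontColor = vec4(weight, weight, weight, 1.0);
}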