Please improve the glSlang spec!



Jan
05-22-2004, 04:58 PM
I've been working with glSlang ever since ATI released their first drivers with support for it.

All in all, I think it's quite a nice language, but then a high-level language is always "quite nice" compared to an assembly language.

But now I am trying to do somewhat more complex stuff than just bumpmapping, and I quickly stumbled across annoying limitations of glSlang.

One thing is that I cannot get the inverse of a matrix (for example, the modelview matrix).

In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of?

BTW: I tried to compute that inverse myself, but I get horrible calculation errors due to floating-point error (or "double"-point error :p ).

I know this has been said before, but seeing how slowly the ARB sometimes works, they might have forgotten about it already.

And since I am going to buy a GeForce 6 someday anyway, I wouldn't mind if nVidia added such stuff on their own.

Do I have to go back to ARB_fp now to get my app running? It seems so. It's really sad.

Jan.

Korval
05-22-2004, 05:46 PM
In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of?
That's not a hardware feature.


BTW: I tried to compute that inverse myself, but I get horrible calculation errors due to floating-point error
Then you're probably doing it wrong.

S&W
05-22-2004, 05:49 PM
Yeah, I have the same thought about that, Jan.

I want GLSL to provide the inverse matrices too, for example for projective texturing or shadow mapping.

Ffelagund
05-23-2004, 01:37 AM
Hello. I think it is very easy to compute the inverse or the transpose of any matrix in the host application and load it as a uniform; this took me about 15 minutes, because I didn't have an invert routine coded in my source, and I get no rounding errors caused by floating point (at least, no noticeable errors). It would be nice if glsl provided a built-in uniform that does the inverse/transpose for us, but it isn't a hard task to do it by hand :)

P.S.: I'm almost sure that the inverse/transpose calculation isn't done on the GPU.
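
For example, roughly like this (just a sketch; "u_ModelViewInverse", "program" and the invert4x4 routine are placeholders for whatever names you use in your own code):

------------- host-side sketch -------------
// Fetch the current modelview matrix, invert it on the CPU, and upload
// the result to a user-declared uniform, using the ARB_shader_objects
// entry points that GLSL currently ships with.
// "program" is assumed to be your linked GLhandleARB program object.
GLfloat mv[16], mvInv[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mv);     // column-major, as GL stores it

if (invert4x4(mv, mvInv))                 // your own inverse routine
{
    GLint loc = glGetUniformLocationARB(program, "u_ModelViewInverse");
    glUniformMatrix4fvARB(loc, 1, GL_FALSE, mvInv);
}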

Jan
05-23-2004, 02:28 AM
Originally posted by Korval:

In ARB_fp this is implemented natively, so the data is there anyway; glSlang simply does not give you the ability to access it! Why is that? glSlang is supposed to be future-proof, but how can it be future-proof if it doesn't even expose everything CURRENT hardware is capable of?
That's not a hardware feature.
Does that change anything?



Then you're probably doing it wrong.
The only method I know is Gaussian elimination. I am doing it correctly, that's for sure, but I know that algorithm is not the most numerically stable one.

Wait, I just found something about Cramer's rule; I'll have to try that.

Anyway, wouldn't it be absolutely logical to give the programmer all the data that is available anyway?

The glSlang spec definitely has to be improved. There are other threads that address several other issues. I only wanted to make clear that there is a need for certain functionality and that people are not satisfied with its current state.

I hope the ARB doesn't see this completely differently.

Jan.

yooyo
05-23-2004, 02:53 AM
You can use gl_NormalMatrix.

gl_NormalMatrix is a mat3: the transpose of the inverse of the upper 3x3 of the modelview matrix.

When you're wrestling with the math in a vertex shader, you'll find gl_NormalMatrix very useful.
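
On the host side it corresponds to something like this (just a sketch of the math; the function name and the near-zero-determinant guard are my own choices):

------------- normal matrix sketch -------------
// Compute the equivalent of gl_NormalMatrix on the CPU: the
// inverse-transpose of the upper-left 3x3 of the modelview matrix.
// mv is column-major (GL convention); out is column-major too.
#include <cmath>

bool normalMatrix(const float mv[16], float out[9])
{
    // Upper-left 3x3 of the 4x4 modelview, as rows [a b c; d e f; g h i].
    float a = mv[0], b = mv[4], c = mv[8];
    float d = mv[1], e = mv[5], f = mv[9];
    float g = mv[2], h = mv[6], i = mv[10];

    // Cofactors of the 3x3.
    float A =  (e*i - f*h), B = -(d*i - f*g), C =  (d*h - e*g);
    float D = -(b*i - c*h), E =  (a*i - c*g), F = -(a*h - b*g);
    float G =  (b*f - c*e), H = -(a*f - c*d), I =  (a*e - b*d);

    float det = a*A + b*B + c*C;
    if (std::fabs(det) < 1e-8f)
        return false;                     // singular: no inverse

    // inverse = adjugate/det, and adjugate = transposed cofactor matrix,
    // so the inverse-TRANSPOSE is simply the cofactor matrix / det.
    float r = 1.0f / det;
    out[0] = A*r; out[1] = D*r; out[2] = G*r;   // first column
    out[3] = B*r; out[4] = E*r; out[5] = H*r;   // second column
    out[6] = C*r; out[7] = F*r; out[8] = I*r;   // third column
    return true;
}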

yooyo

Jan
05-23-2004, 04:21 AM
OK, with Cramer's rule I get perfect inverse matrices. One problem solved.
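
For reference, this is roughly what I ended up with (a sketch; the function name, the column-major layout and the near-zero-determinant guard are my own conventions):

------------- invert4x4 sketch -------------
// 4x4 matrix inverse via Cramer's rule (adjugate / determinant),
// built from 2x2 sub-determinants so nothing is computed twice.
// m and out are column-major, the way OpenGL stores matrices.
#include <cmath>

bool invert4x4(const float m[16], float out[16])
{
    // a<row><col>, read from the column-major array.
    float a00 = m[0], a01 = m[4], a02 = m[8],  a03 = m[12];
    float a10 = m[1], a11 = m[5], a12 = m[9],  a13 = m[13];
    float a20 = m[2], a21 = m[6], a22 = m[10], a23 = m[14];
    float a30 = m[3], a31 = m[7], a32 = m[11], a33 = m[15];

    // 2x2 determinants of the top two (s*) and bottom two (c*) rows.
    float s0 = a00*a11 - a10*a01, s1 = a00*a12 - a10*a02;
    float s2 = a00*a13 - a10*a03, s3 = a01*a12 - a11*a02;
    float s4 = a01*a13 - a11*a03, s5 = a02*a13 - a12*a03;
    float c0 = a20*a31 - a30*a21, c1 = a20*a32 - a30*a22;
    float c2 = a20*a33 - a30*a23, c3 = a21*a32 - a31*a22;
    float c4 = a21*a33 - a31*a23, c5 = a22*a33 - a32*a23;

    float det = s0*c5 - s1*c4 + s2*c3 + s3*c2 - s4*c1 + s5*c0;
    if (std::fabs(det) < 1e-8f)
        return false;                     // singular matrix
    float r = 1.0f / det;

    // Adjugate scaled by 1/det, written back column-major.
    out[0]  = ( a11*c5 - a12*c4 + a13*c3) * r;
    out[4]  = (-a01*c5 + a02*c4 - a03*c3) * r;
    out[8]  = ( a31*s5 - a32*s4 + a33*s3) * r;
    out[12] = (-a21*s5 + a22*s4 - a23*s3) * r;
    out[1]  = (-a10*c5 + a12*c2 - a13*c1) * r;
    out[5]  = ( a00*c5 - a02*c2 + a03*c1) * r;
    out[9]  = (-a30*s5 + a32*s2 - a33*s1) * r;
    out[13] = ( a20*s5 - a22*s2 + a23*s1) * r;
    out[2]  = ( a10*c4 - a11*c2 + a13*c0) * r;
    out[6]  = (-a00*c4 + a01*c2 - a03*c0) * r;
    out[10] = ( a30*s4 - a31*s2 + a33*s0) * r;
    out[14] = (-a20*s4 + a21*s2 - a23*s0) * r;
    out[3]  = (-a10*c3 + a11*c1 - a12*c0) * r;
    out[7]  = ( a00*c3 - a01*c1 + a02*c0) * r;
    out[11] = (-a30*s3 + a31*s1 - a32*s0) * r;
    out[15] = ( a20*s3 - a21*s1 + a22*s0) * r;
    return true;
}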

Yooyo, thanks, but I cannot use that matrix; I actually need the inverse of (Modelview*Projection), which is not available in glSlang.

Anyway, this is not about one specific problem, but about glSlang in general. It's not perfect (although it's a very good start already), and everyone should agree on that and try to make it better.

Jan.

Korval
05-23-2004, 11:13 AM
Does that change anything?
I was correcting a factual error in your post.


Anyway, wouldn't it be absolutely logical to give the programmer all the data that is available anyway?
Is it available? How do you know that specific code didn't have to be written for ARB_vp in each implementation to allow inverse matrices and so forth to be exposed to the user?


The glSlang spec definitely has to be improved.
This is certainly true. Glslang is not a good shading language. It is adequate, but not good. Some bad choices were made.

Though I doubt any of them are going to be fixed in the near future.

plasmonster
05-23-2004, 11:45 AM
The driver would most certainly need some logic to detect *relevant* changes in the various matrices, so it could produce timely inverses, and so on. I doubt that this is a trivial task for the driver to perform.

yooyo
05-23-2004, 05:03 PM
Originally posted by Jan:

Yooyo, thanks, but I cannot use that matrix; I actually need the inverse of (Modelview*Projection), which is not available in glSlang.
I'm not sure, but I tried dumping nvoglnt.dll and found the string gl_ModelViewProjectionMatrixInverse. It seems that the nVidia driver exposes this matrix in GLSL. I can't check whether this is true, because I don't have an FX-based board right now :(.

Here are more strings:
gl_ModelViewMatrixInverse
gl_ProjectionMatrixInverse
gl_ModelViewProjectionMatrixTranspose
gl_ModelViewProjectionMatrixInverse
gl_TextureMatrixInverse

yooyo

ToolTech
05-23-2004, 09:16 PM
If these matrices can be created by the hardware without any cost, then I think it is OK, but I'd rather create these matrices myself when I need them instead of having the driver create them each time.

However, if the driver could detect that the user requests a matrix, and only then calculate it, it could be a good feature to have them in the GLSL API.

I find it more important to have a standard error syntax, and an API to find out what is hardware accelerated and what is not!

Fastian
05-24-2004, 12:22 AM
Glslang is not a good shading language. It is adequate, but not good. Some bad choices were made.

Though I doubt any of them are going to be fixed in the near future.
What issues with GLSL do you think are the reason for it not being good enough? Would you rate Cg or HLSL as good?

yooyo
05-24-2004, 02:23 AM
I have checked nVidia's GLSL matrix support, and all of these matrices can be used in GLSL shaders. I'm using FW 61.12 on an FX5900. Here's a GLSL example:


------------- nvtest.vert -------------
varying vec4 col;

void main()
{
    gl_Position = ftransform();

    // Uncomment any of the following lines:
    col = gl_ModelViewMatrixInverse * gl_Vertex;
    //col = gl_ProjectionMatrixInverse * gl_Vertex;
    //col = gl_ModelViewProjectionMatrixTranspose * gl_Vertex;
    //col = gl_ModelViewProjectionMatrixInverse * gl_Vertex;
}

------------- nvtest.frag -------------
varying vec4 col;

void main()
{
    gl_FragColor = col;
}
I know these matrices are not supported in GLSL, but because GLSL is not yet "officially" supported by NV and ATI, maybe we can request support for these matrices? Could someone check my shader on ATI and 3DLabs hardware, so we can be sure about using it?

yooyo

S&W
05-24-2004, 04:23 AM
yooyo:

I just checked your example shaders on an ATI Radeon 9800 Pro.
But there's no support in Catalyst 4.3!
(I haven't checked with Catalyst 4.4 or 4.5, because they are too slow!)

yooyo
05-24-2004, 04:49 AM
Originally posted by S&W:
yooyo:

I just checked your example shaders on an ATI Radeon 9800 Pro.
But there's no support in Catalyst 4.3!
(I haven't checked with Catalyst 4.4 or 4.5, because they are too slow!)
Can you send an email to ATI dev. rel. and ask them about it? I'm sure it's a trivial task for the driver developers.

It is better to extend GLSL right now, so developers are free to use such features, instead of having to detect whether the hardware supports them and write different codepaths.

yooyo

Corrail
05-24-2004, 04:53 AM
Maybe (I hope) many of those features will be included in the first GLSL update. I hope it comes soon!

plasmonster
05-24-2004, 08:20 AM
Actually, if the driver were able to store a dirty bit with each matrix, set when a matrix is modified, then it would need only check this bit for each matrix on program bind, recalculate all the dirty matrices, then clear the dirty bits. If the driver is able to do something to this effect, the overhead should be minimal when compared to what one might do on the CPU.

But there is still the uncertainty of what constitutes a dirty matrix in the context of a specific program; if the program being bound doesn't use the inverse(mvp), for example, the driver would need persistent knowledge of this fact in order to avoid unnecessary updates, which probably means some extra steps in the semantic phase of the compiler, and some extra state for the driver.
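
To make that concrete, here is a rough sketch of the kind of bookkeeping I mean (purely hypothetical; every name here is invented, and a real driver would be far more involved):

------------- dirty-bit sketch -------------
// Hypothetical driver-side bookkeeping: one dirty bit per tracked
// matrix, plus a per-program mask (filled in by the compiler's semantic
// pass) of which derived matrices the shaders actually reference.
#include <cstdint>

enum MatrixId { MODELVIEW = 0, PROJECTION, MVP, NUM_MATRICES };

bool invert4x4(const float m[16], float out[16]);  // any inverse routine

struct MatrixState {
    float    value[NUM_MATRICES][16];    // current matrices
    float    inverse[NUM_MATRICES][16];  // cached inverses
    uint32_t dirty;                      // bit i set => value[i] changed

    void onMatrixChange(MatrixId id) { dirty |= 1u << id; }

    // On program bind: recompute only the inverses that are both stale
    // and referenced by the program being bound, then clear just those
    // bits (matrices unused by this program stay dirty for later ones).
    void updateFor(uint32_t usedMask) {
        uint32_t pending = dirty & usedMask;
        for (uint32_t i = 0; i < NUM_MATRICES; ++i)
            if (pending & (1u << i))
                invert4x4(value[i], inverse[i]);
        dirty &= ~pending;
    }
};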

I'm sure no one would want the driver blindly computing inverses all day. I suspect this added complexity is the reason the official introduction has been delayed. But who knows, it could be that they can't agree on the matrix names. I'm in favor of shorter names. :)

John Kessenich
05-24-2004, 10:04 AM
FYI, the following matrices have been added to an interim version of the shading language spec. Hopefully you'll see them in reality sometime soon.

uniform mat4 gl_ModelViewMatrixInverse;
uniform mat4 gl_ProjectionMatrixInverse;
uniform mat4 gl_ModelViewProjectionMatrixInverse;
uniform mat4 gl_TextureMatrixInverse[gl_MaxTextureCoords];

uniform mat4 gl_ModelViewMatrixTranspose;
uniform mat4 gl_ProjectionMatrixTranspose;
uniform mat4 gl_ModelViewProjectionMatrixTranspose;
uniform mat4 gl_TextureMatrixTranspose[gl_MaxTextureCoords];

uniform mat4 gl_ModelViewMatrixInverseTranspose;
uniform mat4 gl_ProjectionMatrixInverseTranspose;
uniform mat4 gl_ModelViewProjectionMatrixInverseTranspose;
uniform mat4 gl_TextureMatrixInverseTranspose[gl_MaxTextureCoords];

Korval
05-24-2004, 11:05 AM
What issues with GLSL do you think are the reason for it not being good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia's glslang compiler was playing fast and loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.

sqrt[-1]
05-24-2004, 01:00 PM
Originally posted by Korval:

What issues with GLSL do you think are the reason for it not being good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia's glslang compiler was playing fast and loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.
Call me crazy, but those two reasons are things I actually like about GLSL over Cg.
Besides, I think these are just "personal taste" issues, and putting an extra .0 after float constants and a float(..) around casts is no reason to think a language is not "good".

(Also consider that since GLSL is compiled by the driver, we want as few ambiguities as possible in the code; i.e., if I have a sin(float x) and a sin(int x), I would prefer not to let the driver choose which one is called.)

evanGLizr
05-24-2004, 02:08 PM
Originally posted by sqrt[-1]:

Originally posted by Korval:

What issues with GLSL do you think are the reason for it not being good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia's glslang compiler was playing fast and loose with the spec, allowing for things like C-style casts and autopromotion of integer values.
[...]
Call me crazy, but those two reasons are things I actually like about GLSL over Cg.

Besides, I think these are just "personal taste" issues, and putting an extra .0 after float constants and a float(..) around casts is no reason to think a language is not "good".

(Also consider that since GLSL is compiled by the driver, we want as few ambiguities as possible in the code; i.e., if I have a sin(float x) and a sin(int x), I would prefer not to let the driver choose which one is called.)
The point is not so much that you must define floating-point constants as, say, 1.0; what is ridiculous is that the spec goes to great lengths to define, for example, vector-by-scalar multiplications, but forgets (or decides not) to define integer-by-floating-point multiplications (or other cross-type operations).

One way of solving that is type promotion; the other is a combinatorial explosion of the operations.

C++ has clearly defined rules for promotions, overload resolution, etc. There's no compiler ambiguity involved.
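
For instance, in plain C++ (nothing GLSL-specific about this):

------------- promotion example -------------
#include <cstdio>

int main()
{
    int   i = 3;
    float f = 0.5f;
    // The usual arithmetic conversions promote i to float before the
    // multiply, so there is exactly one well-defined result.
    float r = i * f;          // r == 1.5f
    std::printf("%f\n", r);   // prints 1.500000
    return 0;
}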

Korval
05-24-2004, 05:37 PM
Call me crazy, but those two reasons are things I actually like about GLSL over Cg.
You're crazy.

:cool:


Besides, I think these are just "personal taste" issues, and putting an extra .0 after float constants and a float(..) around casts is no reason to think a language is not "good".
Why not? This is a language that has to be used, usually by programmers. If they don't like it, if it has annoying "gotchas" in it for no real benefit, then that's plenty reason to call the language not good.


if I have a sin(float x) and a sin(int x), I would prefer not to let the driver choose which one is called
If you have those two in C++, there's no ambiguity as to which gets called. The ANSI C++ spec defines which one gets called in all circumstances. Now, the user may not know the spec well enough to know the answer, but that's not the spec's fault.

azazello
05-24-2004, 10:20 PM
The following matrices have been added to an interim version of the shading language spec. Hopefully, you'll see them in reality sometime soon.
Thanks :-)

sqrt[-1]
05-25-2004, 02:35 AM
One way of solving that is type promotion; the other is a combinatorial explosion of the operations.

C++ has clearly defined rules for promotions, overload resolution, etc. There's no compiler ambiguity involved.
I don't really see the big deal with manual type promotion. Even in C++, with most compilers you will get warnings when combining floats and integers in the same math statement (without manually promoting them to the same type).

(I.e., with

float a = 5.0f;
int c = a;

you have to use

float a = 5.0f;
int c = int(a);

to get rid of warnings, which most programmers do.)

(Another example would be passing a float to a function that takes an integer.)

Also (just a guess), perhaps float <-> int conversions are not as "free" as they are in C++, and the GLSL people wanted to ensure they are only done at the user's request?

However, this is not something I really want to quibble about so I'll agree to disagree :D .

Cab
05-25-2004, 03:13 AM
Originally posted by Korval:

What issues with GLSL do you think are the reason for it not being good enough? Would you rate Cg or HLSL as good?
We had a couple of threads about this. It came up when we found that nVidia's glslang compiler was playing fast and loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.
In our company we have a C++ style document that has been improved over the last ten years.
There is one rule that was added six or seven years ago; in short:
- You must use the f suffix on every floating-point value, and every floating-point value should be written in float format: 1.0f, 0.0f, ...
There is another (added four or five years ago) that mainly says:
- You must use constructor syntax for type conversions (int(fVal), float(iVal), ...), or the C++ reinterpret_cast operator: ptrtype2 = reinterpret_cast<type2*>(ptrtype1)
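
Applied to a trivial fragment, the two rules look like this (my own illustration, not text from the style document):

------------- style example -------------
// Rule 1: every floating-point literal is written in float format
// with the f suffix. Rule 2: conversions use constructor syntax.
float computeWeight(float scale)
{
    float bias  = 0.5f;                // not 0.5 or .5
    int   steps = int(scale * 10.0f);  // int(fVal), not (int)(scale * 10)
    return float(steps) * bias;        // float(iVal), not (float)steps
}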

This document is used by every programmer who works or has worked in our company, and no one has complained about it.
In fact, the document was created from the suggestions and revisions of senior programmers.

V-man
05-25-2004, 07:30 AM
float a = 5.0f;
int c = a;
The issue was: should the compiler automatically interpret (not cast!) an int as a float?

Example: float thing = 0;

Since many coders are in the habit of writing this, the answer is yes.
The question developed further...
Should a compiler allow this (see front page for votes)?

Who needs casting and functions like sin(int x)?

ScottManDeath
05-25-2004, 09:50 AM
There is one rule that was added six or seven years ago; in short:
- You must use the f suffix on every floating-point value, and every floating-point value should be written in float format: 1.0f, 0.0f, ...
There is another (added four or five years ago) that mainly says:
- You must use constructor syntax for type conversions (int(fVal), float(iVal), ...), or the C++ reinterpret_cast operator: ptrtype2 = reinterpret_cast<type2*>(ptrtype1)
In fact, I'm already doing this, except for the f suffix, without ever having read a design document. ;)

BTW, I'm not a professional developer (yet); I still have a year of studying to go.