Tom's GLSL new demo

evanGLizr
04-09-2004, 12:34 PM
I've been testing Tom's new GLSL demo (http://www.delphi3d.net/) on an ATI 9700 Pro with Catalyst 3.10, and there are a few things to note to make it work:

a) First, Tom uses "casting" instead of construction to create variables, but GLSL only allows construction: there is no typecast operator; constructors are used instead (GLSL 1.051 spec, pg. 24).

Wrong: vec3 vec = (vec3) V_eye;
Right: vec3 vec = vec3(V_eye);

b) ATI's GLSL implementation is very picky with types; in particular, it doesn't promote integers to floats, which is a bummer. That comes from a cumbersome part of the spec where you cannot promote integers to floats (not even integer constants): "No promotion or demotion of the return type or input argument types is done" (GLSL spec 1.051, pg. 35) and "Converting between scalar types is done as the following prototypes indicate..." (GLSL spec 1.051, pg. 24).

The problem is not the lack of promotion itself, but how this translates to operating on numbers. This problem appears in all the clamp, max and pow functions, as well as when adding, subtracting or multiplying integers by floats. So it turns out that you can do float * vec3, but you cannot do int * vec3 or float * int (doh!).
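
For example, a minimal sketch of what a strict 1.051 compiler accepts and rejects (hypothetical values):

vec3 v = vec3(1.0, 2.0, 3.0);
vec3 a = 2.0 * v; // OK: float * vec3
vec3 b = 2 * v; // error: the int is not promoted to float
float c = 2.0 * 3; // error: float * int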

It's especially painful in the case of pow, where you cannot raise a float to an integer power (the function prototype specifies a single genType for all the parameters).
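
To sketch the pow case (hypothetical values again):

float bad = pow(2.0, 3); // rejected: no pow(float, int) prototype exists
float good = pow(2.0, 3.0); // accepted: both arguments share the genType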

c) The GLSL spec already has a reflect function, although it returns the negation of the one Tom implemented (and takes its arguments in the reverse order). Overloading reflect with Tom's was giving me grief, so I just commented it out and changed the invocation to match GLSL.
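
Side by side (the built-in behaviour is from the spec; the helper is Tom's, shown commented out in Phong.glsl below):

// Built-in: reflect(I, N) = I - 2.0 * dot(N, I) * N
// Tom's helper: reflect(N, L) = 2.0 * N * dot(N, L) - L
// so, given the light vector L, reflect(L, N) == -(Tom's result)
vec3 R = -reflect(L, N); // built-in invocation yielding Tom's original result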

d) In GLSL you cannot alias varying variables: you cannot initialize them at declaration, either in the vertex shader or in the fragment shader.

varying vec4 Ca = gl_TexCoord[0];

is wrong, causes a compile error, and has to be changed to

vec4 Ca = gl_TexCoord[0];

When used in the vertex shader, remember to write the values out to gl_TexCoord[0] again so the fragment shader receives them properly.

An even better solution would be to change the application to get the handle for "Ca" instead of hardcoding it to texture coordinate 0.

e) log doesn't exist; you have to use log2.
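
If you do need a natural log, it can be emulated from log2 (a one-liner sketch with a hypothetical x; 0.6931472 is ln 2):

float ln_x = log2(x) * 0.6931472; // ln(x) = log2(x) * ln(2)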

f) Because there's no promotion/demotion, you cannot multiply a vec3 by a vec4. For example:

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];
vec3 specular = { 0, 0, 0 };
const vec3 esheen = vec3( 0.1, 0.2, 0.5 ); // Environment sheen
...

Wrong:
specular = specular + pow(sin, (1/breathe*5)) * dot(L, V) * Cs * esheen;

Right:
specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;

These are the corrected shaders:

Phong.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

/* vec3 reflect(in vec3 N, in vec3 L)
{
return 2*N*dot(N, L) - L;
}*/

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 R = -reflect(L, N);
float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

blinn.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

rim.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float edgeWidth = 0.3;

/* Perlin-style bias: remaps value in [0,1] so that bias(0.5, b) == b */
float bias(float value, float b)
{
return (b > 0.0) ? pow(value, log2(b) / log2(0.5)) : 0.0;
}

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

float edgeScale = bias(1.0 - dot(V, N), edgeWidth);
edgeScale = max(0.7, 4.0*edgeScale);
diffuse = diffuse * edgeScale;

gl_FragColor = Ca + (Cd*diffuse);
}

lambert.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse);
}

sharpspecular.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float sharpness = 0.2;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

float w = 0.18 * (1.0 - sharpness);
specular = smoothstep(0.72 - w, 0.72 + w, specular);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

thinplastic.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float sharpness = 0.2;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

float w = 0.18 * (1.0 - sharpness);
specular = smoothstep(0.72 - w, 0.72 + w, specular);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

velvet.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float backscatter = 0.25;
const float edginess = 4.0;
const float sheen = 0.7;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

float cosine = clamp(dot(L, V), 0.0, 1.0);
float shiny = sheen * pow(cosine, 16.0) * backscatter;

cosine = clamp(dot(N, V), 0.0, 1.0);
float sine = sqrt(1.0 - cosine);
shiny = shiny + sheen * pow(sine, edginess) * diffuse;

gl_FragColor = Ca + (Cd*diffuse) + (Cs*shiny);
}

vertex.glsl

const vec4 AMBIENT = vec4( 0.1, 0.1, 0.1, 1.0 );
const vec4 SPECULAR = vec4( 1.0, 1.0, 1.0, 1.0 );
uniform vec4 light;

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
V_eye = gl_ModelViewMatrix * gl_Vertex;
L_eye = (gl_ModelViewMatrix * light) - V_eye;
N_eye = vec4(gl_NormalMatrix * gl_Normal, 1.0);

gl_Position = gl_ProjectionMatrix * V_eye;
V_eye = -V_eye;

Ca = AMBIENT;
Cd = gl_Color;
Cs = SPECULAR;

gl_TexCoord[0] = Ca;
gl_TexCoord[1] = Cd;
gl_TexCoord[2] = Cs;
gl_TexCoord[3] = V_eye;
gl_TexCoord[4] = L_eye;
gl_TexCoord[5] = N_eye;
}

sheen.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const vec3 esheen = vec3( 0.1, 0.2, 0.5 ); // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 ); // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 ); // Glow sheen
const float breathe = 0.8; // Sheen attenuation

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);

float cos = dot(N, V);
float sin = sqrt(1.0 - pow(cos, 2.0));
vec3 specular = vec3( 0, 0, 0, 1 );

specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;
specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * Cs.xyz * lsheen;
specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * Cs.xyz * gsheen;

gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

Cab
04-09-2004, 05:14 PM
I've just sent an email to Tom with those issues (before reading this post) with modified shaders.
Instead of using built-in varying variables to pass data from the vertex to the fragment program (something that I think is useful if you have a fragment program that can be used with the fixed vertex pipeline or with a custom vertex shader), I think that, in this case, it is better to use user-defined varying variables.
Here are the corrected shaders written this way:

blinn.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}
lambert.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse);
}

Phong.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

vec3 reflect(vec3 N, vec3 L)
{
return 2.0*N*dot(N, L) - L;
}

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 R = reflect(N, L);
float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

rim.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float edgeWidth = 0.3;

/* Perlin-style bias: remaps value in [0,1] so that bias(0.5, b) == b */
float bias(float value, float b)
{
return (b > 0.0) ? pow(value, log2(b) / log2(0.5)) : 0.0;
}

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

float edgeScale = bias(1.0 - dot(V, N), edgeWidth);
edgeScale = max(0.7, 4.0*edgeScale);
diffuse = diffuse * edgeScale;

gl_FragColor = Ca + (Cd*diffuse);
}

sharpspecular.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float sharpness = 0.2;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

float w = 0.18 * (1.0 - sharpness);
specular = smoothstep(0.72 - w, 0.72 + w, specular);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

sheen.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const vec3 esheen = vec3( 0.1, 0.2, 0.5 ); // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 ); // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 ); // Glow sheen
const float breathe = 0.8; // Sheen attenuation

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);

float cos = dot(N, V);
float sin = sqrt(1.0-pow(cos, 2.0));
vec3 specular = vec3(0.0, 0.0, 0.0);

specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * vec3(Cs) * esheen;
specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * vec3(Cs) * lsheen;
specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * vec3(Cs) * gsheen;

gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

thinplastic.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse_f = clamp(dot(L, N), 0.0, 1.0);
float diffuse_b = clamp(dot(L, -N), 0.0, 1.0);

vec3 H = normalize(L + V);
float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

gl_FragColor = Ca + 0.8*(Cd*diffuse_f) + 0.2*(Cd*diffuse_b) + (Cs*specular);
}

velvet.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float backscatter = 0.25;
const float edginess = 4.0;
const float sheen = 0.7;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

float cosine = clamp(dot(L, V), 0.0, 1.0);
float shiny = sheen * pow(cosine, 16.0) * backscatter;

cosine = clamp(dot(N, V), 0.0, 1.0);
float sine = sqrt(1.0 - cosine);
shiny = shiny + sheen * pow(sine, edginess) * diffuse;

gl_FragColor = Ca + (Cd*diffuse) + (Cs*shiny);
}

vertex.glsl

const vec4 AMBIENT = vec4( 0.1, 0.1, 0.1, 1.0 );
const vec4 SPECULAR = vec4( 1.0, 1.0, 1.0, 1.0 );
uniform vec4 light;

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
V_eye = gl_ModelViewMatrix * gl_Vertex;
L_eye = (gl_ModelViewMatrix * light) - V_eye;
N_eye = vec4(gl_NormalMatrix * gl_Normal, 1.0);

gl_Position = gl_ProjectionMatrix * V_eye;
V_eye = -V_eye;

Ca = AMBIENT;
Cd = gl_Color;
Cs = SPECULAR;
}

Also, note that instead of using things like:

vec3 V = normalize(vec3(V_eye));

you can use:

vec3 V = normalize(V_eye.xyz);

This is the same, since V_eye.xyz is a vec3. This form can be more convenient in some contexts (for readability).

Hope this helps.

Cab
04-09-2004, 05:28 PM
Originally posted by evanGLizr:

...
c) The GLSL spec already has a reflect function, although it returns the negation of the one Tom implemented (and takes its arguments in the reverse order). Overloading reflect with Tom's was giving me grief, so I just commented it out and changed the invocation to match GLSL.

...

sheen.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const vec3 esheen = vec3( 0.1, 0.2, 0.5 ); // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 ); // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 ); // Glow sheen
const float breathe = 0.8; // Sheen attenuation

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);

float cos = dot(N, V);
float sin = sqrt(1.0 - pow(cos, 2.0));
vec3 specular = vec3( 0, 0, 0, 1 );

specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;
specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * Cs.xyz * lsheen;
specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * Cs.xyz * gsheen;

gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

I like the use of built-in functions (as recommended), so the Phong shader should look like:

Phong.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 R = -reflect(L, N);
float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

In the original sheen.glsl by Tom, he had:
vec3 specular = { 0, 0, 0 };
(he probably ported this code from Cg). You have modified it in your code as:
vec3 specular = vec3( 0, 0, 0, 1 );
That should be:
vec3 specular = vec3(0.0, 0.0, 0.0);

Hope this helps

Korval
04-09-2004, 05:38 PM
If this shader was so buggy, how did he get it to compile? Or did he compile it on a deficient (re: nVidia) glslang implementation?

Mark Kilgard
04-09-2004, 08:42 PM
Originally posted by Korval:
If this shader was so buggy, how did he get it to compile? Or did he compile it on a deficient (re: nVidia) glslang implementation?

That's funny. The shader works on NVIDIA's implementation, and yet you call NVIDIA's implementation deficient.

Wouldn't it seem the implementation upon which the shader does not work is the deficient one?

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages. An incomplete standard library (C's frac function is called fract, huh?), lack of reasonable type promotion, lack of casting constructs, inability to modify varying and uniform data, inability to override standard library functions, failure to support passing structs to functions, forcing control flow conditionals to be of type boolean when C and C++ (and Cg) implicitly compare numeric values to zero, etc. All of it is very limiting and frustrating.

A shading language should not be a straightjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages. A good language should make a programmer more productive.

I don't know about you, but typing "3.0" when "3" would suffice isn't productive in my book. It doesn't help you be more creative writing shaders or make your life easier; it's just annoying.

There's stuff about GLSL that just makes reasonable programmers wonder "what were they thinking??" such as the decision to have row-major arrays (C-style) yet column-major matrices (FORTRAN-style). Every new language since, oh, 1971 has gone row-major as did Direct3D and OpenGL's ARB-endorsed assembly language interfaces. Mixing the two conventions is just weird and prone to confusion. Unfortunately, these are decisions baked into the language so there's nothing that can be done about it (it is exposed externally to the language by the fact that attributes in a matrix are columns, not rows).

NVIDIA's GLSL implementation has a lot of Cg heritage so that constructs that make sense in C and C++ typically "just work as you'd expect" in GLSL.

I'm not saying NVIDIA's GLSL implementation is perfect (it's still a work in progress - improving rapidly), but it allows very lengthy shaders to be written today that can run in hardware much more often than on other implementations. Vertex branch and loop constructs work well. Fragment shaders can be very, very long. The standard library is compatible with the more functional Cg standard library. The implementation doesn't generate a bunch of errors due to odd GLSL language limitations that no programmer would tolerate if writing C or C++.

- Mark Kilgard

bobvodka
04-09-2004, 09:11 PM
All of which, IMO, is a bad plan. What's the point in having a spec to follow if one of the implementations doesn't stick to the spec at all? OK, sure, it's annoying for the code not to work how we expect C code to work, but it's not C code, it's OGLSL, so while its constructs/layout might be the same, if the spec says it should do something one way then that's how it should be done (unless of course an extension is registered which allows for these 'extensions', but it shouldn't be allowed in the case of the basic version; it just makes the whole concept of the spec laughable).
Heck, MS do this all the time, it's called 'Embrace and Extend' and everyone gives 'em a hard time; Nvidia do it and you are giving 'em praise.

If the compiler doesn't follow the spec, it's broken and therefore deficient in its current state. I truly hope Nvidia fix it so that it works the CORRECT way in future.

As for how it was compiled, maybe it was done on a newer driver revision; I'm led to believe that there have been significant advances in the state of the GLSL backend in recent driver updates...

edit: however, that isn't the case; I'm on the 4.4 drivers and it bombs out on startup with the default code...

Overmind
04-10-2004, 02:30 AM
When I write a GLSL shader, my first priority is to get it working on every graphics card. That is the reason why the extension is called ARB and not NV or ATI...

If every compiler let you do things not in the spec, this would not be an easy task. I prefer having an "inconvenient" language over having many undocumented extensions. I have to stick to the spec anyway to remain compatible, but I would prefer it if the compiler told me when I write something that is not correct instead of silently accepting it.

With the current nVidia implementation I have to test every shader with an ATI card to see if it works, because the fact that it works on an nVidia card doesn't imply that it is conformant to the spec. So in the end this "more convenient" language actually means more work for developers...

About the column-major matrices: this actually makes sense in 3D graphics programming. If you see a matrix as a transformation between coordinate frames, you are interested in the columns of the matrix, not in the rows, because the columns represent the new basis vectors and the origin.
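
For example (a minimal sketch; in GLSL, indexing a matrix selects a column):

vec3 xAxis = vec3(gl_ModelViewMatrix[0]); // transformed X basis vector
vec3 origin = vec3(gl_ModelViewMatrix[3]); // transformed origin (the translation column)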

Zengar
04-10-2004, 02:44 AM
Deficient?

Nvidia simply introduces new extensions to make GLSL easier and nicer. I find this syntax much more convenient, because it is based on already existing languages. Why no type overloading in GLSL? Just to make something new? :D Nvidia's GLSL works very nicely for me and, finally, it's up to you to write compatible shaders. There is an option in NVemulate to enable a shader conformance test. Anyway, we will end up writing different shaders for Nvidia and ATI, but this will be much more convenient than using the old APIs.

Tom Nuydens
04-10-2004, 04:41 AM
Thanks for the comments, everyone. Yes, the shaders were written on an NVidia card, using driver version 57.10. I'll update the demo with the fixed shaders after the weekend.

Mark, while I agree with most of the issues you have with the GLSL spec as is, I disagree that making your compiler more forgiving is a good thing to do. Sure, the language extensions you provide are very sensible. Unfortunately, because I don't own an ATI card to test on, it also means I'm bound to get complaints when I release my demo to the public and half of my audience can't run it.

I hadn't noticed the "shader conformance" option Zengar mentions, but I'll look into it. If it indeed forces the GLSL compiler to comply with the spec, then I strongly urge NVidia to make this behaviour the default when the first official GLSL drivers get released.

-- Tom

Jan
04-10-2004, 04:43 AM
In my opinion nVidia does some stuff which is not really good for their image. For example, a pop-up blocker (!?) for Internet Explorer...

Another thing is not sticking to a spec. I think this is an annoying thing, because it takes you by the hand and helps you produce wrong, incompatible code. Even the Bible says that the easy way is not always the right one... ;)

Honestly, if the spec defines a humble language (although I don't think it really does), then the SPEC has to be changed, so that everybody can agree on it. And I think nVidia did have influence on the spec when it was made. So now STICK TO IT!
Otherwise, change the spec. Now is still the time to do this, because there is no completely working implementation anyway, and no professional program uses it either.

If a company claims to support glSlang, I want it to support GLSLANG, not NV_glslang, nor ATI_glslang, nor NV_CG_slang, etc.

Compatibility is, in this case, more important than C-stylishness. And most people are able to learn those small differences quite easily anyway.

Jan.

Zengar
04-10-2004, 06:30 AM
I don't quite understand what you are talking about.

Nvidia supports GLSL as the specs say. I was able to run all the GLSL demos written by Humus (ATI card) on my GFFX 5600 without any problems. Currently noise functions are unsupported, and maybe something else. But I don't understand why providing extensions to GLSL that make use of GFFX features could be seen as something "bad for image" (??!!). How does it make nvidia not spec-conformant?
I'm happy to be able to use fixed types or packing instructions in my GLSL programs. Also, Cg provides some additional functions which are not present in GLSL. This extension (NV_Cg_shader) isn't meant to be portable, so where's the problem? It's up to you if you want to use it.
@Tom: it's the "strict shader portability warnings" option. Never tried whether it works though :D

bobvodka
04-10-2004, 07:28 AM
Because, as we've seen, people like Tom will write code they 'think' is proper GLSL code but isn't, and it requires a rewrite to make things go properly.
The whole point of GLSL is a platform/hardware-independent spec which allows you to write one version, run it, and get the correct result on different hardware. By allowing people to relax the spec on their cards, Nvidia have currently caused a situation where things written on their version of GLSL don't work on anyone else's cards, because those cards require strict GLSL which follows the spec correctly.

At the end of the day, it's still a beta release of the backend, so mistakes can be expected, but I'm with Tom on strongly urging Nvidia to make sure this kind of thing CAN'T happen in the final version.

marco_dup1
04-10-2004, 10:57 AM
Originally posted by Mark Kilgard:

A shading language should not be a straightjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages. A good language should make a programmer more productive.
But I read code much more than I write it. Look at Perl: mostly, the source isn't readable. Look at Python: it's IMHO a joy to read. C/C++ is also hard to read. I like more restricted languages for this kind of writing because of that.

OK, it's a little bit annoying to have to write 1.0 and not just 1, but it isn't really that hard. I like the idea of forbidding implicit casts.

What I really miss is a good noise implementation in hardware. I hope the noise will look similar on all implementations.

Dr^Nick
04-10-2004, 11:36 AM
I have to disagree with Nvidia's stance on "extending the specification", but that's not really the issue; that Nvidia feels the specification already needs extending concerns me.

I recall a poll on OpenGL.org some time ago about adopting Cg, GLslang, or "taking the best of both".

When I voted "take the best of both" I wasn't voting for the GLslang proposal to be refined.
Instead I was hoping for something more like Cg refined and extended, taking parts from GLslang that tailored it more to GL interfaces.

It would seem that it was taken as "Keep going with GLslang", and here we come to the issue where the "better parts" of Cg were abandoned.

I realize these votes might not have actually meant anything, but I half-expected the working group to respond.

Personally I'd like to see GLSL re-thought, and heavily overhauled.

Of course, this is just my opinion, and your mileage may vary.

simongreen
04-11-2004, 02:08 PM
Before this thread gets too out of control...

It is NVIDIA's intention to provide a 100% conformant GLSL implementation. Our extended features are there just for the convenience of developers.

Obviously the whole point of having a standard shading language is that you can write your shaders once and they will run anywhere. We already have a "-strict" compiler option that gives warnings if your code is non-conformant, and future versions of our driver will have an option to enforce GLSL conformance.

Personally, I think GLSL as a language is fine - once you've used one C-style shading language you've used them all! There are a few weird idiosyncrasies in the spec, but we're working with the ARB to resolve these. I don't think a device driver is really the right place for a high-level language compiler, but that's another story.

Humus
04-11-2004, 04:09 PM
I must agree that having to write "1.0" instead of just "1" is kinda annoying. It's no biggie though. Maybe a GL_ARB_shading_language_101 can relax that restriction?

bobvodka
04-11-2004, 07:00 PM
Originally posted by simongreen:
We already have a "-strict" compiler option that gives warnings if your code is non-conformant, and future versions of our driver will have an option to enforce GLSL conformance.
I'm kinda hoping that the option to enforce GLSL conformance will be a default-'on' situation before we get too much code which will only run on one card. Yes, being able to turn conformance off to aid with development is a good thing, but there just seems something backwards to me about having to turn on a feature of the API... :confused:

As long as the ARB agree to changes and both Nvidia and ATI get drivers out which support the spec and give the same results, then personally I'm not that fussed about what gets changed. With the relative newness of the spec it is important to get the idiosyncrasies sorted out now, before we get too much legacy code (or we'll end up in a situation like the VC7.1 C++ compiler, where you have to turn on for-loop scope compliance...)

V-man
04-11-2004, 08:38 PM
On Cat 4.3 and 4.4 the colors look all screwed up for me, for *all* the examples.

It all looks like multicolored triangles.

Anybody else having this problem?

Of course, running similar shaders in my own code works fine.

Yes, I too would like to see some of these conditions relaxed, unless there is a strong reason not to.

evanGLizr
04-11-2004, 09:26 PM
Originally posted by V-man:
On Cat 4.3 and 4.4 the colors look all screwed up for me, for *all* the examples.

It looks all multicolored triangles.

Anybody else having this problem?
Did you remember to change vertex.glsl?
The correct code is in my first post; I don't think cab's solution of using Cs, Cd... directly as varying instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0..5]) instead of querying for Cs, Cd, etc.

Cab
04-11-2004, 11:38 PM
Originally posted by evanGLizr:
Did you remember to change vertex.glsl?
The correct code is in my first post; I don't think cab's solution of using Cs, Cd... directly as varying instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0..5]) instead of querying for Cs, Cd, etc.

No. All the gl_TexCoord[...] are just varying parameters and are not used as input parameters for vertex.glsl.
If you read the vertex.glsl code, it just uses gl_Vertex, gl_Normal and a uniform variable 'light' as input values.
(Ca, Cd, Cs, V_eye, L_eye, N_eye) = (gl_TexCoord[0], ..., gl_TexCoord[5]) are just output values from the vertex shader.
So you don't need to change the app and the solution works properly :)
Hope this helps.

Korval
04-12-2004, 12:09 AM
I thought my comment on nVidia's implementation being "deficient" would raise an eyebrow or two, but Mark Kilgard himself... wow ;)


Wouldn't it seem the implementation upon which the shader does not work is the deficient one?

No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.

The typecast operator alone should have immediately thrown up a syntax error. But, like a good Cg compiler, it just took it.

It's like having an implementation of ARB_fragment_program that, when shadow textures are bound, does the depth compare operation in clear defiance of the spec... oh wait, nVidia's GL implementation does that too... :rolleyes: ;)

The point is that it is perfectly acceptable to call an implementation of an extension that does not follow the spec deficient.


Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.

Which is both true and a perfectly legitimate thing to bring up when the language was being defined. And I'm pretty sure you guys did. However, you lost.

The correct decision at that point is to accept the loss and do what the spec says. It is not acceptable to violate parts of the spec simply because you don't agree with them, even if the disagreement is perfectly reasonable and rational. This confuses shader writers who need cross-platform portability. Suddenly, what seemed like a perfectly valid shader on one card fails to even compile on another.

Look, I agree that there's a lot of nonsense in glslang. I can't say I'm happy with the language; there's lots of stuff in there that looks like it was added solely to be different and weird. I probably would have preferred that Cg became the OpenGL shading language, or something similar to it. But we have to adhere to specs, even those we disagree with. If we don't, we create chaos and further weaken OpenGL.


inability to override standard library functions

Wait. It has that. I forget what you have to do, but I definitely remember reading about precisely how to do it in my OpenGL Shading Language book.


NVIDIA's GLSL implementation has a lot of Cg heritage so that constructs that make sense in C and C++ typically "just work as you'd expect" in GLSL.

But it doesn't have to. 3DLabs was "nice" (read: desperate for attention) enough to provide a full parser for glslang that would catch the vast majority of errors that nVidia's compiler lets through. The idea behind releasing this was that there would be some conformity among compilers. Apparently, you just decided to shoehorn glslang into nVidia-glslang.

If you're having a meeting with your lead programmer, and he comes to a decision you don't agree with, then you argue with him. Either you convince him that he's wrong or you don't. However, when the meeting is over and a decision is made, you either follow through or quit. Back-dooring the language like this is just unprofessional.

I was getting pretty stoked for an NV40-based card. But this complete and total lack of willingness to adhere to a spec, more than anything, even the news of ATi upping the number of pipes in their new chips, is sufficient reason to keep a Radeon in my computer. At least I can be sure that any shaders I write will work anywhere...

The correct response to "Your compiler is in violation of the spec" is not "We don't agree with the spec because it's silly." The correct response is "We recognise this to be an error, and we will fix the problem at our earliest convenience." I would have accepted "Our glslang compiler was built by shoehorning our Cg compiler into accepting the language. Doing this, however, left language constructs that Cg provides open to the glslang input path. We intend to correct this as our glslang implementation matures."


Our extended features are there just for the convenience of developers.

Extending the language is one thing; perfectly reasonable with valid extension strings/specs. Changing its syntax, making a syntax acceptable that isn't, is quite another.

Zengar
04-12-2004, 02:20 AM
Originally posted by Korval:
No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.

:confused: The shader doesn't violate the spec. The shader is compiled according to the NV_Cg_shader spec, which is documented.
So it's definitely NOT wrong behaviour.

Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn't need to get any warnings. The solution would be, for example, adding a pragma or a define command to a shader that disables all Cg features.

marco_dup1
04-12-2004, 03:59 AM
Originally posted by simongreen:

It is NVIDIA's intention to provide a 100% conformant GLSL implementation. Our extended features are there just for the convenience of developers.
What about noise? Will it be fast?



Personally, I think GLSL as a language is fine - once you've used one C-style shading language you've used them all! There are a few weird idiosyncrasies in the spec, but we're working with the ARB to resolve these. I don't think a device driver is really the right place for a high level language compiler, but that's another story.

OK, C-style isn't IMHO the best design, but it's widely known. What is, in your opinion, the problem with having the compiler in the driver? If I get enough standardized feedback I see no problem. But I'm not a guru. :-)

bobvodka
04-12-2004, 06:43 AM
Originally posted by Zengar:
:confused: The shader doesn't violate the spec. The shader is compiled according to the NV_Cg_shader spec, which is documented.
So it's definitely NOT wrong behaviour.

But we are talking about a shader which was described as "using the OpenGL Shading Language" (taken directly from the news post), and as such it should be expected that code written for the OGSL would compile on any OGL release which supports the OGSL spec. However, the examples written by Tom DON'T conform to the spec (not his fault, it's just how the drivers are written), thus they are not valid OGSL shaders, thus the compiler is broken/wrong. If Tom was putting out a Cg shader, then fine, wonderful, no problem, carry on regardless; but he believed he was making an OGSL shader, not some weird hybrid. The compiler never should have let his code pass, and we wouldn't be having this discussion now.



Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn't need to get any warnings. The solution would be, for example, adding a pragma or a define command to a shader that disables all Cg features.

On the flip side, you write the code you believe to be conforming, you release it to the world at large, and the world at large goes "lovely, now why doesn't this code work on my Radeon/Deltachrome/FireGL/Wildcat?". At which point all the critics point and say "look, this is why D3D is better, it doesn't have this problem with its base shader language, opengl is Just Bad(tm)".
Conformance should be on by default; that way everyone has a common base to aim for. Turning it off via a define is, again, fair enough, you are saying to the compiler "I know better, do it this way", but that shouldn't be the default operation.
By all means relax the restrictions later if needed (much like how the Render Target extension is being made), or even have the gfx card maker produce a {NV|ATI}_Custom_Shader extension if you REALLY feel the need, so that a program can use a shader designed for that card (although this does run contrary to the nature of the OGSL).
Default conformance off = shaders produced which only work on one series of cards = versioning problems = the GL_CLAMP issue all over again.

(If you hadn't guessed, I'm very much "pro standards", regardless of who is breaking what.)

Overmind
04-12-2004, 07:52 AM
Originally posted by Zengar:
Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn't need to get any warnings. The solution would be, for example, adding a pragma or a define command to a shader that disables all Cg features.

Agreed, forcing the user to activate/deactivate language conformance is bad, but that's not necessary.

A good solution would be having a "#pragma GL_NV_*insert name here*" in the shader, or a glEnable(GL_NV_*insert name here*), or something like this when you want to use the nVidia language extensions. This way, everyone who wants to use the language extensions can, without bothering the user to do something, but it is not possible to "accidentally" use them like Tom did.
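
For example, something like this hypothetical opt-in (no such pragma exists today; it's just a sketch of the idea):

#pragma GL_NV_Cg_shader(enable) // hypothetical: opt in to the relaxed dialect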

In my opinion, extensions should be turned on when you want them, not turned off when you don't want them. If I want to use something, I have no problem with having to turn it on, but if I don't want to use it, I don't even care whether it's there, let alone want to turn it off.

evanGLizr
04-12-2004, 08:52 AM
Originally posted by Cab:

Originally posted by evanGLizr:
I don't think cab's solution of using Cs, Cd... directly as varying instead of gl_TexCoord[n] works without changing the application

No. All the gl_TexCoord[...] are just varying parameters and are not used as input parameters for vertex.glsl.
[...]
So you don't need to change the app and the solution works properly :)

Doh, my bad, you are absolutely right.

Zengar
04-12-2004, 08:53 AM
I don't want to sound like a fanboy ;) , and I agree with all of your arguments.
But: if Tom can't read the spec, it's not Nvidia's problem, is it? The driver documentation describes all the differences quite clearly.
(didn't mean to offend you, Tom; you are indeed one of the few people on the net that I truly respect and like)
My opinion is that you are making an elephant out of a fly. Although I agree that an enabling command would be better than my proposal. (I'm the one who doesn't like any standards, if you haven't noticed :D )

V-man
04-12-2004, 09:08 AM
Originally posted by evanGLizr:
Did you remember to change vertex.glsl?
The correct code is in my first post; I don't think cab's solution of using Cs, Cd... directly as varying instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0..5]) instead of querying for Cs, Cd, etc.

Nice catch. vertex.glsl was the only problem.

I tried Cab's solution also, and that works for me as well and should work on all systems.

---------------------------------
varying vec4 Ca = gl_TexCoord[0];
varying vec4 Cd = gl_TexCoord[1];
varying vec4 Cs = gl_TexCoord[2];

varying vec4 V_eye = gl_TexCoord[3];
varying vec4 L_eye = gl_TexCoord[4];
varying vec4 N_eye = gl_TexCoord[5];
---------------------------------

This is supposed to alias, but it only works on NV?
It's best to have a reserved keyword like in ARBvp+fp:

ALIAS thing = the_other_thing;

because otherwise it looks like it's using some other attribute.
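
For reference, in the ARB assembly languages that looks something like this (a minimal sketch with made-up names):

!!ARBvp1.0
ATTRIB inPos = vertex.position;
ALIAS objPos = inPos; # a second name for the same variable
MOV result.position, objPos;
END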

Tom Nuydens
04-12-2004, 10:23 AM
I've uploaded a new zip with Cab's bugfixed shaders in it. I hope it works for everybody now. Thanks a bunch, Carlos!

I also looked into the "strict shader portability warnings" option in NVemulate. My original shaders compile with or without it. Simon said "future versions of our driver will have an option to enforce GLSL conformance", so I guess the current checkbox in NVemulate does something else. No doubt it's described elaborately in the documentation I didn't read ;)

-- Tom

jra101
04-12-2004, 10:43 AM
The strict option in nvemulate turns on warnings that will show up in your info log.

Mazy
04-12-2004, 01:16 PM
I must agree with the ones who think that you should relax the tokens by putting a #pragma something in the file, instead of relaxing them by default.

Mark and Simon, you claim it's to help developers? I say it's not helping at all. As a developer, you more often than not want the program to work on any glsl implementation, and that means sticking to the agreed specification whether you like it or not, so relaxing by default only creates frustrated programmers. And even if the compiler itself can be said to be conformant as long as it compiles glslang shaders, the same cannot be said about shaders using things that aren't in the spec. So when/if nvidia puts out examples of glslang code, the shaders had better conform to the spec, or they're not glslang examples anymore. I really hope you help developers by NOT allowing compilation of non-glslang shaders by default, and let us unlock the relaxed tokens with a #pragma.

Corrail
04-12-2004, 01:54 PM
Originally posted by Tom Nuydens:
I've uploaded a new zip with Cab's bugfixed shaders in it. I hope it works for everybody now.

Just downloaded and tried it! Works on a Radeon 9800 Pro with Catalyst 4.4! Nice work, Tom! :-)

M/\dm/\n
04-13-2004, 02:37 AM
I like all the additional stuff in the NV GLSL implementation.
Of course, the compiler should warn us about incompatible code, but hey, does anyone recall DEBUG???
And, as all this stuff is written in the drivers, it's easier for one company to include innovative approaches and let others update, by patching docs and without adding a GL_GLSL_101, GL_GLSL_102, GL_GLSL_102b, GL_GLSL_103, GL_GLSL_105a, ... GL_GLSL_199. As long as old shaders compile it's OK, but newer ones will allow you to do more.

c0ff
04-13-2004, 03:14 AM
Originally posted by M/\dm/\n:
I like all the additional stuff in the NV GLSL implementation.

But this is NOT GLSL.


Originally posted by M/\dm/\n:
Of course, the compiler should warn us about incompatible code, but hey, does anyone recall DEBUG???
And, as all this stuff is written in the drivers, it's easier for one company to include innovative approaches and let others update, by patching docs and without adding a GL_GLSL_101, GL_GLSL_102, GL_GLSL_102b, GL_GLSL_103, GL_GLSL_105a, ... GL_GLSL_199. As long as old shaders compile it's OK, but newer ones will allow you to do more.

When this "additional stuff" becomes GLSL, then it will be a good thing to expose by default as the appropriate GLSL version. Right now, it is not.

Humus
04-13-2004, 05:01 AM
Originally posted by Korval:
I thought my comment on nVidia's implementation being "deficient" would raise an eyebrow or two, but Mark Kilgard himself... wow ;)

Quite an achievement! :D
But you're still not at my level. ;) I once managed to get an email from him because of a topic on opengl.org. Now THAT'S an achievement!! :D

brcain
04-13-2004, 06:01 AM
Originally posted by Mark Kilgard:

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.

Much thanks to you nVIDIA folks for putting effort into making Cg (and GLSL) easier for the developer -- language constructs, the SDK, and support for Linux. I hope to see something like the Cg Tutorial for GLSL.

Again, much thanks for your contributions!!! They are appreciated by many.

Mazy
04-13-2004, 06:45 AM
I still have no idea how breaking the language apart into different noncompatible sections is going to help developers...

John Kessenich
04-13-2004, 08:25 AM
Wouldn't it seem the implementation upon which the shader does not work is the deficient one?

Usually with languages, the compiler that catches the most errors is the better one.


Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages. An incomplete standard library

A hardware vendor is allowed to extend the standard library. We should have no problems here.


lack of reasonable type promotion, lack of casting constructs

Casts are done with constructors.

E.g. vec4(v), not (vec4)v.

Note that C++ language design recognized problems with C-style typecasts, and did its best to deprecate it.


inability to modify varying and uniform data

Sounds like good SIMD design to me.


inability to override standard library functions

Not true, the spec allows overriding standard library functions.
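
For instance, simply supplying your own definition hides the built-in, as Cab's Phong shader above demonstrates:

vec3 reflect(vec3 N, vec3 L)
{
return 2.0*N*dot(N, L) - L;
}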


failure to support passing structs to functions

Not true, the spec allows passing structs to functions.


A shading language should not be a straightjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages.

In my study of languages, much of what you call rich and useful is considered error prone, and some of it is supported only for backward compatibility with times before people knew better.

However, I would support some auto-promotions, once carefully architected into the language.


There's stuff about GLSL that just makes reasonable programmers wonder "what were they thinking??" such as the decision to have row-major arrays (C-style) yet column-major matrices (FORTRAN-style).

Not true. The language doesn't have row-major arrays, it just has one-dimensional arrays. This should be extended in the next version, and support for row-major matrices should come along with it. Arrays should become first-class objects at the same time. (In my opinion.)

I am quite aware of perhaps more deficiencies in the language than Mark is. The reason they exist was to have something fully thought out and consistent to ship soon. Adding extra features before they can be carefully architected into the language leads to chaos. The language we are starting with is simple and solid, and that's the right first step.

JohnK



JD
04-13-2004, 10:44 PM
Let me tell you from my experience reading other people's C/C++ code: you do want the language to do things one way and not multiple ways. C++ is king at doing one thing a zillion ways, so what must you do when you come across code written in some way unfamiliar to you? That's right, open up the C++ standard and learn that new syntax rule(s). That is a major time sink. The discrepancy among coders is the problem, and allowing them to write shaders in non-standard ways makes it worse. Isn't the STL a savior? I think so.

Jan
04-14-2004, 06:23 AM
Originally posted by JD:
Let me tell you from my experience reading other people's C/C++ code: you do want the language to do things one way and not multiple ways. C++ is king at doing one thing a zillion ways, so what must you do when you come across code written in some way unfamiliar to you? That's right, open up the C++ standard and learn that new syntax rule(s). That is a major time sink. The discrepancy among coders is the problem, and allowing them to write shaders in non-standard ways makes it worse. Isn't the STL a savior? I think so.

Well, one could argue that a glSlang shader is a lot shorter than a C program, so reading it wouldn't be such a big problem (and glSlang is still a lot simpler, too), but in general I have to agree with you. I don't want/need a compiler which can parse everything; I only want a compiler which works and supports hardware features. I don't bother much about compiler features.

And then there are people who moan that they are driver writers, not compiler writers, and ask why they need to implement a high-level language compiler... so why complicate it even further?

Jan.