Tom's new GLSL demo

I’ve been testing Tom's new GLSL demo on an ATI 9700 Pro with Catalyst 3.10, and there are a few things to note to make it work:

a) First, Tom uses “casting” instead of construction to create variables, but GLSL only allows construction: there is no typecast operator; constructors are used instead (GLSL 1.051 spec, pg. 24).

Wrong: vec3 vec = (vec3) V_eye;
Right: vec3 vec = vec3(V_eye);

b) ATI’s GLSL implementation is very picky with types; in particular, it doesn’t promote integers to floats, which is a bummer. That comes from a cumbersome part of the spec where integers cannot be promoted to floats (not even integer constants): “No promotion or demotion of the return type or input argument types is done.” (GLSL spec 1.051, pg. 35) and “Converting between scalar types is done as the following prototypes indicate…” (GLSL spec 1.051, pg. 24).

The problem is not the lack of promotion itself, but how it plays out when operating on numbers. It appears in all the clamp, max and pow functions, as well as when adding, subtracting or multiplying integers by floats. So it turns out that you can do float * vec3, but you cannot do int * vec3 or float * int (doh!).
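A minimal sketch of the kind of expressions a strict compiler rejects (the variable names here are purely illustrative):

```glsl
void main(void)
{
  float x = 0.7;
  vec3 color = vec3(0.25, 0.5, 0.75);

  // Rejected by a strict compiler (no implicit int-to-float promotion):
  // vec3  a = 2 * color;        // int * vec3
  // float b = 0.5 * 3;          // float * int
  // float c = clamp(x, 0, 1);   // int literals where float parameters are expected

  // These compile:
  vec3  a = 2.0 * color;
  float b = 0.5 * 3.0;
  float c = clamp(x, 0.0, 1.0);

  gl_FragColor = vec4(a * b * c, 1.0);
}
```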

It’s especially painful in the case of pow, where you cannot raise a float to an integer power (the function prototypes specify a single genType for all the parameters).
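For instance (a sketch; x is just an illustrative variable):

```glsl
void main(void)
{
  float x = 0.5;

  // float y = pow(x, 16);   // rejected: pow(genType, genType) takes no int exponent
  float y = pow(x, 16.0);    // OK: both arguments are floats

  gl_FragColor = vec4(y, y, y, 1.0);
}
```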

c) GLSL already has a built-in reflect function, although it’s the reverse of the one Tom implemented: the argument order is swapped and the result is negated. Overloading reflect with Tom’s version was giving me grief, so I just commented it out and changed the invocation to match GLSL.
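The relationship between the two, as a sketch (L is the unit vector towards the light, N the unit surface normal):

```glsl
void main(void)
{
  vec3 N = normalize(vec3(0.0, 0.0, 1.0));  // unit surface normal
  vec3 L = normalize(vec3(1.0, 0.0, 1.0));  // unit vector towards the light

  // Tom's version:  2.0*N*dot(N, L) - L
  vec3 R_tom = 2.0*N*dot(N, L) - L;

  // GLSL built-in:  reflect(I, N) = I - 2.0*dot(N, I)*N
  // With I = L the two results are negations of each other:
  vec3 R = -reflect(L, N);                  // equals R_tom

  gl_FragColor = vec4(R, 1.0);
}
```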

d) In GLSL you cannot alias varying variables: you cannot initialize them, neither in the vertex shader nor in the fragment shader.

varying vec4 Ca = gl_TexCoord[0];

is wrong and causes a compile error; it has to be changed to

vec4 Ca = gl_TexCoord[0];

and when used in the vertex shader, remember to write the value out to gl_TexCoord[0] again so the fragment shader receives the proper values.

An even better solution would be to change the application to get the handle for “Ca” instead of hardcoding it to the 0 texture coordinate.

e) log doesn’t exist; you have to use log2.
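If you do need a natural logarithm with only log2 available, you can rescale it, since ln(x) = log2(x) · ln(2); a sketch:

```glsl
void main(void)
{
  float x = 8.0;

  // Natural log via log2: ln(x) = log2(x) * ln(2)
  float lnx = log2(x) * 0.69314718;

  gl_FragColor = vec4(lnx, 0.0, 0.0, 1.0);
}
```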

f) Because there’s no promotion/demotion, you cannot multiply a vec3 by a vec4, i.e.

  vec4 Ca = gl_TexCoord[0];
  vec4 Cd = gl_TexCoord[1];
  vec4 Cs = gl_TexCoord[2];
  vec3 specular = { 0, 0, 0 };
  const vec3 esheen = vec3( 0.1, 0.2, 0.5 );    // Environment sheen
...

Wrong:
  specular = specular + pow(sin, (1/breathe*5)) * dot(L, V) * Cs * esheen;

Right:
  specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;

These are the corrected shaders:

Phong.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

/* vec3 reflect(in vec3 N, in vec3 L)
{
  return 2*N*dot(N, L) - L;
}*/

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 R = -reflect(L, N);
  float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

blinn.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

rim.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float edgeWidth = 0.3;

float bias(float value, float b)
{
  return (b > 0.0) ? pow(value, log2(b) / log2(0.5)) : 0.0;
}

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  float edgeScale = bias(1.0 - dot(V, N), edgeWidth);
  edgeScale = max(0.7, 4.0*edgeScale);
  diffuse = diffuse * edgeScale;

  gl_FragColor = Ca + (Cd*diffuse);
}

lambert.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse);
}

sharpspecular.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float sharpness = 0.2;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  float w = 0.18 * (1.0 - sharpness);
  specular = smoothstep(0.72 - w, 0.72 + w, specular);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

thinplastic.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float sharpness = 0.2;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  float w = 0.18 * (1.0 - sharpness);
  specular = smoothstep(0.72 - w, 0.72 + w, specular);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

velvet.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const float backscatter = 0.25;
const float edginess = 4.0;
const float sheen = 0.7;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  float cosine = clamp(dot(L, V), 0.0, 1.0);
  float shiny = sheen * pow(cosine, 16.0) * backscatter;

  cosine = clamp(dot(N, V), 0.0, 1.0);
  float sine = sqrt(1.0 - cosine);
  shiny = shiny + sheen * pow(sine, edginess) * diffuse;

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*shiny);
}

vertex.glsl

const vec4 AMBIENT = vec4( 0.1, 0.1, 0.1, 1.0 );
const vec4 SPECULAR = vec4( 1.0, 1.0, 1.0, 1.0 );
uniform vec4 light;

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

void main(void)
{
  V_eye = gl_ModelViewMatrix * gl_Vertex;
  L_eye = (gl_ModelViewMatrix * light) - V_eye;
  N_eye = vec4(gl_NormalMatrix * gl_Normal, 1.0);

  gl_Position = gl_ProjectionMatrix * V_eye;
  V_eye = -V_eye;

  Ca = AMBIENT;
  Cd = gl_Color;
  Cs = SPECULAR;
  
  gl_TexCoord[0] = Ca;
  gl_TexCoord[1] = Cd;
  gl_TexCoord[2] = Cs;
  gl_TexCoord[3] = V_eye;
  gl_TexCoord[4] = L_eye;
  gl_TexCoord[5] = N_eye;
}

sheen.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const vec3 esheen = vec3( 0.1, 0.2, 0.5 );    // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 );    // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 );   // Glow sheen
const float breathe = 0.8;                    // Sheen attenuation

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);

  float cos = dot(N, V);
  float sin = sqrt(1.0 - pow(cos, 2.0));
  vec3 specular = vec3( 0, 0, 0, 1 );

  specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;
  specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * Cs.xyz * lsheen;
  specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * Cs.xyz * gsheen;

  gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

I’ve just sent an email to Tom about those issues (before reading this post) with modified shaders.
Instead of using built-in varying variables to pass data from the vertex to the fragment program (something that I think is useful if you have a fragment program that can be used with the fixed vertex pipeline or with a custom vertex shader), I think that, in this case, it is better to use user-defined varying variables.
Here are the corrected shaders done this way:

blinn.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}
 

lambert.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse);
}

Phong.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

vec3 reflect(vec3 N, vec3 L)
{
  return 2.0*N*dot(N, L) - L;
}

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 R = reflect(N, L);
  float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

rim.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float edgeWidth = 0.3;

float bias(float value, float b)
{
  return (b > 0.0) ? pow(value, log2(b) / log2(0.5)) : 0.0;
}

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  float edgeScale = bias(1.0 - dot(V, N), edgeWidth);
  edgeScale = max(0.7, 4.0*edgeScale);
  diffuse = diffuse * edgeScale;

  gl_FragColor = Ca + (Cd*diffuse);
}

sharpspecular.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float sharpness = 0.2;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  float w = 0.18 * (1.0 - sharpness);
  specular = smoothstep(0.72 - w, 0.72 + w, specular);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

sheen.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const vec3 esheen = vec3( 0.1, 0.2, 0.5 );    // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 );    // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 );   // Glow sheen
const float breathe = 0.8;                    // Sheen attenuation

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 H = normalize(L + V);

  float cos = dot(N, V);
  float sin = sqrt(1.0-pow(cos, 2.0));
  vec3 specular = vec3(0.0, 0.0, 0.0);

  specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * vec3(Cs) * esheen;
  specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * vec3(Cs) * lsheen;
  specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * vec3(Cs) * gsheen;

  gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

thinplastic.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse_f = clamp(dot(L, N), 0.0, 1.0);
  float diffuse_b = clamp(dot(L, -N), 0.0, 1.0);

  vec3 H = normalize(L + V);
  float specular = clamp(pow(dot(N, H), 32.0), 0.0, 1.0);

  gl_FragColor = Ca + 0.8*(Cd*diffuse_f) + 0.2*(Cd*diffuse_b) + (Cs*specular);
}

velvet.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

const float backscatter = 0.25;
const float edginess = 4.0;
const float sheen = 0.7;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  float cosine = clamp(dot(L, V), 0.0, 1.0);
  float shiny = sheen * pow(cosine, 16.0) * backscatter;

  cosine = clamp(dot(N, V), 0.0, 1.0);
  float sine = sqrt(1.0 - cosine);
  shiny = shiny + sheen * pow(sine, edginess) * diffuse;

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*shiny);
}  

vertex.glsl

const vec4 AMBIENT = vec4( 0.1, 0.1, 0.1, 1.0 );
const vec4 SPECULAR = vec4( 1.0, 1.0, 1.0, 1.0 );
uniform vec4 light;

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
  V_eye = gl_ModelViewMatrix * gl_Vertex;
  L_eye = (gl_ModelViewMatrix * light) - V_eye;
  N_eye = vec4(gl_NormalMatrix * gl_Normal, 1.0);

  gl_Position = gl_ProjectionMatrix * V_eye;
  V_eye = -V_eye;

  Ca = AMBIENT;
  Cd = gl_Color;
  Cs = SPECULAR;
}  

Also, note that instead of using things like:
vec3 V = normalize(vec3(V_eye));
you can use:
vec3 V = normalize(V_eye.xyz);
The two are equivalent, since V_eye.xyz is a vec3; the swizzle form can be more readable in some contexts.

Hope this helps.

Originally posted by evanGLizr:
[b]

c) GLSL spec already has a Reflect function, although it’s the reverse negated from the one Tom implemented. Overloading Reflect with Tom’s was giving me grief, so I just commented it out and changed the invocation to match GLSL.

sheen.glsl

vec4 Ca = gl_TexCoord[0];
vec4 Cd = gl_TexCoord[1];
vec4 Cs = gl_TexCoord[2];

vec4 V_eye = gl_TexCoord[3];
vec4 L_eye = gl_TexCoord[4];
vec4 N_eye = gl_TexCoord[5];

const vec3 esheen = vec3( 0.1, 0.2, 0.5 ); // Environment sheen
const vec3 lsheen = vec3( 0.3, 0.4, 0.5 ); // Light sheen
const vec3 gsheen = vec3( 0.4, 0.35, 0.3 ); // Glow sheen
const float breathe = 0.8; // Sheen attenuation

void main(void)
{
vec3 V = normalize(vec3(V_eye));
vec3 L = normalize(vec3(L_eye));
vec3 N = normalize(vec3(N_eye));

float diffuse = clamp(dot(L, N), 0.0, 1.0);

vec3 H = normalize(L + V);

float cos = dot(N, V);
float sin = sqrt(1.0 - pow(cos, 2.0));
vec3 specular = vec3( 0, 0, 0, 1 );

specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, V) * Cs.xyz * esheen;
specular = specular + pow(sin, (1.0/breathe*5.0)) * dot(L, N) * Cs.xyz * lsheen;
specular = specular + pow(cos, (breathe*5.0)) * dot(L, N) * Cs.xyz * gsheen;

gl_FragColor = Ca + (Cd*diffuse) + vec4(specular, 1.0);
}

[/b]

I like the use of built-in functions (as recommended), so the Phong shader should look like:

Phong.glsl

varying vec4 Ca;
varying vec4 Cd;
varying vec4 Cs;

varying vec4 V_eye;
varying vec4 L_eye;
varying vec4 N_eye;

void main(void)
{
  vec3 V = normalize(vec3(V_eye));
  vec3 L = normalize(vec3(L_eye));
  vec3 N = normalize(vec3(N_eye));

  float diffuse = clamp(dot(L, N), 0.0, 1.0);

  vec3 R = -reflect(L, N);
  float specular = clamp(pow(dot(R, V), 16.0), 0.0, 1.0);

  gl_FragColor = Ca + (Cd*diffuse) + (Cs*specular);
}

In the original sheen.glsl by Tom, he had:
vec3 specular = { 0, 0, 0 };
(he probably ported this code from Cg). You have modified it in your code as:
vec3 specular = vec3( 0, 0, 0, 1 );
That should be:
vec3 specular = vec3(0.0, 0.0, 0.0);

Hope this helps

If this shader was so buggy, how did he get it to compile? Or did he compile it on a deficient (re: nVidia) glslang implementation?

Originally posted by Korval:
If this shader was so buggy, how did he get it to compile? Or did he compile it on a deficient (re: nVidia) glslang implementation?
That’s funny. The shader works on NVIDIA’s implementation, and yet you call NVIDIA’s implementation deficient.

Wouldn’t it seem the implementation upon which the shader does not work is the deficient one?

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages. An incomplete standard library (C’s frac function is called fract, huh?), lack of reasonable type promotion, lack of casting constructs, inability to modify varying and uniform data, inability to override standard library functions, failure to support passing structs to functions, forcing control-flow conditionals to be of type boolean when C and C++ (and Cg) implicitly compare numeric values to zero, etc. It’s all very limiting and frustrating.

A shading language should not be a straitjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages. A good language should make a programmer more productive.

I don’t know about you, but typing “3.0” when “3” would suffice isn’t productive in my book. It doesn’t help you be more creative writing shaders or make your life easier; it’s just annoying.

There’s stuff about GLSL that just makes reasonable programmers wonder “what were they thinking??”, such as the decision to have row-major arrays (C-style) yet column-major matrices (FORTRAN-style). Every new language since, oh, 1971 has gone row-major, as did Direct3D and OpenGL’s ARB-endorsed assembly language interfaces. Mixing the two conventions is just weird and prone to confusion. Unfortunately, these decisions are baked into the language, so there’s nothing that can be done about it (it is exposed externally by the fact that attributes in a matrix are columns, not rows).

NVIDIA’s GLSL implementation has a lot of Cg heritage, so constructs that make sense in C and C++ typically “just work as you’d expect” in GLSL.

I’m not saying NVIDIA’s GLSL implementation is perfect (it’s still a work in progress - improving rapidly), but it allows very lengthy shaders to be written today that can run in hardware much more often than on other implementations. Vertex branch and loop constructs work well. Fragment shaders can be very, very long. The standard library is compatible with the more functional Cg standard library. The implementation doesn’t generate a bunch of errors due to odd GLSL language limitations that no programmer would tolerate if writing C or C++.

  • Mark Kilgard

All of which, IMO, is a bad plan. What’s the point in having a spec to follow if one of the implementations doesn’t stick to it at all? OK, sure, it’s annoying for the code not to work how we expect C code to work, but it’s not C code, it’s OGLSL, so while its constructs/layout might be the same, if the spec says it should do something one way then that’s how it should be done (unless of course an extension is registered which allows for these ‘extensions’, but it shouldn’t be allowed in the basic version; it just makes the whole concept of the spec laughable).
Heck, MS do this all the time, it’s called ‘Embrace and Extend’ and everyone gives 'em a hard time; Nvidia do it and you are giving 'em praise.

If the compiler doesn’t follow the spec, it’s broken and therefore deficient in its current state. I truly hope Nvidia fix it so that it works the CORRECT way in future.

As for how it was compiled, maybe it was done on a newer driver revision; I’m led to believe that there have been significant advances in the state of the GLSL backend in recent driver updates…

edit: however, that isn’t the case; I’m on the 4.4 drivers and it bombs out on startup with the default code…

When I write a GLSL shader, my first priority is to get it working on every graphics card. That is the reason why the extension is called ARB and not NV or ATI…

If every compiler let you do things not in the spec, this would not be an easy task. I prefer having an “inconvenient” language over having many undocumented extensions. I have to stick to the spec anyway to remain compatible, but I would prefer the compiler to tell me when I write something that is not correct instead of silently ignoring it.

With the current nVidia implementation I have to test every shader on an ATI card to see if it works, because the fact that it works on an nVidia card doesn’t imply that it is conformant to the spec. So in the end this “more convenient” language actually means more work for developers…

About the column-major matrices: this actually makes sense in 3D graphics programming. If you see a matrix as a transformation between coordinate frames, you are interested in the columns of the matrix, not the rows, because the columns represent the new basis vectors and the origin.
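In GLSL this falls out of matrix indexing, which selects columns; a sketch in a vertex shader:

```glsl
void main(void)
{
  // Indexing a matrix selects a column, so the transformed basis
  // vectors and origin of the modelview frame are simply:
  vec4 xAxis  = gl_ModelViewMatrix[0];  // new X basis vector
  vec4 yAxis  = gl_ModelViewMatrix[1];  // new Y basis vector
  vec4 origin = gl_ModelViewMatrix[3];  // translation (the new origin)

  gl_Position = gl_ProjectionMatrix * (gl_ModelViewMatrix * gl_Vertex);
}
```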

Deficient?

Nvidia simply introduces new extensions to make GLSL easier and nicer. I find this syntax much more convenient, because it is based on already existing languages. Why no type overloading in GLSL? To make something new? :smiley: Nvidia’s GLSL works very nicely for me and, in the end, it’s up to you to write compatible shaders. There is an option in NVemulate to enable a shader conformance test. Anyway, we will end up writing different shaders for Nvidia and ATI, but this would still be much more convenient than using the old APIs.

Thanks for the comments, everyone. Yes, the shaders were written on an NVidia card, using driver version 57.10. I’ll update the demo with the fixed shaders after the weekend.

Mark, while I agree with most of the issues you have with the GLSL spec as is, I disagree that making your compiler more forgiving is a good thing to do. Sure, the language extensions you provide are very sensible. Unfortunately, because I don’t own an ATI card to test on, it also means I’m bound to get complaints when I release my demo to the public and half of my audience can’t run it.

I hadn’t noticed the “shader conformance” option Zengar mentions, but I’ll look into it. If it indeed forces the GLSL compiler to comply to the spec, then I strongly urge NVidia to make this behaviour the default when the first official GLSL drivers get released.

– Tom

In my opinion nVidia does some stuff which is not really good for their image. For example, a pop-up blocker (!?) for Internet Explorer…

Another thing is not sticking to a spec. I think this is an annoying thing, because it takes you by the hand and helps you to produce wrong, incompatible code. Even the Bible says that the easy way is not always the right one… :wink:

Honestly, if the spec defines a humble language (although I don’t think it really does), then the SPEC has to be changed, so that everybody can agree on it. And I think nVidia did have influence on the spec when it was made. So now STICK TO IT!
Otherwise, change the spec. There is still time to do this, because there is no completely working implementation anyway, and no professional program using it either.

If a company claims to support glSlang, I want it to support GLSLANG, not NV_glslang, nor ATI_glslang, nor NV_CG_slang, etc.

Compatibility is in this case more important than C-stylishness. And most people are able to learn those small differences quite easily anyway.

Jan.

I don’t quite understand what you are talking about.

Nvidia supports GLSL as the specs say. I was able to run all the GLSL demos written by Humus (ATI card) on my GFFX 5600 without any problems. Currently noise functions are unsupported, and maybe something else. But I don’t understand why providing extensions to GLSL that make use of GFFX features could be seen as something “bad for image” (??!!). How does it make nvidia not spec-conformant?
I’m happy to be able to use fixed types or packing instructions in my GLSL programs. Also, Cg provides some additional functions which are not present in GLSL. This extension (NV_Cg_shader) isn’t meant to be portable, so where’s the problem? It’s up to you if you want to use it.
@Tom: it’s the “strict shader portability warnings” option. Never tried whether it works though :smiley:

Because, as we’ve seen, people like Tom will write code they ‘think’ is proper GLSL but isn’t, and it requires a rewrite to make things work properly.
The whole point of GLSL is a platform/hardware-independent spec which lets you write one version and get the correct result on different hardware. By allowing people to relax the spec on their cards, Nvidia have created a situation where code written against their version of GLSL doesn’t work on anyone else’s cards, because those require strict GLSL that follows the spec correctly.

At the end of the day, it’s still a beta release of the backend, so mistakes can be expected, but I’m with Tom in strongly urging Nvidia to make sure this kind of thing CAN’T happen in the final version.

Originally posted by Mark Kilgard:

A shading language should not be a straightjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages. A good language should make a programmer more productive.

But I read code much more than I write it. Look at Perl: mostly the source isn’t readable. Look at Python: it’s IMHO a joy to read. C/C++ is also hard to read. I like more restricted languages when it comes to writing because of that.

OK, it’s a little bit annoying to have to write 1.0 instead of 1, but it isn’t really that hard. I like the idea of forbidding implicit casts.

What I really miss is a good noise implementation in hardware. I hope the noise will look similar on all implementations.

I have to disagree with Nvidia’s stance on “extending the specification”, but that’s not really the issue; that Nvidia feels the specification already needs extending concerns me.

I recall a poll on OpenGL.org some time ago about adopting Cg, GLslang, or “taking the best of both”.

When I voted “take the best of both” I wasn’t voting for the GLslang proposal to be refined.
Instead I was hoping for something more like Cg refined and extended, taking parts from GLslang that tailored it more to GL interfaces.

It would seem that it was taken as “Keep going with GLslang”, and here we come to the issue where the “better parts” of Cg were abandoned.

I realize these votes might not have actually meant anything, but I half-expected the working group to respond.

Personally I’d like to see GLSL re-thought, and heavily overhauled.

Of course, this is just my opinion, and your mileage may vary.

Before this thread gets too out of control…

It is NVIDIA’s intention to provide a 100% conformant GLSL implementation. Our extended features are there just for the convenience of developers.

Obviously the whole point of having a standard shading language is that you can write your shaders once and they will run anywhere. We already have a “-strict” compiler option that gives warnings if your code is non-conformant, and future versions of our driver will have an option to enforce GLSL conformance.

Personally, I think GLSL as a language is fine - once you’ve used one C-style shading language, you’ve used them all! There are a few weird idiosyncrasies in the spec, but we’re working with the ARB to resolve these. I don’t think a device driver is really the right place for a high-level language compiler, but that’s another story.

I must agree that having to write “1.0” instead of just “1” is kinda annoying. It’s no biggie though. Maybe a GL_ARB_shading_language_101 can relax that restriction?

Originally posted by simongreen:
We already have a “-strict” compiler option that gives warnings if your code is non-conformant, and future versions of our driver will have an option to enforce GLSL conformance.

I’m kinda hoping that the option to enforce GLSL conformance will default to ‘on’ before we get too much code which will only run on one card. Yes, being able to turn conformance off to aid development is a good thing, but there just seems something backwards to me about having to turn on a feature of the API… :confused:

As long as the ARB agree to changes and both Nvidia and ATI get drivers out which support the spec and give the same results, then personally I’m not that fussed about what gets changed, and with the relative newness of the spec it is important to get the idiosyncrasies sorted out now, before we get too much legacy code (or we’ll end up in a situation like the VC7.1 C++ compiler, where you have to turn on for-loop scope compliance…)

On Cat 4.3 and 4.4 the colors look all screwed up for me, for all the examples.

It looks like all multicolored triangles.

Anybody else having this problem?

Of course, running similar shaders in my own code works fine.

Yes, I too would like to see some of these conditions relaxed, unless there is a strong reason not to.

Originally posted by V-man:
[b]On Cat 4.3 and 4.4 the colors look all screwed up for me, for all the examples.

It looks all multicolored triangles.

Anybody else having this problem?
[/b]
Did you remember to change vertex.glsl?
The correct code is in my first post; I don’t think cab’s solution of using Cs, Cd… directly as varyings instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0…5]) instead of querying for Cs, Cd, etc.

Originally posted by evanGLizr:
Did you remember to change vertex.glsl?.
The correct code is in my first post; I don’t think cab’s solution of using Cs, Cd… directly as varying instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0…5]) instead of querying for Cs, Cd, etc.

No. All the gl_TexCoord[…] are just varying parameters and are not used as input parameters for vertex.glsl.
If you read the vertex.glsl code, it just uses gl_Vertex, gl_Normal and a uniform variable ‘light’ as input values.
(Ca, Cd, Cs, V_eye, L_eye, N_eye) = (gl_TexCoord[0], …, gl_TexCoord[5]) are just output values from the vertex shader.
So you don’t need to change the app, and the solution works properly :slight_smile:
Hope this helps.