Semantics for GLSL?

I can’t see any support for annotations and semantics for parameters in GLSL. When your parameters are only the plain light parameters (direction, etc.) you can pass them in as standard lighting state, but when you do things like spherical harmonics or, worse, arbitrary effects, the set of input parameters really needs rich markup to work well in a production/tool pipeline.

Cg and HLSL solve this with annotations and semantics – but what’s the GLSL equivalent? I can’t find it in the language specification.

I can’t see any support for annotation and semantics for parameters in GLSL.

Can you explain what “annotation and semantics for parameters” is?

You mean like in HLSL,

float4 main(float2 texCoord : TEXCOORD0,
            float3 lightVec : TEXCOORD1,
            float3 viewVec  : TEXCOORD2) : COLOR
{
    // …
}

?

If so, then it’s just that semantics aren’t needed in GLSL. The reason is that the vertex and fragment shaders are linked together into a single program object. The linker can then match the corresponding parameters to each other by name, and leaves it up to the driver to decide which texture coordinate interpolator gets which parameter. So:

Vertex Shader:

varying vec2 texCoord;
varying vec3 lightVec;
varying vec3 viewVec;

void main(){
    texCoord = …;
    lightVec = …;
    viewVec = …;
}

Fragment shader:

varying vec2 texCoord;
varying vec3 lightVec;
varying vec3 viewVec;

void main(){
    gl_FragColor = …;
}

[This message has been edited by Humus (edited 01-14-2004).]

I understand that I can take one of the defined OpenGL semantics and translate that to “something else” in my own application.

However, Direct3D allows me to define MY OWN semantics. There is no built-in semantic for “camera position” for a parameter, but I can decide to declare a:

float3 camPos : CAMPOS < string uiTitle = "Camera Position"; string type = "WorldPos"; > = { 0, 0, 0 };

If I have to hijack some parameter I’m not otherwise using (say, light[7].position), then something is missing in the middle: there is no way for an application using the program to know that this parameter should receive the camera position. I could make this re-definition globally, but then the limitation is on the number of GLOBAL parameters available, rather than on the number of parameters available per shader invocation.

Also, making these assumptions implicit makes program portability even harder.

Last, the annotations (the stuff in <> brackets in my sample) are very useful both for documenting what the parameters are, and for telling the host program about any special handling the parameters need: whether they should be exposed in an artist UI, and if so, using what widget (color picker, direction, etc.) and what units (meters, pixels, lumen, …).
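For illustration, here is roughly what a host application would want to recover from such an annotation block. Everything below (the type names, fields, and lookup helper) is a hypothetical sketch, not any real API:

```c
#include <string.h>

/* Hypothetical record for what an HLSL-style annotation block (the
   part between < and >) carries to the host app.  All names here are
   invented for illustration. */
typedef enum {
    WIDGET_NONE,
    WIDGET_COLOR_PICKER,
    WIDGET_DIRECTION,
    WIDGET_SLIDER
} widget_kind;

typedef struct {
    const char *param_name; /* shader parameter name, e.g. "camPos"  */
    const char *ui_title;   /* label shown in the artist UI          */
    widget_kind widget;     /* which tweaker widget to present       */
    const char *units;      /* "meters", "pixels", "lumen", ...      */
    int expose_in_ui;       /* 0 = internal only, 1 = artist-visible */
} param_annotation;

/* Find the annotation record for a given parameter name. */
const param_annotation *find_annotation(const param_annotation *table,
                                        int count, const char *name)
{
    for (int i = 0; i < count; ++i)
        if (strcmp(table[i].param_name, name) == 0)
            return &table[i];
    return 0; /* no annotation for this parameter */
}
```

With built-in annotations, a compiler could hand the app this table directly; without them, the app has to build it some other way.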

So, I take it that neither of these requirements is actually met by GLSL? That’s a shame. Cg has it, but doesn’t have universal vendor support, especially not on the content-creation side. It’s sad that everything just steps backward as it’s being ARB-ized.

Honestly, I don’t really understand. The point is that defining semantics in GLSL isn’t needed; it serves no purpose there. In DirectX it’s necessary, since vertex and fragment shaders are compiled as separate units and you don’t know at compile time which vertex shader goes with which fragment shader; therefore you need semantics so the driver knows where to map all inputs and outputs. This isn’t necessary in GLSL, which also reduces the likelihood of bugs, allows the compiler to give more informative warnings and errors, and might even lower the shader-switching cost slightly.

I don’t get at all why you’d use light[7].position or something like that for a camera position. It makes no sense whatsoever. You should of course be using a shader constant:

uniform vec3 camPos;

OK, for uniforms, looking them up by name makes sense. Except you can’t tag the names with annotations to make sense of them in an artist-friendly parameter-tweaker UI.

But what if you want to supply a per-vertex blorgification factor? Or perhaps a per-vertex slappy-doo level? How would the application code, which is fully capable of supplying either, know where to supply it? At a minimum, you need some way of mapping an arbitrary semantic name to a vertex attribute channel.
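Lacking language support, an application can only keep its own mapping from semantic names to attribute channels. A minimal host-side sketch (the semantic names and channel numbers are invented; in a real app the channel would come from querying the linked program, e.g. via glGetAttribLocation):

```c
#include <string.h>

/* Hypothetical app-side table mapping a semantic name, as an artist
   tool might use it, to a generic vertex attribute channel.  The
   channel numbers are placeholders. */
typedef struct {
    const char *semantic; /* e.g. "BLORGIFICATION" */
    int channel;          /* generic vertex attribute index */
} semantic_binding;

/* Return the attribute channel bound to a semantic, or -1 if the
   semantic is unknown. */
int channel_for_semantic(const semantic_binding *map, int count,
                         const char *semantic)
{
    for (int i = 0; i < count; ++i)
        if (strcmp(map[i].semantic, semantic) == 0)
            return map[i].channel;
    return -1;
}
```

The point is that this table has to be maintained by hand (or by a custom tool), rather than being declared next to the attribute in the shader.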

And, again, metadata makes a lot of sense when you want to build an artist pipeline. Perhaps I’m missing it somewhere? I only read through the spec once, and went back once to look for these things, so I could have missed something.

You can map arbitrary semantic names to a vertex attribute channel using the attribute keyword:

attribute float blorgification;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = vec4(blorgification);
}

Then use glGetAttribLocation to query the attribute’s location, and glVertexAttrib/glVertexAttribPointer to set it.

[This message has been edited by jra101 (edited 01-19-2004).]

Such as whether they should be exposed in an artist UI, and if so, using what widget (color picker, direction, etc) and using what units (meters, pixels, lumen, …).

And, again, meta data makes a lot of sense when you want to build an artist pipe.

Sounds like a job for the almighty comment block.

Metadata, really, has no business being part of the language. You don’t need it to bind arbitrary attributes to the language; the Get*Location mechanism takes care of that. The function of metadata, then, is solely for non-language purposes, at which point a comment block is no more or less useful than actual inline “metadata”.

Originally posted by Korval:
Sounds like a job for the almighty comment block.

Which would mean you would need to write a parser to scan your shader for specific comments, and then scan inside those comments for the user-specified annotations/semantics.

The benefit of having the language and API support user-defined annotations and semantics is that there would be a standard way to query/set these values.
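To make that concrete, here is a minimal sketch of the kind of ad-hoc parser the comment approach forces on you. The “//!” marker and the key = value layout are an invented convention for this example, not anything GLSL defines:

```c
#include <string.h>

/* Scan shader source for annotation comments of the invented form
   "//! key = value" and copy the value for the requested key into
   'out'.  Returns 1 if the key was found, 0 otherwise. */
int find_comment_annotation(const char *source, const char *key,
                            char *out, size_t out_size)
{
    const char *line = source;
    size_t key_len = strlen(key);
    while (line) {
        const char *p = line;
        while (*p == ' ' || *p == '\t') ++p;   /* skip indentation */
        if (strncmp(p, "//!", 3) == 0) {
            p += 3;
            while (*p == ' ') ++p;
            if (strncmp(p, key, key_len) == 0) {
                p += key_len;
                while (*p == ' ') ++p;
                if (*p == '=') {
                    ++p;
                    while (*p == ' ') ++p;
                    /* copy the value up to end of line */
                    size_t n = 0;
                    while (p[n] && p[n] != '\n' && n + 1 < out_size) {
                        out[n] = p[n];
                        ++n;
                    }
                    out[n] = '\0';
                    return 1;
                }
            }
        }
        line = strchr(line, '\n');  /* advance to the next line */
        if (line) ++line;
    }
    return 0;
}
```

Every application that adopts this scheme invents its own marker and its own parser, which is exactly the non-standard situation being complained about.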

Originally posted by jra101:
[b] Which would mean you would need to write a parser to scan your shader for specific comments, and then scan inside those comments for the user-specified annotations/semantics.

The benefit of having the language and API support user-defined annotations and semantics is that there would be a standard way to query/set these values. [/b]

Alternatively, you only need some extra file which maps your attributes to whatever extra meta-information you want to have.
That way you don’t need to parse any GLSL (and if you use XML you don’t even have to write a parser at all). You may even embed your GLSL inside that file (à la D3DX effect files).
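A hypothetical sidecar file along those lines might look like this; the element and attribute names are invented for illustration, not any standard schema:

```xml
<!-- effect.xml: invented schema mapping shader parameters to tool metadata -->
<effect shader="bump.vert">
  <param name="camPos" uiTitle="Camera Position" widget="direction" units="meters"/>
  <param name="blorgification" uiTitle="Blorgification" widget="slider" min="0" max="1"/>
</effect>
```

An off-the-shelf XML parser can read this on the tool side, and the GLSL source itself stays untouched.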

There’s little sense in polluting your shaders’ code with artist/game-design information (IMHO): the shaders will have to be parsed at production time, and the extra text may hurt compilation speed if you piecewise build shaders at runtime from the ones you load at startup. And there’s even less sense in speccing that into a graphics language spec.

I actually did the comment thing in the ARB_vertex_program/ARB_fragment_program version of my code. It wasn’t a huge success; the semantics and decorations of Cg and HLSL made much more sense.

Also, if you use a custom method (XML, comments, or something else) to describe these things, then shader-code portability becomes harder. Being able to download arbitrary HLSL and plop it into your app is quite useful.

There’s still the issue that there’s no standard language for things like multiple render targets (not yet available in OpenGL), or for exactly what the supported set of user-interface parameters is, so even on the HLSL side we’re not there yet. But it seems to me that HLSL supports the notion better than GLSL does, which is a shame.

exactly what the supported set of user interface parameters are

I’m not sure I understand what you are referring to by “user interface parameters”.
