GLSL Annotation Syntax

I brought this up in a recent shading language forum thread, so I thought I’d toss it in here for the record.

The idea is simply to add some sort of syntactic sugar for specifying user-defined semantics/attributes within GLSL. The motivation is to give the user a way of directly communicating variable semantics to the application.

Yes, this can be achieved indirectly, by parsing out embedded comments and such, but that is comparatively inconvenient and considerably less aesthetically pleasing than a structured part of the language would be. I’m also aware that this functionality could be, and is, layered in other APIs, such as COLLADA. But my contention is that a language element like this belongs in the language itself, not in some other, non-OpenGL layer.

As a simple example:

[Texture="Clouds.tga", MinFilter="NEAREST", ...]
sampler2D mySampler;

[Type="TimeDelta", Rate="2"]
float timeDelta;

[UI="Slider", Range="0,100"]
float brightness;

The syntax is actually borrowed from C# (without the attribute class name).

Or perhaps something more along the lines of an FX-style annotation:

sampler2D mySampler  
<
Texture="Clouds.tga",
MinFilter="NEAREST",
...
>;

Whatever the syntax, it’s simply an optional list of (identifier, string) pairs, and the driver needn’t (necessarily) concern itself with the contents at all. One could store a list of these pairs and expose a simple API to access a particular item belonging to a particular uniform or attribute:

glGetAttributeAnnotation(int prog, int attr, char* name, int maxLen, char* value);
glGetUniformAnnotation(int prog, int uniform, char* name, int maxLen, char* value);

But my contention is that a language element like this belongs in the language itself, not in some other, non-OpenGL layer.
And this contention is based on… what, exactly?

This seems to add something to the driver that could be fragile and break implementations, as well as take up memory for applications that don’t need them. You’re going to have to justify why this needs to be in a driver rather than an appropriate external layer.

Agreed, GLSL needs annotations badly. Also an offline compiler to protect intellectual property (but that’s off-topic).
As a temporary solution you could use /**/ and // comments and parse them manually, much as Doxygen (a code documentation system) does.

GLSL is oriented purely toward managing shaders… but why not borrow the D3D effect idea? We also need render states, depth/stencil operations, etc.

There is a thing called ColladaFX to do this, but it is not officially supported by OpenGL at the moment (perhaps a future extension to manage this would be a good idea?)

There is a thing called ColladaFX to do this, but it is not officially supported by OpenGL at the moment (perhaps a future extension to manage this would be a good idea?)
You have failed to answer the practical and relevant question: why does this need to go in a driver?

As you point out, ColladaFX exists. And (I guess) is usable. So, what’s the need to add this to the driver?

And even if ColladaFX didn’t exist, what is the pressing concern that says it should go into the OpenGL implementation rather than a layer above it?

Come on, Korval. Are you really opposed to the idea, or do you just feel personally obligated to be contrary?

Santyhammer, thanks for the positive feedback :slight_smile:

Come on, Korval. Are you really opposed to the idea, or do you just feel personally obligated to be contrary?
1: I’m opposed to any significant change presented without any real foundation other than, “Because it’d be nice to have.” These kinds of things should be justified, and you have not done so.

2: Personally, I don’t see the point in making this a feature of glslang. It takes up room in an already crowded driver and makes a difficult-to-implement specification even more so. Personally, if it were implemented, I would have no use for it.

These kinds of things should be justified, and you have not done so.
Um, yes I have.

It takes up room in an already crowded driver and makes a difficult-to-implement specification even more so.
Difficult for you, perhaps.

Oh, and what “room” would it take up?

Personally, if it were implemented, I would have no use for it.
Really? Then I guess if you have no use for something, we should all turn in for the night?

Thanks, Korval. Your concerns are duly noted, if not strictly required :wink:

Um, yes I have.
No, you didn’t. Certainly not completely.

You only explained what it is you want and how it would be useful. You did not justify why it should be embedded in drivers.

A justification would take the form of, “Implementing this outside the driver would incur a substantial performance bottleneck.” That would be the justification for why VBOs need to be in OpenGL rather than as a layer on regular vertex arrays. That would be the justification as to why shaders need to be in OpenGL rather than as a layer on top of an increasingly tortured fixed-function pipeline.

A full justification for inclusion in the OpenGL spec requires explaining why it can’t be somewhere else without losing some of its efficiency or functionality. Otherwise, I could easily ask for all kinds of nonsense features. Maybe the modified butterfly subdivision surfaces scheme should be implemented in OpenGL too.

Difficult for you, perhaps.
You can’t honestly be suggesting that writing a GL implementation is a simple task.

Oh, and what “room” would it take up?
Code room. Parsing the data, storing it, and then regurgitating it on demand. More entrypoints and behaviors to respond to. The most bug-free code is the code that doesn’t exist. These functions can’t cause problems if they’re not there.

I’m not saying it’d be a huge, onerous task to implement. But if you don’t have to, if someone else can do it, why should you?

Really? Then I guess if you have no use for something, we should all turn in for the night?
Did you miss the “Personally” part? As in, “My personal opinion?” So, I’ll restate:

My professional, objective opinion is that you have not done enough to justify the placement of this functionality in a driver when it could just as easily and effectively be layered. My personal, subjective opinion is that I consider the functionality useless to me.

Thanks for clearing all that up. I was really confused…

Do you have any idea what you’re talking about?

I believe you’d argue both sides of a tautology.

Hey, here’s something for the weekend: It’s going to rain tomorrow or it’s not going to rain tomorrow.
(Knock yourself out.)

 :rolleyes:

Your inability and ultimate unwillingness to defend the construct you propose against an obvious and reasoned concern speaks to how little it is truly needed in OpenGL. And your willingness to turn it into a personal matter, rather than keeping it about the issue in question, shows how little factual need there is for this functionality.

On the contrary. It’s my style and enthusiasm that makes this thread worth reading in the first place :cool:

It’s your weightless prose that adds levity and entertainment value :stuck_out_tongue:

By the way, I am not under any contractual obligation to satisfy you or anyone else of the merits of a given proposal. I figure the good chaps ultimately responsible for the induction of new functionality are sufficiently capable of making up their own minds in these matters. Be it outright acceptance or the waste paper basket–I’ll gladly leave it in their very capable hands to decide.

However, I’ll continue with this ridiculous quibbling, if you insist.

@Leghorn:
You can do this in the application layer. This “feature” shouldn’t go in the driver. OpenGL is just a graphics library. Why should the driver bother with sliders, texture names (as strings), and so on?

If you need such functionality, use comments like //[…] and parse them in the application.

Well, now that makes a lot more sense when you put it that way.

:D :D :D :D

Thanks, Yooyo.

P.S. It’s a pipe dream, you know :frowning:

Perhaps someone here should really try to justify why this should go into the driver, instead of just saying “why not” :wink:

What about this reason:
This feature should go into the compiler. Why? Because you have to parse and semantically analyse the GLSL language to find out where this annotation belongs. The code is already there in the compiler, it just has to add the annotations to each symbol table object.

The proposed solution of comments processed by a layer on top of GLSL (Doxygen-style) would work as well. But you would have to duplicate at least the GLSL parsing and the symbol list.

So to sum up, annotations should go into the compiler, and because the compiler is in the driver they should go there, too.

Of course it can be done as a layer on top of the driver. But with the same argument, the whole GLSL compiler should not be in the driver, but hey, it is already there.

Thanks for your support, Overmind :slight_smile:

This feature should go into the compiler. Why? Because you have to parse and semantically analyse the GLSL language to find out where this annotation belongs.

[…]

So to sum up, annotations should go into the compiler, and because the compiler is in the driver they should go there, too.
See, this is a justification. And a pretty decent one.

However, it does seem that such functionality, literally storing information in GL that GL itself doesn’t use internally for anything, is somewhat new for OpenGL. I don’t recall another extension or feature where GL is specifically used as a data storage and retrieval device.

Just something to think about.

Something else.

If this were going to happen, I would suggest that at least some of these have explicit intrinsic meanings to GL. Like the specifications of Sampler Object parameters. This is the kind of thing that a shader might need to assume or require (because it might be doing its own filtering on the texture, etc), and allowing the shader to explicitly force this would be useful.

That way, at least you’re getting a feature out of it.

But with the same argument, the whole GLSL compiler should not be in the driver
Hey, I agreed with nVidia that it shouldn’t have been there in the first place. Obviously the ARB didn’t see things that way. Thanks, 3DLabs…

This should not go inside the “driver”. This should be part of the future OpenGL SDK helper classes. Exactly like D3D does with the D3DX library.

So… does this need to be in OpenGL? I think yes, because it will be useful and has proved to be a good thing in D3D.

Does it need to be in the driver? Nope, not at all. That would introduce more complexity to the driver and increase its size.
Of course you could do it yourself in your app and parse the /**/ and // comments, but it would be better if the official SDK supported basic things like this with some “helper classes”, like D3DX does.

But with the same argument, the whole GLSL compiler should not be in the driver

I think they should do that. GLSL needs an offline compiler like D3D’s. Why? To protect your precious shader intellectual property, for faster loading, and to see what asm instructions are generated, so you know what the damn high-level shader compiler did (OK, you can do this with a third-party, proprietary, non-standard tool like NVShaderPerf). I really don’t see the point of GLSL being compiled at runtime. It only increases driver size, and it doesn’t optimize well anyway… it just introduces incompatibility problems and ugly things like falling back to software mode when it hits a problem, etc.

The D3D offline-precompiled approach is much more strict (it does better validation), you can see the asm instructions it generates, it obfuscates your IP, and you can control the code much better. It also reduces driver size and improves shader loading times.

In fact, I think somebody mentioned all this for the upcoming OpenGL 3.0 specification, and now it’s being implemented that way.

Well… what if NVIDIA adds such a feature to its compiler and AMD/ATI does so 6-12 months later? Your GLSL code will not work on AMD/ATI cards in that timeframe. At that point, your app has to detect the driver and change shaders “on the fly” before compiling.

Even if this suggestion goes directly to the ARB and they approve the feature, it depends on the OpenGL driver developers when they will support it. In short… we can expect another mess.

If you really need such a feature, do it yourself. It is quite easy to parse such code comments, or… use XML for shaders: write your own shader library and store the additional annotations in an XML structure. When the app loads the XML, it can generate clean GLSL code.

To be honest… this could be a very useful feature for shader development, but it should be in the app layer. Maybe it could go in a Khronos OpenGL SDK (if such a project exists).

Thanks a lot for the great feedback guys. You all raise some very good points.

Everyone seems generally in favor of the basic idea, but unsure about where it belongs. All I’m suggesting is that this should be part of OpenGL proper (be it in the driver, a layer, or whatever makes the most sense), if and when the time comes.

I really like the idea of exposing built-in/intrinsic GL names, but my initial vision was of something embarrassingly simple to implement and spec. As it stands, this is a very simple “addition” to the parser: it requires very little storage space or code complexity (a simple array of strings would do), and it is completely orthogonal to the rest of the spec (zero interaction, except where a declaration is optionally decorated).

I agree that it seems sorta weird to use GL as a storage bin, but perhaps the built-ins could be added later, or perhaps as-yet-unseen additions/changes in GL3 will make such an addition less awkward in that respect (lump it in with the IL meta type info, say). Dunno. Heck, you could add a full-blown type system and go stark raving mad with this, as D3D did. Personally, I think that’s a bit of overkill, but I’m not at all opposed to a little good-natured insanity.