Virtual shader functions

Wouldn’t it be a nice feature of GLSL if you could declare a function virtual? By that I mean:

1/ Compile and link a global shader, with virtual functions such as ‘getWorldPosition’ or ‘getColor’.

2/ for each material, be able to get the currently active shader, clone it, override any virtual functions by adding more shader source and linking again, thus generating a new shader.

I’m thinking in terms of having a unified lighting model, but also having fancy fx shaders at the geometry level which can be merged in with the current lighting shader, without resorting to multipass.

Kinda like the fragment linking in HLSL, or the interfaces in Cg (which I believe are just fragment linking in disguise)? I think it’s a great idea; I’m all for it.

Originally posted by knackered:
for each material, be able to get the currently active shader, clone it, override any virtual functions by adding more shader source and linking again, thus generating a new shader.

You can simply write the main part of the code as one source string and the per-material functions as separate strings, and combine them in the call to glShaderSource when you compile the shader for that material.
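This composition approach can be sketched in a few lines; glShaderSource genuinely accepts an array of source strings, but all the names and GLSL snippets below are made up for illustration:

```python
# Sketch of the string-composition approach: keep the shared "main"
# part as one string and the per-material functions as others, then
# pass the whole list to glShaderSource. All names are illustrative.

MAIN_SOURCE = """
vec4 getColor();            // declared here, defined per material
void main()
{
    gl_FrontColor = getColor();
}
"""

MATERIAL_SOURCES = {
    "plain": "vec4 getColor() { return vec4(1.0); }",
    "shiny": "vec4 getColor() { return vec4(0.8, 0.8, 1.0, 1.0); }",
}

def compose_shader_source(material):
    """Return the list of strings you would hand to glShaderSource()."""
    return [MAIN_SOURCE, MATERIAL_SOURCES[material]]

sources = compose_shader_source("shiny")
```

The point is that no real string surgery happens: the pieces stay separate and the compiler sees them concatenated.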

Well, no, that’s not the flexibility I’m talking about. What if I don’t know the source code of the currently active shader? What if the lighting model uses multiple shaders depending on the number and type of active lights?
I’m talking about a generic policy, rather than relying on people copy/pasting.
I’m thinking more in terms of being able to document the shader interface the application is using (the names of the virtual functions) and then being able to override those functions (even accumulating the result through the hierarchy of shaders) to generate a new shader.

// base vertex shader
void main()
{
    gl_FrontColor = getColor();
}
virtual vec4 getColor()
{
    return dofancylighting();
}

// shader2 derived from base
virtual vec4 getColor()
{
    vec4 c = super::getColor();
    c *= domyfancyeffect();
    return c;
}

// shader3 derived from shader2
virtual vec4 getColor()
{
    vec4 c = super::getColor();
    c *= domyfancywobble();
    return c;
}

Glslang provides that functionality through the linking stage: the shader/program paradigm.

You can compile your main shader and link it to some stub shaders that turn those “get*” statements into just retrieving uniforms. Later, you can take that same compiled main shader and link it to some other shaders that do more substantial computations.

To reword your suggestion into glslang-compatible language:

Remember: the glslang linking stage can take multiple shaders for both vertex and fragment steps.

I know the mechanics of the compile/link stages of GLSL. Just as I know that what I’m suggesting is absolutely not possible without resorting to global uniforms and/or string manipulation.
I’m not talking about simple overrides, I’m talking about a hierarchy of overrides, each feeding the next one down the hierarchy, just like C++ virtual functions are commonly used. To do this, you need a concept of multiple implementations of a method.
For a (simple) example: I want to modulate the colour result of the entire hierarchy of shaders with my special number. I don’t want to completely replace the whole colour calculation, I want to modify it, and I want a shader further up the hierarchy to be able to modify my modified colour.
I’ve almost finished a system I started about 4 hours ago that sits on top of GLSL that does this, so it’s not entirely necessary for it to be part of GLSL, but maybe others would appreciate it being in the standard - making GLSL more attractive than HLSL.

Originally posted by knackered:
but maybe others would appreciate it being in the standard - making GLSL more attractive than HLSL.
Adding features that can reasonably be implemented by the application is imho not the right thing to do to make GLSL more attractive. For me the better candidates for adding would be:

  • A query to determine the amount of virtualization applied to a shader (e.g. no virtualization, software emulation) so the program can, if desired, select a shader that is a better match for the HW.
  • Support for uniforms shared by all program objects.
  • Support for a vendor- and driver-independent GLSL compiler whose use can be enabled by the application, ideally with a specific version. If disabled, the HW-specific compiler created by the HW vendor is used. This independent compiler may not support all features or be optimal for specific HW, however it must be reliable, so a specific version of it always generates the same output for the same input.

Originally posted by knackered:
I know the mechanics of the compile/link stages of GLSL. Just as I know that what I’m suggesting is absolutely not possible without resorting to global uniforms and/or string manipulations.
You can attach multiple shader objects (if I’m using the term correctly) to the program object and link. No need for string manipulation or uniforms.
For example, if your composed vertex shader is compiled as 5 vertex shader objects, then you attach all 5 to the program object and link.
No string manipulation, except that those 5 pieces need to fit together.

What knackered seems to want is something more than merely having a compiled shader that can feed into another. He wants each shader to be an object with inputs and outputs. Outputs from one shader can be linked to feed inputs of another shader arbitrarily (ie, there’s no need to explicitly match names in both of the shaders).

Shader 1 can have a named output called “foo”, and Shader 2 can have a named input called “thingy”. You would call a function (as you attach the shaders to the program) that says the Shader 1 output “foo” feeds the Shader 2 input called “thingy”.
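A hypothetical linking API along those lines might look like this; none of these classes or calls exist in any real GL, they are purely a sketch of the idea:

```python
# Hypothetical sketch: wiring a named output of one shader node to a
# differently-named input of another. Not a real GL API.

class ShaderNode:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = set(inputs)
        self.outputs = set(outputs)

links = []  # (src_shader, src_output, dst_shader, dst_input)

def link_output(src, out_name, dst, in_name):
    """Connect src's output to dst's input; names need not match."""
    assert out_name in src.outputs and in_name in dst.inputs
    links.append((src.name, out_name, dst.name, in_name))

shader1 = ShaderNode("shader1", inputs=[], outputs=["foo"])
shader2 = ShaderNode("shader2", inputs=["thingy"], outputs=["color"])
link_output(shader1, "foo", shader2, "thingy")
```

The win is that neither shader’s source needs to know the other’s variable names; the binding happens at program assembly time.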

I agree that such a system would be nice. The only compelling argument for directly including such a feature into glslang proper is compiler/linking performance.

[ edit ]

Just thought of another reason: runtime performance. If you know ahead of time (at link time) that a certain output is unused in this particular program, you can remove the code that generates it (to the extent that it isn’t used by something else). Then that code doesn’t get executed.

Support for a vendor- and driver-independent GLSL compiler whose use can be enabled by the application, ideally with a specific version. If disabled, the HW-specific compiler created by the HW vendor is used. This independent compiler may not support all features or be optimal for specific HW, however it must be reliable, so a specific version of it always generates the same output for the same input.
That is an incredibly good idea. Too good for the ARB, I’m afraid.

Support for uniforms shared by all program objects.

yeah, this has gotta happen; overriding the built-in state (glLight etc.) is messy

Support for a vendor- and driver-independent GLSL compiler whose use can be enabled by the application, ideally with a specific version. If disabled, the HW-specific compiler created by the HW vendor is used. This independent compiler may not support all features or be optimal for specific HW, however it must be reliable, so a specific version of it always generates the same output for the same input.

this is what HLSL does, i believe. i read somewhere (can’t find the quote) that they’re gonna drop it and do the same thing as GLSL.
what i’d like better is the ability to read the generated, already-compiled shader off the disk.
at the start of the app you just check the driver version the shader was compiled with against the one on disk; if different, regenerate; if not (99% of the time), use the existing one.
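That disk-cache idea can be sketched in a few lines; the cache layout, key names, and version strings below are all made up for illustration:

```python
# Sketch of caching a compiled shader blob alongside the driver
# version that produced it; recompile only when the driver changes.
# The cache dict stands in for an on-disk store. Illustrative only.

def load_or_compile(cache, driver_version, key, compile_fn):
    entry = cache.get(key)
    if entry and entry["driver"] == driver_version:
        return entry["blob"]            # the 99% case: reuse from disk
    blob = compile_fn()                 # driver changed, or first run
    cache[key] = {"driver": driver_version, "blob": blob}
    return blob

cache = {}
blob1 = load_or_compile(cache, "81.98", "water_vs", lambda: "BLOB_A")
blob2 = load_or_compile(cache, "81.98", "water_vs", lambda: "NEVER")   # reused
blob3 = load_or_compile(cache, "84.21", "water_vs", lambda: "BLOB_B")  # recompiled
```

Keying on the driver version is the safety valve: a driver update silently invalidates every cached blob it compiled.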

This is all very interesting, but this thread is about my zippy virtual functions idea.
Start your own bloody thread.

Originally posted by Komat:
Adding features that can be reasonably implemented by application is imho not the right thing to do to make GLSL more attractive.
No, for me adding features that could eventually be hardware accelerated is the right thing to do. By building in this kind of thing we can cut down the number of shaders required in an application, or not, depending on the capabilities of the target hardware. This is in keeping with the spirit of OpenGL.

for what it’s worth, i think the idea is quite zippy :)

Thank you leghorn, you exhibit great wisdom.
In case you’re interested, the system works perfectly. It’s generating lots of unique shaders automatically, but remembers the combinations it’s already made in a cache. The cache has a usage counter, and deletes shaders that haven’t been used in x number of frames. This is cool for me, as the user can create/destroy lots of materials at runtime. I can’t believe the number of options that have opened up for me with this system.
For instance, I’ve got a ‘projected grid’ vertex shader generating vertex positions, which feeds into a FFT displacement mapping vertex shader that offsets the vertex positions, which feeds into a local displacement mapping vertex shader that offsets the positions further. The same for the normals. At runtime I could create a new vertex shader and add that to this hiearchy to further enhance the shading.
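The cache described above might look roughly like this; the class, key shapes, and eviction policy are a guess at the structure, not knackered’s actual code:

```python
# Sketch of a shader cache keyed on the combination of overrides:
# each entry remembers the frame it was last used, and entries idle
# for more than max_idle frames are evicted. Illustrative only.

class ShaderCache:
    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.entries = {}           # key -> (shader, last_used_frame)

    def get(self, key, frame, build_fn):
        if key in self.entries:
            shader, _ = self.entries[key]
        else:
            shader = build_fn()     # compile+link this combination once
        self.entries[key] = (shader, frame)
        return shader

    def evict(self, frame):
        dead = [k for k, (_, last) in self.entries.items()
                if frame - last > self.max_idle]
        for k in dead:
            del self.entries[k]     # glDeleteProgram would go here

cache = ShaderCache(max_idle=2)
prog = cache.get(("base", "fft", "wobble"), frame=0, build_fn=lambda: "prog_1")
cache.evict(frame=10)               # idle too long: entry is dropped
```

Keying on the whole override combination is what lets materials share a program when they end up requesting the same shader stack.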

Originally posted by zed:
this is what HLSL does i believe, i read somewhere (cant find the quote) though theyre gonna drop it + do the same thing as glsl.

From what I have read, I got the impression that at least in DX10 the HLSL compiler will still be provided entirely by Microsoft, and drivers will only operate on its output. Only this time there will be no way to specify hand-written “assembler”.

yeah, i’ve been doing the same thing with hlsl for some time. the nice thing about hlsl is that the fragment linker can eliminate dead code automatically, on-the-fly, and all very efficiently. the variable binding semantics make them a breeze to work with to boot. like you say, it’s string manipulation, unless you want to deal with asm directly, but it’s nice to know it’s there. that’s why i half-jokingly suggested an intermediate language in the suggestions forum, so you could build your own front-end and quasi back-end for any language you like, without the need for 2 or more source-to-source translators (sigh). that’s what i’m doing now, translating my script to hlsl and glsl (at least trying to). i have all sorts of scripted dependencies in my shaders, so off-the-rack languages don’t quite get it for me.

p.s. is that the projected grid water thing?

Yes, the water thingy. Don’t bother with it, it’s got serious aliasing issues with anything other than very small displacements, I’ve ended up making the normals face straight up the further away the vertex is, just to sort out the lighting aliasing.
It also absolutely needs vertex texture fetching, it’s a real CPU/bus hog otherwise.
Apart from that it’s great.
So why change your name from bonehead?

that’s a shame. the demo looks promising, but i’d have to agree that now with vertex texture sampling you’d have to question the wisdom of spending all that cpu on reducing detail at this point.

p.s. i don’t know why, knackered. sometimes i feel like a bonehead, sometimes i don’t. besides, i was in the mood for something somewhat less indicative of my own cerebral thickness. but chances are i’ll be feeling like a bonehead again very soon so i’ll probably be changing it back later. sorry if i caught you off guard ;)

Yes, with ‘projected grid’ on the CPU you have to displace and upload all of the visible vertices, so the more you refine the grid the more vertices you have to displace and upload…but water simulations usually just deal with a small patch of data points, and a standard approach would be to displace this small patch on the CPU, then upload it, then render the patch many times in the frustum with a single draw call each time…which is waaay faster than the ‘projected grid’ approach.
Of course, using vertex texture fetch this becomes less of a problem, but you’re still severely straining the GPU with all those redundant texture fetches…
I mean, I could go on all day about this. Regrets, I’ve had a few…

found an interesting article about HLSL fragments:
http://www.talula.demon.co.uk/hlsl_fragments/hlsl_fragments.html