Organizing shaders

Hello everyone :slight_smile:
I started to code some shaders for my game, but I quickly realised it would become a horrible mess.
I have coded a shader for vertex lighting, then one for fragment lighting, then one for changing colors, then one for transparency, etc…
Now I would like to mix some of them, but this gives me a large number of combinations, i.e. a large number of different shaders :sick:

Is there a more “proper” way to organize shaders?
Thanks

The classic C/C++ technique, which you can apply to shaders as well, is to selectively enable certain sections with #ifdef / #else / #endif preprocessor directives. However, that quickly leads to a mess, because nested preprocessor directives aren’t that easy to read.
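
For illustration, here is a minimal sketch of that approach; the flag and variable names are made up, and the application would inject a different set of #define lines into the source before compiling each permutation:

// Hypothetical permutation flags: the application prepends
// "#define USE_FOG" and/or "#define USE_ALPHA" before compiling.
uniform vec4 fogColor;
varying vec4 baseColor;
varying float fogFactor;

void main()
{
    vec4 color = baseColor;
#ifdef USE_FOG
    color.rgb = mix( fogColor.rgb, color.rgb, fogFactor );
#endif
#ifdef USE_ALPHA
    color.a *= 0.5; // fixed transparency, purely for illustration
#endif
    gl_FragColor = color;
}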

Another way that works just as well in the GLSL world is to use real language “if” statements whose conditions are expressions based only on constants. For instance:


const int FOGMODE_LINEAR = 1;
const int FOGMODE_EXP2   = 2;

// Shader permutation settings
const int FOG_MODE = FOGMODE_EXP2;

// The standard exp2 fog curve is filled in here for illustration;
// the else branch stands in for the remaining fog modes.
float fogFactor( float dist, float density )
{
    if ( FOG_MODE == FOGMODE_EXP2 )
        return exp2( -density * density * dist * dist );
    else
        return 1.0;
}

In GLSL, thanks to dead code elimination, this works exactly the same way: all of the non-active code (the else branch in this case), including the if checks on constant expressions themselves, is tossed out, leaving you with just the permutation you want.

This is the “single source code generating multiple shaders” approach, often called ubershaders.

The term ubershaders is also used when a single source code generates a single shader with dynamic if statements left in the code (i.e. the “if”s are actually in the compiled shader and evaluated on the GPU). This can be more expensive, though.
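
As a minimal sketch of that dynamic variant (the fogMode uniform is a hypothetical name), here is the same selection made at run time:

uniform int fogMode; // set per draw call by the application

float fogFactor( float dist, float density )
{
    // Both branches stay in the compiled shader,
    // and the test is evaluated on the GPU.
    if ( fogMode == 2 ) // exp2 fog
        return exp2( -density * density * dist * dist );
    else
        return 1.0;     // fog disabled
}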

Another approach is the whole shader graph thing, but I won’t suggest you go there; I still have a bad taste in my mouth from that thing. You give up optimization, and it’s a good bit more difficult to change the data flow in your shaders. Whereas with ubershaders you just do it, in one place, no fuss, no bother.

This topic pops up from time to time. Unfortunately, I couldn’t find the exact links, so I’ll describe my approach again.

Shaders in GLSL are linked in a similar way to C programs, so you can implement a polymorphism concept at the GLSL program level. Generally it looks like this:

  1. The root shader object - the one that actually does the useful work and contains the ‘main’ function. It has some functions declared, but not defined:

// Declared here; defined by whichever library objects get linked in.
vec4 get_diffuse();
vec4 get_specular();

void main() { gl_FragColor = get_diffuse() + get_specular(); }

  2. The library shader objects - contain various implementations of the primitive functions: diffuse from a color, diffuse from a texture, Phong specular, specular with a specularity map, etc.

  3. For each material, the final shader program is composed from the root shader and a number of these little bricks, each implementing a particular property of the material (see the sketch below).
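
For instance, here is a minimal sketch of one such library brick (the diffuseMap/texCoord names are made up); it is compiled as its own shader object and linked together with the root shader into the final program:

// Library shader object: the "diffuse from a texture" brick.
uniform sampler2D diffuseMap;
varying vec2 texCoord;

vec4 get_diffuse()
{
    return texture2D( diffuseMap, texCoord );
}

Linking in a different brick, e.g. one that returns a constant material color, swaps that property without touching the root shader.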

On the minus side - it is difficult to debug errors when the wrong objects are attached, especially taking into account the unhelpfulness of GLSL compiler and linker error messages.

On the plus side - the code of each shader brick, and especially of the root shader, stays clear. There are no conditional expressions (inside which you can’t declare uniform/in/out variables, and which may confuse the driver into real branching) and no bloat of preprocessor directives.

As far as lighting goes, moving to deferred rendering helped simplify my shader pipeline dramatically. Now each light type has its own simple shader and is applied in a separate pass.
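
For context, here is a minimal sketch of one such per-light pass, under an assumed G-buffer layout (the gAlbedo/gNormal/gPosition names and encodings are made up); the surface attributes are fetched back from textures instead of being computed in a per-material shader:

// Full-screen point-light pass over a hypothetical G-buffer.
uniform sampler2D gAlbedo;
uniform sampler2D gNormal;   // normals packed into [0,1]
uniform sampler2D gPosition; // view-space positions
uniform vec3 lightPos;       // view-space light position
uniform vec3 lightColor;
varying vec2 texCoord;

void main()
{
    vec3 albedo = texture2D( gAlbedo, texCoord ).rgb;
    vec3 normal = normalize( texture2D( gNormal, texCoord ).xyz * 2.0 - 1.0 );
    vec3 pos    = texture2D( gPosition, texCoord ).xyz;

    vec3  toLight = lightPos - pos;
    float atten   = 1.0 / ( 1.0 + dot( toLight, toLight ) ); // crude falloff
    float ndotl   = max( dot( normal, normalize( toLight ) ), 0.0 );

    gl_FragColor = vec4( albedo * lightColor * ndotl * atten, 1.0 );
}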

I just finished implementing most of the Quake 3 “shaders”, which are effects applied to surfaces, involving multiple textures, vertex deforms, texture modifications, texture blending, etc…
What I learned from that was the utility of a nice abstraction linking your surfaces to conceptual effects. The full shader system then links these conceptual effects through the phases of shader setup (defining and setting/animating your uniform variables), vertex shading and fragment shading.
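
For a flavour of the format, here is a small script in the style of Quake 3 shaders (the paths and parameters are made up): a lightmapped wall with an additively blended, scrolling flame layer and a vertex wave deform:

textures/mymap/flame_wall
{
    deformVertexes wave 100 sin 0 3 0 0.1
    {
        map $lightmap
        rgbGen identity
    }
    {
        map textures/mymap/flame.tga
        blendFunc GL_ONE GL_ONE
        tcMod scroll 0 1
    }
}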

Vertex and fragment shaders are incredibly powerful, but they seem to be only an enabling tool, not an end solution. At some point things have to link up to your C++ code. That’s what the Quake-type shaders seem to do.

I like the workflow of the Unreal Ed. shaders too. They are more general and support multiple output channels (diffuse, emissive, normal). I will probably head in that direction…

I definitely agree with the complexity challenge!!

Thank you for your replies :slight_smile:

Nickels: I think deferred shading is a little too advanced a technique for me, but I am very interested in it, and I will try it as soon as I have some time.

Dark Photon & DmitryM : Excuse me if I’m wrong but I think a mix of both of your approaches would be nice ? I will try to build my shaders this way

Thanks again for the help

To nickels:
I like the deferred shading approach as well, but it doesn’t solve the problem completely. Since each light-application shader has to know about the surface material properties, you have to unify them and settle on the “only true” lighting model. That vastly decreases the freedom of artistic expression; supporting arbitrary materials should be a goal in the general case.

To Maire Nicolas:
You can combine the two approaches, but I see no point in doing that.

Understood; my experience is probably too limited to know all the limitations deferred rendering will bring. It helped me on my ‘many, many lights’ crusade.

I like the point about artist expression. One way of thinking about it is that the shaders and their supporting code (shader initialization/feeding) are just the conduit between the visual editing tool or level designer and the end result. So the goal is some way of unifying that path, providing hooks to common variables, and letting the designer pipe those into the effect they want. It makes their job easier, and it makes the programmers’ job easier, because it narrows the scope to an implementable set of possibilities. As with many nice abstractions, the heftier the amount of parameterization, the better for everyone.

The extreme in the other direction is to have to write C++ and shader code for every different surface, aack! Of course the goal may be producing cutting-edge effects, so, just like research code, shortest path = best path. Deal with the mess later! I’m still learning this stuff, so my ‘wisdom’ should definitely be taken with a grain of salt!
