Using different shaders on objects, and multiple shaders on one object

I couldn’t decide if this belonged in the GLSL section or not, but I kept it here because I assume this is less related to GLSL than to the way OpenGL handles shaders.

Essentially I am trying to handle my shaders in the most dynamic way possible. I want to be able to associate certain objects with shaders, like a component-based system, so an object can have multiple shaders attached to it. The simplest way I can think of doing it is by having a map from shaders to objects.

I am targeting OpenGL 3.3 so that I can get decently modern code that works on the majority of systems owned by gamers.

My biggest issues revolve around two questions:

  1. What is the best way to apply different shaders to different objects?

    Thoughts: From what I have read, it seems like the best way is to designate which objects use which shaders, then bind a shader, draw the corresponding objects, and move on to the next shader.

  2. What is the best way to apply multiple shaders to an object?

    Thoughts: Should I just bind a shader program, draw what I need, get a texture back, then bind the next shader and pass it my texture? For example, if I had a shader that rendered one specific object in black and white, could I apply my normal shaders, then pass the resulting texture on to the new shader to do the final processing? Is that too much overhead?

I apologize if my issues are hard to understand. Please ask any questions that can help make the issues more clear. Thanks in advance!

-Kreed

For using different shaders for different objects, your suggestion is probably the best one. However, if you don’t switch shaders often, I wouldn’t even worry about grouping the objects by shader; it won’t cause a significant performance difference.
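For example, a rough sketch of that grouping (the Object type and draw() call here are just placeholders for whatever your engine uses):

#include <map>
#include <vector>
#include <GL/glew.h>

struct Object { /* VAO, uniforms, etc. */ void draw() const; };

// Group objects by the program that renders them, so each program is
// bound once per frame instead of once per object.
void drawScene(const std::map<GLuint, std::vector<Object*>>& objectsByShader)
{
    for (const auto& group : objectsByShader)
    {
        glUseProgram(group.first);       // bind the program for this group
        for (const Object* obj : group.second)
            obj->draw();                 // draw everything that uses it
    }
    glUseProgram(0);
}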

You can’t run multiple shaders at once, so it has to either be a single shader that combines all the functionality of the individual shaders, or you have to look into deferred shading. Using a temporary texture for each object in order to render multiple shaders would definitely be too much overhead, but having all objects rendered to a single set of render targets (G Buffer) upon which all shaders are applied in succession would be fast. You could add a special render target to the G Buffer to decide what shaders to apply to the different regions of the texture (as with material properties).

For deferred shading, check out this tutorial: http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html

Thanks for the info, Cireneikual.

Are you saying that for each object I should just call glUseProgram(desiredShader); every time I draw an object?

For Deferred Shading, I have looked at a couple of sources, and it’s a bit complex, but I think I have a basic grasp. It seems like as you pass each object through the vertex buffer, it gets drawn to any texture in the G-Buffer that corresponds to a shader that said object has associated with it. For example: if I have two objects, let’s say a cube and a sphere, each object has a shader that draws stripes on it. Also, the cube has a shader that makes it red, and the sphere has a shader to make it blue. We would have a texture in the G-Buffer that has the red cube on it, one with a blue sphere on it, and one with both objects having stripes. At the end of the day, these textures are all combined, and you end up with your final view. Is this correct?

If so, I have a few questions. I read that this ruins transparency and antialiasing; will I have to write shaders for those? How do I apply them to the final product before rendering it? Also, how do I dynamically add new shaders into this workflow without the final shader that combines everything having to know about the newly added shaders? Sorry for the questions, but I’m new to this and really want to do it the right way.

Are you saying that for each object I should just call glUseProgram(desiredShader); every time I draw an object?

Yes, but only if you don’t have to switch for every one of a very large number of objects (like individual particles in a particle system; that would be slow!). Make it check whether the shader is already bound (from a previous object), and only switch if it isn’t. To see what is currently bound, either make your own system to keep track of the current shader or use the appropriate glGet call:

glGetIntegerv(GL_CURRENT_PROGRAM,...);
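Or, as a rough sketch of the “keep track of it yourself” approach (the helper name is my own, not an OpenGL call):

#include <GL/glew.h>

// Only call glUseProgram when the requested program differs from the
// one already bound; the cached value avoids a glGet per draw call.
void useProgramCached(GLuint program)
{
    static GLuint currentProgram = 0;
    if (program != currentProgram)
    {
        glUseProgram(program);
        currentProgram = program;
    }
}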

For Deferred Shading, I have looked at a couple of sources, and it’s a bit complex, but I think I have a basic grasp. It seems like as you pass each object through the vertex buffer, it gets drawn to any texture in the G-Buffer that corresponds to a shader that said object has associated with it. For example: if I have two objects, let’s say a cube and a sphere, each object has a shader that draws stripes on it. Also, the cube has a shader that makes it red, and the sphere has a shader to make it blue. We would have a texture in the G-Buffer that has the red cube on it, one with a blue sphere on it, and one with both objects having stripes. At the end of the day, these textures are all combined, and you end up with your final view. Is this correct?

Kinda. The idea is that instead of rendering geometry with (usually) lighting shaders applied to it as you go (per object), you render attributes of the objects, which almost always include per-pixel position, color (diffuse and specular), and normals, to the G Buffer (an FBO with a bunch of textures attached), using a single shader that takes advantage of multiple render targets to draw to all of these textures at once. This shader isn’t replaced during rendering to the G Buffer unless you want to enable a certain feature such as normal mapping, since that has to be done at this stage. Anything that provides information about an object happens at this stage, so the information is saved to the textures for the shaders to use later (that’s why it is called “deferred”).

After you are done rendering to the G Buffer, you can then run all your shaders in succession on the contents of the G Buffer, without having to re-render the geometry. This method also makes sure that the shaders are only run on what is visible.
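Setting up the G Buffer roughly looks like this (only a sketch, assuming a valid OpenGL 3.3 context; the attachment formats here are just one common choice):

#include <GL/glew.h>
#include <cstdio>

struct GBuffer
{
    GLuint fbo;
    GLuint position, normal, diffuse; // color attachments (render targets)
    GLuint depth;                     // depth attachment
};

// Create one texture and attach it to the currently bound FBO.
static GLuint makeTarget(GLenum internalFormat, GLenum format, GLenum type,
                         int width, int height, GLenum attachment)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, type, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, attachment, GL_TEXTURE_2D, tex, 0);
    return tex;
}

GBuffer createGBuffer(int width, int height)
{
    GBuffer g;
    glGenFramebuffers(1, &g.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, g.fbo);

    g.position = makeTarget(GL_RGB16F, GL_RGB, GL_FLOAT, width, height, GL_COLOR_ATTACHMENT0);
    g.normal   = makeTarget(GL_RGB16F, GL_RGB, GL_FLOAT, width, height, GL_COLOR_ATTACHMENT1);
    g.diffuse  = makeTarget(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, width, height, GL_COLOR_ATTACHMENT2);
    g.depth    = makeTarget(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT, GL_FLOAT, width, height, GL_DEPTH_ATTACHMENT);

    // The geometry-pass shader writes to all three color outputs at once.
    const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, drawBuffers);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        std::fprintf(stderr, "G Buffer is incomplete\n");

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return g;
}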

It uses way more memory than forward rendering, but it is way faster for doing things such as lighting, especially when you have lots of small lights, and it makes it easy to do things such as SSAO. You don’t have to run more than one geometry pass per object like you often have to do in forward rendering.

Many commercial games use this, or variations of it (Battlefield 3, Crysis 1-2, Amnesia: The Dark Descent, StarCraft 2…).

So do it, it is worth it!

Deferred shading has no problem with alpha-test transparency (fully opaque or fully transparent, nothing in between), but it can’t really do semi-transparent stuff.

To solve the semi-transparency issue, you usually use forward rendering for just the semi-transparent objects after you have done all of the deferred shading. Render them against the same depth buffer you used for the G Buffer.
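For example, a rough sketch of reusing the G Buffer depth for that forward pass (gBufferFbo is whatever handle your G Buffer FBO ended up with; note that glBlitFramebuffer needs the two depth formats to match):

#include <GL/glew.h>

// Copy the G Buffer depth into the default framebuffer so the
// forward-rendered transparent objects are occluded correctly.
void beginTransparentPass(GLuint gBufferFbo, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gBufferFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default framebuffer
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);   // test against the copied depth, but don't overwrite it
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}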

Cireneikual, once again thanks for the help.

Deferred shading has no problem with alpha-test transparency (fully opaque or fully transparent, nothing in between), but it can’t really do semi-transparent stuff.

So then textures with 100% transparent areas would work fine?

Also, deferred shading seems to focus on just separating the light calculations from everything else. Would I use forward rendering for effects like fog, shader-based particle effects, applying animated textures, etc. to a single object?

Just for a short-term solution, what would be the best way to handle multiple shaders on an object using forward shading? I am definitely working on deferred lighting, but I need to get this prototype working quickly, and deferred shading is still confusing since I can’t find many good tutorials out there. The one you gave me looks great, but it looks like the tutorials stop before the shader has the kinks worked out, so it leaves me confused. Also, the first project that uses this engine is a game that uses mostly 2D animations (although I am working on a 2D lighting algorithm for it), so some of these questions may be a little overkill for the first project, but they are definitely necessary for my second project.

So then textures with 100% transparent areas would work fine?

Yes!

Would I use forward rendering for effects like fog, shader-based particle effects, applying animated textures, etc. to a single object?

Nope. Animated textures don’t even need a shader, and fog can be done as a post-processing effect. Only stuff like bump mapping needs a separate shader that extends the normal G Buffer rendering shader, since bump mapping modifies the normals that get drawn to the G Buffer.

Just for a short-term solution, what would be the best way to handle multiple shaders on an object using forward shading?

Put all your shaders together into one shader.

but it looks like the tutorials stop before the shader has the kinks worked out

It has 2 more parts, I believe. At some point they start using light stencil volumes, but don’t use those; just do a depth test. It is faster and easier to code.

So, would post-processing effects include particle effects? Particle effects are a major part of my current project for rendering smoke, so they’re my largest concern. It doesn’t seem like something that you would put into the deferred shading category. As for the post-processing effects, where do they come in? I know they will be a separate shader. Does OpenGL handle post-processing shaders in a special way?

Thanks again for all the awesome information.

So, would post-processing effects include particle effects?

No. You have to use forward rendering if the smoke sprites are semi-transparent; otherwise you can use deferred shading. Particle effects don’t require shaders; you just render the particles as 2D images facing the player.
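For example, a rough sketch of building one camera-facing particle quad on the CPU (this assumes GLM; the corners would then go into a dynamic vertex buffer):

#include <glm/glm.hpp>
#include <array>

// The camera's right and up axes are the first two rows of the view
// matrix's rotation part, so a quad built from them always faces the viewer.
std::array<glm::vec3, 4> billboardCorners(const glm::mat4& view,
                                          const glm::vec3& center, float size)
{
    glm::vec3 right(view[0][0], view[1][0], view[2][0]);
    glm::vec3 up(view[0][1], view[1][1], view[2][1]);
    float h = size * 0.5f;
    return {{ center - right * h - up * h,
              center + right * h - up * h,
              center + right * h + up * h,
              center - right * h + up * h }};
}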

Does OpenGL handle post-processing shaders in a special way?

No. By post-processing effects, I just mean shaders run at the very end of the frame.

That would include SSAO, FXAA, Deferred Lighting, HDR, etc.
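They all follow the same pattern: draw one full-screen triangle (or quad) with the previous pass’s result bound as a texture. A rough sketch (the program, texture, and VAO handles and the "sceneTexture" uniform name are just placeholders):

#include <GL/glew.h>

// Run one post-processing shader over the whole screen: bind the input
// texture, bind the post program, and draw a single oversized triangle
// that covers the viewport.
void postProcessPass(GLuint postProgram, GLuint inputTexture, GLuint fullscreenVao)
{
    glDisable(GL_DEPTH_TEST);   // the pass works purely in screen space
    glUseProgram(postProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, inputTexture);
    glUniform1i(glGetUniformLocation(postProgram, "sceneTexture"), 0);

    glBindVertexArray(fullscreenVao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);
}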