Are you saying that for each object I should just call glUseProgram(desiredShader); every time I draw an object?
Yes, though if you switch shaders for every one of a very large number of objects (like individual particles in a particle system), that gets slow. Make it check whether the shader is already bound (from a previous object), and only switch if it isn't. To see what is currently bound, either keep track of the current shader yourself or use the appropriate glGet:
glGetIntegerv(GL_CURRENT_PROGRAM,...);
For deferred shading, I have looked at a couple of sources, and it's a bit complex, but I think I have a basic grasp. It seems like, as each object passes through the vertex buffer, it gets drawn to any texture in the G-Buffer that corresponds to a shader associated with that object. For example: say I have two objects, a cube and a sphere. Each object has a shader that draws stripes on it. The cube also has a shader that makes it red, and the sphere has a shader that makes it blue. We would have one texture in the G-Buffer with the red cube on it, one with the blue sphere on it, and one with both objects striped. At the end, these textures are all combined, and you end up with your final view. Is this correct?
Kinda. The idea is that instead of rendering geometry with (usually) lighting shaders applied to it as you go (per object), you render aspects of the objects, almost always including per-pixel position, color (diffuse and specular), and normals, into the G-Buffer (an FBO with a bunch of textures attached), using a single shader that takes advantage of multiple render targets to write to all of these textures at once.

This shader isn't replaced during rendering to the G-Buffer, unless you want to enable a feature such as normal mapping, since that has to be done at this stage. Anything that provides information about an object happens at this stage, and that information is saved to the textures for later shaders to use (which is why it's called "deferred"). Once you are done rendering to the G-Buffer, you can run all your shaders in succession on its contents, without having to re-render the geometry. This also ensures the shaders run only on what is visible.
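The single G-Buffer shader that writes to all targets at once might look like this fragment shader sketch (the attachment layout and varying names here are illustrative, not a fixed convention):

```glsl
#version 330 core

in vec3 vWorldPos;   // interpolated from the vertex shader
in vec3 vNormal;
in vec2 vTexCoord;

uniform sampler2D uDiffuseMap;

// One output per G-Buffer texture (color attachments 0..2 of the FBO,
// selected on the CPU side with glDrawBuffers)
layout(location = 0) out vec4 gPosition;  // per-pixel world position
layout(location = 1) out vec4 gNormal;    // per-pixel normal
layout(location = 2) out vec4 gDiffuse;   // diffuse color

void main() {
    gPosition = vec4(vWorldPos, 1.0);
    gNormal   = vec4(normalize(vNormal), 0.0);
    gDiffuse  = texture(uDiffuseMap, vTexCoord);
}
```

The lighting shaders then run as fullscreen passes that sample these three textures instead of touching the original geometry.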
It uses way more memory than forward rendering, but it is way faster for things such as lighting, especially when you have lots of small lights, and it makes effects such as SSAO easy to do. You never have to run more than one geometry pass per object, as you often have to do in forward rendering.
Many commercial games use this (or variations of it) (Battlefield 3, Crysis 1-2, Amnesia: The Dark Descent, StarCraft 2…)
So do it, it is worth it!
Deferred shading has no problem with alpha-test transparency (fully opaque or fully transparent, nothing in between), but it can't really handle semi-transparent stuff.
To solve the semi-transparency issue, you usually forward-render just the semi-transparent objects after you have done all of the deferred shading, rendering them against the same depth texture you used for the G-Buffer.
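In outline, the whole frame then looks something like this (a sketch, not the only possible ordering):

```
1. Bind the G-Buffer FBO and render all opaque geometry once,
   filling the position/normal/diffuse textures and the depth texture.
2. Run the lighting, SSAO, etc. shaders as fullscreen passes that
   read from the G-Buffer textures.
3. Attach (or blit) the same depth texture to the output framebuffer,
   so transparent objects are correctly occluded by opaque ones.
4. Enable blending and forward-render the semi-transparent objects,
   sorted back to front, with depth writes off.
```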