Basic Shader Management

Hi. I have 15 years of professional development experience and I’ve toyed with OpenGL along the way, but lately I’ve been diving in properly, actually reading the books, and I’m learning a lot. What I’m having trouble with is knowing how to organize my code.

Ultimately I would like a world I can move around in, with lots of lights and reflections. I’m targeting OpenGL ES 1.1 and 2.0: the 1.1 branch will be limited to the fixed-function pipeline, while the 2.0 branch will have access to the full range of shader logic I can muster. Questions…

If I want my lights to reflect off of things in my environment all vertex/fragment shaders must know ahead of time about all of the lights in the scene, right?

Therefore shouldn’t all models/scene objects use the same vertex/frag shader that receives all light sources as uniforms?

I was setting things up so that each model could have its own GLSL program with its own vertex/fragment shaders, very much independent from each other. Now I’m thinking that was a mistake, because if they’re independent, how do they all know about each other’s light and reflection info? Also, if the entire fixed-function pipeline from 1.1 can be implemented in GLSL, shouldn’t that be the baseline shader program that all models use, with the special stuff enabled through bool uniforms?
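For concreteness, here is a minimal sketch of that “baseline plus feature bools” idea as a GLSL ES 2.0 fragment shader embedded in C; all of the uniform and varying names here are just placeholders:

    /* Sketch: "uber-shader" fragment stage with feature toggles (GLSL ES 2.0). */
    static const char *baselineFS =
        "precision mediump float;\n"
        "uniform bool uUseTexture;            // hypothetical feature flags\n"
        "uniform bool uUseSpecular;\n"
        "uniform sampler2D uTexture;\n"
        "varying vec3 vNormal;                // set up by the vertex shader\n"
        "varying vec3 vLightDir;\n"
        "varying vec3 vHalfVec;\n"
        "varying vec2 vTexCoord;\n"
        "void main() {\n"
        "    vec3 n = normalize(vNormal);\n"
        "    float diff = max(dot(n, normalize(vLightDir)), 0.0);\n"
        "    vec4 color = vec4(vec3(diff), 1.0);\n"
        "    if (uUseTexture)  color *= texture2D(uTexture, vTexCoord);\n"
        "    if (uUseSpecular) color.rgb += pow(max(dot(n, normalize(vHalfVec)), 0.0), 32.0);\n"
        "    gl_FragColor = color;\n"
        "}\n";

(Note that branching on uniform bools is not free on all ES 2.0 hardware; many engines compile a separate shader variant per feature combination instead.)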

Also, on another topic: array of structures vs. structure of arrays for vertex attribute data. All of the books and dev docs seem to indicate that you might want to keep your texture coordinates separate from your vertices and normals, and I can’t think of any reason why. If you want a different image, can’t you just swap out the active texture? Would you do that if the second texture image has different coords? Isn’t that something that could be handled by better pipeline management?

When you say reflections, you should clarify: what kind(s) of reflections? Local-only, or some global reflections (e.g. mirror reflections)? And in what permutations? For instance, in Heckbert light-interaction notation: LDE, LSE, LDSE, LDDE, LDDSE, etc. (L = light, D = diffuse, S = specular, E = eye).

And do you want to consider light occlusion in your shading model?

And what types of lights? Point, line, area, infinite directional, etc.?

If I want my lights to reflect off of things in my environment all vertex/fragment shaders must know ahead of time about all of the lights in the scene, right?

No, not necessarily.

And whether you’d even use OpenGL depends on what light interactions you want, your light types, and whether you’re modeling light occlusion.

If you’re talking local shading only, diffuse and specular (e.g. LDE, LSE), with simple point or directional light sources, and you really do have a bunch of light sources, then consider a Deferred Rendering technique such as Deferred Shading, Deferred Lighting, or Light-indexed Deferred Rendering. If you need some global light interactions (e.g. multiple reflections), then you may want to precompute and bake those (an exception being mirror specular). Similarly, if your light interactions are somewhat static, you can consider prebaking them. It depends on what you need.
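To give a flavor of Deferred Shading, here is a minimal sketch of the per-light accumulation pass, assuming a G-buffer that stores view-space position, normal, and albedo (all names here are hypothetical; note ES 2.0 itself has no multiple render targets, so this is desktop-style GLSL):

    /* Sketch: Deferred Shading light pass, one point light per full-screen pass. */
    static const char *deferredLightFS =
        "uniform sampler2D uGPosition;  // view-space position\n"
        "uniform sampler2D uGNormal;    // view-space normal\n"
        "uniform sampler2D uGAlbedo;    // diffuse color\n"
        "uniform vec3 uLightPos;        // view-space light position\n"
        "uniform vec3 uLightColor;\n"
        "varying vec2 vUV;              // full-screen quad texcoord\n"
        "void main() {\n"
        "    vec3 P = texture2D(uGPosition, vUV).xyz;\n"
        "    vec3 N = normalize(texture2D(uGNormal, vUV).xyz);\n"
        "    vec3 L = uLightPos - P;\n"
        "    float atten = 1.0 / (1.0 + dot(L, L));  // cheap distance falloff\n"
        "    vec3 diffuse = texture2D(uGAlbedo, vUV).rgb\n"
        "                 * uLightColor * max(dot(N, normalize(L)), 0.0) * atten;\n"
        "    gl_FragColor = vec4(diffuse, 1.0);  // additively blended per light\n"
        "}\n";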

However, if you’re just starting out with OpenGL/3D I’d suggest you start with the usual Forward Shading approach and push it to the limits to “get your GL legs” and see for yourself whether you actually need another approach.

The thing about Forward Shading is that you either have the “the shader has to know about all my lights” problem you mention (and you can’t be too smart about which lights you apply to which pixels/samples), or you end up having separate fill passes per light (or group of lights) and shipping the geometry and material state down the pipe a few times. Both approaches get expensive and have their limits.
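The multi-pass flavor looks roughly like this; drawScene, setLightUniforms, and the lights array are placeholders for whatever your engine provides:

    /* Sketch: multi-pass forward shading, one additive fill pass per light. */
    drawScene(PASS_AMBIENT);           /* pass 0: ambient term + depth fill  */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);       /* sum each light's contribution      */
    glDepthFunc(GL_EQUAL);             /* re-use depth laid down in pass 0   */
    glDepthMask(GL_FALSE);
    for (int i = 0; i < lightCount; ++i) {
        setLightUniforms(&lights[i]);  /* placeholder: upload one light      */
        drawScene(PASS_LIT);           /* geometry goes down the pipe again  */
    }
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);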

…all of the books and dev docs seem to indicate that you might want to keep your texture coordinates separate from your vertices and normals. I can’t think of any reason why.

I can’t either, unless you plan to dynamically update one and not the other. Otherwise, I’d interleave them into one attribute block and shove them down together.
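For example, a minimal interleaved layout in C might look like this (the ATTR_* constants are whatever attribute locations you bound; offsetof comes from <stddef.h>):

    #include <stddef.h>   /* offsetof */

    /* Sketch: one interleaved vertex record (array-of-structures). */
    typedef struct {
        GLfloat pos[3];
        GLfloat normal[3];
        GLfloat uv[2];
    } Vertex;             /* all three attributes share one 32-byte stride */

    glBindBuffer(GL_ARRAY_BUFFER, vbo);   /* vbo: an already-filled buffer */
    glVertexAttribPointer(ATTR_POS,    3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void *)offsetof(Vertex, pos));
    glVertexAttribPointer(ATTR_NORMAL, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void *)offsetof(Vertex, normal));
    glVertexAttribPointer(ATTR_UV,     2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void *)offsetof(Vertex, uv));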

If you want a different image, can’t you just swap out the active texture?

You can, but that’s a state change, and state changes are expensive.
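One common mitigation, sketched here with a hypothetical Drawable record, is to sort your draw list by texture so each bind is paid once per frame rather than once per object:

    /* Sketch: draw list pre-sorted by textureId, binding each texture once. */
    GLuint bound = 0;
    for (int i = 0; i < count; ++i) {
        if (drawables[i].textureId != bound) {
            bound = drawables[i].textureId;
            glBindTexture(GL_TEXTURE_2D, bound);  /* the only state change */
        }
        glDrawArrays(GL_TRIANGLES, drawables[i].first, drawables[i].vertexCount);
    }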

Would you do that if the second texture image has different coords?

Read up on texture arrays. Same .xy texcoords. Different .z slice index. All in one sliced texture. No state change required to flip images.
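One caveat: texture arrays aren’t available in ES 1.1/2.0; they arrived with desktop GL 3.0 (EXT_texture_array) and ES 3.0. For the desktop case, allocation looks roughly like this (width, height, layerCount, and pixels are placeholders):

    /* Sketch: building a 2D texture array, one image per slice. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 width, height, layerCount,   /* depth = number of slices */
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    for (int layer = 0; layer < layerCount; ++layer)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layer,          /* zoffset selects the slice */
                        width, height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);
    /* In the shader: sample with a sampler2DArray and vec3(uv, slice). */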

Consider precomputed lighting as well. Most engines seem to use lightmaps, which reduce shader complexity (and increase level-building complexity), mixed with some dynamic lights implemented in the shader.
Of course, you don’t get specularity or dynamic shadows with such lights…
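The lightmap blend itself is cheap. Here is a minimal GLSL ES 2.0 sketch (all names are placeholders); note that the lightmap typically carries its own second set of texcoords, which is one classic reason to keep texture coordinates in a separate attribute:

    /* Sketch: modulating the diffuse map by a baked lightmap. */
    static const char *lightmapFS =
        "precision mediump float;\n"
        "uniform sampler2D uDiffuse;\n"
        "uniform sampler2D uLightmap;  // baked, static lighting\n"
        "varying vec2 vUV0;            // diffuse map coords\n"
        "varying vec2 vUV1;            // lightmap coords (separate set)\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(uDiffuse, vUV0)\n"
        "                 * texture2D(uLightmap, vUV1);\n"
        "}\n";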
+1 on implementing the forward shader first and seeing how far you can go.
I got to 3 or 4 lights before the shadow-map storage swamped the shader, and eventually went to a deferred approach, which has some very nice simplifying properties (and some problems…).
Currently I’m working on a shader-building tool similar to the Unreal editor’s material editor:
Unreal material editor
It’s a natural extension of the Quake-type shader approach:
quake shader discussion
which is (more or less) limited to precomputed diffuse lighting plus emissive lighting only…
