Virtualized OpenGL Light Sources for Efficient Multi-Light Source Lighting

Recent discussions on the OPENGL-GAMEDEV mailing list have explored how to support numerous (more than 8) light sources at once in an OpenGL-rendered scene. For example, consider a hallway in a Quake-style dungeon where candles or torches illuminate dynamic objects in the corridor. The candles/torches can be modeled as positional, attenuated OpenGL light sources.

The problem is that most OpenGL implementations support only the minimum required number of light sources, which is eight (8). Note: OpenGL implementations are free to support an arbitrary number of light sources, but to keep hardware-accelerated lighting tractable, OpenGL only mandates support for at least 8 light sources.
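An application can find out how many light sources a particular implementation actually supports by querying GL_MAX_LIGHTS at run time. Here is a minimal illustration (the helper's name is my own; it assumes an OpenGL rendering context is already current):

```c
#include <GL/gl.h>

/* Returns the number of light sources this OpenGL implementation
   supports; the OpenGL specification guarantees the value is at
   least 8.  Assumes a rendering context is current. */
static GLint
queryMaxLights(void)
{
  GLint maxLights;

  glGetIntegerv(GL_MAX_LIGHTS, &maxLights);
  return maxLights;
}
```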

Even if an arbitrary number of OpenGL light sources were available, the performance cost of keeping a dozen or more OpenGL light sources enabled throughout the scene is likely to be prohibitive, particularly considering that faraway lights may add an insignificant lighting contribution, or none at all, to many objects in the scene.

A couple of solutions were proposed on the mailing list:

The remainder of this article describes the last approach in more detail, including screen snapshots and sample source code that demonstrate the technique.

First, a few words about OpenGL's lighting model. Light is a complicated phenomenon. OpenGL's lighting model is designed for real-time interaction; it only attempts to capture some of the simplest lighting surface effects such as diffuse and specular interactions. OpenGL's lighting model does not handle complicated effects such as shadows, reflections, refraction, or occlusion of light (relativistic and quantum light interactions are similarly ignored). If you want to implement effects such as reflections and shadows with OpenGL, you can, using more sophisticated rendering techniques beyond those supported by OpenGL's lighting model. What OpenGL does model is per-vertex interactions involving only the surface material and a set of light sources. In practice, this is enough to achieve some pretty nice effects at interactive rates.

In practice, you can think of OpenGL's lighting model as really just a set of equations that compute an RGB color value at each vertex. Indeed, if you want to really understand OpenGL's lighting model, see the explanation of OpenGL's lighting operation in the OpenGL 1.1 specification.
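Ignoring emission, spotlight effects, and the specular term, the per-vertex color those equations produce boils down to roughly the following (a simplified paraphrase of the specification's formula, not the exact expression):

\[
\mathbf{c} \;\approx\; \mathbf{a}_m \mathbf{a}_s \;+\; \sum_i \frac{1}{k_c + k_l\,r_i + k_q\,r_i^2}\left(\mathbf{a}_m \mathbf{a}_i + \max(\mathbf{n}\cdot\mathbf{L}_i,\,0)\,\mathbf{d}_m \mathbf{d}_i\right)
\]

Here a_m and d_m are the material's ambient and diffuse colors, a_s is the global ambient light, a_i and d_i are light i's ambient and diffuse colors, n is the vertex normal, L_i is the unit vector from the vertex toward light i, r_i is the distance from the vertex to light i, and k_c, k_l, and k_q are the light's constant, linear, and quadratic attenuation coefficients.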

The fact that OpenGL's lighting equations are explicit makes it straightforward for applications to quickly and robustly approximate the contributions of various light sources in the scene more or less the same way that OpenGL does. This lets the application virtualize its light sources. Some other 3D graphics APIs such as Direct3D do not specify the lighting equations the API uses in enough detail to pre-compute lighting effects reliably (in the case of Direct3D, the API lacks both a rigorous specification and a standard conformance suite to enforce uniform behavior; OpenGL has both).

Before we go much further in describing the approach, let's take a look at a screen snapshot from the multilight.c example (the full working source code is available; the program uses the OpenGL Utility Toolkit (GLUT) for portability):

So what is the image showing? The sphere in the scene wanders between the two rows of light sources (each indicated by a small colored sphere). Think of the two rows of light sources as candles in a hallway if you want (use your imagination). Notice that most of the light sources are numbered. The closer (less distant from the sphere) light sources have smaller numbers (the blue "0" light source is the closest; light sources "7" and "4" are actually hidden behind the sphere). The distant light sources (what would be "8" and beyond) are simply not enabled. Because of the way these light sources attenuate over distance, the disabled light sources wouldn't change the sphere's appearance even if they were enabled.

Indeed, look at the following snapshot of basically the same scene:

What's the difference? If you look closely, light sources "7", "6", "5", and "4" are gray, not white. This indicates that these light sources are disabled. Note that the sphere's lighting looks basically the same; this is because those four light sources really aren't affecting the coloration of the sphere in any significant way (yet even so, when enabled, they generally still slow down your rendering!). The point is that if we are clever about knowing how light sources contribute to the scene, we can get the same scene appearance with less lighting overhead.

The idea is that if light sources are localized (technically, if the light sources are positional and attenuated), it makes sense to not enable light sources that are too dim to contribute to the lighting of an object. The more light sources in your scene, the truer this becomes because all those extra light sources would probably suck performance without significantly improving your 3D scene.
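For instance, a minimal sketch of such a distance-based test might compute OpenGL's own attenuation factor, 1/(kc + kl*d + kq*d*d), for each light and treat the result as the light's "significance" to a given object. The Light type and the function name below are hypothetical illustrations, not code from multilight.c:

```c
#include <math.h>

/* Hypothetical per-light record; multilight.c keeps its own state. */
typedef struct {
  float pos[3];       /* world-space light position */
  float kc, kl, kq;   /* constant, linear, quadratic attenuation terms */
  float intensity;    /* rough scalar brightness of the light */
} Light;

/* Estimate how much a positional, attenuated light can contribute to an
   object centered at objPos, using OpenGL's attenuation formula. */
static float
lightSignificance(const Light *lt, const float objPos[3])
{
  float dx = lt->pos[0] - objPos[0];
  float dy = lt->pos[1] - objPos[1];
  float dz = lt->pos[2] - objPos[2];
  float d = sqrtf(dx*dx + dy*dy + dz*dz);

  return lt->intensity / (lt->kc + lt->kl * d + lt->kq * d * d);
}
```

Lights whose significance falls below some small threshold can simply be left disabled.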

When you run the multilight example (I encourage you to compile it and try it out), you'll notice that as the sphere wanders among the light sources, the distances between the sphere and the light sources change. The program automatically updates which light sources are active. You'll see that the numbers change as the sphere's location changes. The un-numbered and high-numbered light sources are always the most distant ones (that is, the ones least likely to affect the sphere's coloration much).
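In the same illustrative spirit (reusing the hypothetical Light type and lightSignificance() helper from the sketch above), a per-frame update might rank the scene's lights by estimated significance and bind only the strongest few to GL_LIGHT0 through GL_LIGHT7, disabling the rest. This is not multilight.c's actual code, just one plausible shape for it:

```c
#include <stdlib.h>
#include <GL/gl.h>

#define MAX_SCENE_LIGHTS 64   /* arbitrary cap for this sketch */
#define MAX_GL_LIGHTS     8   /* OpenGL guarantees at least 8 */

typedef struct {
  const Light *light;
  float significance;
} RankedLight;

/* qsort comparator: sort in descending order of significance. */
static int
compareRanked(const void *a, const void *b)
{
  float sa = ((const RankedLight *) a)->significance;
  float sb = ((const RankedLight *) b)->significance;
  return (sa < sb) - (sa > sb);
}

/* Bind the most significant scene lights to GL_LIGHT0..GL_LIGHT7 and
   disable the rest.  Assumes the modelview matrix currently holds the
   viewing transform, since glLightfv transforms GL_POSITION by it; the
   other per-light parameters (colors, attenuation) would be loaded the
   same way and are omitted here for brevity. */
static void
updateEnabledLights(const Light *lights, int numLights, const float objPos[3])
{
  RankedLight ranked[MAX_SCENE_LIGHTS];
  int i, n;

  n = (numLights < MAX_SCENE_LIGHTS) ? numLights : MAX_SCENE_LIGHTS;
  for (i = 0; i < n; i++) {
    ranked[i].light = &lights[i];
    ranked[i].significance = lightSignificance(&lights[i], objPos);
  }
  qsort(ranked, n, sizeof(RankedLight), compareRanked);

  for (i = 0; i < MAX_GL_LIGHTS; i++) {
    if (i < n && ranked[i].significance > 0.05f) {   /* ad hoc cutoff */
      GLfloat pos[4];

      pos[0] = ranked[i].light->pos[0];
      pos[1] = ranked[i].light->pos[1];
      pos[2] = ranked[i].light->pos[2];
      pos[3] = 1.0f;   /* w = 1 makes the light positional */
      glLightfv(GL_LIGHT0 + i, GL_POSITION, pos);
      glEnable(GL_LIGHT0 + i);
    } else {
      glDisable(GL_LIGHT0 + i);
    }
  }
}
```

The 0.05 cutoff and the array sizes are arbitrary choices for the sketch; multilight.c makes its own choices about how many lights to keep enabled.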

Think about the light sources labelled "4" and "7" in the snapshots above. These lights are not going to affect the sphere from the view shown because they are "behind" the sphere. Their diffuse contribution to the sphere's lighting is nil for all of the sphere we can see from this view. This is true even though these light sources are fairly close to the sphere. So our distance-based determination of how "bright" or "dim" a light source is may not be the best way to decide which light sources should be enabled.

Lambert's Law (explained in Section 6.3.2 of Ed Angel's OpenGL textbook) models the way diffuse reflections occur. Basically, Lambert's Law says that the diffuse light bouncing off a diffuse surface is proportional to the cosine of the angle between the normal of the surface and the direction of the light source. A more sophisticated determination of which diffuse light sources affect a diffuse object should use Lambert's Law, not simply rely on distance. The multilight example implements such a scheme. See the snapshot below:

In this version, the picture looks about the same, but light sources behind the sphere are no longer in the "top 8" light sources affecting the object. The Lambertian-based approach does a better job of determining which light sources are really going to contribute to the lighting of the sphere, based not just on their distance from the sphere, but also on the nature of diffuse reflections from the surface.
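Concretely, and continuing the earlier hypothetical sketch, the refined significance estimate might weight the attenuated intensity by the Lambertian factor max(N·L, 0), with the normal N approximated as the direction from the object toward the eye (that approximation is discussed below). Again, this is an illustration rather than multilight.c's actual code:

```c
#include <math.h>

/* Lambert-weighted significance estimate.  The surface normal is
   approximated by the unit vector from the object's center toward the
   eye, so lights "behind" the object score near zero.  Assumes the
   object does not coincide with the light or the eye position. */
static float
lightSignificanceLambert(const Light *lt, const float objPos[3],
                         const float eyePos[3])
{
  float L[3], N[3];
  float dLight, dEye, cosAngle;
  int i;

  for (i = 0; i < 3; i++) {
    L[i] = lt->pos[i] - objPos[i];   /* toward the light */
    N[i] = eyePos[i] - objPos[i];    /* approximate normal: toward the eye */
  }
  dLight = sqrtf(L[0]*L[0] + L[1]*L[1] + L[2]*L[2]);
  dEye   = sqrtf(N[0]*N[0] + N[1]*N[1] + N[2]*N[2]);

  /* Lambert's Law: diffuse reflection scales with the cosine of the
     angle between the surface normal and the light direction. */
  cosAngle = (L[0]*N[0] + L[1]*N[1] + L[2]*N[2]) / (dLight * dEye);
  if (cosAngle < 0.0f)
    cosAngle = 0.0f;

  return cosAngle * lt->intensity /
         (lt->kc + lt->kl * dLight + lt->kq * dLight * dLight);
}
```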

A better determination of which lights are most important to the object's lighting matters because, in more complicated situations where lots of lights are present, the determination doesn't needlessly enable light sources just because they are close. Fewer enabled light sources = improved performance.

Here's the same scene from a different viewpoint positioned so that the blue light marked "0" above is actually behind the sphere now:

Notice that the blue light source that was "0" in the scene before is now not even enabled, and the light source on the right now marked "0" was not even enabled before. This makes sense because of how diffuse reflection works. Unlike with the distance-based approach, as the view changes, so does the set of light sources that contribute most to the object's lighting.

One caveat is that to quickly approximate diffuse reflection to determine which light sources are most significant, multilight makes an assumption that the "normal" of the sphere directly faces us. That's approximately true for most of the sphere that is facing us; it is not really true for the sides of the sphere that are still visible to us.

So how does this work in practice? For the scene in multilight (admittedly constructed to demonstrate this point), generally only about 4 light sources significantly contribute to the scene. By enabling only the four most significant light sources (instead of all eight), multilight can render frames 33% faster on a 200 MHz Indigo2 XL (lighting calculations are done on the main CPU, not off-loaded to dedicated graphics hardware on this machine) compared to naively enabling 8 light sources (presumably, if the full 12 light sources in the scene could be enabled, it would be even slower). The point is that virtualized light sources can permit faster rendering at basically the same visual quality as naively dedicating an OpenGL light source to every light source in your scene.

You can download or read the multilight.c source code.

If you want to find more information about using OpenGL for sophisticated rendering effects, check out these other OpenGL rendering techniques.

- Mark Kilgard (mjk@nvidia.com)