05-24-2004, 06:53 PM

I have implemented the following lighting system in my game:

There is "ambient" light which is just a base level for the lighting.

There is one "infinite" light which is specified as two vectors, one indicating the direction of the light and the other indicating the color.

Then there is an arbitrary number of "point" lights which are specified as two vectors, one indicating position and the other indicating color.
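In C++ terms, that light setup boils down to something like the sketch below. All the type and field names here are mine, purely for illustration; the actual engine classes are presumably different:

```cpp
#include <vector>

// Minimal stand-ins for the three light descriptions above.
struct Vec3 { float x, y, z; };

struct InfiniteLight {   // exactly one per scene
    Vec3 direction;      // direction of the light
    Vec3 color;
};

struct PointLight {      // arbitrary number per frame
    Vec3 position;
    Vec3 color;
};

struct LightingState {
    Vec3 ambient;                    // base level for the lighting
    InfiniteLight infinite;
    std::vector<PointLight> points;  // repopulated during the frame
};
```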

There are two "rendering paths" when it comes time to draw models. Either you have ARB_vertex_program or you don't. Either way you get two rendering passes. The first pass fills the depth buffer with the depth values for all solid objects and the color buffer with the coefficients for all lit objects that are rendered.

The second pass simply multiplies the color information for the objects (as produced by my shader system) by the existing pixels to produce the lit image; transparent stuff is drawn without Z-writes, and so forth.
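That multiply-by-existing-pixels step is just modulate blending; in OpenGL it would be glBlendFunc(GL_DST_COLOR, GL_ZERO), i.e. result = src * dst. A software model of what one blended fragment works out to (names are mine):

```cpp
struct Color { float r, g, b; };

// result = src * dst, the GL_DST_COLOR / GL_ZERO blend:
// the object's color modulated by the lighting coefficients
// already sitting in the color buffer from the first pass.
Color modulate(Color src, Color dst) {
    return { src.r * dst.r, src.g * dst.g, src.b * dst.b };
}
```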

It produces good results, as far as I'm concerned. Here's how it works:

The game itself will have a unique class in the hierarchy for describing a light, but as far as the graphics subsystem is concerned, every point light is just the two vectors described above. During the frame, a std::vector is populated with these lights, and when it comes time to render a model, things get fun. First, each vertex normal is transformed by the modelling matrix (not the modelview matrix): that is, the concatenation of every transformation applied after the viewing transformation, with the viewing transformation itself excluded. This ensures the normals take object orientation into account. Then the dot product of each normal with the infinite light's direction is taken and a coefficient calculated. For each point light the same process is repeated; the vector used is the difference of the light position and a fixed "object position" specified prior to rendering. Technically it should be the vertex position, but that would require transforming every vertex for every light, which would be exceedingly expensive. This means the method is not entirely accurate, but it should work for my demands. All the lights are summed to produce one array of coefficients for all the vertices, which is passed to glColorPointer prior to rendering the first pass.
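The software path above might look roughly like this. It's a sketch under my own assumptions (no non-uniform scale, so the modelling matrix itself works for normals; coefficients clamped to [0,1] before going to glColorPointer; no attenuation, since the post doesn't mention any):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// 3x3 rotation part of the modelling matrix applied to a normal
// (assumes no non-uniform scale).
static Vec3 transformNormal(const float m[9], Vec3 n) {
    return { m[0]*n.x + m[1]*n.y + m[2]*n.z,
             m[3]*n.x + m[4]*n.y + m[5]*n.z,
             m[6]*n.x + m[7]*n.y + m[8]*n.z };
}

// Per-vertex lighting coefficient as described: ambient + infinite light
// + point lights, with every point light evaluated against one fixed
// object position instead of each individual vertex position.
float lightCoefficient(const float model[9], Vec3 normal,
                       float ambient,
                       Vec3 toInfiniteLight,   // direction toward the light
                       const std::vector<Vec3>& pointLightPositions,
                       Vec3 objectPosition) {
    Vec3 n = transformNormal(model, normal);
    float c = ambient + std::max(0.0f, dot(n, toInfiniteLight));
    for (Vec3 p : pointLightPositions) {
        // light position minus the fixed object position, normalized
        Vec3 l = normalize({ p.x - objectPosition.x,
                             p.y - objectPosition.y,
                             p.z - objectPosition.z });
        c += std::max(0.0f, dot(n, l));
    }
    return std::min(c, 1.0f);  // clamp before handing off to glColorPointer
}
```

The result for each vertex would go into the array handed to glColorPointer for the first pass.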

If vertex programs ARE used, the first pass is actually broken into n stages, where n is the number of point lights. In the first stage, the ambient light, the infinite light, and the first point light are calculated; the infinite and ambient lights are passed in as env parameters while the point light is a local parameter. The process is then repeated for the remaining point lights, with a flag preventing the infinite/ambient contribution from being added more than once. The results are summed to produce the final coefficients in the color buffer. In vertex program mode, the vector used for calculating each point light is the difference of the light position and the transformed vertex position, so it is more correct per vertex than the non-vertex-program method; since we are already transforming in hardware, we can afford to use the real vertex location. So in fact, vertex program mode usually does n+1 passes for n point lights.

So, the non-vertex-program path does it in two true passes but slows down because it has to calculate the coefficients itself, whereas the vertex program path does a brute-force pass per light to get the right lighting values.

Phew. Somebody tell me how stupid my system is now so I can feel bad for spending all day on it.
