Unification of Functionality

I’ve recently been playing around with a lot of “advanced” techniques for per-pixel operations (using nVidia’s register combiners - though I believe ATI’s newest hardware has a similar extension). As I began to look at some example code for per-pixel lighting I noticed something ludicrous (please note that the following description is loose).

In the example (which was fairly representative), the vertex normals of a model are used as texture coordinates, which index into a cubic environment map whose only purpose is to normalize these vertex normals - I mean texture coordinates - and encode them as RGB triples.
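To make that concrete, here is a rough sketch (my own, not lifted from the sample code) of how one face of such a normalization cube map gets built - each texel stores the unit vector that points through it, packed from [-1,1] into [0,1]:

    /* Sketch: fill the +X face of a "normalization" cube map.
     * Each texel holds the normalized direction vector that points
     * through it, packed as 0.5*n + 0.5 so it fits in an RGB triple.
     * The other five faces differ only in how (s,t) map to a direction.
     * FACE_SIZE and the face orientation are illustrative choices. */
    #include <math.h>
    #include <GL/gl.h>

    #define FACE_SIZE 64

    static GLubyte face[FACE_SIZE][FACE_SIZE][3];

    static void fill_positive_x_face(void)
    {
        int s, t;
        for (t = 0; t < FACE_SIZE; ++t) {
            for (s = 0; s < FACE_SIZE; ++s) {
                /* Direction through this texel of the +X face. */
                float x = 1.0f;
                float y = -(2.0f * (t + 0.5f) / FACE_SIZE - 1.0f);
                float z = -(2.0f * (s + 0.5f) / FACE_SIZE - 1.0f);
                float len = (float)sqrt(x * x + y * y + z * z);

                /* Normalize, then pack [-1,1] into [0,255]. */
                face[t][s][0] = (GLubyte)(255.0f * (0.5f * x / len + 0.5f));
                face[t][s][1] = (GLubyte)(255.0f * (0.5f * y / len + 0.5f));
                face[t][s][2] = (GLubyte)(255.0f * (0.5f * z / len + 0.5f));
            }
        }
        /* Uploaded with glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB, ...),
         * and likewise for the other five faces. */
    }

At draw time the unnormalized vertex normal is sent as the texture coordinate for that unit, and the per-pixel cube map lookup returns the packed unit normal.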

Then, within the register combiners (the per-pixel shader), the dot product of the texture 1 color - or rather, the unit normal - and the light direction - actually the “auxiliary color” or something - is taken to generate a light intensity.
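And the combiner setup for that dot product looks roughly like the following - a sketch assuming NV_register_combiners, with the normalization cube map on texture unit 1 and the light direction packed the same way into constant color 0 (the actual sample code may route it through the secondary color instead):

    /* Sketch: one general combiner stage computing N . L per pixel.
     * Assumes the packed unit normal arrives as the texture 1 color and
     * the packed light direction sits in GL_CONSTANT_COLOR0_NV.  On most
     * platforms the *NV entry points have to be fetched with
     * wglGetProcAddress / glXGetProcAddressARB first. */
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void setup_dot_product_combiner(GLfloat lx, GLfloat ly, GLfloat lz)
    {
        GLfloat packed_light[4] = { 0.5f * lx + 0.5f,
                                    0.5f * ly + 0.5f,
                                    0.5f * lz + 0.5f, 1.0f };

        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
        glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, packed_light);

        /* A = texture 1 color, expanded from [0,1] back to [-1,1]. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        /* B = constant color 0 (the light direction), likewise expanded. */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

        /* Write A dot B (not a per-component product) into spare0. */
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                           GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                           GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

        /* Final combiner computes A*B + (1-A)*C + D; pass spare0 through. */
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                               GL_UNSIGNED_INVERT_NV, GL_RGB); /* B = 1 */
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);

        glEnable(GL_REGISTER_COMBINERS_NV);
    }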

It seems to me that OpenGL has several kinds of four-component values - colors, vertices, normals, texture coordinates. Various facilities are provided for operating on each of these kinds of values, yet no one format allows for all the operations that the others do.

Having to use a texture unit just to generate interpolated unit normals for per-pixel shading seems wasteful. It seems that much could be gained just by unifying the operations on colors, texture coordinates, and geometry into a smaller but more comprehensive set of operations.

The only real limitation to this is the different ranges of these kinds of values ([0,1] vs. [-1,1] vs. all of R, for example). But one need only look at OpenGL Shader or John Carmack’s .plan file to see that there would be great benefit to having floating-point color components and extended range throughout the GL pipeline.
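To put a number on the cost: with today’s 8-bit [0,1] color channels, every signed quantity has to survive a pack/expand round trip like this (my own illustration), so a unit normal reaches the combiners with only 8 bits per component:

    /* Sketch of the pack/expand round trip forced by the [0,1] color range.
     * A signed component n in [-1,1] is stored as an 8-bit color value and
     * later re-expanded (this is what GL_EXPAND_NORMAL_NV does), losing
     * everything below 8 bits of precision along the way. */
    float n = -0.7071f;                                             /* a normal component */
    unsigned char c = (unsigned char)(255.0f * (0.5f * n + 0.5f));  /* pack to [0,255]    */
    float back = 2.0f * (c / 255.0f) - 1.0f;                        /* comes back ~-0.7098 */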

This is a pretty impressive thing to be able to do. The beauty is the reuse of existing hardware to do it, thanks to a powerful initial design.

I completely agree that the interface is difficult for your purpose, and that this kind of lighting is a pretty common purpose. I called for this to be improved on the second page of the “new shading mode” thread, but remember the kinds of operations the hardware is actually performing here.

If you consider that the hardware may have to use the same resources to get the equivalent effect, then what you are actually asking for is a simpler interface to fragment lighting: one that exposes the functionality you’d like without your having to set up a slew of register combiners and funky textures to implement the arithmetic needed to compute lighting per pixel.

Something which packages the whole thing up and reserves the requisite hardware resources to let you implement fragment lighting is badly needed. It is clear that telling someone who wants fragment lighting to go play with DOT3 register combiners and cubemap textures is inappropriate in many cases, no matter how much sample code you happen to have.
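To illustrate what I mean by packaging it up: today the best an application can do is hide the dance behind its own helper, something like the sketch below (the function names are mine, and it leans on the combiner setup sketched earlier in the thread). What is really being asked for is for the GL itself to provide this packaging and reserve the resources:

    /* Sketch of application-side packaging: one call that hides the
     * normalization cube map and the N . L combiner setup.  What is being
     * asked for is effectively this, but provided by the GL so the driver
     * can reserve the texture units and combiner stages itself. */
    static void enable_per_pixel_diffuse(GLuint normalization_cubemap,
                                         GLfloat lx, GLfloat ly, GLfloat lz)
    {
        /* Bind the normalization cube map on unit 1. */
        glActiveTextureARB(GL_TEXTURE1_ARB);
        glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normalization_cubemap);
        glEnable(GL_TEXTURE_CUBE_MAP_ARB);
        glActiveTextureARB(GL_TEXTURE0_ARB);

        /* Configure the combiners to compute N . L, as in the sketch above. */
        setup_dot_product_combiner(lx, ly, lz);

        /* The caller still has to send per-vertex normals as the unit 1
         * texture coordinates, e.g. with glMultiTexCoord3fvARB. */
    }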