vertex programs and extreme number of matrix transformations

I wasn't sure whether this would be a basic question or not, but when I checked the basic board the questions there were about how to compute the normal of a set of vectors, so I'm going to wing it and presume this is an advanced question.

I've long since gotten tired of the built-in lighting model in OpenGL (a specular term of (n dot h) ^ shine * color, with n the normal and h the half-angle vector...) and thought that as a warm-up I would implement Phong lighting without the Blinn half-angle - in other words, actually compute the reflection of the light vector l about the normal n as r = 2*(n dot l)*n - l, and use that vector r to compute the specular term as (r dot v) ^ shine.
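
Roughly, the specular bit I have in mind is something like this (just a sketch in the assembly, assuming n, l and v are already normalized TEMPs in whatever space I settle on, and shine is a PARAM holding the exponent):

PARAM two  = { 2.0, 2.0, 2.0, 2.0 };
PARAM zero = { 0.0, 0.0, 0.0, 0.0 };
TEMP ndotl, r, spec;

# r = 2 * (n . l) * n - l
DP3 ndotl, n, l;
MUL r.xyz, ndotl, n;
MAD r.xyz, r, two, -l;

# specular = max(r . v, 0) ^ shine
DP3 spec.x, r, v;
MAX spec.x, spec.x, zero;
POW spec.x, spec.x, shine.x;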

Now it's obvious that in the vertex program I'll need to compute r, and I'll have to do other housekeeping such as transforming the incoming vertex into the clipping volume, which is no problem either. So I presume all the data coming into a vertex program is untransformed - is that correct? Does that include lights?

If I want to get the lighting vector and I have a vertex still in model space, do I need to transform that vertex into world space (by multiplying by the modelview matrix) before doing the subtraction and subsequent normalization, or has OpenGL already transformed the light for me by multiplying the light position (if it's a positional light) by the inverse of the modelview matrix?

It would seem that OpenGL could save me the work of transforming lights per vertex by doing it once per model, but I can't really find any clear documentation on what the state of a positional light is at the moment a vertex program runs.

Thanks for the help guys…

The light state is passed straight through to your vertex program, so it will be in the default OpenGL lighting space, which is eye space (positional light positions are transformed by the modelview matrix in effect at the time of the glLight call, and that eye-space value is what gets stored).

If you want the data in object space, transform it into object space yourself and pass it in via a custom PARAM.
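
For example (just a sketch - the local parameter index is arbitrary), the app would compute the object-space light position on the CPU (inverse model matrix times the world-space light position), upload it with glProgramLocalParameter4fvARB, and the program just binds it:

# app-supplied object-space light position, uploaded via
# glProgramLocalParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 0, ...)
PARAM objLightPos = program.local[0];

# for comparison, the fixed-function light state is also available
# directly, but it is stored in eye space
PARAM eyeLightPos = state.light[0].position;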

As I put together my vertex shader it's getting longer and longer - and many of the instructions are just converting points from one space to another for the lighting computation - so I'm beginning to think there must be a better way. Also, I end up having to normalize vectors frequently - is the fastest normalization method something like:

DP3 temp.x, vec_to_normalize, vec_to_normalize;     # squared length
RSQ temp.x, temp.x;                                  # 1 / length
MUL vec_to_normalize.xyz, vec_to_normalize, temp.x;
?

I end up doing it so frequently that I half suspect there's got to be a single instruction for it, but as far as I can tell there isn't one in ARB_vertex_program.

Do you guys know of a place where I can find slightly more complex examples of a vertex shader, where perhaps non-standard lighting computations are done (the reflection vector example would be perfect)? Most of the vertex shader code out there seems to be either extremely simple (multiply the vertex position by modelview * projection, write the result, pass through color and texcoords) or very complex... I'm after an intermediate example if anyone knows of one.

I just realized something else as well - or at least I think I did. Since I’ve just got a vertex program, the color assignments I do based on lighting are only effective for the vertex, correct? Lighting across the polygon is still computed with the half-angle equation with interpolated normals, correct?

If I want to really do reflection-vector lighting, I need to write a fragment program to light the pixels of the polygon correctly, don’t I?

Thanks for the response.

Originally posted by bostrov:

Do you guys know of a place where I can find slightly more complex examples of a vertex shader, where perhaps non-standard lighting computations are done.

Take a look at this example. It's in DirectX, but you can look at just the shaders. http://www.ati.com/developer/samples/crystal.html

Originally posted by bostrov:

I just realized something else as well - or at least I think I did. Since I’ve just got a vertex program, the color assignments I do based on lighting are only effective for the vertex, correct? Lighting across the polygon is still computed with the half-angle equation with interpolated normals, correct?

If you only use a vertex shader, the lighting is computed per vertex; the pixel colors are then simply interpolated from the vertex colors.


If I want to really do reflection-vector lighting, I need to write a fragment program to light the pixels of the polygon correctly, don’t I?

Yes, I think so. You need the normals interpolated across the polygon and a fragment program (pixel shader) to compute the lighting per pixel.
Also, regarding the light source position: if you are using a vertex shader, you need to pass the light source position to it through registers; you cannot rely on the fixed-pipeline lighting.
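
Something like this minimal fragment program sketch (the register assignments are just an example - it assumes the vertex program writes the eye-space normal, light vector and view vector into texcoords 0, 1 and 2, and the shininess is in local parameter 0):

!!ARBfp1.0
PARAM shine = program.local[0];
PARAM two   = { 2.0, 2.0, 2.0, 2.0 };
PARAM zero  = { 0.0, 0.0, 0.0, 0.0 };
TEMP n, l, v, r, tmp, spec;

# renormalize the interpolated vectors
DP3 tmp.x, fragment.texcoord[0], fragment.texcoord[0];
RSQ tmp.x, tmp.x;
MUL n.xyz, fragment.texcoord[0], tmp.x;
DP3 tmp.x, fragment.texcoord[1], fragment.texcoord[1];
RSQ tmp.x, tmp.x;
MUL l.xyz, fragment.texcoord[1], tmp.x;
DP3 tmp.x, fragment.texcoord[2], fragment.texcoord[2];
RSQ tmp.x, tmp.x;
MUL v.xyz, fragment.texcoord[2], tmp.x;

# r = 2 * (n . l) * n - l
DP3 tmp.x, n, l;
MUL r.xyz, tmp.x, n;
MAD r.xyz, r, two, -l;

# specular = max(r . v, 0) ^ shine, written out as the final color
DP3 spec.x, r, v;
MAX spec.x, spec.x, zero;
POW spec.x, spec.x, shine.x;
MOV result.color, spec.x;
END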


From a comment in the vertex shader of that sample:

;The next section of the vertex shader computes the view vector in world space.
;This is accomplished by subtracting the world space camera position from the world
;space vertex position and then normalizing the result. Note that in this example
;the world transform is identity and the input vertex position is used directly.
;If this is not the case in your application, you will need to transform the input
;position by the world matrix before performing this calculation.
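
In ARB terms I read that as roughly this (a sketch, assuming as the comment says that the world transform is identity, so vertex.position is already in world space, and that the app uploads the world-space camera position as a local parameter):

PARAM eyeWorld = program.local[0];     # world-space camera position, uploaded by the app
TEMP view, tmp;

SUB view, vertex.position, eyeWorld;   # world-space vertex minus world-space camera
DP3 tmp.x, view, view;                 # normalize the result
RSQ tmp.x, tmp.x;
MUL view.xyz, view, tmp.x;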

That's where a lot of my concern is - I want to pick whatever space lets me avoid transforming every vector in a given model by a 4x4 matrix. I'm in a situation where the lights are defined in world coordinates, the model is defined in model space, the camera position is defined in world space, the normals are still in model space, and so on.

So if I want to compute the lighting in world space, I need to transform the vertex and its normal from model space to world space (two matrix-vector transforms right there), get the viewing vector in world space by multiplying through the inverse of the view transform (another matrix-vector transform), then compute the reflection vector, and finally transform the vertex into clip space (yet another matrix-vector transform).

Altogether that's 4 matrix transforms per vertex - not counting the normalizations needed along the way.

That seems like a ridiculous amount of work to perform on a per-vertex basis, and it makes me think I must be missing something. That example is fairly typical of the sample code I've seen - the world transform is identity, which spares a bunch of transforms, but that doesn't seem very realistic... or am I missing something obvious?
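
For what it's worth, the only way I can see to cut it down is to do the lighting in eye space instead, since that's apparently where the fixed-function light state already lives - something like this sketch (leaning on the state matrix bindings; corrections welcome):

!!ARBvp1.0
# Sketch: reflection-vector specular done entirely in eye space.
PARAM mvp[4]   = { state.matrix.mvp };
PARAM mv[4]    = { state.matrix.modelview };
PARAM mvit[4]  = { state.matrix.modelview.invtrans };
PARAM lightpos = state.light[0].position;          # already in eye space
PARAM speccol  = state.lightprod[0].specular;
PARAM shine    = state.material.shininess;
PARAM two      = { 2.0, 2.0, 2.0, 2.0 };
PARAM zero     = { 0.0, 0.0, 0.0, 0.0 };
TEMP epos, n, l, v, r, tmp, spec;

# clip-space position
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;

# eye-space position, for building l and v
DP4 epos.x, mv[0], vertex.position;
DP4 epos.y, mv[1], vertex.position;
DP4 epos.z, mv[2], vertex.position;

# eye-space normal via the inverse-transpose modelview, renormalized
DP3 n.x, mvit[0], vertex.normal;
DP3 n.y, mvit[1], vertex.normal;
DP3 n.z, mvit[2], vertex.normal;
DP3 tmp.x, n, n;
RSQ tmp.x, tmp.x;
MUL n.xyz, n, tmp.x;

# l = normalize(lightpos - epos)
SUB l, lightpos, epos;
DP3 tmp.x, l, l;
RSQ tmp.x, tmp.x;
MUL l.xyz, l, tmp.x;

# v = normalize(-epos)  (the camera sits at the origin in eye space)
DP3 tmp.x, epos, epos;
RSQ tmp.x, tmp.x;
MUL v.xyz, -epos, tmp.x;

# r = 2 * (n . l) * n - l, then specular = max(r . v, 0) ^ shine
DP3 tmp.x, n, l;
MUL r.xyz, tmp.x, n;
MAD r.xyz, r, two, -l;
DP3 spec.x, r, v;
MAX spec.x, spec.x, zero;
POW spec.x, spec.x, shine.x;

MUL result.color, speccol, spec.x;
MOV result.texcoord[0], vertex.texcoord[0];
END

That's still the mvp, the modelview and its inverse-transpose applied per vertex, but it spares the round trip through world space, since the light never has to leave eye space and the view vector is just the negated eye-space position.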

Thanks for the reply.