vertex program thoughts and questions

Hello,

I don’t know whether this counts as beginner or advanced, but I decided to post it here. Sorry if this is boring or repeats things that have already been said several times.

I want to rework my terrain engine to use per-pixel lighting/bump mapping (I already have the bump mapping part working, but only with a “test polygon”, not with the real data).

As the rendering happens inside display lists (and this will stay that way, no chance of changing it), I guess I have to use vertex programs, namely ARB_vertex_program, for the per-vertex setup, because some inputs change every frame and so cannot be compiled statically into the display lists. Right?

When using a vertex program, I have to do everything in it, as the whole fixed-function per-vertex pipeline is bypassed, right? So I need to replicate what is normally done for me (modelview transformation, lighting etc.), but what exactly is that? Are there any examples of a vp that simply does what the normal fixed pipeline does?
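From reading the spec, I would guess the minimal replacement for just the position transform (plus passing color and one texcoord through) looks something like this; this is only my own sketch, with no lighting or fog at all:

!!ARBvp1.0
# transform the vertex position by the tracked modelview-projection matrix
PARAM mvp[4] = { state.matrix.mvp };
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
# pass the primary color and texture coordinate set 0 through unchanged
MOV result.color, vertex.color;
MOV result.texcoord[0], vertex.texcoord[0];
END

Is that roughly right, or am I missing required outputs?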

I think the “core part” of a vp for bump mapping is transforming the light vector into tangent space, right? I have the light position and the vertex position; from these I compute the light vector in the vp, transform it into object space and after that into tangent space, and then pass it on down the rendering pipeline. That would be the diffuse pass; for the specular pass I have to compute the half vector (also inside the vp) and do the same things with it. Are these thoughts right? I really, really would like to know.
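For the specular part, I would guess the half-vector computation inside the vp looks roughly like the sketch below. I’m assuming here that the object-space light and eye positions are passed in as program environment parameters 0 and 1; those slots are just my own choice:

PARAM lightPosObj = program.env[0];   # assumed: object-space light position
PARAM eyePosObj   = program.env[1];   # assumed: object-space eye position
TEMP L, V, H;
SUB L, lightPosObj, vertex.position;  # object-space light vector
SUB V, eyePosObj, vertex.position;    # object-space view vector
DP3 L.w, L, L;                        # normalize L
RSQ L.w, L.w;
MUL L.xyz, L, L.w;
DP3 V.w, V, V;                        # normalize V
RSQ V.w, V.w;
MUL V.xyz, V, V.w;
ADD H, L, V;                          # half vector = normalize(L + V)
DP3 H.w, H, H;
RSQ H.w, H.w;
MUL H.xyz, H, H.w;
# H would then get rotated into tangent space just like L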

One problem just came to my mind: when using the modelview matrix to compute the tangent-space light vector, I need the modelview matrix in the state it is in BEFORE the camera transformation (gluLookAt) is applied, because what matters is the position of the object in the world, not its position relative to the camera. Like I said in http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011214.html .

But at the moment the vp runs, the camera transformation has already happened. How do I solve that? Re-transform the modelview matrix so that the camera positioning gets reversed? Or am I entirely wrong about this?
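One idea that just occurred to me (no clue whether this is the intended way): the spec mentions program matrices that you load yourself, so maybe I could load only the model part of the transform into one of those, something like this on the C side:

glMatrixMode(GL_MATRIX0_ARB);  /* program matrix 0 from ARB_vertex_program */
glLoadMatrixf(modelMatrix);    /* the object's model transform only, no camera */
glMatrixMode(GL_MODELVIEW);

and then bind it in the vp with PARAM model[4] = { state.matrix.program[0] };. But maybe that is overkill?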

And are there any good tutorials about this? I couldn’t find many; the best resource seems to be the specs themselves.

Sorry if this is a very boring thread, but I need to know.

Thanks
Jan


The only method I know of for calculating the light vector in tangent space requires the polygon’s normal, binormal, and tangent vectors:

# rotate the object-space light vector into tangent space
DP3 light1ts.x, light1vector, tangent;
DP3 light1ts.y, light1vector, binormal;
DP3 light1ts.z, light1vector, normal;

The light position must be transformed into object space before this, of course, in order to calculate the object-space light vector (lightpos - vertexpos). There are other ways to do it, I guess, but I transform the light position into object space in software and then toss that new position into OpenGL. I do this because it’s not really possible to separate the camera’s part of the modelview matrix from the object’s part in order to transform the light position inside the vertex program. The only way you could do that is to toss in the model’s matrix as a program matrix or some other constant, but why do that when you can simply toss in the transformed light position?
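In other words, something like this on the CPU side whenever the light or the object moves (just a sketch; invertMatrix() and transformPoint() stand in for whatever matrix helpers you have, they are not GL calls):

float modelInv[16], lightObj[4];
float lightWorld[4] = { lx, ly, lz, 1.0f };

invertMatrix(modelInv, modelMatrix);             /* hypothetical helper */
transformPoint(lightObj, modelInv, lightWorld);  /* hypothetical helper */

/* hand the object-space light position to the vertex program */
glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 0, lightObj);

The vertex program then just does the subtraction and the three DP3s shown above.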

Thanks. I just remembered that my light vector already IS in object space, or rather, the object never gets translated anywhere. So this is not a problem, at least not for the ground of the terrain (which uses “absolute” vertex coordinates).

Also, this shows that my thoughts were at least not entirely wrong.

Jan