Obli

10-07-2003, 09:16 AM

This should be quite easy, so I'm going to post it here in the beginner's forum.

Problem: getting the light position in model space to evaluate the light vector. The light position is stored in absolute world space.

As posted in this thread (http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/014148.html), the GL combines the model matrix and the view matrix into a single matrix called the modelview. The view matrix is basically the part of MODELVIEW that "puts the camera in place", while the modeling matrix is everything else.

I was wondering why the NVSDK didn't cover this subject. At first glance, it seems to be way too easy.

It happens that when this needs to be integrated into a whole system (i.e. not a simple tech demo) which has no knowledge of what may be going on (possibly nested transformations), this simple problem looks a bit more complicated to me.

Obj space --1--> Clip space

1 = transformation by "mvp", concatenated modelview-projection matrix.

Obj space --2--> World space --3--> Clip space

2 = modeling matrix

3 = the modelview-projection matrix, not counting (2).

Now, the idea of going back from clip space to world space looks pretty bad. It would trash precious VP resources.

I think you'll agree that keeping track of (2) on the CPU and transforming each light source by the inverse of that tracked matrix would be faster and possibly more useful.
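A minimal sketch of that CPU-side approach (my own illustration, not from any SDK): assuming the tracked model matrix is stored column-major like GL's, and assuming it is rigid (rotation + translation only, no scale), its inverse is cheap to compute as transpose-plus-negated-translation, and the world-space light position can then be brought into object space like this:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Invert a rigid (rotation R + translation t) matrix, column-major like GL:
// inv(R|t) = (R^T | -R^T t). Only valid without scaling/shearing.
void invertRigid(const float m[16], float out[16]) {
    // Transpose the upper-left 3x3 rotation block.
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            out[c * 4 + r] = m[r * 4 + c];
    // New translation is -R^T * t, where t = (m[12], m[13], m[14]).
    out[12] = -(out[0] * m[12] + out[4] * m[13] + out[8]  * m[14]);
    out[13] = -(out[1] * m[12] + out[5] * m[13] + out[9]  * m[14]);
    out[14] = -(out[2] * m[12] + out[6] * m[13] + out[10] * m[14]);
    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}

// Transform a point by a column-major 4x4 matrix (w assumed 1).
Vec3 transformPoint(const float m[16], Vec3 p) {
    return { m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
             m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
             m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14] };
}
```

With this, each light is handled once per object on the CPU: `objSpaceLight = transformPoint(inverseModel, worldSpaceLight)`, and the result goes into a VP constant, so the vertex program never pays for the inverse transform. If the model matrix carries scale, a full 4x4 inverse would be needed instead.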

Now, this is a problem that requires some effort. I am quite disappointed by this fact, so I would like to get some comments on it.

Thank you!
