Is it possible to do the lighting in model space?

The OpenGL spec says that the lighting calculation is performed in eye space,
while the vertex coordinates and the light source position and direction are
specified in model space. So the implementation must transform those quantities
from model space to eye space, which involves many matrix-vector multiplications
and even a matrix inversion for the normal transformation (a sketch of that work
is below). So why not just do the lighting in model space?
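
Just to make concrete what that transformation work looks like, here is only a
sketch of what an implementation effectively does per vertex; Vec3 and the
mat4_transform_point / mat4_inverse_transpose_mat3 / mat3_transform_dir helpers
are hypothetical names for internal math, not GL calls:

    /* Sketch only: Vec3 and the mat*_ helpers are hypothetical stand-ins. */
    Vec3 P_eye = mat4_transform_point(modelview, P_model);   /* vertex position */
    Vec3 N_eye = mat3_transform_dir(mat4_inverse_transpose_mat3(modelview),
                                    N_model);                /* the matrix inversion
                                                                 mentioned above */
    /* The light position was transformed by whatever modelview was current at
     * glLightfv() time and is kept in eye space, so the lighting equation is
     * then evaluated with P_eye, normalize(N_eye) and that stored light.      */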

If the model-view transformation is a rigid transformation, then I do not see
any problem with doing the lighting calculation in model space, since the
lighting calculation should be independent of the choice of coordinate system.

But the model-view transformation may not be rigid if scaling or shearing is
used, and in that case we cannot perform the lighting calculation in model
space and have to perform it in eye space.
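
A tiny self-contained check of that claim (the vectors and the scale factor are
made up purely for illustration): a pure rotation leaves the angle between the
normal and the light vector unchanged, while a non-uniform scale applied naively
to both vectors does not.

    #include <math.h>
    #include <stdio.h>

    /* Dot product of two vectors after normalization = cosine of the lighting angle. */
    static double ndot(const double a[3], const double b[3])
    {
        double la = sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        double lb = sqrt(b[0]*b[0] + b[1]*b[1] + b[2]*b[2]);
        return (a[0]*b[0] + a[1]*b[1] + a[2]*b[2]) / (la * lb);
    }

    int main(void)
    {
        double N[3] = { 0.0, 1.0, 0.0 };            /* normal        */
        double L[3] = { 1.0, 1.0, 0.0 };            /* towards light */

        /* 90-degree rotation about Z applied to both vectors (rigid). */
        double Nr[3] = { -N[1], N[0], N[2] };
        double Lr[3] = { -L[1], L[0], L[2] };

        /* Non-uniform scale (2, 1, 1) applied naively to both vectors. */
        double Ns[3] = { 2.0 * N[0], N[1], N[2] };
        double Ls[3] = { 2.0 * L[0], L[1], L[2] };

        printf("original N.L = %f\n", ndot(N, L));    /* ~0.707            */
        printf("rotated  N.L = %f\n", ndot(Nr, Lr));  /* ~0.707, unchanged */
        printf("scaled   N.L = %f\n", ndot(Ns, Ls));  /* ~0.447, differs   */
        return 0;
    }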

But there seems to be a problem if a non-rigid model-view transformation is used.
The non-rigid model-view transformation of the vertex coordinates poses no
problem: we can think of it as first performing a non-rigid model transformation
from model space to a conceptual world space, and then a rigid view
transformation from that conceptual world space to eye space. If we specified
the light source in that conceptual world space, or directly in eye space, then
everything would be OK. But OpenGL specifies the light source in model space.
That is, the light source’s position and direction also undergo a non-rigid
transformation. After this non-rigid transformation of the light source, does
the lighting calculation in eye space still make sense? Conversely, if it does
make sense, then why does performing the lighting calculation in model space not
make sense when the model-view transformation is non-rigid?

Thanks in advance…

The model-view transformation is VERY simple and it has to be done anyway. Objects can move, rotate, etc., so they have to be transformed into eye space. Artists define them in their own space, and the gamer must see them all in one final space: the eye space.

So, since model needs to be transformed into eye space anyway, it does not make sense to define lighting in model space because that would require that you define one light source separately for each object.

You have practically written a book.

Today, it’s all about shaders. If you want to do your lighting in model space, then compute the inverse of the model transform and use it to bring the light source into model space. Then use glUniform4fv to set your new light position.
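
Something along these lines, as a rough sketch: the uniform name lightPosModel and the mat4_inverse / mat4_mul_vec4 helpers are made up for the example (use whatever matrix code you already have); only glGetUniformLocation and glUniform4fv are actual GL calls.

    float invModel[16];
    mat4_inverse(modelMatrix, invModel);                    /* hypothetical math helper */

    float lightPosModel[4];
    mat4_mul_vec4(invModel, lightPosWorld, lightPosModel);  /* hypothetical math helper */

    glUniform4fv(glGetUniformLocation(program, "lightPosModel"),
                 1, lightPosModel);                         /* real GL calls            */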

If you see an improvement in speed, let us know.

Another option is to do the (per-vertex only, obviously) lighting on the CPU and just glDisable() light calculation. :wink:
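
A bare-bones sketch of that route, assuming white diffuse only, unit-length normals, and vertices/normals/light all given in the same (model) space; all the names here are just for the example:

    #include <math.h>
    #include <GL/gl.h>

    /* Compute a diffuse-only per-vertex colour on the CPU. */
    static void light_vertices_cpu(const float *pos, const float *nrm, float *col,
                                   int count, const float lightPos[3])
    {
        for (int i = 0; i < count; ++i) {
            float lx = lightPos[0] - pos[3*i+0];
            float ly = lightPos[1] - pos[3*i+1];
            float lz = lightPos[2] - pos[3*i+2];
            float len = sqrtf(lx*lx + ly*ly + lz*lz);
            float ndotl = (nrm[3*i+0]*lx + nrm[3*i+1]*ly + nrm[3*i+2]*lz) / len;
            col[3*i+0] = col[3*i+1] = col[3*i+2] = ndotl > 0.0f ? ndotl : 0.0f;
        }
    }

    /* ...then at draw time, skip the fixed-function math and use the colours: */
    /*   glDisable(GL_LIGHTING);                                               */
    /*   glEnableClientState(GL_COLOR_ARRAY);                                  */
    /*   glColorPointer(3, GL_FLOAT, 0, col);                                  */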

yeah, i suppose it is an option, isn’t it? :wink:

You can do lighting in any space as long as you have all the necessary components in the same space.

I had some demos where it was easier to do lighting in model space than it was in the eye space.

Simply position the light when the model matrix is on the transformation stack and that light will be positioned in that object’s space.
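
In fixed-function terms that just means issuing glLightfv while the object’s model matrix is current; the matrices and numbers here are only illustrative:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);                   /* viewing transform             */
    glPushMatrix();
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);             /* this object's model transform */
        {
            GLfloat lightPos[] = { 1.0f, 2.0f, 0.0f, 1.0f };  /* in the object's space   */
            glLightfv(GL_LIGHT0, GL_POSITION, lightPos);      /* stored in eye space     */
        }
        /* ...draw the object... */
    glPopMatrix();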

HOWEVER, the light will be transformed through the modelview matrix to eye space and lighting will happen there (it’ll just happen, you don’t need to care about the details).

Why do you care?

Lighting happens in eye space because it is the only consistent space available; there is no ‘world’ space because the view matrix is concatenated with the model matrix on the xform stack.

Object space is a fleeting thing that happens per object (but can be used to position a light).

Also, if you want to position the light in world space, position it right after the viewing matrix is loaded onto the modelview matrix.
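
Same mechanism, different point in the frame; again just a sketch with made-up numbers (gluLookAt from GLU is used for the view here):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 10.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);  /* viewing transform only */

    GLfloat lightPosWorld[] = { 5.0f, 5.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);           /* a "world space" light  */

    /* ...then push/multiply each object's model matrix and draw it... */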

I struggle to see why anyone would care unless you’re doing your own shaders, and then you should know what you’re doing anyway.

Now when it comes to bump mapping you have a choice of lighting spaces that can have a significant impact on performance.

Most folks go with tangent space, but you can choose other spaces like object space if your texture normal map representation is in that space (and it can be if you grok this stuff). Lighting never really happens in eye space per pixel; the light vectors are transformed to the object space or to the tangent space (you could think of it as another space even more local than the object space). Also consider that the tangent space may be under bone transformation, so you’re a couple of matrices beyond even object space (3 matrices from eye space, in a sense).
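
The per-vertex step being described looks roughly like this; a small sketch assuming an orthonormal tangent/bitangent/normal basis per vertex (the names are local to the example):

    typedef struct { float x, y, z; } Vec3;

    /* Rotate an object-space light vector into tangent space by dotting it
     * against the rows of the (assumed orthonormal) TBN matrix.            */
    static Vec3 to_tangent_space(Vec3 t, Vec3 b, Vec3 n, Vec3 lightObj)
    {
        Vec3 out;
        out.x = t.x*lightObj.x + t.y*lightObj.y + t.z*lightObj.z;
        out.y = b.x*lightObj.x + b.y*lightObj.y + b.z*lightObj.z;
        out.z = n.x*lightObj.x + n.y*lightObj.y + n.z*lightObj.z;
        return out;  /* the light vector the normal map gets dotted against */
    }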

So lighting can happen anywhere you like, really. In OpenGL it happens in eye space as an optimization, but as long as the positions/vectors exist in the same space you can perform your vector dot products in any chosen space.

Thing is, you don’t get to implement the OpenGL pipeline, so the decision has been made for you unless you use shaders or really bastardize the projection vs. modelview matrix usage, and you DON’T want to do that or we’ll be answering your posts about why your fog & zbuffer are broken in a few days.

In summary:

  1. You can position lights in any space with OpenGL using the modelview matrix.

  2. OpenGL fixed-function hardware will perform vertex lighting in eye space (i.e. in the space after the modelview transformation but before projection) no matter what.

  3. With shaders all bets are off and you can do what you like.

  4. You can mess with the modelview contents and projection contents to do all sorts of crazy things; you DO NOT really want to go there though, as it will screw up other pipeline operations.

Good luck.

Originally posted by dorbie:
Now when it comes to bump mapping you have a choice of lighting spaces that can have a significant impact on performance.
I had no idea people are using regular lighting too nowadays :smiley:

But concerning spaces, I just remembered NVIDIA used to do some weird stuff in some of their demos; the strange thing about them was that they used world space to do the shading. I can’t remember which demo it was, but I also can’t think of any reason why someone would use world space as a basis.

Oh, maybe it was because of the cubemaps, but then again it shouldn’t be noticeable error-wise to resort to eye space, or would it?

Originally posted by M/\dm/\n:
but I also can’t think of any reason why someone would use world space as a basis.

For example, because it does not depend on the position of the camera. If you store light (or effect) parameters in world space and cache the model matrices for your objects, you can upload them to the shaders directly, without wasting CPU time on matrix multiplications unless the object or light in question moves. Or store such parameters in uniform buffers (on hw supporting them) and reuse the values between frames.
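
In other words (rough sketch; the uniform and variable names are invented for the example), the uploads only have to happen when something actually changes:

    if (light_moved) {
        glUniform4fv(glGetUniformLocation(prog, "lightPosWorld"),
                     1, lightPosWorld);                   /* world space, camera-independent */
    }
    if (object_moved) {
        glUniformMatrix4fv(glGetUniformLocation(prog, "modelMatrix"),
                           1, GL_FALSE, modelMatrix);     /* cached per object               */
    }
    /* Only the view matrix has to be refreshed every frame. */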

Sounds like the much-overhyped deferred shading approach. That would store world-space information for later use in a shader during the shading-after-visibility pass, BUT there’s no need to mandate world space for this. You really want to use whichever approach minimizes your register usage during shading in such a scenario.

Object space is the bastard child that gets insufficient attention IMHO. It is inherently better behaved and cheaper, since it cuts out the tangent-space basis. You may still have a bone transform, but that’s OK.