
View Full Version : How to bake lighting onto a texture?



gator
03-03-2004, 02:19 PM
How can I bake lighting from a point light source onto a texture? The model is already mapped and
has texture coordinates. But, I'm not quite sure how to get the lighting color onto the texture map.

How do you transform a pixel into object space?

Or perhaps I have to go in the other direction, object space to pixel space?

Zeno
03-03-2004, 07:16 PM
Since you already have a mapped model, do this:




for each triangle
{
    find min and max s and t tex coords from the 3 tri verts;
    for ( i = sMin; i < sMax; ++i )
    {
        for ( j = tMin; j < tMax; ++j )
        {
            find barycentric texel coords at (i,j);
            if ( (i,j) is not in triangle )
            {
                continue;
            }
            use barycentric coords and triangle vertices to get object-space location of texel;
            calculate lighting at this location;
            place light value in texture data array;
        }
    }
}
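
Fleshed out, the loop above might look like the following Python sketch. This is my own illustration of the technique, not code from any shipping tool: it assumes a square grayscale texture stored as a flat list, per-vertex object-space positions and normals, and a simple N·L point light with no shadows or attenuation.

```python
import math

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c)."""
    den = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    if den == 0.0:
        return None  # degenerate triangle in UV space
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / den
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / den
    return w0, w1, 1.0 - w0 - w1

def bake_triangle(uvs, positions, normals, light_pos, tex, size):
    """Walk the triangle's texel bounding box and light covered texels."""
    # Texel-space bounding box from the three UVs, clamped to the texture.
    i_min = max(0, int(min(u for u, v in uvs) * (size - 1)))
    i_max = min(size - 1, int(math.ceil(max(u for u, v in uvs) * (size - 1))))
    j_min = max(0, int(min(v for u, v in uvs) * (size - 1)))
    j_max = min(size - 1, int(math.ceil(max(v for u, v in uvs) * (size - 1))))
    for j in range(j_min, j_max + 1):
        for i in range(i_min, i_max + 1):
            p = (i / (size - 1), j / (size - 1))
            bc = barycentric(p, *uvs)
            if bc is None or min(bc) < 0.0:
                continue  # texel center falls outside the triangle
            # Interpolate object-space position and normal with the weights.
            pos = [sum(w * v[k] for w, v in zip(bc, positions)) for k in range(3)]
            nrm = [sum(w * n[k] for w, n in zip(bc, normals)) for k in range(3)]
            # Diffuse N.L from the point light, clamped at zero.
            to_l = [light_pos[k] - pos[k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in to_l)) or 1.0
            nlen = math.sqrt(sum(c * c for c in nrm)) or 1.0
            ndotl = sum((to_l[k] / dist) * (nrm[k] / nlen) for k in range(3))
            tex[j * size + i] = max(0.0, ndotl)

# Bake one triangle lying in the xy plane, facing +z, lit from above.
size = 8
tex = [0.0] * (size * size)
bake_triangle(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
              ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
              ((0.0, 0.0, 1.0),) * 3,
              (0.25, 0.25, 1.0), tex, size)
```

Sampling at texel centers like this is what produces the hard chart edges discussed below; real bakers pad the result with a dilation pass.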


My biggest problem with lightmapping is actually finding a way to texture objects (especially triangle-stripped objects) without seams :(. I think that the research on geometry textures may help solve this problem for me, but I haven't gotten around to reading it more closely yet.

[This message has been edited by Zeno (edited 03-03-2004).]

gator
03-03-2004, 09:48 PM
Did you get that from Nvidia's normal mapper source code?

I was just looking through it for some ideas, and I seem to recall seeing something similar to
what you just posted, although it looked a bit more complicated.

For filling seams, I see a function in the normal mapper code called DilateImage. Maybe you've already seen this.
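
For what it's worth, a seam-filling dilation pass like the DilateImage mentioned above could be sketched as follows. This is a hypothetical stand-in, not the actual tool's code: every texel the baker never wrote is filled from the average of its written 4-neighbors, and running the pass a few times pushes valid color out past chart edges so bilinear filtering doesn't pull in garbage.

```python
def dilate(tex, covered, w, h):
    """One dilation pass over a flat grayscale texture.

    tex: flat list of floats; covered: flat list of bools marking texels
    that were written during baking. Returns the updated coverage mask.
    """
    out = tex[:]
    new_cov = covered[:]
    for y in range(h):
        for x in range(w):
            if covered[y * w + x]:
                continue  # already has baked data, leave it alone
            acc, n = 0.0, 0
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and covered[ny * w + nx]:
                    acc += tex[ny * w + nx]
                    n += 1
            if n:
                out[y * w + x] = acc / n  # average of filled neighbors
                new_cov[y * w + x] = True
    tex[:] = out
    return new_cov

# Tiny 3x3 example: only the center texel was baked.
tex = [0.0] * 9
tex[4] = 1.0
covered = [False] * 9
covered[4] = True
covered = dilate(tex, covered, 3, 3)
```

Each pass grows the filled region by one texel, so a few iterations are usually enough to cover mipmap bleed.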

gator
03-04-2004, 03:43 PM
Just tried it out, and my texture looks like...pixelated garbage. This is more difficult than I imagined. Those edges are hard to figure out.

chemdog
03-04-2004, 11:31 PM
The way a famous Santa Barbara company baked textures was to ray trace photons from each light source, and track the lighting contributions at every geometry intersection. After collecting roughly 4 gig of intersections, the data was processed into textures. A benefit was that the textures could be refined with additional passes.



for each light
    for each photon emitted
        do
            ray-trace photon to geometry
            if intersection then
                mark intersection position and energy
                lower photon energy based on material
            endif
        while photon has sufficient energy
    endfor
endfor




for each intersection
    inverse-project intersection to texture coordinates
    filter energy to texel location
endfor
normalize texture

This may be too realistic or too slow for your application.
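
The photon-emission pass above could be sketched like this. It's my own hedged illustration, not the company's code: `intersect` stands in for a real ray tracer (returning a hit point and an optional bounce direction, or None for a miss), and the 0.5 absorption factor is an arbitrary placeholder for a per-material value.

```python
import math
import random

def trace_photons(lights, intersect, num_photons=1000, min_energy=0.05,
                  absorb=0.5, rng=None):
    """Emit photons from each light and record hits until energy fades.

    lights: list of (position, energy) pairs. intersect(origin, direction)
    returns (hit_point, bounce_dir_or_None) or None on a miss. Returns a
    list of (position, energy) intersection records for the texture pass.
    """
    rng = rng or random.Random(0)
    hits = []
    for light_pos, light_energy in lights:
        for _ in range(num_photons):
            # Uniform random direction on the unit sphere.
            z = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            r = math.sqrt(max(0.0, 1.0 - z * z))
            d = (r * math.cos(phi), r * math.sin(phi), z)
            origin, energy = light_pos, light_energy
            while energy > min_energy:
                hit = intersect(origin, d)
                if hit is None:
                    break  # photon escaped the scene
                origin, d = hit
                hits.append((origin, energy))  # mark position and energy
                energy *= absorb  # material absorbs part of the energy
                if d is None:
                    break  # fully absorbed, no bounce direction
    return hits

# Toy scene: every photon hits a floor at the origin and is absorbed there.
hits = trace_photons([((0.0, 0.0, 5.0), 1.0)],
                     lambda o, d: ((0.0, 0.0, 0.0), None),
                     num_photons=16)
```

The second pass would then splat each record's energy into the texture at the hit's UV coordinates and normalize by photon count.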

gator
03-05-2004, 05:52 PM
I think Nvidia's normal mapper code is doing some kind of sampling too, but the code is
based on sampling normals. If anyone would like to explain what GetEdge, GetSortedEdges,
GetYMinMax, and GetXMinMax are doing in Nvidia's normal mapper code, I would like to know.
Zeno posted a simplified version, but Nvidia is doing something slightly different.

Zeno
03-10-2004, 05:49 PM
Did you get that from Nvidia's normal mapper source code?

No, just what I came up with when I was presented with the problem.

I'm guessing your problem is just some small bug. If it's not TOO long, post your fleshed out algorithm here and we can see if it's obvious.

gator
03-10-2004, 08:19 PM
I'm taking the code straight from Nvidia's code, but since it's based on normals, and not position, I have to hack it up pretty badly.

It was somewhere around this page, but I can't seem to find it anymore: http://developer.nvidia.com/object/ps_normalmapfilter.html http://developer.nvidia.com/object/detail_normalmaps.html

gator
03-10-2004, 08:31 PM
Ah! My mistake, it was ATI, not Nvidia. :)
I'll put it aside for now, and work on something else.