Light map theory...

Hi everyone, this is my first post, so first of all hi! My name is Aimee, and I’ve been programming for about 11 years now.

After successfully building a great little 2D ortho engine, I’ve moved over to learning the 3D side of game rendering. At the moment everything seems to be coming together quite nicely, as a lot of this stuff is not really new to me. However, my current focus is light maps, and I’m struggling with one aspect that is turning into a bit of a thorn.

In particular I’m talking about dynamic light map generation. I have a routine that takes in a polygon, two arrays (one for the normals and one for the texture UVs associated with each point on the polygon), and an array of light objects (each containing a position and a color).

From my understanding the following should happen (there’s a rough sketch of how I picture it after the list):

  1. create a matrix that contains information for each lumel on the light map.

  2. calculate each lumel’s distance from each light.

  3. decide each lumel’s color based on its distance from the lights.

  4. pack the matrix into a new gl texture.

  5. upon render, use multi-texturing to layer the underlying texture, and the light map to create the effect.
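Here’s that rough sketch. None of this is my real code: every name in it is made up, and the lumelToWorld placeholder is exactly the step 2 conversion I can’t fill in:

```cpp
// Rough sketch only, not my real code: made-up types and names throughout.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 position; Vec3 color; };

const int LIGHTMAP_SIZE = 32; // arbitrary light map resolution

// Step 2 placeholder: this is exactly the conversion I'm asking about.
// For now it just returns the polygon's centre, which is obviously wrong.
Vec3 lumelToWorld(const std::vector<Vec3>& verts, float u, float v)
{
    (void)u; (void)v; // the real conversion will need these
    Vec3 c = {0.0f, 0.0f, 0.0f};
    for (const Vec3& p : verts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    float n = float(verts.size());
    return { c.x / n, c.y / n, c.z / n };
}

// Steps 1-4: build an RGB buffer that would then be handed to glTexImage2D.
std::vector<std::uint8_t> buildLightMap(const std::vector<Vec3>& verts,
                                        const std::vector<Light>& lights)
{
    std::vector<std::uint8_t> texels(LIGHTMAP_SIZE * LIGHTMAP_SIZE * 3);
    for (int y = 0; y < LIGHTMAP_SIZE; ++y)
    for (int x = 0; x < LIGHTMAP_SIZE; ++x)
    {
        // step 1: each lumel gets a UV at the centre of its texel
        float u = (x + 0.5f) / LIGHTMAP_SIZE;
        float v = (y + 0.5f) / LIGHTMAP_SIZE;

        // step 2: lumel UV -> world position (the part I'm stuck on)
        Vec3 pos = lumelToWorld(verts, u, v);

        // step 3: colour from distance to each light (made-up falloff)
        Vec3 total = {0.0f, 0.0f, 0.0f};
        for (const Light& l : lights)
        {
            float dx = l.position.x - pos.x;
            float dy = l.position.y - pos.y;
            float dz = l.position.z - pos.z;
            float att = 1.0f / (1.0f + dx*dx + dy*dy + dz*dz);
            total.x += l.color.x * att;
            total.y += l.color.y * att;
            total.z += l.color.z * att;
        }

        std::uint8_t* t = &texels[(y * LIGHTMAP_SIZE + x) * 3];
        t[0] = std::uint8_t(std::fmin(total.x, 1.0f) * 255.0f);
        t[1] = std::uint8_t(std::fmin(total.y, 1.0f) * 255.0f);
        t[2] = std::uint8_t(std::fmin(total.z, 1.0f) * 255.0f);
    }
    // step 4: upload texels with glTexImage2D;
    // step 5: multitexture the light map over the base texture at render time.
    return texels;
}
```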

Out of those 5 steps, the one that is giving me a headache is the 2nd. I understand that first the position of the lumel needs to be converted from 2D to 3D space, but how do I do this conversion for each lumel? Unfortunately the only article I’ve found so far with a clear example of the math uses a rotation matrix for quads, and that just confuses things.

Thanks in advance, and don’t worry about me not understanding a potential answer; I aim to master this, and any pointers will be a great help.

Aimee.

Out of those 5 steps, the one that is giving me a headache is the 2nd. I understand that first the position of the lumel needs to be converted from 2D to 3D space, but how do I do this conversion for each lumel?

Basically, what you’re asking is how to do lighting per-fragment.

A texture mapping (i.e. the UVs) defines a way to associate each point on a triangle with a point in a triangular region of a texture. What you want is the reverse of this operation: for each point on the texture, find its corresponding point on the original triangle’s surface.

For a given pixel on the light map, you can compute the UV coordinates for that pixel. Now, find the triangle that has this UV coordinate within its area. Then, compute the barycentric coordinates that represent this location on the triangle. Once you do that, you simply use the barycentric coordinates to interpolate the triangle’s vertex positions, and you now know the position on the surface.
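If it helps, here is a minimal sketch of that lookup in C++, assuming the surface is a plain triangle list where each triangle carries its three positions and its three UVs (all the type and function names are mine, not from any particular API). The barycentric computation it uses is exactly the system I describe below:

```cpp
#include <array>
#include <optional>
#include <vector>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

struct Triangle
{
    std::array<Vec3, 3> pos; // positions of A, B, C
    std::array<Vec2, 3> uv;  // UVa, UVb, UVc
};

// Solve A*UVa + B*UVb + C*UVc = uvk together with A + B + C = 1.
// Returns false if the triangle is degenerate in UV space.
bool barycentric(const Triangle& t, Vec2 uvk, float& A, float& B, float& C)
{
    // Substituting C = 1 - A - B gives a 2x2 system in the UV edge vectors.
    float e0x = t.uv[0].x - t.uv[2].x, e0y = t.uv[0].y - t.uv[2].y;
    float e1x = t.uv[1].x - t.uv[2].x, e1y = t.uv[1].y - t.uv[2].y;
    float px  = uvk.x     - t.uv[2].x, py  = uvk.y     - t.uv[2].y;

    float det = e0x * e1y - e1x * e0y;
    if (det == 0.0f) return false;

    A = (px * e1y - e1x * py) / det;   // Cramer's rule
    B = (e0x * py - px * e0y) / det;
    C = 1.0f - A - B;
    return true;
}

// For one lumel's UV, find the triangle whose UV footprint contains it and
// interpolate that triangle's vertex positions with the barycentric weights.
std::optional<Vec3> lumelToWorld(const std::vector<Triangle>& tris, Vec2 uvk)
{
    for (const Triangle& t : tris)
    {
        float A, B, C;
        if (!barycentric(t, uvk, A, B, C)) continue;
        if (A < 0.0f || B < 0.0f || C < 0.0f) continue; // UV is outside this triangle
        return Vec3{ A * t.pos[0].x + B * t.pos[1].x + C * t.pos[2].x,
                     A * t.pos[0].y + B * t.pos[1].y + C * t.pos[2].y,
                     A * t.pos[0].z + B * t.pos[1].z + C * t.pos[2].z };
    }
    return std::nullopt; // lumel falls outside every triangle's UV footprint
}
```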

I’m guessing the barycentric coordinate generation is probably tripping you up. It’s a matter of mathematics.

The triangle has three vertices, called A, B, and C. Each vertex has (among other things) a UV coordinate, called UVa, UVb, and UVc respectively. We have a fourth UV coordinate, called UVk (for known). We need to find three scalar weights, which I’ll also call A, B, and C, that satisfy the following equation:

A*UVa + B*UVb + C*UVc = UVk

This may seem unsolvable, but it’s not. There are two equations (since the UVs are all 2D vectors), but we also have a third equation:

A + B + C = 1

This defines barycentric coordinates. Now we have 3 equations and 3 unknowns, so solve it. You could even convert this into matrix form (take the vertex UVs, append a third component that is 1, and make them the columns of a matrix) and do a simple matrix inverse. That is:

Av = UV^-1 * UVk1

Where UVk1 is UVk with a 1 in the third component, UV is the matrix whose columns are UVa1, UVb1, and UVc1 (the vertex UVs with a 1 appended), and Av is the vector of A, B, and C.
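Written out, with UVa = (ua, va), UVb = (ub, vb), UVc = (uc, vc) and UVk = (uk, vk), that matrix equation is just:

```
[ ua  ub  uc ]   [ A ]   [ uk ]
[ va  vb  vc ] * [ B ] = [ vk ]
[  1   1   1 ]   [ C ]   [  1 ]
```

Invert the 3x3 matrix (you can do that once per triangle and reuse it) and multiply it by (uk, vk, 1) to get (A, B, C).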

QED.

Woah :eek:

I don’t mean to hijack the thread, but… Why are barycentric coordinates subject to the constraint a+b+c=1? I can follow what you’re saying mostly, I’m just having trouble with the geometric interpretation of it.
So does a point a*VertexA + b*VertexB + c*VertexC, where a+b+c=1, necessarily lie on the plane of VertexA/B/C?

I’d usually use parametric line math for this, but this seems like a much better/cooler way.

[edit]
so the way I’d do it, with UV vectors X, Y, Z and scalars a, b, would be:
a*(X-Z) + b*(Y-Z) = UVcoord - Z
then plug a and b into the same parametric form with the vertex vectors X/Y/Z to get the point F. Sure enough, if you add a scalar c:
F = Z + a*(X-Z) + b*(Y-Z) + c*(Z-Z)
a*X + b*Y + c*Z = F + (a+b+c)*Z - Z

so, that’s why… I almost get it… except for the reason a+b+c=1.

Why are barycentric coordinates subject to the constraint a+b+c=1?

For the same reason that triangles are composed of the space bounded by 3 non-collinear points: because that’s how we define them.

Barycentric coordinates are a way of defining a position on a triangle as a weighted average of that triangle’s 3 vertex positions. The weights must add up to 1; otherwise the math doesn’t work out and the point isn’t correct. Have a look at the Wikipedia article on barycentric coordinates.

What makes barycentric coordinates useful is that if you have the barycentric coordinates for a particular point of interest on a triangle, you can interpolate any per-vertex attribute and get its value at that point. Or, in our case, go the other way: use a known per-vertex attribute (the UVs) to compute the barycentric coordinates, so that you can convert them back into a position.
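To answer the earlier question about the plane: write the weighted sum relative to one vertex and the constraint falls out. For any a, b, c:

a*A + b*B + c*C = A + b*(B - A) + c*(C - A) + (a + b + c - 1)*A

When a + b + c = 1 the last term vanishes, and what is left is one vertex plus a combination of the two edge vectors, so the point always lies in the triangle’s plane (and inside the triangle when a, b, c are all non-negative). With any other sum, the leftover (a + b + c - 1)*A term depends on where the triangle happens to sit relative to the origin, so the result isn’t even guaranteed to be on the plane.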

but… if it’s a problem of definition, then I should be able to say a+b+c=-2 and have a working system similar to barycentric coordinates. If the math doesn’t work out when a+b+c != 1, then that’s a proof by contradiction.

I’ll definitely read up more on barycentric coordinates though.

but… if it’s a problem of definition, then I should be able to say a+b+c=-2 and have a working system similar to barycentric coordinates.

And you could define a triangle as 4 points, while defining a quadrilateral as 3 points. They’re just definitions, arbitrarily chosen.

Algebra says that you can certainly redefine barycentric coordinates by saying that A + B + C = -2 as follows:

A_b + B_b + C_b - 3 = 1 - 3 = -2

Where the “b” subscript means the regular barycentric coordinates (which sum to 1). Just subtract 3 from one of the coordinates and you’re good to go.

The point is that your values for A, B and C will be different. You cannot use them directly as weights for vertex attributes, because a weighted average must sum to 1. Your equation for computing the weighted average between the endpoints will therefore also have to be different.
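For a concrete example (numbers are mine): take the midpoint of edge AB. Its barycentric coordinates are (1/2, 1/2, 0), which sum to 1, and (1/2)*A + (1/2)*B + 0*C really is that midpoint. Subtract 3 from the first weight to get (-5/2, 1/2, 0), which sums to -2 as the new “definition” demands, but (-5/2)*A + (1/2)*B + 0*C is nowhere near the midpoint; you only recover it by adding the 3*A back, i.e. by converting back to weights that sum to 1 before you interpolate.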

Thank you for giving me some real maths to learn instead of forcing me to learn C++ :slight_smile: This way I’ll be able to optimize my code for C# instead! I’ll be taking the next couple of days to understand this stuff, and I’ll post up the results once I know it’s working.

Thanks again!

Aimee.