Rendering to a texture, and using that to texture a triangle

I have an interesting idea for a really fast lightmapper. It uses a GLSL per-pixel lighting shader, which would be slow, but still much faster than traditional lightmapping. Anyway, each triangle would be rendered on its own with the shader enabled. The result would be saved to a texture, and that texture would be mapped back onto the same triangle. This would allow fast lightmapping with relatively simple code (compared to the barycentric-coordinate method of traditional lightmapping) and would also allow normal maps to be baked into the textures.
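For what it's worth, here's a minimal sketch of the render-to-texture part using a modern framebuffer object. The function name, the fixed square size, and the RGBA8 format are just placeholders for illustration, not anything from the idea above:

```cpp
// Sketch: create an FBO with a color texture to receive one triangle's lightmap.
// Size/format are placeholders; error handling is minimal.
#include <GL/glew.h>

GLuint createLightmapTarget(int size, GLuint* outTexture)
{
    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0; // incomplete framebuffer, bail out

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *outTexture = tex;
    return fbo;
}
```

You'd bind the returned FBO, run the lighting shader on the single triangle, then unbind and use the texture as that triangle's lightmap.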

The first problem I'm having is figuring out where the camera should be when rendering the triangle. I could use the approach many tutorials take, where you pick the largest component of the triangle's normal and planar-map onto that axis-aligned plane. But I would rather use the triangle's normal directly, so the triangle faces the camera head-on and I can cram as much detail as possible into the texture.
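One way to do the face-on placement is to put the camera on the triangle's normal and look back at its centroid. Here's a rough sketch using GLM; the pull-back distance and the up-vector fallback are arbitrary choices of mine, not part of the original idea:

```cpp
// Sketch: build a view matrix that looks straight down the triangle's normal
// at its centroid, so the triangle is rendered face-on.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 faceOnView(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    glm::vec3 normal   = glm::normalize(glm::cross(b - a, c - a));
    glm::vec3 centroid = (a + b + c) / 3.0f;

    // Pick an up vector that is not parallel to the normal.
    glm::vec3 up = (std::fabs(normal.y) < 0.99f) ? glm::vec3(0, 1, 0)
                                                 : glm::vec3(1, 0, 0);

    glm::vec3 eye = centroid + normal * 10.0f; // arbitrary pull-back distance
    return glm::lookAt(eye, centroid, up);
}
```

An orthographic projection sized to the triangle's bounding box in that view would then keep the whole triangle in frame without perspective distortion.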

Whichever method I use, I'll need some way to keep the lightmap texel density consistent from triangle to triangle, so that there's a smooth transition between two triangles of different sizes.
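One possible way to keep density consistent is to size each triangle's render target from its area. This is just a guess at how that could look; `texelsPerUnit` and the clamp limits are made-up tuning values:

```cpp
// Sketch: choose a lightmap resolution proportional to the triangle's size so
// texel density stays roughly constant across triangles.
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

int lightmapSize(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                 float texelsPerUnit)
{
    float area = 0.5f * glm::length(glm::cross(b - a, c - a));
    int size = (int)std::ceil(std::sqrt(area) * texelsPerUnit);
    return std::min(std::max(size, 4), 256); // clamp to sane limits
}
```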

The final problem is UV mapping the triangle. With the planar mapping method it wouldn't be too difficult, but again, I want the triangle facing the camera flat-on. Does anyone know how I could do this?
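One thought: if the triangle was rendered with a particular view-projection matrix, the same matrix can generate its UVs, by projecting each vertex to normalized device coordinates and remapping [-1, 1] to [0, 1]. A sketch, assuming `viewProj` is the matrix used for the face-on render above:

```cpp
// Sketch: derive a vertex's lightmap UV from the same matrix that rendered it.
#include <glm/glm.hpp>

glm::vec2 lightmapUV(const glm::vec3& vertex, const glm::mat4& viewProj)
{
    glm::vec4 clip = viewProj * glm::vec4(vertex, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;      // perspective divide
    return glm::vec2(ndc.x, ndc.y) * 0.5f + 0.5f;   // [-1,1] -> [0,1]
}
```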

Thanks for your help!

Humus made a demo that does this. His light is also dynamic (to an extent). You will find it here: http://www.humus.ca

-SirKnight

I can't really find any demos using the technique I described… maybe I'm just going to have to figure this one out.

No one has any thoughts or ideas on this?