Lightmaps and the Z buffer.

In another thread I started, someone mentioned that if you don't use the same vertices for the lightmaps as you do for the texture maps, you may not get the desired results. For example, if I have two triangles forming a square, and a smaller pair of triangles forming a square inside the original: if I texture the large square and use the second square for the lightmap, I may have some Z-fighting and blending issues. Is this correct?

Yes. But there is glPolygonOffset…
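For reference, a minimal sketch of a two-pass polygon-offset setup in fixed-function OpenGL (drawWall and drawLightmapQuads are hypothetical helpers, and the offset values are common starting points, not canonical):

    /* Pass 1: draw the wall with its base texture as usual. */
    drawWall();

    /* Pass 2: pull the lightmap polygons slightly toward the viewer
       in depth so they don't Z-fight with the wall behind them. */
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);      /* negative = nearer the eye */
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO); /* framebuffer * lightmap */
    glDepthMask(GL_FALSE);              /* leave the depth buffer alone */
    drawLightmapQuads();
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    glDisable(GL_POLYGON_OFFSET_FILL);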

Pardon my ignorance, but what is that and how does it work?

EDIT: I have been looking this up online, but I would like a plainer English explanation (too much tech jargon). Also, will blending the two textures (texture map and lightmap) be affected by this method?

EDIT: What method do games like Quake 3 use? Polygon offset? Or using the same points for both the texture map and the lightmap?

Also, using the polygon offset method, what is the best way to blend the two images together? Should I be using a grayscale image for the lightmaps, or an RGB image with an alpha value? I'm just not sure what the fastest, best way to do this with an offset is (the only blending I have done has been with multitexturing).

Most games use multitexturing to avoid having to draw the quad twice. When it is unavoidable to use multiple passes, they use the same vertices.
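For reference, single-pass lightmapping with ARB_multitexture looks roughly like this (baseTex and lightmapTex are assumed to be already-created texture objects):

    /* Unit 0: base texture. */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    /* Unit 1: lightmap, modulated with the output of unit 0. */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* One draw call; each vertex carries two sets of texture coords. */
    glBegin(GL_TRIANGLES);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, s, t);  /* base coords */
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, u, v);  /* lightmap coords */
        glVertex3f(x, y, z);
        /* ... remaining vertices ... */
    glEnd();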

Hey!
I bet most game engines don't have problems with Z-fighting when drawing lightmaps. Lightmaps are drawn over the SAME triangles as the base textures, so they have the same 3D coordinates; only the texture coordinates differ.

OK, here is my problem with that. How many extra polys do you have to create to break a wall up to be lightmap "friendly"? A LOT. Also, in games like Quake 3, when a rocket screeches across the room, it projects a lightmap onto the floor. How would they be able to do that and always use the same verts as the room?

I'm sure that Quake III doesn't use the same vertices for dynamic lightmapping as for the geometry. Unless you have a lot of dynamic textures, I think it's cheaper to display them with blending than to find the necessary vertices and texture coordinates. That way you can use the texture-map stage for something else. So I think using the original vertices for lightmapping is (usually!) only useful for static lightmaps.

What do you think?

OK, then what would be the best blending technique to blend a grayscale lightmap onto a wall (with different coords)? I wouldn't use alpha, right? More like a source-color addition?

Quake uses GL_DST_COLOR, GL_ZERO, which multiplies the base texture by the lightmap; the result is a darker texture.
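In two-pass form over the same triangles, that looks something like this (drawLightmapTris is a hypothetical helper; GL_EQUAL works because the second pass generates exactly the same depth values as the first):

    /* Pass 2: multiply what's already in the framebuffer
       (the base texture) by the lightmap. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO); /* result = dst * src */
    glDepthFunc(GL_EQUAL);              /* same triangles, same depths */
    drawLightmapTris();
    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);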

I’m completely sure that Q3 does use the same vertices for dynamic lighting as for the geometry.

Most games use a lower-resolution texture for lightmaps than for base textures. For example, Q3 uses one lightmap pixel per 16 world units, but about 0.5 texture pixels per world unit. You basically never have to break a wall up at all to be lightmap friendly; a 128x128 lightmap can cover a wall that is 2048 units long. Before the add-on pack the largest Q3 map was 8192 units long.

Some newer games are going to insane lightmap densities, such as New World Order and Kreed. I think NWO actually bakes its world textures into its lightmaps, and Kreed uses one lightmap sample for every 4x4 texture pixels. These games have huge texture memory requirements because of this, though.

Originally posted by LostInTheWoods:
OK, here is my problem with that. How many extra polys do you have to create to break a wall up to be lightmap "friendly"? A LOT. Also, in games like Quake 3, when a rocket screeches across the room, it projects a lightmap onto the floor. How would they be able to do that and always use the same verts as the room?

Use GL_CLAMP as your texture wrapping mode.
Wastes some fillrate though.
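Something like this, assuming the dynamic light's texture is bound to a hypothetical dlightTex:

    glBindTexture(GL_TEXTURE_2D, dlightTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    /* Coords outside [0,1] now clamp instead of repeating the
       light blob across the whole surface. */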


If you know that an entire triangle is going to be unlit, you can remove it from the list of triangles to be lit. You don’t really waste much fill rate this way.


I don't think that the blend function is DST_COLOR, ZERO, because that would mean that walls not affected by a lightmap would appear brighter. Moreover, an alpha component would be wise if the entire wall is covered by the lightmap, because alpha testing could help the fill rate.

As for GL_CLAMP, I recommend GL_CLAMP_TO_EDGE instead (if possible, because GL_CLAMP is in OpenGL 1.0, whereas GL_CLAMP_TO_EDGE requires OpenGL 1.2+ or the SGIS_texture_edge_clamp extension).
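A sketch of choosing the wrap mode at init time (the extension-string check is the usual pattern; the #define guards against pre-1.2 headers):

    #include <string.h>

    #ifndef GL_CLAMP_TO_EDGE
    #define GL_CLAMP_TO_EDGE 0x812F  /* value from the edge-clamp extension */
    #endif

    const char *ext  = (const char *) glGetString(GL_EXTENSIONS);
    GLint       wrap = GL_CLAMP;     /* safe OpenGL 1.0 fallback */
    if (ext && strstr(ext, "GL_SGIS_texture_edge_clamp"))
        wrap = GL_CLAMP_TO_EDGE;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap);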

It is GL_DST_COLOR, GL_ZERO. Walls without lightmaps do come out brighter, which is why you don't leave walls unlightmapped unless they are supposed to be brighter or you fake the lighting some other way.

For static lightmaps (the ones computed with radiosity), I definitely agree. But for lightmaps that come from dynamic light sources (e.g. the red glow around a rocket), I don't think it will give a correct result.

For weapon effects they use additive blending (GL_ONE, GL_ONE), which is obvious because those effects must brighten polys.
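That is, something like this (drawEffect is a hypothetical helper):

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); /* result = dst + src: can only brighten */
    glDepthMask(GL_FALSE);       /* translucent effects shouldn't write depth */
    drawEffect();
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);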

OK, so my guess is that using glPolygonOffset would produce undesirable results, correct? So my two main options would be either to break my wall up in such a way that I could apply lightmaps to just the area I wanted, while still using the wall verts (more verts), or to create a LARGE lightmap for the entire wall and apply it all at once (more texture memory needed, especially if I can't reuse lightmaps for similar areas). Which would you all see working better?

From what I gather, Q3 uses the second technique, building a LARGE lightmap for an entire wall or area. This just seems like a lot of redundancy, though. I mean, if you have 16 lights that all share the same lightmap pattern, you would build one LARGE texture to cover the area, when it would be a much better use of memory to create one lightmap and apply it 16 times in those areas. But that would require breaking those areas up into 16 more pairs of triangles, plus however many times you had to split the wall to make it work. So that method creates a lot more geometry to be rendered.

Which way is smarter?

LostInTheWoods: Using more texture memory and drawing a pixel once is going to be much better than drawing a pixel 17 times just to get it lit properly. It is yet another example of the classic time-memory tradeoff.

dawn: Q3 uses GL_DST_COLOR, GL_ONE for dynamic lights. This is why a dynamic light in Q3 cannot brighten a completely black room.
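That follows from the blend equation: the result is dst * src + dst * 1 = dst * (1 + src), so where the framebuffer is already black (dst = 0), the light contributes nothing, no matter how bright it is.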