Opinions on Light Map Idea

I was wondering what the experts among you think of this idea.

I have a terrain that consists of 64 x 64 vertices. I generate a light map (a 64 x 64 array of light value entries) using a pretty simple algorithm. And I have a terrain texture which is 512 x 512 pixels. Now I could apply the light map in (at least) two ways:

  1. Use the light map to set the terrain vertex colors appropriately. Draw the scene without the terrain texture, and then draw the scene with the terrain texture using GL_MODULATE.

  2. Use the light map as a texture. Draw the scene with the light map texture, and then draw the scene with the terrain texture using the appropriate blend function. Those two steps could be replaced by one step if you used multitexturing.
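For what it's worth, here is a minimal sketch of what that single-pass multitexture setup could look like, assuming OpenGL 1.3 (or the ARB_multitexture entry points) is available; the helper function and texture variable names are just placeholders:

```c
#include <GL/gl.h>

/* Single-pass variant of method 2: texture unit 0 carries the lightmap,
   unit 1 modulates it with the terrain texture.  Assumes OpenGL 1.3+ (or
   the ARB_multitexture entry points); the texture objects are placeholders. */
void bind_lightmapped_terrain(GLuint lightmapTex, GLuint terrainTex)
{
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, terrainTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* When drawing, supply coordinates for both units per vertex, e.g.
       glMultiTexCoord2f(GL_TEXTURE0, u, v);
       glMultiTexCoord2f(GL_TEXTURE1, u, v); */
}
```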

With the first method, you get what I think is called Mach banding, which is a common problem with per-vertex lighting. With the second method you also get Mach banding: the light map texture is only 64 x 64 pixels, so there is quite a bit of magnification involved in displaying it (I use GL_LINEAR). My idea was to use a nice bicubic interpolation algorithm to zoom the light map texture myself beforehand to 512 x 512 pixels, and then use that zoomed light map as the light map texture. I tried it out, and it seemed to reduce the Mach banding significantly and produce a nicer-looking scene.
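Here is a minimal sketch of the kind of bicubic (Catmull-Rom) upsample described above; the single-channel float layout and the function names are assumptions for illustration, not necessarily how the actual lightmap is stored:

```c
/* Catmull-Rom weight along one axis; t is the fractional offset,
   p0..p3 are four neighbouring samples along that axis. */
static float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * t +
                   (2.0f*p0 - 5.0f*p1 + 4.0f*p2 - p3) * t * t +
                   (-p0 + 3.0f*p1 - 3.0f*p2 + p3) * t * t * t);
}

/* Clamp a sample index to the edge of the source map. */
static int clampi(int i, int lo, int hi)
{
    return i < lo ? lo : (i > hi ? hi : i);
}

/* Upsample a srcSize x srcSize lightmap (one float per texel, 0..1)
   to dstSize x dstSize using bicubic (Catmull-Rom) interpolation. */
void upsample_lightmap(const float *src, int srcSize,
                       float *dst, int dstSize)
{
    int x, y, i, j;
    for (y = 0; y < dstSize; ++y) {
        float fy = (float)y * (srcSize - 1) / (dstSize - 1);
        int   iy = (int)fy;
        float ty = fy - iy;
        for (x = 0; x < dstSize; ++x) {
            float fx = (float)x * (srcSize - 1) / (dstSize - 1);
            int   ix = (int)fx;
            float tx = fx - ix;
            float col[4];
            /* Interpolate four source rows horizontally, then vertically. */
            for (j = 0; j < 4; ++j) {
                int   ry = clampi(iy - 1 + j, 0, srcSize - 1);
                float p[4];
                for (i = 0; i < 4; ++i)
                    p[i] = src[ry * srcSize + clampi(ix - 1 + i, 0, srcSize - 1)];
                col[j] = catmull_rom(p[0], p[1], p[2], p[3], tx);
            }
            {
                float v = catmull_rom(col[0], col[1], col[2], col[3], ty);
                if (v < 0.0f) v = 0.0f;   /* Catmull-Rom can overshoot */
                if (v > 1.0f) v = 1.0f;
                dst[y * dstSize + x] = v;
            }
        }
    }
}
```

Catmull-Rom can overshoot slightly near sharp transitions, which is why the result is clamped back into 0..1.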

Here is a screenshot of a scene with a 64x64 light map texture (and no terrain texture):
http://home.iprimus.com.au/cragwolf/3dpics/nozoom.jpg

Here’s a screenshot with the zoomed 512x512 light map texture:
http://home.iprimus.com.au/cragwolf/3dpics/zoom.jpg

What do you think? I’m sure someone else has thought of this idea before, but I’m wondering if there are any criticisms of the idea or maybe suggestions for improvement.

OK… I have no idea what Mach banding is, but if you get it anyway with both methods, it doesn’t matter anyway, right?

When using method no. 1 there is no need to draw in two passes: just modulate the vertex colors with your lightmap and use GL_MODULATE as the texture parameter, and that’s it.
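A minimal sketch of that one-pass approach, assuming immediate-mode rendering of a 64x64 heightfield with 0..1 light values; the variable names (terrainTex, lightmap, heightfield) are placeholders:

```c
#include <GL/gl.h>

#define MAP_SIZE 64

/* Assumed to be filled in elsewhere by the application. */
extern GLuint terrainTex;                        /* terrain texture object */
extern float  lightmap[MAP_SIZE * MAP_SIZE];     /* 0..1 light values      */
extern float  heightfield[MAP_SIZE * MAP_SIZE];  /* terrain heights        */

/* One-pass version of method 1: the terrain texture is modulated by
   per-vertex colours taken from the lightmap. */
void draw_terrain_one_pass(void)
{
    int x, z;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, terrainTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    for (z = 0; z < MAP_SIZE - 1; ++z) {
        glBegin(GL_TRIANGLE_STRIP);
        for (x = 0; x < MAP_SIZE; ++x) {
            int row;
            for (row = 0; row < 2; ++row) {
                int   zz = z + row;
                float l  = lightmap[zz * MAP_SIZE + x];
                glColor3f(l, l, l);
                glTexCoord2f((float)x  / (MAP_SIZE - 1),
                             (float)zz / (MAP_SIZE - 1));
                glVertex3f((float)x,
                           heightfield[zz * MAP_SIZE + x],
                           (float)zz);
            }
        }
        glEnd();
    }
}
```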

When using the second method, if a larger lightmap looks better, then obviously it’s better to use one. Maybe it’s best to then calculate a 512x512 lightmap directly instead of a 64x64 lightmap, rather than zooming the small one.

Crazy situation anyway… have you ever thought about simply using normal vectors instead of a lightmap?

Jan

Originally posted by JanHH:
OK… I have no idea what Mach banding is, but if you get it anyway with both methods, it doesn’t matter anyway, right?

Well, it does matter when the second one, suitably modified (with the zoom), has significantly less Mach banding.

When using method no. 1 there is no need to draw in two passes: just modulate the vertex colors with your lightmap and use GL_MODULATE as the texture parameter, and that’s it.

Ah, of course, I’m so silly! Thanks for that tip.

When using the second method, if a larger lightmap looks better, then obviously it’s better to use one. Maybe it’s best to then calculate a 512x512 lightmap directly instead of a 64x64 lightmap, rather than zooming the small one.

That’s a possibility. But then I would need to interpolate height values in between vertices. Sort of generate a 512x512 heightfield from a 64x64 heightfield, but only use the 512x512 one for the lightmap. Sounds interesting.
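A minimal sketch of that kind of height interpolation (bilinear here for brevity; a bicubic kernel would match the lightmap zoom better), with placeholder names:

```c
/* Bilinearly sample a size x size heightfield at a fractional position,
   e.g. for building a 512x512 lightmap without adding terrain vertices.
   u and v are in [0,1] across the terrain. */
float sample_height(const float *hf, int size, float u, float v)
{
    float fx = u * (size - 1);
    float fy = v * (size - 1);
    int   x0 = (int)fx,        y0 = (int)fy;
    int   x1 = x0 + 1 < size ? x0 + 1 : x0;
    int   y1 = y0 + 1 < size ? y0 + 1 : y0;
    float tx = fx - x0,        ty = fy - y0;

    float h00 = hf[y0 * size + x0], h10 = hf[y0 * size + x1];
    float h01 = hf[y1 * size + x0], h11 = hf[y1 * size + x1];

    float top    = h00 + (h10 - h00) * tx;
    float bottom = h01 + (h11 - h01) * tx;
    return top + (bottom - top) * ty;
}
```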

Crazy situation anyway… have you ever thought about simply using normal vectors instead of a lightmap?

You mean in association with OpenGL lighting? Oh yeah, I already tried that. Still get the Mach banding (if that’s what it’s called!). I’m just experimenting at the moment and learning OpenGL in the process. But perhaps you are thinking of something more complicated, like per-pixel lighting (and some sort of Phong model)?

Thanks again for your help.

Well, but what is Mach banding? Never heard of that; maybe you can describe what it looks like? Do you have a picture? In general, normal OpenGL lighting should look fine. Per-pixel lighting is surely great, but a) when rendering a terrain it will probably look the same as vertex lighting, and b) I would not recommend messing with that when you’re just learning OpenGL.

See this thread:
http://www.opengl.org/discussion_boards/ubb/Forum2/HTML/012535.html

Mach banding, as I understand it, occurs when you get abrupt changes in intensity across an image, and the human eye picks it up a little too eagerly, because we’ve evolved to detect edges and lines. It shows up when you use per-vertex lighting on polygon meshes, but textures tend to lessen its impact, often significantly. So I’m assuming that the pictures above show this Mach banding.

It’s not a major problem for me, actually, especially when I add textures to the above scene. But it’s neat to learn ways to combat it. Here’s the above scene with a 64x64 lightmap, but with a texture applied to the terrain.
http://home.iprimus.com.au/cragwolf/3dpics/nozoomtex.jpg

The Mach banding has virtually disappeared. But if I had used a lighter texture, like a snow texture, then it would have still been noticeable. So for some textures, it will still be a problem.

Of course, one could simply add more vertices. Create a 256x256 heightfield, for example. Then the banding would be far less noticeable. But that’s also 16 times more triangles to draw. As almost always, there are pros and cons to consider in any solution.

My guess is that if you generate mip maps for your 64x64 texture (which will take MUCH less space than 512x512) and then turn on anisotropic filtering with a max value of 8 or so, the banding will also go away.

Using a bigger texture gives you better resolution, for sure. Whether you get this bigger texture by computing every texel, or computing fewer texels and filtering, is entirely up to your taste, of course. I don’t think anyone would claim that’s a novel idea, though :)
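For reference, a minimal sketch of the mipmap-plus-anisotropy setup being suggested, assuming GL_EXT_texture_filter_anisotropic is present (it should really be checked in the extension string first); the function name is made up and the lightmap is assumed to be 64x64 luminance bytes here:

```c
#include <GL/gl.h>
#include <GL/glu.h>

#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#endif

/* Build a mipmapped 64x64 luminance lightmap texture and enable
   anisotropic filtering (requires GL_EXT_texture_filter_anisotropic). */
GLuint create_lightmap_texture(const unsigned char *pixels /* 64*64 bytes */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_LUMINANCE, 64, 64,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);

    return tex;
}
```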

Mach banding is more of a magnification issue; I don’t see how anisotropic filtering could help with that. It would also probably slow things down more than a bigger texture would, so if you can afford the bigger texture, I’d go with it.

However, instead of applying a cubic filter to a low-res lightmap, I’d rather interpolate the normals and calculate a high-resolution lightmap from them. In theory that would make it more correct, although the actual difference may be unnoticeable. Actually, since your way seems to work fine, why change it?

-Ilkka

Originally posted by JustHanging:
However, instead of applying a cubic filter to a low-res lightmap, I’d rather interpolate the normals and calculate a high-resolution lightmap from them.

Ah, now that’s a bloody good idea.

Actually, since your way seems to work fine, why change it?

Just to learn different methods. Thanks for the idea.

Well, it’s hardly my idea, but thanks. Remember to normalize the normals after interpolation; otherwise the bands will still be there.

-Ilkka
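
For completeness, here is a minimal sketch of the normal-interpolation idea: bilinearly interpolate the per-vertex normals, renormalize, and take a simple N dot L diffuse term per lightmap texel. The types, names, and the very simple light model are all assumptions for illustration:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Normalize in place; interpolated normals come out shorter than unit
   length, which is presumably why the bands come back without this step. */
static void normalize(Vec3 *v)
{
    float len = (float)sqrt(v->x*v->x + v->y*v->y + v->z*v->z);
    if (len > 0.0f) { v->x /= len; v->y /= len; v->z /= len; }
}

/* Build a dstSize x dstSize lightmap from a srcSize x srcSize grid of
   per-vertex normals: bilinearly interpolate the normals, renormalize,
   then take a simple directional diffuse term (N . L). */
void build_lightmap_from_normals(const Vec3 *normals, int srcSize,
                                 float *lightmap, int dstSize,
                                 Vec3 lightDir /* unit vector toward light */)
{
    int x, y;
    for (y = 0; y < dstSize; ++y) {
        float fy = (float)y * (srcSize - 1) / (dstSize - 1);
        int   y0 = (int)fy;
        int   y1 = y0 + 1 < srcSize ? y0 + 1 : y0;
        float ty = fy - y0;
        for (x = 0; x < dstSize; ++x) {
            float fx = (float)x * (srcSize - 1) / (dstSize - 1);
            int   x0 = (int)fx;
            int   x1 = x0 + 1 < srcSize ? x0 + 1 : x0;
            float tx = fx - x0;

            const Vec3 *n00 = &normals[y0*srcSize + x0];
            const Vec3 *n10 = &normals[y0*srcSize + x1];
            const Vec3 *n01 = &normals[y1*srcSize + x0];
            const Vec3 *n11 = &normals[y1*srcSize + x1];
            Vec3 n;

            n.x = (n00->x*(1-tx) + n10->x*tx)*(1-ty) + (n01->x*(1-tx) + n11->x*tx)*ty;
            n.y = (n00->y*(1-tx) + n10->y*tx)*(1-ty) + (n01->y*(1-tx) + n11->y*tx)*ty;
            n.z = (n00->z*(1-tx) + n10->z*tx)*(1-ty) + (n01->z*(1-tx) + n11->z*tx)*ty;
            normalize(&n);

            {
                float diffuse = n.x*lightDir.x + n.y*lightDir.y + n.z*lightDir.z;
                if (diffuse < 0.0f) diffuse = 0.0f;
                lightmap[y*dstSize + x] = diffuse;
            }
        }
    }
}
```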