View-Dependent Bump Mapping

It struck me that there are significant surface self-occlusion effects that depend on both the light vector and the view vector relative to the surface. These are not taken into account in typical normal mapping techniques (which are the only bump mapping techniques I’m interested in these days :slight_smile: )

I wrote up a suggestion for how to take this effect into account; I’d appreciate it if someone had a moment to read through it and find all the obvious flaws, not to mention all the previous research which I’m sure exists but which I couldn’t find when looking.
http://www.b500.com/~hplus/vdbump.html
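(A minimal sketch of the kind of view-dependent perturbation being proposed here, under the assumption of a simple “tilt the shading normal toward the viewer by a roughness factor” formulation; the write-up at the link may use a different weighting, and every name below is made up for illustration.)

```cpp
// Hypothetical sketch, not necessarily the exact formulation on the linked page:
// tilt the shading normal toward the view vector by a "roughness" amount, so
// that N.L picks up a view-dependent term -- brighter when you look away from
// the light (you see the lit sides of micro-bumps), darker when you look into it.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)      { float l = std::sqrt(dot(v, v)); return scale(v, 1.0f / l); }

// n        : per-pixel normal from the normal map (unit length)
// toLight  : unit vector from the surface point toward the light
// toViewer : unit vector from the surface point toward the camera
// roughness: 0 = plain normal mapping, larger = stronger view-dependent effect
float viewDependentDiffuse(Vec3 n, Vec3 toLight, Vec3 toViewer, float roughness)
{
    Vec3 nPrime = normalize(add(n, scale(toViewer, roughness)));
    float d = dot(nPrime, toLight);
    return d > 0.0f ? d : 0.0f;   // clamp, as for ordinary diffuse
}
```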

Originally posted by jwatte:
It struck me that there are significant surface self-occlusion effects that depend on both the light vector and the view vector relative to the surface.

By that, do you mean that bumps don’t hide bumps behind them?

The way I see it… who cares? Bump mapping can’t get the edges right anyway, simply due to the limitations of the technique. Why bother making a near-edge view look more correct when the person seeing the image can (quite likely, given that he’s looking nearly edge-on) see the edge anyway? Also, you’re going to be dropping texture LODs thanks to the edge-on view, so the bump mapping is going to be less detailed regardless.

Don’t you mean self-shadowing? And in what situation is that view-dependent? I don’t get it…

Okay, got it now…

Better to implement bumps shadowing each other with horizon mapping, but your idea is neat…

But you only take lighting into account, right? Think about it: the most view-dependent thing is a reflection. If you look not at your sand but at your sea, you’d notice the same thing happens there, and since you normally look at water from a flat angle, and the sea has ripples, that’s where you have a) the biggest problem and b) a standard situation (which happens for every sea/lake/ocean you want to do in a game)…

That’s why those water demos with env-bump look so artificial… that, and the lack of real reflection displacement depending on the distance to the geometry the reflection hits… (which is easy to approximate if you copy the color and the depth of the reflection and subtract your own depth… at least a good approximation; see the sketch below…)

[This message has been edited by davepermen (edited 11-27-2002).]
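(A rough sketch of one reading of davepermen’s depth-difference suggestion; the function and parameter names here are hypothetical, and a real version would live in the water shader rather than on the CPU.)

```cpp
// Sketch of the suggestion as I read it: render the reflection to a texture
// along with its depth, then scale the ripple-induced lookup offset by how far
// the reflected geometry is behind the water surface. An approximation only.
struct Vec2 { float x, y; };

Vec2 reflectionOffset(Vec2 rippleNormalXY,   // xy of the water normal-map sample
                      float reflectionDepth, // depth stored with the reflection texture
                      float surfaceDepth,    // depth of the water surface at this pixel
                      float strength)        // artistic scale factor
{
    // Far-away reflected geometry gets displaced more by the ripple than
    // geometry right at the waterline.
    float parallax = reflectionDepth - surfaceDepth;
    if (parallax < 0.0f) parallax = 0.0f;
    return { rippleNormalXY.x * strength * parallax,
             rippleNormalXY.y * strength * parallax };
}
```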

I took a shot at it: http://www.delphi3d.net/misc/images/jwatte_bump_comp.jpg

The top image is the original approach, the bottom one includes jwatte’s view-dependent normal perturbation. As you can see, the area behind the light looks bright, whereas the parts in front of the light have been darkened.

I should really put in an extra normalization cube map, though, because right now the perturbation vectors are calculated per vertex and interpolated, but not normalized per pixel. It shows.

– Tom
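(For reference, a sketch of how such a normalization cube map could be built on the CPU: each texel stores the unit vector pointing toward it, so a cube-map lookup with an unnormalized interpolated vector returns it renormalized. The face orientation follows the usual GL cube-map convention, and the function name is made up.)

```cpp
// Build one face of a "normalization cube map": every texel holds the
// normalized direction toward that texel, packed into RGB bytes.
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> buildNormalizationFace(int face, int size)
{
    std::vector<uint8_t> rgb(size * size * 3);
    for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x) {
        // Texel center in [-1, 1] on the face plane.
        float s = 2.0f * (x + 0.5f) / size - 1.0f;
        float t = 2.0f * (y + 0.5f) / size - 1.0f;
        float v[3];
        switch (face) {            // +X, -X, +Y, -Y, +Z, -Z
            case 0:  v[0] =  1; v[1] = -t; v[2] = -s; break;
            case 1:  v[0] = -1; v[1] = -t; v[2] =  s; break;
            case 2:  v[0] =  s; v[1] =  1; v[2] =  t; break;
            case 3:  v[0] =  s; v[1] = -1; v[2] = -t; break;
            case 4:  v[0] =  s; v[1] = -t; v[2] =  1; break;
            default: v[0] = -s; v[1] = -t; v[2] = -1; break;
        }
        float len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        for (int c = 0; c < 3; ++c) {
            float n = v[c] / len;                       // unit component in [-1, 1]
            rgb[(y * size + x) * 3 + c] =
                static_cast<uint8_t>((n * 0.5f + 0.5f) * 255.0f + 0.5f); // pack to [0, 255]
        }
    }
    return rgb;
}
```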

It sounds original to me.

Sweet, Tom! The floor tiles at the bottom of the image were actually helped a fair bit by this. I just got drivers that claim hardware ARB_fragment_program support, so I’ll soon prototype it myself :slight_smile:

davepermen, you actually want the viewer angle in there. If you look at a field of golf balls, say, from the same direction the light comes from, you’ll see only the lit sides of the balls. If you don’t move the light or the balls, but move yourself so that you’re looking at the balls towards the light, you’ll see mostly the shadowed sides. This is because of view-dependent self-occlusion of the “surface” of a field of golf balls.

Korval, thanks for the comments. I agree that my suggestion is basically a hack, but I think it has a chance of making some of the worst “it’s really flat” artifacts go away, especially for bumps that are small in relation to the surface. You could actually use this purely for anisotropic lighting to simulate surface roughness, no bumps required :slight_smile: Also, the beach scenario I talk about is a fairly vast, “locally bumpy” surface, and the view-dependent lighting is clearly visible even at not-so-glancing angles, so I think it might be worth adding another hack on top of the first hack to make it potentially look a little better.

Tom again: are you sure you got the direction of the offsets right? The rough square on the upper-right side of the image should become brighter with the technique I suggest, I think, since you’re looking at it in mostly the same direction as the light is shining at it. Perhaps these are just interpolation artifacts, as you say.

While I agree with jwatte’s observation, I think the trick will only work on high-frequency bump maps.
In places where the surface is locally flat (as with the bumps in Tom’s pictures) the occlusion effect should not occur (this is why the pics don’t look convincing to me). I think that’s because the perturbation relies purely on the FLAT normal.
I don’t have much of a constructive idea, but maybe you should try to somehow include a dependence on the normal-map vector in the equation (not just perturb it, but perturb it depending also on its direction).
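(One possible reading of MZ’s suggestion, sketched below as an assumption rather than his actual proposal: scale the perturbation strength by how far the normal-map normal tilts away from the flat surface normal, so locally flat areas get no view-dependent darkening at all.)

```cpp
// Hypothetical helper: make the view-dependent perturbation depend on the
// normal-map vector itself, not just the flat surface normal.
float perturbationStrength(float baseRoughness,
                           float nDotFlat)   // dot(normal-map normal, flat surface normal)
{
    float tilt = 1.0f - nDotFlat;            // 0 where the map is flat, larger on bump slopes
    return baseRoughness * tilt;
}
```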

jwatte, you can see I edited the post; at first I didn’t get what you meant at all.

Sure, you want the eye vector in there…

Best would be some small raycasting over the bump map to do tiny displacement mapping (see the sketch below)… then you have what you want… find out where exactly the ray hits the surface, and then light from there…

That would be sort of embossing, actually… hehe…
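(A rough CPU-side sketch of the ray-marching idea above, assuming a simple fixed-step linear search through the height field in tangent space; heightAt and the other names are hypothetical, and a real version would live in a fragment program.)

```cpp
// Step along the view ray in tangent space until it drops below the height
// field, then shade using the texture coordinates at the hit point.
#include <cmath>

struct Hit { float u, v; };

// heightAt(u, v) is assumed to return the bump height in [0, 1] at those
// texture coordinates (e.g. sampled from the height map behind the normal map).
Hit rayMarchHeightField(float u, float v,                 // texcoords where the eye ray enters
                        float viewX, float viewY, float viewZ, // view dir in tangent space (z up; viewZ < 0)
                        float bumpScale,
                        float (*heightAt)(float, float))
{
    const int steps = 32;
    float rayHeight = 1.0f;                               // start at the top of the bump volume
    float du = (viewX / -viewZ) * bumpScale / steps;
    float dv = (viewY / -viewZ) * bumpScale / steps;
    float dh = 1.0f / steps;

    for (int i = 0; i < steps; ++i) {
        if (heightAt(u, v) >= rayHeight)                  // ray has dipped below the surface: hit
            break;
        u += du; v += dv; rayHeight -= dh;
    }
    return { u, v };                                      // sample the normal map here and light
}
```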

Originally posted by jwatte:
Tom again: are you sure you got the direction of the offsets right?

Heck no, that picture was basically a 15-minute hack, so the interpolation issue is unlikely to be the only bug in there.

That said, I agree with MZ that it doesn’t look 100% convincing. The effect is particularly obvious when you’re facing the light and you look up and down. If you look down, you can see the ground light up as you tilt your head.

– Tom

Tom: I think that the view vector is not the direction of the camera, but the vector that goes from the camera to the point you are lighting. So, it shouldn’t change when the camera rotates.

I made that same error when doing specular lighting, so sorry if it’s not the case. :wink:

One improvement I’ve been thinking about is to run a low-pass/high-pass filter combo on the normal map to come up with a measurement of “local bumpiness” (sketched after this post).

However, run this experiment. Go out in the afternoon (when the sun is low). Look at a lawn in the direction towards the sun. Turn around and look at the lawn in the direction away from the sun. Note that it’s much brighter in the second case.

Also, as castano said, the vector to use should be the per-pixel vector; using an infinite vector probably will give you all kinds of wavy lighting when just spinning in place :slight_smile:
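(A sketch of one way that “local bumpiness” measure could be computed offline: low-pass the normal map with a box filter, then treat the per-texel deviation from the smoothed normal, i.e. the high-pass residual, as the bumpiness that scales the view-dependent perturbation. The exact measure and the names here are assumptions, not part of the original proposal.)

```cpp
// Per-texel "local bumpiness": 0 where the map is locally flat, larger where
// the normal deviates strongly from its neighborhood average.
#include <cmath>
#include <vector>

struct N3 { float x, y, z; };

std::vector<float> localBumpiness(const std::vector<N3>& normals, int w, int h, int radius)
{
    std::vector<float> bump(normals.size());
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        // Box-filter (low-pass) the normals around this texel.
        float ax = 0, ay = 0, az = 0;
        for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            int sx = (x + dx + w) % w, sy = (y + dy + h) % h;  // wrap for tiling maps
            const N3& s = normals[sy * w + sx];
            ax += s.x; ay += s.y; az += s.z;
        }
        float len = std::sqrt(ax * ax + ay * ay + az * az);
        const N3& n = normals[y * w + x];
        // High-pass residual: how far this normal is from the local average direction.
        float d = (n.x * ax + n.y * ay + n.z * az) / (len > 0.0f ? len : 1.0f);
        if (d > 1.0f) d = 1.0f; else if (d < -1.0f) d = -1.0f;
        bump[y * w + x] = 1.0f - d;                            // 0 = flat, bigger = bumpier
    }
    return bump;
}
```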

Originally posted by castano:
Tom: I think that the view vector is not the direction of the camera, but the vector that goes from the camera to the point you are lighting. So, it shouldn’t change when the camera rotates.

I made that same error when doing specular lighting, so sorry if it’s not the case. :wink:

That’s what I originally did, and that’s what you would do for specular, but it wouldn’t work here. The idea is that things turn dark if you’re looking towards the light, and bright if you’re looking away from it. That means that if the light is in front of you, things in front of you should be dark and things behind you should be bright. You can only model that using the view direction.

Right?

– Tom

Yeah, view direction, but from the camera to the point; what’s wrong with that? When doing raytracing, the view vector is just the direction of the ray; why is that different on a polygon rasterizer? It would also model the effect John proposed, but without the camera tilts you described. Imagine, for example, a 360-degree camera: you wouldn’t use the camera direction, right? So why would you do it with a 90-degree one?

Tom: you are correct, you want to use the view direction. The view direction == the normalized vector from fragment to camera. Note that this vector (in world space) does not change if you just rotate the camera.
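(A tiny illustration of that point, with made-up names: the per-fragment view vector is built from positions only, so rotating the camera in place leaves it unchanged.)

```cpp
// View vector = normalized vector from the surface point to the camera,
// in world space. Camera orientation never enters the computation.
#include <cmath>

struct V3 { float x, y, z; };

V3 viewVector(V3 cameraPos, V3 surfacePos)
{
    V3 v = { cameraPos.x - surfacePos.x,
             cameraPos.y - surfacePos.y,
             cameraPos.z - surfacePos.z };
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
```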

D’oh! ::runs off and hides in shame::

This is all very obvious, of course. Serves me right for trying to write code in the morning while still half asleep.

I’ll fix up my code after the weekend, solve the denormalization problem while I’m at it, and then we’ll see what happens.

– Tom