cubemap limitations?

Hi

I just read those “Abducted Engine” articles (there is a link to them on the main page).
Nice articles, but not much new for me.
However, in one section the author mentions a problem I never encountered, and it sounds very strange:

"Do not make the mistake of normalizing the vertex->light vector. There are several reasons for this:

If you intend on using a cube-map to renormalize a vector, then unit length vectors don’t work. For whatever reason, passing in a unit length or sub unit length vector as the texture coordinates on a cube map are not guaranteed to work, so make sure they aren’t normalized."

Did any of you ever encounter this problem? Is there any plausible explanation for why this should not work?

I’m just curious whether there is some hardware limitation I don’t know of.

Jan.

Yes, I ran into this. I didn’t quite understand why at first.

It’s to do with the lerping between the 3 vectors on a tri. If you pre-normalize them, then the lerp basically looks different.

Hm, sounds logical. Could be that that’s what he means, too.
I first understood it as if I would get an undefined result if I tried to access the cube map with, for example, a texcoord of (1,0,0), which would be strange.
But certainly it’s really an interpolation issue.

Thanks,
Jan.

I’ve always ‘justified’ it in my brain thus:-
If you sweep a normalised 3d vector through all possible quadrants you will be describing a sphere. A cube map describes a cube - each corner of the cube is further away from the centre than 1 unit. Therefore if you try to access a texel in the far corner with a unit length vector you get some mad result.
It works for me

You think that’s a logical and justified explanation?

I mean, cube maps are not handled like 3D textures, with a void inside or outside the texture. The length of the texcoord vector should not matter.

But as we all know, a genius mind is usually a bit crazy. So if it works for you, that’s OK.

Jan.

Consider two expressions in a vertex shader:
(a) light_position - vertex_position
(b) normalize(light_position - vertex_position)

Expression (a) is a linear function of the vertex attributes. Thus, if you evaluate this function at some points (the vertices) and then linearly interpolate the results (which is what happens per fragment), the resulting function exactly matches the original one, because it was linear. Interpolation is perfect in this case, and you get what you asked for.

Expression (b) is a non-linear function, so any linear interpolation of it must differ from the original one. That difference depends on the placement of the sample points (the tessellation). The result is that the interpolated value you get per fragment is different from the one you expected (except at the vertices), which causes distortions in texturing or lighting.
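
A small numerical check of this, as a plain C sketch (the light and vertex positions are made up for illustration): interpolate (a) and (b) halfway along one edge and compare the resulting directions with the true fragment-to-light direction.

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b)
{
    vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static vec3 lerp(float w, vec3 a, vec3 b)
{
    vec3 r = { (1 - w) * a.x + w * b.x,
               (1 - w) * a.y + w * b.y,
               (1 - w) * a.z + w * b.z };
    return r;
}

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

int main(void)
{
    vec3  L  = {  0.0f, 1.0f, 0.0f };  /* light position (made up)        */
    vec3  V0 = { -1.0f, 0.0f, 0.0f };  /* vertex close to the light       */
    vec3  V1 = { 10.0f, 0.0f, 0.0f };  /* vertex far from the light       */
    float w  = 0.5f;                   /* fragment halfway along the edge */

    /* (a): interpolate the unnormalized vectors, normalize per fragment */
    vec3 a = normalize3(lerp(w, sub(L, V0), sub(L, V1)));

    /* (b): normalize per vertex, then interpolate */
    vec3 b = normalize3(lerp(w, normalize3(sub(L, V0)), normalize3(sub(L, V1))));

    /* reference: direction from the interpolated position to the light */
    vec3 e = normalize3(sub(L, lerp(w, V0, V1)));

    printf("(a)       %+.3f %+.3f %+.3f\n", a.x, a.y, a.z);
    printf("(b)       %+.3f %+.3f %+.3f\n", b.x, b.y, b.z);
    printf("reference %+.3f %+.3f %+.3f\n", e.x, e.y, e.z);
    return 0;
}

With these values, (a) reproduces the reference direction exactly, while (b) comes out as roughly (-0.34, 0.94, 0) instead of (-0.98, 0.22, 0), because the two vertex-to-light vectors have very different lengths.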

Normalizing the texcoord in the fragment shader before the cube map lookup doesn’t change the result of the lookup (this is where knackered’s explanation fails).

This looks like part of the fundamental math of polygon-based graphics, yet I admit I was unaware of it until enlightenment came upon me when Carmack talked about it in his (still latest) .plan.

Jeez, everyone’s a critic.
Just been reading through the spec, and trying to visualise this:-

“The (s,t,r) texture coordinates are treated as a direction vector emanating from the center of a cube. At texture generation time, the interpolated per-fragment (s,t,r) selects one cube face 2D image based on the largest magnitude coordinate (the major axis). A new 2D (s,t) is calculated by dividing the two other coordinates (the minor axes values) by the major axis value. Then the new (s,t) is used to lookup into the selected 2D texture image face of the cube map”

Originally posted by knackered:
Jeez, everyone’s a critic.
Just been reading through the spec, and trying to visualise this:

Well, you’ve got a vector of arbitrary length, emanating from the center of a unit cube. First, you find the largest of X, Y and Z (looking at absolute values). This tells you which cube face the vector points at.

Now you divide the vector by the absolute value of its largest component. What happens now is that the largest component becomes +1 or -1, so the end of your vector is exactly on the selected side of the cube. Your vector still points in the same direction as before, because you divide the other two components as well. These two components (which are now in [-1, 1] range) can be used as texcoords for the selected face of the cube by scaling and biasing them into [0, 1] range.

Clear as mud?
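
A rough plain C sketch of that selection math (this is not the spec’s code; the real per-face sign conventions are left out, so only the major-axis pick, the divide, and the scale/bias into [0,1] are shown):

#include <math.h>
#include <stdio.h>

/* Returns a face index (0..5 meaning +X, -X, +Y, -Y, +Z, -Z) and writes the
   face-local texture coordinates, mapped into [0,1], to *s and *t. */
static int cube_lookup(float x, float y, float z, float *s, float *t)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    float major, u, v;
    int   face;

    /* pick the largest-magnitude component: that selects the cube face */
    if (ax >= ay && ax >= az)      { major = ax; u = y; v = z; face = (x > 0.0f) ? 0 : 1; }
    else if (ay >= ax && ay >= az) { major = ay; u = x; v = z; face = (y > 0.0f) ? 2 : 3; }
    else                           { major = az; u = x; v = y; face = (z > 0.0f) ? 4 : 5; }

    /* dividing by the major axis puts the minor coordinates in [-1,1];
       scale and bias then map them into [0,1] for the selected face */
    *s = 0.5f * (u / major) + 0.5f;
    *t = 0.5f * (v / major) + 0.5f;
    return face;
}

int main(void)
{
    float s, t;
    int   face;

    /* the same direction at two different lengths hits the same texel */
    face = cube_lookup(1.0f, 0.2f, 0.3f, &s, &t);
    printf("face %d  s %.3f  t %.3f\n", face, s, t);

    face = cube_lookup(2.0f, 0.4f, 0.6f, &s, &t);
    printf("face %d  s %.3f  t %.3f\n", face, s, t);
    return 0;
}

Note that scaling the input vector by a positive factor changes neither the selected face nor (s,t), since the divide by the major axis cancels it. That is why normalizing (or not) right before the lookup makes no difference.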

– Tom

[This message has been edited by Tom Nuydens (edited 08-07-2003).]

So, if I understand what you’re saying, MZ, what one should do is either:

1: Don’t normalize the vector before sending it off from the vertex shader. Then, after interpolation, it can be normalized.

2: Normalize the vector, but use some non-existent interpolation scheme that is non-linear. The result should then still be normalized.

The question is, does option 1 produce the correct result? That is, is the vector, post-normalization, the same as what you would get if you used scheme #2?

Yes, the unnormalized vertex-to-light vector continues to point to the light during interpolation. To prove it, do the math: interpolating two vertex-to-light vectors is the same as interpolating the two vertices first and calculating the vertex-to-light vector based on that.

LERP(w, (L-V0), (L-V1))
  = (1-w)*(L-V0) + w*(L-V1)
  = (1-w)*L - (1-w)*V0 + w*L - w*V1
  = L - (1-w)*V0 - w*V1
  = L - ((1-w)*V0 + w*V1)
  = L - LERP(w, V0, V1)

I’ll leave it to you to figure out what goes wrong if you normalize the vectors before interpolation

– Tom

Tom & MZ are right. This has been discussed on OpenGL.org before; it’s really not that hard.

Basically, the interpolation post-normalization doesn’t track the correct light position. Simple, VERY simple.

It’s NOT a slerp vs. lerp issue either (it’s been mistakenly called that before): any illuminated polygon describes a planar section through a spherical field surrounding the light, so you need to know the local light position to interpolate correctly.

If you get it wrong you see strange distortions of the light that are affected by the tessellation of the geometry; one solution is to tessellate the geometry some more and produce intermediate values.

The problem with requiring the unnormalized vector is that until recently you didn’t have the precision or range to store or interpolate it. You had to trade normal precision for lerp accuracy. Bearing in mind that you had to send these in as limited-precision color vector triplets, it’s a good thing that you don’t actually HAVE to preserve the unnormalized light vectors; you just have to make sure that the vectors are proportionate in length to the original unnormalized ones. Unfortunately, using shorter normals can screw up precision. So basically, when generating your to-light vectors, you make sure that you divide them all by the same factor when bringing them into range. That was the old way, of course (and you had to make sure that one vector wasn’t radically shorter than any other); on newer hardware you have better options.
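
A rough plain C sketch of that old-style packing (the per-vertex to-light vectors here are made up, and using the longest of the three as the common divisor is just one plausible choice):

#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

int main(void)
{
    /* to-light vectors for the three vertices of one triangle (made up) */
    vec3 tolight[3] = { {  1.0f, 2.0f, 0.5f },
                        { -3.0f, 4.0f, 1.0f },
                        {  0.5f, 0.5f, 6.0f } };
    float maxlen = 0.0f;
    int   i;

    /* find the longest vector; it becomes the common divisor */
    for (i = 0; i < 3; i++) {
        float len = sqrtf(tolight[i].x * tolight[i].x +
                          tolight[i].y * tolight[i].y +
                          tolight[i].z * tolight[i].z);
        if (len > maxlen) maxlen = len;
    }

    /* divide every vector by the SAME factor (not each by its own length),
       then scale/bias into [0,1] so it fits in a color interpolator */
    for (i = 0; i < 3; i++) {
        float r = 0.5f * (tolight[i].x / maxlen) + 0.5f;
        float g = 0.5f * (tolight[i].y / maxlen) + 0.5f;
        float b = 0.5f * (tolight[i].z / maxlen) + 0.5f;
        printf("vertex %d color: %.3f %.3f %.3f\n", i, r, g, b);
    }
    return 0;
}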

If you get smart about computing your to-light vector as a spherical field around the light, then you don’t have any of these issues even on older hardware.

The same issues apply to view vectors & any other interpolated vector from a point source.

P.S. turning off perspective correction correctly interpolates view vectors (interesting!).

[This message has been edited by dorbie (edited 08-07-2003).]