Specular 'warping' effect

Hi,

I’ve nearly got my per-pixel specular term complete. Currently it computes:

4 * ((N dot H’)^2 - 0.75) * attenuation * gloss * light filter * light color

which gives a nice specular term. The problem, though, is that as you move around the scene, the specular highlight sort of warps around polygon edges.
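For reference, here is a small CPU-side sketch (names are mine, nothing from the engine) of how that 4 * ((N dot H)^2 - 0.75) falloff behaves next to an ordinary power term (I picked (N dot H)^8 purely for comparison); the max() stands in for the hardware clamp:

// Compare the sharpened quadratic falloff used above with a plain power falloff.
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    for (float NdotH = 0.80f; NdotH <= 1.001f; NdotH += 0.05f)
    {
        const float sharpened = std::max(0.0f, 4.0f * (NdotH * NdotH - 0.75f));
        std::printf("N.H = %.2f   4*((N.H)^2 - 0.75) = %.3f   (N.H)^8 = %.3f\n",
                    NdotH, sharpened, std::pow(NdotH, 8.0f));
    }
    return 0;
}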

I am using a normalization cube map to normalize the H vector.

I think the problem is the transformation of the H vector into tangent space. Here is the code that computes a polygon’s orthonormal basis:

void Polygon3d::calculateONB()
{
    // Two edges of the polygon in object space...
    const Vec3 v1 = vertex(0) - vertex(1);
    const Vec3 v2 = vertex(2) - vertex(1);

    // ...and the corresponding edges in texture (u, v) space.
    const Vec3 t1 = Vec3(vertex(0).u(), vertex(0).v(), 0.0f) - Vec3(vertex(1).u(), vertex(1).v(), 0.0f);
    const Vec3 t2 = Vec3(vertex(2).u(), vertex(2).v(), 0.0f) - Vec3(vertex(1).u(), vertex(1).v(), 0.0f);

    // For each object-space axis, take the plane through the two edges in
    // (axis, u, v) space; its normal encodes d(axis)/du and d(axis)/dv.
    const Vec3 ddx = Vec3(v1.x(), t1.x(), t1.y()).cross(Vec3(v2.x(), t2.x(), t2.y()));
    const Vec3 ddy = Vec3(v1.y(), t1.x(), t1.y()).cross(Vec3(v2.y(), t2.x(), t2.y()));
    const Vec3 ddz = Vec3(v1.z(), t1.x(), t1.y()).cross(Vec3(v2.z(), t2.x(), t2.y()));

    // Tangent = (dx/du, dy/du, dz/du), binormal = (dx/dv, dy/dv, dz/dv).
    m_onb.tangent()  = Vec3(-ddx.y() / ddx.x(), -ddy.y() / ddy.x(), -ddz.y() / ddz.x()).normalized();
    m_onb.binormal() = Vec3(-ddx.z() / ddx.x(), -ddy.z() / ddy.x(), -ddz.z() / ddz.x()).normalized();
    m_onb.normal()   = plane().n();
}
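For comparison, the same tangent and binormal can also be obtained by solving the 2x2 texture-gradient system directly. This is only a sketch, using the Vec3 operations already visible above; computeTangentBasis and its parameter names are mine, not part of the engine:

// Sketch: closed-form solve for T and B given two edges (e1, e2) and their
// texture-space deltas (du1, dv1) and (du2, dv2). Degenerate UVs (determinant
// near zero) are left unhandled here.
void computeTangentBasis(const Vec3& e1, const Vec3& e2,
                         float du1, float dv1,
                         float du2, float dv2,
                         Vec3& tangent, Vec3& binormal)
{
    const float r = 1.0f / (du1 * dv2 - du2 * dv1);   // 1 / determinant

    tangent = Vec3((e1.x() * dv2 - e2.x() * dv1) * r,
                   (e1.y() * dv2 - e2.y() * dv1) * r,
                   (e1.z() * dv2 - e2.z() * dv1) * r).normalized();

    binormal = Vec3((e2.x() * du1 - e1.x() * du2) * r,
                    (e2.y() * du1 - e1.y() * du2) * r,
                    (e2.z() * du1 - e1.z() * du2) * r).normalized();
}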

The H vector is then calculated per vertex as:

Vec3 halfAngle = (lightPos - poly->vertex(j)).normalized() + ((cameraPos - poly->vertex(j)).normalized());

and transformed into tangent space like this:

s = halfAngle.dot(poly->onb().tangent());
t = halfAngle.dot(poly->onb().binormal());
r = halfAngle.dot(poly->onb().normal());

s, t, and r are then used as texture coordinates for the normalization cube map.
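Those three dot products are just a multiply by the transpose of the tangent/binormal/normal basis; as a sketch, with OrthonormalBasis standing in for whatever poly->onb() returns:

// Sketch: transform a world-space vector into the polygon's tangent space.
// Equivalent to the three dot products above, assuming tangent, binormal and
// normal form an orthonormal basis for the face.
Vec3 worldToTangent(const Vec3& v, const OrthonormalBasis& onb)
{
    return Vec3(v.dot(onb.tangent()),    // s
                v.dot(onb.binormal()),   // t
                v.dot(onb.normal()));    // r
}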

If someone could shed some light (pun intended) on the situation, I would be most grateful. This is the last little bit of the lighting equation I have left to finish before moving on to bigger and better things! :)

The diffuse bump mapping and attenuation look fine, i.e. they don’t have any obvious artifacts.

Thank you,
Richard

[This message has been edited by FatalXC (edited 10-11-2002).]

You don’t seem to normalize the halfAngle. Also, since the halfAngle is interpolated, you’ll need a decent amount of tessellation for it to look good.

I guess that’s already done by the normalization cube map; he doesn’t need to do it on the CPU, does he?

Y.

The correct solution would be to calculate to_light and to_eye per pixel, normalize them per pixel, sum them per pixel, and normalize the resulting halfangle per pixel. If you don’t do this, you get wrong directions for the halfangle (even though you normalize via the cube map), which results in wrong specular regions; that thing moves funny over the surface (warping?)

The quick’n’dirty halfangle-per-vertex approach only works for highly tessellated geometry, otherwise it fails. I even drew an image one day to show the wrong direction visually, but… a) I’m not at home and b)… it was some time ago, that file is surely deleted…

The direction is much more important than the length. An incorrect length would dim the specular; an incorrect direction moves it (depending on the size of the faces it can move half the screen! hehe)
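Put as a reference sketch in the same Vec3 style as the original code (this is only a statement of what the per-pixel math should be; GF1/2 can’t actually evaluate it per fragment):

// Reference: the correct half angle at a surface point p. Interpolating the
// per-vertex version of this (and only normalizing via the cube map) is what
// produces the warping.
Vec3 halfAngleAt(const Vec3& p, const Vec3& lightPos, const Vec3& cameraPos)
{
    const Vec3 toLight = (lightPos  - p).normalized();   // normalize per pixel
    const Vec3 toEye   = (cameraPos - p).normalized();   // normalize per pixel
    return (toLight + toEye).normalized();               // renormalize the sum
}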

Yep, this is the tradeoff of pre- vs. post-interpolation normalization, and of course vector precision.

Ideally you want to store the vectors with lots of precision and normalize post-interpolation. Unfortunately, hardware is not ideal.

You can try to manage your vector lengths to improve some of the interpolation: for example, normalize the vectors on a primitive with the same length factor so they interpolate correctly, then post-interpolation normalize to unit length, but the shorter vector loses directional precision. It’s quite nasty. You can get creative with 3D vector textures and solve some of these problems, but then you’re not using color interpolators for your problem vectors (typically local light position and view). There are other approaches I’ve concluded would help, for example a perspective interpolation warp (call it a color W coordinate, which would completely solve this), and, for the view vector, just turning off perspective-correct interpolation on color is exactly what you need.
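A small self-contained illustration of the length-versus-direction point (plain C++, nothing engine-specific):

// Linearly interpolate between two unit vectors, as the color interpolators
// would. A 2D example is enough: the length dips between the endpoints, and
// if one endpoint were longer than the other, the renormalized direction
// would be pulled toward it -- that is the directional error described above.
#include <cmath>
#include <cstdio>

int main()
{
    const float ax = 1.0f, ay = 0.0f;   // unit vector at vertex A
    const float bx = 0.0f, by = 1.0f;   // unit vector at vertex B

    for (float t = 0.0f; t <= 1.001f; t += 0.25f)
    {
        const float x = ax + t * (bx - ax);
        const float y = ay + t * (by - ay);
        std::printf("t = %.2f   length = %.3f   angle = %.1f deg\n",
                    t, std::sqrt(x * x + y * y),
                    std::atan2(y, x) * 180.0f / 3.14159265f);
    }
    return 0;
}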

Thanks for your replies.

So what’s basically being said is that you can’t get per-pixel specular highlights correct on GF1/2 hardware without tessellating polygons? But it could be done on a GF3/4 by using two normalization cube maps, one for vertex-to-eye and one for vertex-to-light, then summing them in the combiners and using the combiner normalization trick. Then you’d have your correct specular H term for the (N dot H) part of the equation. Is this right?

Thank you,
Richard

Yes, that’s about it. So you can at least get the right input values for calculating the specular.

This is actually the biggest problem on GF1/2/3/4: getting the input values. You can code up really complex lighting solutions, even on a GF2; you just don’t get the correct values for the equation per pixel…
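For what it’s worth, the ‘combiner normalization trick’ mentioned above amounts to one Newton-Raphson style fix-up, v’ = v + v * (1 - v.v) / 2, which is accurate when |v| is already close to 1 (true for an interpolated, formerly unit-length vector). A sketch in plain C++ rather than combiner setup code:

// Sketch of the approximate renormalization: scales v by (3 - v.v) / 2.
Vec3 approxRenormalize(const Vec3& v)
{
    const float scale = 1.0f + 0.5f * (1.0f - v.dot(v));
    return Vec3(v.x() * scale, v.y() * scale, v.z() * scale);
}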

Even with normalization, you have a tradeoff between vector precision and accurate direction during interpolation, unless you can use a 3D texture for the color interpolation instead of color interpolators. With a 3D texture, the interpolated vector can be normalized directly from the fetch, without a cube map.
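As a sketch of what such a 3D normalization texture would contain (sizes and names are illustrative): each texel stores the unit vector pointing toward that texel’s position in the -1..1 cube, range-compressed to 0..255 just like a normalization cube map face.

// Sketch: fill an RGB volume so texel (x, y, z) holds the normalized vector
// from the cube centre toward that texel. Sampling it with an interpolated
// vector as the 3D texcoord returns an approximately normalized copy of it.
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> buildNormalizationVolume(int size)
{
    std::vector<std::uint8_t> texels(size * size * size * 3);
    std::size_t i = 0;
    for (int z = 0; z < size; ++z)
    for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
    {
        float vx = (x + 0.5f) / size * 2.0f - 1.0f;   // texel centre in -1..1
        float vy = (y + 0.5f) / size * 2.0f - 1.0f;
        float vz = (z + 0.5f) / size * 2.0f - 1.0f;
        const float len = std::sqrt(vx * vx + vy * vy + vz * vz);
        if (len > 0.0f) { vx /= len; vy /= len; vz /= len; }
        texels[i++] = static_cast<std::uint8_t>((vx * 0.5f + 0.5f) * 255.0f);
        texels[i++] = static_cast<std::uint8_t>((vy * 0.5f + 0.5f) * 255.0f);
        texels[i++] = static_cast<std::uint8_t>((vz * 0.5f + 0.5f) * 255.0f);
    }
    return texels;
}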

Originally posted by Humus:
You don’t seem to normalize the halfAngle. Also, since the halfAngle is interpolated, you’ll need a decent amount of tessellation for it to look good.

Tessellation? We don’t need no stinkin’ tessellation! http://www.area3d.net/nitrogl/precision4.jpg

… but then you’re lucky enough to have a 9700. I still have yet to receive mine.

Tessellation? We don’t need no stinkin’ tessellation!

You don’t need no stinking 9700 (or tessellation):
http://www.geocities.com/SiliconValley/Pines/8553/Specular.html

A description can be found elsewhere on this board. Those shots are possible on all hardware. Still, I’m looking forward to messing with the 9700 myself, but currently, due to some driver problems with the 9700, my 8500 shaders don’t work as expected.

[This message has been edited by PH (edited 10-13-2002).]

Nice shot, btw, NitroGL. Are you using ARB_fragment_program? I noticed it’s available in the drivers now.

Yep.

Just a thought:
Is it possible to use the diffuse lighting (L.N) as an index into a 1D look-up table that would store diffuse + specular? If so, it would make everything much simpler.

This is based on the fact that L.N and H.N should be maximized on the same pixel, and have similar distributions (well, L.N and H.N being two different curves of course, but between the same end points). So I cannot think of anything that would make this give a wrong result.

The only drawback I can see is that you need several 1D textures, one for each set of parameters you use for your specular (power, etc.).

The texture would be precalculated something like this:
t[i] = cos( acos( lighting(i) )*0.5 )^n
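Concretely, the table build might look something like this. A sketch only: n is the specular power, specScale is a hypothetical blend factor, and the half-angle assumption above is baked in (the replies below explain why it breaks once the view direction varies):

// Sketch of the proposed 1D lookup: index = N.L in 0..1, entry = diffuse plus
// a specular term derived from it via cos(acos(N.L) * 0.5)^n.
#include <cmath>
#include <vector>

std::vector<float> buildDiffuseSpecularTable(int size, float n, float specScale)
{
    std::vector<float> table(size);
    for (int i = 0; i < size; ++i)
    {
        const float NdotL = static_cast<float>(i) / (size - 1);
        const float spec  = std::pow(std::cos(std::acos(NdotL) * 0.5f), n);
        table[i] = NdotL + specScale * spec;
    }
    return table;
}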

PS: This trick can be used for colored lighting as well at the same time.

[This message has been edited by tfpsly (edited 10-13-2002).]

Originally posted by tfpsly:
Just a thought: Is it possible to use the diffuse lighting (L.N) as an index into a 1D look-up table that would store diffuse + specular? […]

No.

The halfangle and the to_light vector act completely independently; it doesn’t work, at least not with a 1D function.

You can check BRDFs.

Diffuse is maximal when the normal of the object points at the light source, regardless of viewing angle.

Specular is maximal when the view vector, reflected in the surface (which is assumed to be perpendicular to the normal), points at the light source; this depends on viewing angle.

You can store Vr.L and N.L in a 2D texture and look them up that way if you want. You can get cool silk-like and cartoon-rendering effects that way.
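As a sketch of that idea (the contents are my choice: clamped diffuse plus a Phong-style lobe; quantizing the output instead gives the cartoon look):

// Sketch: a 2D lookup indexed by (N.L, Vr.L), both remapped from -1..1 into
// 0..1 texture coordinates, where Vr is the view vector reflected about the
// surface normal.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> buildLightingTable(int size, float shininess)
{
    std::vector<float> table(size * size);
    for (int y = 0; y < size; ++y)          // y axis: Vr.L
    for (int x = 0; x < size; ++x)          // x axis: N.L
    {
        const float NdotL  = x / float(size - 1) * 2.0f - 1.0f;
        const float VrdotL = y / float(size - 1) * 2.0f - 1.0f;
        const float diffuse  = std::max(NdotL, 0.0f);
        const float specular = std::pow(std::max(VrdotL, 0.0f), shininess);
        table[y * size + x] = diffuse + specular;
    }
    return table;
}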

True… True…