Currently I am trying to implement specular lighting at the per-pixel level. I got it working, but it looks ugly as hell: I get huge highlights, which are very blocky, although I normalize the half-angle vector and the normal vector before I do the per-pixel calculations.
I searched Google for papers, but I can only find nVidia's "Why should I use nVidia's super-duper high-end hardware?" papers, which don't really explain anything well.
So, does anyone know a good paper/tutorial?
Anyway, I will keep on searching with Google.
Also, you should normalize the fragment->eye vector, the fragment->light vector and the half-vector to get it right.
In practice you can bind the first two into normalization cubemaps, and calculate and normalize the half-angle in the combiners. If your geometry is well tessellated, it may be enough to normalize those two vectors per-vertex and only the half-vector per-fragment.
Yes, I do normalize per pixel, but I only normalize the half-angle vector. I don't see a reason why I should normalize the eye->fragment and fragment->light vectors. They will still point in the same direction, won't they?
I first take the dot product of the half-angle vector and the normal, then I multiply that with itself 4 to 6 times (takes all of my remaining register combiners).
Like this:
B = A * A  (A^2)
C = B * B  (A^4)
D = C * C  (A^8)
E = D * D  (A^16)
If I just add that to the image, it does not look that good. If I first multiply it by the lightmap (for distance attenuation), it still doesn't look good, but it is not that visible anymore, so a normal person wouldn't notice it immediately. However, if you look closely at a wall, you will still see that it is quite blocky.
You have to normalize them all, at least if you calculate the half-vector as the average of those two vectors, because in that case the lengths of those vectors greatly affect the direction of the half-vector. Take a look at these images:
In action the latter looks even worse: the highlight is always round on flat surfaces, and it wanders around the surfaces looking really out of place. On round surfaces the difference might not be as big.
YES, that might be it. I thought my lighting looked strange because the highlight is always round; it never changes its shape, as I expected it to do.
Thanks, I will try that.
You can avoid normalization of the H vector and get yourself an arbitrary specular exponent by calculating H in tangent space at each vertex and using the interpolated vector to look up a precomputed cube map. Since N = (0,0,1) in tangent space, N*H = Hz, so the cube map just needs to store the value Hz^m for every possible H.
If you’re also doing bump mapping, then you can use GL_NV_texture_shader to calculate the dot products NH and HH and use them as the s and t texture coordinates for a precomputed 2D texture containing (NH/|H|)^m when the squared magnitude of H is given by HH. This technique is described in Chapter 6 of The OpenGL Extensions Guide.
Yeah, normalizing the vectors greatly improves the shape of the highlight.
However, my real problem, the bad quality, stays. And another problem has appeared: now the shape depends on the tessellation of the level. I don't think this one can be solved without real fragment programs.
However, maybe you have a tip on how to improve the blockiness.
Here are two screenshots:
http://www.artifactgames.de/IMAGES/Blocky.JPG
http://www.artifactgames.de/IMAGES/BadTesselation.JPG
The first one shows my problem quite well.
Thanks,
Jan.
BTW: Very interesting book, I hope it will be published soon.
That seems to be a result of low precision: you compute a power in a pipeline with a limited amount of calculation precision. One of the best ways to visually remove it is dithering, and that can be done by adding a very fine random normal map before you compute the per-pixel lighting, like a very noisy bump map.
Originally posted by Eric Lengyel: You can avoid normalization of the H vector and get yourself an arbitrary specular exponent by calculating H in tangent space at each vertex and using the interpolated vector to lookup a precomputed cube map.
Cubemaps are nice in this respect, but you still get that staircase effect without good precision.
Do I have that right? It's a precision problem, and exponentiating makes it very visible, I believe.
I guess float textures would be the better solution in this case.
If you want good specular highlights on GeForce3/4, the only way I found was to do a dependent texture lookup (i.e. after doing a dot product, look up a texture with the power function in it). This should get rid of the banding artifacts.
(Unfortunately, dependent texture lookups are slooow on GeForce3.)
//fetch base color
float4 color = tex2D(DiffuseMap);
//fetch bump normal and expand it to [-1,1]
float4 bumpNormal = expand(tex2D(NormalMap));
// - compute the dot product between the bump normal and the light vector,
// - compute the dot product between the bump normal and the half angle vector,
// - fetch the illumination map using the result of the two previous dot products as texture coordinates
//this returns the diffuse color in the color components and the specular color in the alpha component
float4 illumination = tex2D_dp3x2(IlluminationMap, IN.LightVector, bumpNormal);
//expand iterated normal to [-1,1]
float4 normal = expand(IN.Normal);
//expand iterated light vector to [-1,1]
float4 lightVector = expand(IN.LightVectorUnsigned);
//compute self-shadowing term
float shadow = uclamp(4 * dot3(normal.xyz, lightVector.xyz));
//compute final color
OUT.col = mad(Ambient, color, shadow * mad(illumination, color, illumination.wwww));
return OUT;