HeightMap to NormalMap

Hey guys.
I sort of have a problem when trying to do this…
First of all, my engine uses deferred lighting (though that shouldn’t make any difference here).
I made a really nice terrain, but given the way the engine works I have to use a normal map for it. I only have a height map, so I need to convert the heightmap into a normalmap.
Currently I have this code for converting:

for(int Y=1;Y<image->sizeX-1;Y++)
{
	for(int X=1;X<image->sizeY-1;X++)
	{
		// Sample three neighbouring heights around (X,Y).
		float v1 = image->GetImageHeight(X,Y-1)*scale;
		float v2 = image->GetImageHeight(X-1,Y)*scale;
		float v3 = image->GetImageHeight(X+1,Y+1)*scale;

		// Normal = cross product of the two edges between the sampled points.
		CVector3 n1 = (CVector3(1,v3,1)-CVector3(0,v1,-1)).cross(CVector3(1,v3,1)-CVector3(-1,v2,0));

		CVector3 normal = n1;
		normal.normalize();

		// Pack the [-1,1] components into [0,255].
		unsigned char r, g, b;
		r = (BYTE)(255.0 * (normal.x * 0.5 + 0.5));
		g = (BYTE)(255.0 * (normal.y * 0.5 + 0.5));
		b = (BYTE)(255.0 * (normal.z * 0.5 + 0.5));
		norm->data[Y*image->sizeX*3+X*3]   = r;
		norm->data[Y*image->sizeX*3+X*3+1] = g;
		norm->data[Y*image->sizeX*3+X*3+2] = b;
	}
}

It basically takes three adjacent pixels and turns them into three 3D points, with the height (Y value) being the heightmap value as a float multiplied by the maximum height of the terrain, so that the points line up with the actual terrain vertices.
The problem is that when I run it, the resulting normal map looks chunky.

I really can’t figure this out.
I guess it’s something in my function, but I can’t understand what.
Could anyone be so kind as to help me, please?
Thank you very much.

Maybe you can quickly find out whether simply sampling more and doing some filtering would give nicer results:

GIMP:
http://nifelheim.dyndns.org/~cocidius/normalmap/
Photoshop:
http://developer.nvidia.com/object/photoshop_dds_plugins.html

Nice synchronicity here! I am trying to solve the same (or a very similar) issue.

In my experience, running a 3x3 Sobel filter yields better results:


        Vector3 CalculateNormal(int u, int v)
        {
            // Value from trial & error.
            // Seems to work fine for the scales we are dealing with.
            float strength = scale.Y / 16;

            float tl = Math.Abs(this[u - 1, v - 1]);
            float l = Math.Abs(this[u - 1, v]);
            float bl = Math.Abs(this[u - 1, v + 1]);
            float b = Math.Abs(this[u, v + 1]);
            float br = Math.Abs(this[u + 1, v + 1]);
            float r = Math.Abs(this[u + 1, v]);
            float tr = Math.Abs(this[u + 1, v - 1]);
            float t = Math.Abs(this[u, v - 1]);

            // Compute dx using Sobel:
            //           -1 0 1 
            //           -2 0 2
            //           -1 0 1
            float dX = tr + 2 * r + br - tl - 2 * l - bl;

            // Compute dy using Sobel:
            //           -1 -2 -1 
            //            0  0  0
            //            1  2  1
            float dY = bl + 2 * b + br - tl - 2 * t - tr;

            Vector3 N = new Vector3(dX, dY, 1.0f / strength);
            N.Normalize();

            //convert (-1.0 , 1.0) to (0.0 , 1.0), if necessary
            //Vector3 scale = new Vector3(0.5f, 0.5f, 0.5f);
            //Vector3.Multiply(ref N, ref scale, out N);
            //Vector3.Add(ref N, ref scale, out N);

            return N;
        }

For best results, your normal map should be at least 4x the resolution of your terrain mesh (i.e. generate the heightmap at a higher resolution and downsample it when you construct the vertex buffer). 16x is even better.
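In terms of the C++ loop from the original post, the idea would look roughly like this. It is only a sketch: the 4x step and the AddVertex helper are placeholders for whatever your engine actually uses, and the UVs assume the normal map covers the whole terrain.

	// Heightmap and normal map stay at full resolution (e.g. 1024x1024);
	// the vertex buffer only samples every 4th texel (a 256x256 grid).
	const int step = 4;	// placeholder: 4x downsampling

	for(int Y = 0; Y < image->sizeY; Y += step)
	{
		for(int X = 0; X < image->sizeX; X += step)
		{
			float height = image->GetImageHeight(X, Y) * scale;

			// Hypothetical helper: adds one terrain vertex whose UVs still
			// address the full-resolution normal map.
			AddVertex((float)X, height, (float)Y,
			          (float)X / (image->sizeX - 1),
			          (float)Y / (image->sizeY - 1));
		}
	}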


for(int Y=1;Y<image->sizeX-1;Y++)
	for(int X=1;X<image->sizeY-1;X++)

that’s a little bit incorrect :slight_smile: sizeX and sizeY look swapped there.
However, I don’t see any other major mistakes…
Maybe the problem lies outside the code you provided.
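If your heightmap isn’t square, a corrected version of just those two lines would look roughly like this (assuming sizeX is the image width and sizeY its height, as your indexing into norm->data suggests):

	// Y iterates over rows, so it is bounded by the image height;
	// X iterates over columns, so it is bounded by the image width.
	for(int Y=1;Y<image->sizeY-1;Y++)
	{
		for(int X=1;X<image->sizeX-1;X++)
		{
			// ... same body as before ...
		}
	}

For a square heightmap the swap makes no visible difference, which is why it may not be the cause of the chunkiness.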