Normal mapping

Hi,

I wrote my shader by following different tutorials (mainly from https://learnopengl.com/),
but now I don't know how to continue.


#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif

uniform mat3 n_matrix;
uniform vec3 cameraPosition;

uniform sampler2D texture;
uniform sampler2D secondTexture;
uniform float materialShininess;
uniform vec3 materialSpecularColor;

uniform bool b_transparent;
uniform bool b_specular;
uniform bool b_normalmap;
uniform bool b_light;

uniform struct Light {
	vec4 position;
	vec3 intensities;
	float attenuationFactor;
	float ambientCoefficient;
} light;

attribute vec3 a_polyNorm;
attribute vec3 a_polyTan;
attribute vec3 a_polyBiTan;

varying vec2 v_surfaceUV;
varying vec3 v_surfacePosition;
varying vec3 v_surfaceNormal;

void main()
{
	if(b_light)
	{
		// TBN matrix, built from the per-vertex tangent frame, but not used anywhere yet
		mat3 tbn = transpose(mat3(a_polyTan, a_polyBiTan, a_polyNorm));

		vec3 finalNormal = normalize(n_matrix * v_surfaceNormal);

		if(b_normalmap)
		{
			finalNormal = texture2D(secondTexture, v_surfaceUV).rgb;
			finalNormal = normalize(finalNormal * 2.0 - 1.0);
		}

		vec4 surfaceColor = vec4(texture2D(texture, v_surfaceUV));
		surfaceColor.rgb = pow(surfaceColor.rgb, vec3(2.2));
		
		vec3 surfaceToLight;
		float attenuation;
		// directional light
		if(light.position.w == 0.0)
		{
			surfaceToLight = normalize(light.position.xyz);
		}
		// point light
		else
		{
			surfaceToLight = normalize(light.position.xyz - v_surfacePosition);
		}
		
		float distanceToLight = length(light.position.xyz - v_surfacePosition);
		attenuation = 1.0 / (1.0 + light.attenuationFactor * pow(distanceToLight, 2.0));

		vec3 surfaceToCamera = normalize(cameraPosition - v_surfacePosition);

		// ambient
		vec3 ambient = light.ambientCoefficient * surfaceColor.rgb * light.intensities;

		// diffuse
		float diffuseCoefficient = max(0.0, dot(finalNormal, surfaceToLight));
		vec3 diffuse = diffuseCoefficient * surfaceColor.rgb * light.intensities;

		// specular
		float specularCoefficient = 0.0;
		if(diffuseCoefficient > 0.0)
			specularCoefficient = pow(max(0.0, dot(surfaceToCamera, reflect(-surfaceToLight, finalNormal))), materialShininess);
		vec3 specColor;
		if(b_specular)
			specColor = vec3(surfaceColor.a);
		else
			specColor = materialSpecularColor;
		vec3 specular = specularCoefficient * specColor * light.intensities;

		// linear color before gamma correction
		vec3 linearColor = ambient + attenuation * (diffuse + specular);

		// final color after gamma correction
		vec3 gamma = vec3(1.0/2.2);
		if(!b_transparent)
			surfaceColor.a = 1.0;
		gl_FragColor = vec4(pow(linearColor, gamma), surfaceColor.a);
	}
	else
	{
		vec4 surfaceColor = vec4(texture2D(texture, v_surfaceUV));
		if(!b_transparent)
			surfaceColor.a = 1.0;

		gl_FragColor = surfaceColor;
	}
}

As you can see, there are different options:

  1. Light on/off: if the light is not turned on, there is no specular term, and so on.
  2. If a specular map is enabled, the diffuse texture's alpha channel is used as the specular intensity.
  3. Now I want to add normal mapping when it is enabled.

I tried to follow this tutorial: Learn OpenGL, extensive tutorial resource for learning Modern OpenGL
But the example code there is rather unreadable to my eyes. I hope someone can point me the right way to get normal mapping working in my shader.

The whole code can be found here: QtMeshViewer · f47e1cc76a59b8748a766f3f475f8acbb0ebc435 · C-Fu / OpenGL · GitLab

If you understand texture mapping, then you basically understand normal mapping. An albedo/color map is basically what we call a texture map, although old-school texture maps tended to bake shadowing and other stuff into the actual color map, whereas more modern techniques stick strictly to color in the albedo map. But all the other maps work on the same principle. You assign UV coordinates to your mesh that map vertices to points in the 2D picture. You then sample those colors out of the picture with a sampler, using the UV coordinates for that pixel on the model.

With Blinn-Phong shading, the hardware interpolates (averages) values between the vertices of a triangle. The fragment shader then works on one pixel at a time, using these interpolated values (unless you turn interpolation off).
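For example, here is a minimal pair of shaders showing that hand-off (a sketch in old-style GLSL to match the code above; mvp and n_matrix are assumed uniforms, not taken from the poster's code):

// vertex shader
attribute vec3 a_position;
attribute vec3 a_normal;
uniform mat4 mvp;       // assumed: combined model-view-projection matrix
uniform mat3 n_matrix;  // assumed: normal matrix
varying vec3 v_normal;

void main()
{
	v_normal = n_matrix * a_normal;              // written once per vertex
	gl_Position = mvp * vec4(a_position, 1.0);
}

// fragment shader: v_normal arrives linearly interpolated across the
// triangle, so its length may drift below 1 -- re-normalize before lighting
varying vec3 v_normal;

void main()
{
	vec3 n = normalize(v_normal);                // one value per pixel
	gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);     // visualize the normal as color
}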

Now imagine that instead of storing the pixel's color, you store a normal describing what direction it faces. Rather than smooth shading, which averages the normals of the 3 vertices to create the illusion of smoothness on a flat triangle, you can now control every pixel on the face of the triangle individually. So, you can do bump mapping to make the surface look a lot more interesting and complicated without extra vertices. To get the same effect with geometry you would need as many vertices as pixels, and that's tough even on cutting-edge graphics cards when you have thousands of models in the scene. This illusion is the core of modern graphics.

When I create a high poly mesh, I tend to bump the vertex count up to the point where my graphics card starts choking on a single model. Imagine a scene with thousands of these models; it will be a few years before we get there. Meanwhile, baking that high poly model back down to a low poly model allows it to look high poly without being high poly. My graphics card (about to be replaced because it's about 3 years old) can handle between 3 and 4 million triangles in Blender. I usually shoot for a triangle count below 8,000 on my low poly model, depending on how important it is to the scene and how much attention it will get. Some models can get away with around 1,000 triangles. The super realistic models I've been doing lately are around 20,000 triangles for the low poly model, but those are for rendering to a 2D picture for a grade and not necessarily for use in a game engine. Although, if I ever sell a model, that will probably be the low poly budget for important models by the time I get there. Every year you get faster and faster graphics cards that can handle more.

So, basically I do very roughly a 10,000-or-less low poly model and sculpt on a 3,000,000 triangle high poly model that gets baked back down to the 10k model. This comes at the added expense of storing a couple of extra texture files for the model. Nowadays I'm also painting on the low poly model, but current techniques have me painting normals and other information along with color.

But it all boils down to storing information other than the pixel's color, per pixel, in image files. You can store all sorts of different information: ambient occlusion, material type, normals, roughness, whether or not the material is metal, specular, and so on. The sky's the limit, and it's up to your imagination as much as anything. Ultimately, you're drawing pixels on a 2D screen, and being able to control individual pixels on the model is about as powerful as it's going to get.
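For instance, one common packing (an assumption about the asset, not something from the shader above) puts ambient occlusion, roughness, and metalness into the three channels of a single image:

// ormMap is a hypothetical sampler holding AO / roughness / metal per pixel
vec3 orm = texture2D(ormMap, v_surfaceUV).rgb;
float ao        = orm.r;   // 0 = fully occluded, 1 = open
float roughness = orm.g;   // 0 = mirror-smooth, 1 = diffuse
float metallic  = orm.b;   // usually treated as 0 or 1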

So, to create the normal map (the easy way, like it's done in the real world), you model your low poly model. Then in Blender you can add a sub-D modifier to create a lot more vertices in a virtual model that doesn't really exist. You can also just take the low poly mesh and subdivide the quads until it's a very high poly mesh. You then basically sculpt the high poly mesh to add fine details. If you do it right, the high poly mesh will still basically conform to the low poly mesh. You can place them on top of one another and bake a normal map by projecting the normals of the high poly mesh down onto the low poly mesh. With the high poly mesh you actually have usable triangle or vertex normals because you have about 10 times as many vertices. The baking process uses that information to map normals for every pixel on the low poly mesh.

So, imagine you have your UV mapped low poly mesh and you would paint colors on it. But instead, for each pixel in the UV map image you calculate a normal that describes what direction that texel faces in 3D space.

Then you can throw away the high poly mesh and use this normal info to draw the low poly mesh as if all the pixels face in much more complicated directions taking on shadow and lighting as if it were a far more complicated surface. In fact, except at the edges of the model or looking at it from extreme angles, it looks exactly like the high poly mesh even though it’s a low poly mesh. Amazing really. Whoever thought this up was brilliant but it’s been a slow development over decades. So really it was probably a combination of people.

Anyway, to make the normal map you first have to calculate a vector normal for that pixel describing what direction it faces. I assume you know the basic concept of that. A normal's components are in the range of -1 to 1 in 3D space, and by definition X, Y, and Z never go out of that range. A photo tends to store color as values in the range of 0 to 1, so you need to force the math into this range: add 1 to put it in the range of 0 to 2, then divide by 2 to put it in the range of 0 to 1. Now you can store the X, Y, Z coordinates of a vector normal in an R, G, B color space just like a photograph. Also, before you do this, make the vector relative to the triangle surface: whatever direction the triangle actually faces, remove that, so that the resulting normal is an offset from the direction the triangle faces.
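As a sketch, that encode step could look like this (encodeNormal is my name, not part of any API; the input is assumed to be a unit-length tangent-space normal):

// map a unit normal from [-1, 1] into the [0, 1] range of a color channel
vec3 encodeNormal(vec3 n)
{
	return (n + 1.0) / 2.0;   // -1..1  ->  0..2  ->  0..1
}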

So, if every texel on the surface of the triangle faces the same direction as the triangle normal, then the normal you store points straight down the Z axis as (0, 0, 1), which encodes to (0.5, 0.5, 1.0) in R, G, B: a strong blue. That explains why normal maps lean blue. That's your "no offset" direction.

Also, in order to rotate these normals, it looks to me like you have to use matrices. I've only actually coded all this once and it's been a while. But you need to start with a basis that points one axis along the U axis of your picture, another along the V axis, and the third along the neutral normal of the triangle surface. This is where your tangent and bi-tangent come into play. You're basically constructing a 3D matrix to describe an orientation in 3D space, because a single vector can't do this by itself. You need 3 mutually perpendicular vectors to form a basis, and that is exactly what a 3 by 3 matrix is used for. A 4 by 4 matrix can add position/offset on top of that, but I don't think that's needed for normal mapping.
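A hedged sketch of that construction (t, b, n are placeholder names for the tangent, bitangent, and normal, assumed to already be in the same space; note that transpose() needs desktop GLSL 1.20+ or GLSL ES 3.00, otherwise you build the transposed matrix by hand):

// columns are the tangent-space axes expressed in world/view space;
// for an orthonormal basis, the transpose is the inverse, so it maps
// world/view-space vectors back into tangent space
mat3 tbn        = mat3(normalize(t), normalize(b), normalize(n));   // tangent -> world
mat3 tbnInverse = transpose(tbn);                                   // world -> tangent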

So, to use a normal map photo, you just do the whole process in reverse: multiply the color value by 2, subtract 1, and voila, you have a vector normal. You still have to know the orientation of the triangle in question in order to use this as an offset from the triangle's normal.
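In GLSL that reverse step is the familiar two-liner (normalMap and uv are placeholder names, not from the code above):

vec3 n = texture2D(normalMap, uv).rgb;   // 0..1 color straight from the image
n = normalize(n * 2.0 - 1.0);            // back to a -1..1 unit vector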

Remember that even though the triangles face in different directions all around the model, a perfectly blue normal map will just give you the flat shaded model. The normal map doesn't store the direction the triangle faces, only the offset from that direction. That's probably the trickiest part of understanding the whole thing. Beyond that, it's not much more difficult than displaying colors as a texture on a model.

These normals can then be used in your lighting calculations rather than vertex normals or interpolated pixel normals.

Another map you can do the same way pretty easily is a specular map. Normally you use the same specular value for everything in the shader, but you can store a specular percentage per pixel in a photo and UV map it back, so that you get per-pixel specular. This could allow you to make metal armor shiny while skin is not, with a single texture/albedo map.
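In the fragment shader that could look like this sketch (specularMap is a hypothetical sampler; here the weight sits in the red channel, though the shader earlier in this thread packs it into the diffuse alpha instead):

// scale the specular term per pixel instead of per material
float specWeight = texture2D(specularMap, v_surfaceUV).r;   // 0 = dull, 1 = shiny
vec3 specular = specWeight * specularCoefficient * materialSpecularColor * light.intensities;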

When you get into PBR you use this for even more calculations.

And if everything I said wasn't perfectly clear, check out my YouTube HLSL tutorial. I know we're talking GLSL here, but the concepts and math are identical. The series creates a basic Blinn-Phong shader, which is your basic shader for everything. Normal mapping would have been the next video if I had continued the series. I also assume you know vectors, normals, and matrices inside and out, but I have videos on the channel for that as well.

First: don't name your sampler variables like that:

uniform sampler2D texture;
uniform sampler2D secondTexture;

"texture" is also a GLSL function, so it's better to just name them "tex1", "tex2", or so.

"texture" is also a very abstract description of what it actually is: a "diffuse" texture that gives you the "surfacecolor".

If you have ever taken a look into .obj model files (more precisely, their .mtl material files):
“Ka” = ambient color
“Kd” = diffuse color
“Ks” = specular color

you can replace Ka with Kd, so you don't really need Ka to light the model (in my opinion)

The surface color itself doesn't describe the material fully; you also need the specular color, which describes how the specular light intensity is reflected. Usually you can just use white (rgb = vec3(1, 1, 1)), but some surface materials can absorb, for example, the red component of the light, like this: vec3(0, 1, 1).

I've made an example for lighting.

I'd name them:

uniform sampler2D Kd;
uniform sampler2D Ks;
uniform sampler2D bump;

Bump mapping - Wikipedia

A "trick" for how to avoid a uniform switch variable to enable/disable textures (see the sketch after this list):

  1. Use the texture anyway.
  2. To avoid sampling "black" (no bound texture), bind a 1x1 white texture.
  3. To avoid that constant "white" showing, multiply the sample with the material's Kd value.
  4. If a real texture is available, set Ka / Kd / Ks to vec3(1, 1, 1) instead.
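A sketch of what that buys you in the fragment shader (assuming the application binds a 1x1 white texture whenever no real map exists):

// no uniform switch needed: with the 1x1 white texture bound, the sample
// returns vec3(1.0) and the multiply leaves Kd/Ks unchanged; with a real
// map bound, set the Kd/Ks uniforms to vec3(1.0) and the maps take over
vec3 diffuse  = Kd * texture(map_Kd, texcoord).rgb;
vec3 specular = Ks * texture(map_Ks, texcoord).rgb;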

Consider again the .obj file format. Materials have:
vec3 Ka, Kd, Ks
float Ns
string map_Ka, map_Kd, map_Ks, map_bump, etc …
// again: you can replace map_Ka with map_Kd

vec3 diffuse = Kd * texture(map_Kd, texcoord).rgb;   // your "surfacecolor"
vec3 specular = Ks * texture(map_Ks, texcoord).rgb;
// vec3 ambient = diffuse;

vec3 finalcolor = ambient * Ia + diffuse * Id + specular * Is;
// Ia = uniform ambient intensity
// Id and Is are accumulated for each light source
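Accumulating Id and Is over several lights might look like this sketch (MAX_LIGHTS, the lights array, and the two helper functions are my inventions for illustration, not an existing API):

#define MAX_LIGHTS 4                        // assumed compile-time light count
uniform Light lights[MAX_LIGHTS];           // Light as in the struct earlier in the thread

// inside main(), after computing the ambient/diffuse/specular material terms:
vec3 Id = vec3(0.0);
vec3 Is = vec3(0.0);
for (int i = 0; i < MAX_LIGHTS; ++i)
{
	Id += diffuseIntensityOf(lights[i]);    // hypothetical per-light diffuse intensity
	Is += specularIntensityOf(lights[i]);   // hypothetical per-light specular intensity
}
vec3 finalcolor = ambient * Ia + diffuse * Id + specular * Is;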

Instead of using either the vertex normal (passed from the vertex shader to the fragment shader) OR just the bump texture, you can mix them, like the diffuse / specular values:

vec3 N_vertex = …   // from the vertex shader
vec3 N_bump = texture(map_bump, texcoord).rgb;

vec3 N = normalize(…mix them together…);
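One possible way to fill in that mix (a sketch; bumpStrength is a made-up uniform, and tbn is assumed here to map tangent space into whatever space N_vertex lives in):

vec3 N_bump_dir = tbn * normalize(N_bump * 2.0 - 1.0);          // tangent space -> N_vertex's space
vec3 N = normalize(mix(N_vertex, N_bump_dir, bumpStrength));    // 0 = geometry only, 1 = bump only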

Any ideas what happened here?


#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif

uniform mat3 n_matrix;
uniform vec3 cameraPosition;

uniform sampler2D tx0;
uniform sampler2D tx1;

uniform struct Material {
	float shininess;
	vec3 specularColor;
	bool isTransparent;
	bool hasSpecularmap;
	bool hasNormalmap;

} material;

uniform bool useLight;

uniform struct Light {
	vec4 position;
	vec3 intensities;
	float attenuationFactor;
	float ambientCoefficient;
} light;

varying vec2 v_surfaceUV;
varying vec3 v_surfacePosition;
varying vec3 v_surfaceNormal;
varying vec3 v_polyNorm;
varying vec3 v_polyTan;
varying vec3 v_polyBiTan;

void main()
{
	if(useLight)
	{
		// get the color and undo gamma correction
		vec4 surfaceColor = vec4(texture2D(tx0, v_surfaceUV));
		surfaceColor.rgb = pow(surfaceColor.rgb, vec3(2.2));

		// attenuation depending on the distance to the light
		float distanceToLight = length(light.position.xyz - v_surfacePosition);
		float attenuation = 1.0 / (1.0 + light.attenuationFactor * pow(distanceToLight, 2.0));

		// normal vector
		vec3 normal = normalize(n_matrix * v_surfaceNormal);

		// direction from surface to light depending on the light type
		vec3 surfaceToLight;
		if(light.position.w == 0.0)		// directional light
			surfaceToLight = normalize(light.position.xyz);
		else							// point light
			surfaceToLight = normalize(light.position.xyz - v_surfacePosition);

		// direction from surface to camera
		vec3 surfaceToCamera = normalize(cameraPosition - v_surfacePosition);

		// adjust the values if material has normal map
		if(material.hasNormalmap)
		{
			mat3 tbn = transpose(mat3(n_matrix * v_polyTan, n_matrix * v_polyBiTan, n_matrix * v_polyNorm));
			normal = texture2D(tx1, v_surfaceUV).rgb;
			normal = normalize(normal * 2.0 - 1.0);
			surfaceToLight = tbn * surfaceToLight;
			surfaceToCamera = tbn * surfaceToCamera;
		}


	/////////////////////////////////////////////////////////////////////////////////////
	// ambient component

		vec3 ambient = light.ambientCoefficient * surfaceColor.rgb * light.intensities;


	/////////////////////////////////////////////////////////////////////////////////////
	// diffuse component

		float diffuseCoefficient = max(0.0, dot(normal, surfaceToLight));
		vec3 diffuse = diffuseCoefficient * surfaceColor.rgb * light.intensities;


	/////////////////////////////////////////////////////////////////////////////////////
	// specular component

		float specularCoefficient = 0.0;
		if(diffuseCoefficient > 0.0)
			specularCoefficient = pow(max(0.0, dot(surfaceToCamera, reflect(-surfaceToLight, normal))), material.shininess);
		
		float specularWeight = 1.0;
		if(material.hasSpecularmap)
			specularWeight = surfaceColor.a;
		vec3 specColor = specularWeight * material.specularColor;

		vec3 specular = specularCoefficient * specColor * light.intensities;

	/////////////////////////////////////////////////////////////////////////////////////
	// linear color before gamma correction
		vec3 linearColor = ambient + attenuation * (diffuse + specular);

	/////////////////////////////////////////////////////////////////////////////////////
	// gamma correction
		vec3 gamma = vec3(1.0/2.2);

		if(!material.isTransparent)
			surfaceColor.a = 1.0;
		
		gl_FragColor = vec4(pow(linearColor, gamma), surfaceColor.a);
	}
	// don't use light
	else
	{
		vec4 surfaceColor = vec4(texture2D(tx0, v_surfaceUV));

		if(!material.isTransparent)
			surfaceColor.a = 1.0;

		gl_FragColor = surfaceColor;
	}
}


@John_connor: I just saw that you replied, too. You are right, my code was not very readable. It was ugly and the variable names were not good at all. (That happens when you follow different tutorials and combine them, mixed with your own variable names.)
I made some changes, and I hope it looks better now and makes it easier for you to figure out why my normal mapping does not work properly.

OK, it was a matter of normalization: the tangent, bitangent, and normal have to be normalized again after multiplying them with the normal matrix. But I still have a few rough spots left. Any ideas how to fix them?


#version 450
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif

uniform mat3 normalMatrix;
uniform vec3 cameraPosition;

uniform sampler2D tx0;
uniform sampler2D tx1;

uniform struct Material {
	float shininess;
	vec3 specularColor;
	bool isTransparent;
	bool hasSpecularmap;
	bool hasNormalmap;
	bool isGlow;
} material;

uniform bool useLight;

uniform struct Light {
	vec4 position;
	vec3 intensities;
	float attenuationFactor;
	float ambientCoefficient;
} light;

varying vec2 v_surfaceUV;
varying vec3 v_surfacePosition;
varying vec3 v_surfaceNormal;
varying vec3 v_polyNorm;
varying vec3 v_polyTan;
varying vec3 v_polyBiTan;

void main()
{
	if(useLight && !material.isGlow)
	{
		// get the color and undo gamma correction
		vec4 surfaceColor = vec4(texture2D(tx0, v_surfaceUV));
		surfaceColor.rgb = pow(surfaceColor.rgb, vec3(2.2));

		// attenuation depending on the distance to the light
		float distanceToLight = length(light.position.xyz - v_surfacePosition);
		float attenuation = 1.0 / (1.0 + light.attenuationFactor * pow(distanceToLight, 2.0));

		// normal vector
		vec3 normal = normalize(normalMatrix * v_surfaceNormal);

		// direction from surface to light depending on the light type
		vec3 surfaceToLight;
		if(light.position.w == 0.0)		// directional light
			surfaceToLight = normalize(light.position.xyz);
		else							// point light
			surfaceToLight = normalize(light.position.xyz - v_surfacePosition);

		// direction from surface to camera
		vec3 surfaceToCamera = normalize(cameraPosition - v_surfacePosition);

		// adjust the values if material has normal map
		if(material.hasNormalmap)
		{
			vec3 surfaceTangent = normalize(normalMatrix * v_polyTan);
			vec3 surfaceBitangent = normalize(normalMatrix * -v_polyBiTan);
			vec3 surfaceNormal = normalize(normalMatrix * v_surfaceNormal);
			mat3 tbn = transpose(mat3(surfaceTangent, surfaceBitangent, surfaceNormal));
			normal = texture2D(tx1, v_surfaceUV).rgb;
			normal = normalize(normal * 2.0 - 1.0);
			surfaceToLight = tbn * surfaceToLight;
			surfaceToCamera = tbn * surfaceToCamera;
		}


	/////////////////////////////////////////////////////////////////////////////////////
	// ambient component

		vec3 ambient = light.ambientCoefficient * surfaceColor.rgb * light.intensities;


	/////////////////////////////////////////////////////////////////////////////////////
	// diffuse component

		float diffuseCoefficient = max(0.0, dot(normal, surfaceToLight));
		vec3 diffuse = diffuseCoefficient * surfaceColor.rgb * light.intensities;


	/////////////////////////////////////////////////////////////////////////////////////
	// specular component

		float specularCoefficient = 0.0;
		if(diffuseCoefficient > 0.0)
			specularCoefficient = pow(max(0.0, dot(surfaceToCamera, reflect(-surfaceToLight, normal))), material.shininess);
		
		float specularWeight = 1.0;
		if(material.hasSpecularmap)
			specularWeight = surfaceColor.a;
		vec3 specColor = specularWeight * material.specularColor;

		vec3 specular = specularCoefficient * specColor * light.intensities;

	/////////////////////////////////////////////////////////////////////////////////////
	// linear color before gamma correction
		vec3 linearColor = ambient + attenuation * (diffuse + specular);

	/////////////////////////////////////////////////////////////////////////////////////
	// gamma correction
		vec3 gamma = vec3(1.0/2.2);

		if(!material.isTransparent)
			surfaceColor.a = 1.0;
		
		gl_FragColor = vec4(pow(linearColor, gamma), surfaceColor.a);
	}
	// don't use light
	else
	{
		vec4 surfaceColor = vec4(texture2D(tx0, v_surfaceUV));

		if(!material.isTransparent)
			surfaceColor.a = 1.0;

		gl_FragColor = surfaceColor;
	}
}