
View Full Version : updating normal after transforming vertex



j_derius
03-23-2016, 12:02 PM
Hi, I'm new to GLSL shader programming.
My goal is to modify a mesh in real time by transforming the vertices inside the vertex shader. I've managed to do that correctly, but the program renders the geometry as if its normals were still the same as in the original mesh.
Is there a way to update the normals according to the vertex transform?
I'm using RenderMonkey; I started by modifying the base Phong texture scene, so maybe you can notice other errors in the code.

thanks

vertex shader:

uniform vec3 fvLightPosition;
uniform vec3 fvEyePosition;
varying vec2 Texcoord;
varying vec3 ViewDirection;
varying vec3 LightDirection;
varying vec3 Normal;

void main( void )
{
    vec4 Pos = gl_Vertex;
    // omitted: vertex-modifying code (Pos.x = bla bla; Pos.y = bla bla; Pos.z = bla bla;)

    gl_Position = gl_ModelViewProjectionMatrix * Pos;
    Texcoord = gl_MultiTexCoord0.xy;

    vec4 fvObjectPosition = gl_ModelViewMatrix * Pos;
    ViewDirection = fvEyePosition - fvObjectPosition.xyz;
    LightDirection = fvLightPosition - fvObjectPosition.xyz;
    Normal = gl_NormalMatrix * gl_Normal;
}


fragment shader:

uniform vec4 fvAmbient;
uniform vec4 fvSpecular;
uniform vec4 fvDiffuse;
uniform float fSpecularPower;

uniform sampler2D baseMap;

varying vec2 Texcoord;
varying vec3 ViewDirection;
varying vec3 LightDirection;
varying vec3 Normal;

void main( void )
{
    vec3 fvLightDirection = normalize( LightDirection );
    vec3 fvNormal = normalize( Normal );
    // clamped to zero so light behind the surface can't produce negative diffuse
    float fNDotL = max( 0.0, dot( fvNormal, fvLightDirection ) );

    vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
    vec3 fvViewDirection = normalize( ViewDirection );
    float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );

    vec4 fvBaseColor = texture2D( baseMap, Texcoord );

    vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
    vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
    vec4 fvTotalSpecular = fvSpecular * pow( fRDotV, fSpecularPower );

    gl_FragColor = fvTotalAmbient + fvTotalDiffuse + fvTotalSpecular;
}

GClements
03-23-2016, 01:38 PM
My goal is to modify a mesh in real time by transforming the vertices inside the vertex shader. I've managed to do that correctly, but the program renders the geometry as if its normals were still the same as in the original mesh.
Is there a way to update the normals according to the vertex transform?

What is the nature of the transformation? If it's linear, you can transform the normals by the inverse-transpose of the matrix used to transform the vertex.

Otherwise, if for a given vertex you can calculate the result of transforming two nearby points on the surface, then the differences provide an approximation to the tangents at the vertex, and the cross product of the tangents gives the normal.

For some combinations of surface and transformation, you may be able to derive a direct closed-form solution. E.g. for a surface defined by an implicit equation F(x,y,z)=0, the normal at any point is ∇F(x,y,z). In other cases, you may be able to derive the Jacobian matrix for the transformation at the vertex, and the inverse-transpose of that can be used to transform the normal.
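The tangent-difference idea can be sketched in the vertex shader. A minimal sketch, assuming the vertex modification is wrapped in a function: displace() below is a hypothetical stand-in for the omitted "Pos.x = bla bla" code (a simple ripple here), and du/dv are two directions along the original surface:

```glsl
// Hypothetical stand-in for the shader's real vertex modification.
vec3 displace( vec3 p )
{
    p.z += 0.1 * sin( 4.0 * p.x ) * cos( 4.0 * p.y );
    return p;
}

// Approximate the post-transform normal at p: displace two nearby
// points, take the differences as tangents, and cross them.
vec3 approximateNormal( vec3 p, vec3 du, vec3 dv )
{
    const float eps = 0.01;               // step size; tune to the mesh scale
    vec3 p0 = displace( p );
    vec3 pu = displace( p + eps * du );   // nearby point in one surface direction
    vec3 pv = displace( p + eps * dv );   // nearby point in the other
    return normalize( cross( pu - p0, pv - p0 ) );
}
```

The result is an object-space normal, so it still needs the usual `Normal = gl_NormalMatrix * n;` before it is used in the fragment shader.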

j_derius
03-23-2016, 03:29 PM
What is the nature of the transformation? If it's linear, you can transform the normals by the inverse-transpose of the matrix used to transform the vertex.

Otherwise, if for a given vertex you can calculate the result of transforming two nearby points on the surface, then the differences provide an approximation to the tangents at the vertex, and the cross product of the tangents gives the normal.

For some combinations of surface and transformation, you may be able to derive a direct closed-form solution. E.g. for a surface defined by an implicit equation F(x,y,z)=0, then the normal at any point is ∇F(x,y,z). In other cases, you may be able to derive the Jacobian matrix for the transformation at the vertex, and the inverse-transpose of that can be used to transform the normal.

thanks a lot GClements, very helpful, although since I'm not a math expert I understood about 10% of what you wrote. I didn't even suspect the existence of the ∇ symbol, and I don't know what an implicit equation is. I did some research, but my poor math knowledge prevents me from having a clear vision of the problem.
In a few words, the surface is defined by an algorithmic function similar to Perlin noise whose arguments are the x and y vertex coordinates. So, since I know the initial positions of all the vertices, I guess I can virtually calculate the positions of their neighbours. But honestly, I'm completely lost about what comes after that.
For example, how can I obtain the normal in the way you suggested? Which of the nearby vertices should I process?

Thanks again

GClements
03-23-2016, 08:04 PM
In a few words, the surface is defined by an algorithmic function similar to Perlin noise whose arguments are the x and y vertex coordinates. So, since I know the initial positions of all the vertices, I guess I can virtually calculate the positions of their neighbours. But honestly, I'm completely lost about what comes after that.
For example, how can I obtain the normal in the way you suggested? Which of the nearby vertices should I process?

You can use any three vertices which aren't co-linear (in a straight line). cross(v1-v0,v2-v0) is the normal to the triangle formed by v0,v1,v2, so if v1 and v2 are close to v0, the normal to the triangle is a reasonable approximation to the normal to the surface at v0.

A common technique for generating normals for a triangle mesh which approximates a smooth surface is to calculate the normal to each triangle and then, for each vertex, average the normals of the triangles containing that vertex. But this is normally done as a pre-processing step; it's awkward to implement using shaders.

If the vertices are on a rectangular grid, then a common way to calculate the normal at v[i,j] is to use v[i+1,j]-v[i-1,j] and v[i,j+1]-v[i,j-1] as tangents (and their cross product as the normal). Using the vertices on either side (and ignoring the vertex itself) provides an unbiased approximation, whereas choosing points to one side tends to introduce a bias.
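For the rectangular-grid case this reduces to a few lines. A sketch, assuming the mesh is a heightfield z = height(x, y) with z "up", where height() is a hypothetical stand-in for the Perlin-like function mentioned earlier:

```glsl
// Hypothetical stand-in for the Perlin-noise-like height function.
float height( vec2 xy )
{
    return 0.1 * sin( 4.0 * xy.x ) * cos( 4.0 * xy.y );
}

// Central-difference normal at grid point xy with grid spacing "step":
// sample on either side of the vertex, ignoring the vertex itself.
vec3 gridNormal( vec2 xy, float step )
{
    float hl = height( xy - vec2( step, 0.0 ) );   // left
    float hr = height( xy + vec2( step, 0.0 ) );   // right
    float hd = height( xy - vec2( 0.0, step ) );   // down
    float hu = height( xy + vec2( 0.0, step ) );   // up
    // Tangents along the two grid directions, then their cross product.
    vec3 tx = vec3( 2.0 * step, 0.0, hr - hl );
    vec3 ty = vec3( 0.0, 2.0 * step, hu - hd );
    return normalize( cross( tx, ty ) );
}
```

Here tx and ty correspond to v[i+1,j]-v[i-1,j] and v[i,j+1]-v[i,j-1] from the post above.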

j_derius
03-24-2016, 06:00 AM
You can use any three vertices which aren't co-linear (in a straight line). cross(v1-v0,v2-v0) is the normal to the triangle formed by v0,v1,v2, so if v1 and v2 are close to v0, the normal to the triangle is a reasonable approximation to the normal to the surface at v0.

A common technique for generating normals for a triangle mesh which approximates a smooth surface is to calculate the normal to each triangle then, for each vertex average the normals of the triangles containing that vertex. But this is normally done as a pre-process step; it's awkward to implement using shaders.

If the vertices are on a rectangular grid, then a common way to calculate the normal at v[i,j] is to use v[i+1,j]-v[i-1,j] and v[i,j+1]-v[i,j-1] as tangents (and their cross product as the normal). Using the vertices on either side (and ignoring the vertex itself) provides an unbiased approximation, whereas choosing points to one side tends to introduce a bias.

ok, thanks, I think I can do it now. Indeed, it's a rectangular grid mesh.

I've also found a tutorial suggesting what I think is another way, although the forum engine doesn't allow me to paste the URL.
If someone wants to read it, just google "Terrain Tutorial Computing Normals"; it's the first result right now.


thanks again

Giacomo