Suitable shader for 3D face models

Dear all,

Does anyone have any good suggestions for which specific (type of) shader to use when visualizing 3D human face topography models?
The standard Phong lighting doesn’t yield enough relief/detailed lighting to bring out the object’s shape…
(Or maybe I’m using the wrong light position/direction.)

Any help will be greatly appreciated,

Sam

I guess this would be an interesting read:
http://graphics.ucsd.edu/papers/egsr2006skin/egsr2006skin.pdf
and also a presentation:

Topography? Do you only have a geometry mesh for the face? Or some shading information too (albedo, normal map, gloss map, etc.).

If you’re contemplating realistic skin rendering, I highly recommend looking into this:

First, sit back and be amazed at his results (high res version):

That’s amazing!

Only a geometry mesh for now. I’m using a very basic Phong lighting shader, but I just can’t get it to illuminate my face mesh right… It looks as though it either fully lights my mesh or doesn’t light it at all… Sometimes some mild relief/detail is visible, but most of the time it’s just a single color.

Is there some technical paper or online knowhow on how to set the parameters for Phong lighting? Where to put the light source, object, direction of light source, etc.?

So in short, I don’t know if Phong will do, and I’m willing to try other shaders. The ones above look a bit ‘too professional’ for my application, I’m afraid.

Please post a picture so we can at least verify that what you’re seeing looks reasonable.

Make sure you’re specifying reasonable vertex normals, make sure they are unit vectors, ensure they’re being interpolated across the triangles properly (renormalizing in the frag shader), and incorporated properly in shading calculations. It’s the normals that really make all the difference.
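A quick CPU-side sanity check along those lines can catch bad normals before you blame the shader. This is just a minimal sketch; the helper name and the flat `(nx, ny, nz)` array layout are assumptions for illustration, not anything from your code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Check that every normal in a flat array of (nx, ny, nz) triples is
// (approximately) unit length. Returns false on the first bad normal.
bool normalsAreUnit(const std::vector<float>& n, float eps = 1e-4f) {
    for (std::size_t i = 0; i + 2 < n.size(); i += 3) {
        float len2 = n[i]*n[i] + n[i+1]*n[i+1] + n[i+2]*n[i+2];
        if (std::fabs(len2 - 1.0f) > eps)
            return false;
    }
    return true;
}
```

Run it on your normal array right before uploading to the VBO; if it trips, normalize on the CPU (or at least in the vertex shader).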

Of course, please find below two still shots from my 3D measurements and the vertex and fragment shaders that I’m currently using. Playing with the light source position and direction alters the illumination on the object, but never gives me enough “shadowy detail” to visually observe the depth variations that are present…
I never thought much about the normals; I assumed I could get them from the N = normalize( gl_NormalMatrix*gl_Normal ); line?

[Attachments: two screenshots of the rendered face mesh]

Vertex shader:

//phong.vert

varying vec3 N;
varying vec3 v;

void main(void)
{
    float height = gl_MultiTexCoord0.x;

    // gl_Vertex is a read-only attribute, so copy it before scaling.
    vec4 pos = gl_Vertex;

    if (height == 0.0) {
        // Degenerate position: effectively discards this vertex.
        gl_Position = vec4(0.0, 0.0, 1.0, 0.0);
    } else {
        pos.x = pos.x / 10.0;
        pos.y = 60.25 * height / 10.0; // scaling
        pos.z = pos.z / 10.0;

        // Transform to homogeneous clip space.
        gl_Position = gl_ModelViewProjectionMatrix * pos;
    }

    // Eye-space position and normal for lighting in the fragment shader.
    v = vec3(gl_ModelViewMatrix * pos);
    N = normalize(gl_NormalMatrix * gl_Normal);
}

Fragment shader:

//phong.frag

varying vec3 N;
varying vec3 v;

uniform vec4 ambientUni;
uniform vec4 diffuseUni;
uniform vec4 specularUni;
uniform float shininessUni;

void main(void)
{
    // Renormalize: N was interpolated across the triangle.
    vec3 n = normalize(N);
    vec3 L = normalize(vec3(gl_LightSource[0].position) - v);
    vec3 E = normalize(-v); // we are in eye coordinates, so the eye is at (0,0,0)
    vec3 R = normalize(-reflect(L, n));

    // calculate Ambient Term:
    vec4 Iamb = gl_FrontLightProduct[0].ambient;

    // calculate Diffuse Term:
    vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(n, L), 0.0);
    Idiff = clamp(Idiff, 0.0, 1.0);

    // calculate Specular Term:
    vec4 Ispec = gl_FrontLightProduct[0].specular
               * pow(max(dot(R, E), 0.0), 0.3 * gl_FrontMaterial.shininess);
    Ispec = clamp(Ispec, 0.0, 1.0);

    // write Total Color:
    gl_FragColor = Iamb + Idiff + 0.7 * Ispec;
}

Sorry for bumping, but I might have an idea on why this is not working and wanted to pass it by more experienced GLSL programmers:

As I’m only defining the object’s height IN the vertex shader with:

gl_Vertex.y = 60.25 * height / 10;

, could it be that I can no longer use the predefined gl_Normal values, since they would represent the normals of my flat mesh grid going into the shader? Or is this not how it works, and are the gl_Normals recalculated when I change one of the vertices, even in the shader?
If I’m right, is there a way to calculate the normals (efficiently) within the vertex shader, or is this typically done in the main program and then passed on to the shader through uniforms?

Thanks again,

Sam

Unless you do it, there’s nothing that’s going to change your normals.

If I’m right, is there a way to calculate the normals (efficiently) within the vertex shader, or is this typically done in the main program and then passed on to the shader through uniforms?

Keep in mind that normals are often computed from a higher-resolution mesh than the one you’re actually rendering. If you can get them, you want the normals of the true surface, not of the low-res mesh that approximates it. These are typically passed in via vertex attributes (and/or normal maps) rather than uniforms, because you want them at per-vertex or finer frequency.
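For reference, the usual CPU-side recipe is to sum face normals into each vertex and then normalize. A minimal sketch, assuming tightly packed (x, y, z) position floats and triangle index triples (the function name and layout are illustrative, not from this thread):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Build smooth per-vertex normals for an indexed triangle mesh by
// accumulating face normals and normalizing at the end.
std::vector<float> computeVertexNormals(const std::vector<float>& pos,
                                        const std::vector<unsigned>& idx) {
    std::vector<float> nrm(pos.size(), 0.0f);
    for (std::size_t t = 0; t + 2 < idx.size(); t += 3) {
        const unsigned a = idx[t], b = idx[t + 1], c = idx[t + 2];
        // Two edge vectors of the triangle, both rooted at vertex a.
        float e1[3], e2[3];
        for (int k = 0; k < 3; ++k) {
            e1[k] = pos[3*b + k] - pos[3*a + k];
            e2[k] = pos[3*c + k] - pos[3*a + k];
        }
        // Face normal = e1 x e2; its length is proportional to the
        // triangle area, which conveniently area-weights the average.
        const float fn[3] = { e1[1]*e2[2] - e1[2]*e2[1],
                              e1[2]*e2[0] - e1[0]*e2[2],
                              e1[0]*e2[1] - e1[1]*e2[0] };
        for (unsigned vtx : {a, b, c})
            for (int k = 0; k < 3; ++k)
                nrm[3*vtx + k] += fn[k];
    }
    // Normalize each accumulated normal.
    for (std::size_t i = 0; i + 2 < nrm.size(); i += 3) {
        const float len = std::sqrt(nrm[i]*nrm[i] + nrm[i+1]*nrm[i+1]
                                    + nrm[i+2]*nrm[i+2]);
        if (len > 0.0f)
            for (int k = 0; k < 3; ++k)
                nrm[i + k] /= len;
    }
    return nrm;
}
```

The result lines up one normal per vertex, ready to upload alongside the positions.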

I do note that you aren’t renormalizing the normals in your fragment shader (they are interpolated across your triangles), which you probably should be. But that’s not going to totally explain your problems here.

You’ve got a lot going on with your shading so it’s not easy to see if there is even a problem. Try rendering your vertex normals. That’ll help you verify that they are what you think they are.

Thanks for these insights, as you suggested I tried

gl_FragColor = vec4(normalize(N),1.0);

in the fragment shader to have a look at the normals. The output is a uniformly colored object, the color only changes in its entirety if I rotate the view.

This means that the normals are all the same I guess?

When exactly are the gl_Normal’s calculated? Once right before the vertex stage? In that case I’ll have to calculate them myself in my main program…

Normals are typically fed in via a vertex attribute. gl_Normal in the vertex shader is “explicitly” that attribute (in GLSL 1.20 and earlier).

Just like you’re now calling glVertex* on the C/C++ side, you need to be calling glNormal*.

[QUOTE=Dark Photon;1257741]
Just like you’re now calling glVertex* on the C/C++ side, you need to be calling glNormal*.[/QUOTE]

I’m not calling glVertex on the C/C++ side; I’m using 2 VBOs (one for the mesh coordinate locations and one for the height values) and 1 IBO (holding the indices in the correct order). I’m not building the primitives myself on the C/C++ side; letting the shader do this for me would improve interoperability between CUDA and OpenGL, as I have read. This is the reason for the detour.

Therefore, just like the vertex shader ‘knows’ to extract the gl_Vertex from the VBO, I thought it would also ‘know’ how to extract the gl_Normal from that same buffer…

I was wrong about that, so thanks for pointing that out. By the way, isn’t it weird that I’m seeing some shading in the results, without calculating any normals at all?!

Moving on, I guess I need to calculate the normals myself on the C/C++ side and transfer them to the vertex shader through an additional VBO?
Do the glNormalPointers work in cooperation with CUDA?

Sure thing. Just to clarify though, this has nothing to do with the shader. You, or some code you are calling, is making a glVertex* call (possibly glVertexPointer) to set the details about where and how to extract position data for the VERTEX (position) attribute. Similarly, you, or some code you are calling, needs to make a glNormal* call (e.g. glNormalPointer) to set where and how to extract the NORMAL vertex attribute. Don’t forget to enable/disable the required vertex arrays with glEnableClientState/glDisableClientState.

Down the road, when you kick the fixed-function pipeline to the curb (including the legacy vertex attributes), you’d use glVertexAttribPointer() to set “all” vertex attributes and glEnableVertexAttribArray()/glDisableVertexAttribArray() to enable the required ones. This also gives you more flexibility in what data you can store in each attribute slot.

By the way, isn’t it weird that I’m seeing some shading in the results, without calculating any normals at all?!

Not too weird. What’s happening is that there is some constant value in the register for gl_Normal (possibly the last value set through glNormal3f() or similar, or the default (0, 0, 1) that the current normal has when the OpenGL context is created), and that constant value is being stuffed into the gl_Normal vertex attribute for all vertices.

Moving on, I guess I need to calculate the normals myself on the C/C++ side and transfer them to the vertex shader through an additional VBO?

You could use an additional VBO, but it’s probably more efficient to encode them in the same VBO in interleaved fashion. For instance, if V is a vertex position and N is a vertex normal: V0N0V1N1V2N2V3N3… Read up on the stride parameter to glVertexPointer, glNormalPointer, and their gl*Pointer friends.
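A rough sketch of what such an interleaved layout could look like. The struct is illustrative; the gl*Pointer calls are shown as comments because they need a live GL context, so only the layout arithmetic is verifiable here:

```cpp
#include <cstddef>  // offsetof

// One interleaved record per vertex: position and normal side by side
// in the same VBO. The stride passed to the gl*Pointer calls is
// sizeof(Vertex); each attribute's byte offset comes from offsetof.
struct Vertex {
    float position[3];  // x, y, z
    float normal[3];    // nx, ny, nz
};

// With the VBO bound, the client-state setup would look roughly like:
//   glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
//                   (const void*)offsetof(Vertex, position));
//   glNormalPointer(GL_FLOAT, sizeof(Vertex),
//                   (const void*)offsetof(Vertex, normal));
//   glEnableClientState(GL_VERTEX_ARRAY);
//   glEnableClientState(GL_NORMAL_ARRAY);
```

Interleaving keeps each vertex’s data together in memory, which tends to be friendlier to the GPU’s vertex fetch than two separate tightly packed arrays.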

Do the glNormalPointers work in cooperation with CUDA?

CUDA doesn’t deal with vertex attributes (AFAIK). However, you can use buffer objects or textures to exchange data between OpenGL and CUDA. With this you could have CUDA generate vertex attributes in a buffer object, and then use OpenGL for rendering with them. Or you could generate vertex attributes or other data with OpenGL and then do some processing on that data with CUDA.

Thanks for those great answers!

Back to the programming board now.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.