Phong shading

Hi. I'm a newbie with OpenGL and GLSL. I'm writing a Phong shader, but it doesn't work. The light is supposed to be directional. Can you point out where I'm wrong, please? Thanks

(sorry for my bad english :slight_smile: )

vertex:

varying vec3 normal,lightDir; 
varying vec3 vertexPosition;

uniform vec3 ucLightDir;
uniform vec4 ucLightAmbient,ucLightDiffuse,ucLightSpecular;

void main() { 

normal = normalize(gl_NormalMatrix * gl_Normal);

vertexPosition = (gl_ModelViewProjectionMatrix * gl_Vertex).xyz;

lightDir=ucLightDir;

gl_TexCoord[0] = gl_MultiTexCoord0;

//transform vertices
gl_Position = ftransform(); 

} 

Fragment:

uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];
uniform gl_LightModelParameters gl_LightModel;

varying vec3 normal;
varying vec3 lightDir;
varying vec3 vertexPosition;

uniform vec4 ucLightAmbient,ucLightDiffuse,ucLightSpecular;
uniform sampler2D tex;

//fragment shader
void main()
{

vec4 ambient,diffuse,specular;

//normalizing
vec3 nNormal   = normalize (  normal  );
vec3 nLightDir = normalize ( lightDir );


//DIFFUSE TERM

float lambertTerm = dot(nNormal,nLightDir);
//clamp 0-1 range
lambertTerm = max(lambertTerm,0.0);

diffuse = gl_FrontMaterial.diffuse * ucLightDiffuse * lambertTerm;

//AMBIENT TERM

vec4 lightAmbient  = ucLightAmbient * gl_FrontMaterial.ambient;
vec4 globalAmbient = gl_LightModel.ambient * gl_FrontMaterial.ambient;
ambient = lightAmbient + globalAmbient;

//SPECULAR TERM

if(lambertTerm>0.0) {
	vec3 eye = normalize (vertexPosition);

	vec3 reflectionVec = reflect(-nLightDir, normal);
	reflectionVec = normalize(reflectionVec);
	vec3 halfVector = vec3(eye+nLightDir);

	float RdotE = max( dot (nNormal,halfVector) , 0.0);
	specular = gl_FrontMaterial.specular * ucLightSpecular * pow( RdotE , gl_FrontMaterial.shininess);

} //if(cosine>0.0) {

vec4 texel = texture2D(tex,gl_TexCoord[0].xy);

//set color
gl_FragColor = (diffuse + ambient + specular ) ;

}

What exactly does not work? (which term…diffuse, specular… )

You will probably need to post more info.
For example, it is not clear which way the lightDir vector points. If it points FROM the light source, the diffuse term is incorrect; the correct form would be:

float lambertTerm = dot(nNormal,-nLightDir);

The specular term is a little messy: you first compute the reflect vector, but then you use the half-vector (so it is in fact the Blinn-Phong shading model). The way you compute the half-vector is confusing too. It can be computed as eye+nLightDir, but both eye and nLightDir have to point FROM the vertex… and the result should be normalized.
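To see the sign issue concretely, here is a small CPU-side sketch in plain Python (hypothetical vectors, not part of the shader) of the Lambert term under both conventions for lightDir:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def negate(v):
    return tuple(-x for x in v)

# Surface normal points up; the sun shines straight down onto the surface.
n = normalize((0.0, 1.0, 0.0))
from_light = normalize((0.0, -1.0, 0.0))   # direction FROM the light source

# Using the from-light vector directly: the dot product is negative,
# the clamp forces it to 0, and the surface goes black.
wrong = max(dot(n, from_light), 0.0)

# Negating it so it points FROM the surface TOWARD the light
# gives the correct, fully lit term.
right = max(dot(n, negate(from_light)), 0.0)
```

The same flip is what `dot(nNormal, -nLightDir)` does in the shader.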

Hi. Sorry for the lack of info. The specular term is the only one that doesn't work. nNormal and nLightDir point FROM the fragment. I use Blinn-Phong because Phong doesn't work: I wanted to check whether the problem is the reflection vector or the eye vector, so I tried Blinn-Phong, but it doesn't work either. Maybe I simply forgot to normalize halfVector, and the problem with Phong is the reflection vector?

In fact, can you tell me if the eye vector is computed correctly?

Thanks for answering. Bye

Originally posted by tiger:
[b] Hi. Sorry for the lack of info. The specular term is the only one that doesn't work. nNormal and nLightDir point FROM the fragment. I use Blinn-Phong because Phong doesn't work: I wanted to check whether the problem is the reflection vector or the eye vector, so I tried Blinn-Phong, but it doesn't work either. Maybe I simply forgot to normalize halfVector, and the problem with Phong is the reflection vector?

In fact, can you tell me if the eye vector is computed correctly?

Thanks for answering. Bye [/b]
Thanks for the clarification.
First: your eye vector computation is incorrect. There are two problems. The first is that you use the vertexPosition varying passed from the vertex shader, which is fine in itself; however, you compute vertexPosition incorrectly. It should be computed as:
vertexPosition = (gl_ModelViewMatrix * gl_Vertex).xyz;

(You should use gl_ModelViewMatrix because you want to transform the vertex position into eye space, not into projected clip space, as gl_ModelViewProjectionMatrix does.)

The second problem is that you compute the eye vector as:
vec3 eye = normalize (vertexPosition);
but then you use it as if it points FROM the fragment. This can be fixed by computing it as:
vec3 eye = normalize (-vertexPosition);

Now if you want to compute Phong shading you can do:

vec3 reflectionVec = reflect(-nLightDir, nNormal);
reflectionVec = normalize(reflectionVec);

(the only change is to use the normalized normal vector for the reflect function)

and then:
float RdotE = max( dot (eye,reflectionVec) , 0.0);
etc…

for the blinn-phong case it is:
vec3 halfVector = normalize(eye+nLightDir);
etc…
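As a sanity check of the two corrected formulas, the vector math can be reproduced on the CPU. This is a plain-Python sketch with hypothetical vectors (a mirror-reflection setup, where both models should peak):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def reflect(i, n):
    # GLSL-style reflect: i - 2 * dot(n, i) * n, where i is the incident vector
    d = dot(n, i)
    return tuple(ix - 2.0 * d * nx for ix, nx in zip(i, n))

n = normalize((0.0, 1.0, 0.0))        # surface normal
l = normalize((1.0, 1.0, 0.0))        # from surface toward the light
eye = normalize((-1.0, 1.0, 0.0))     # from surface toward the camera
shininess = 32.0

# Phong: reflect the *incident* direction (-l) about the normal,
# then compare against the eye vector.
r = normalize(reflect(tuple(-x for x in l), n))
phong = max(dot(eye, r), 0.0) ** shininess

# Blinn-Phong: normalized half-vector between eye and light,
# compared against the normal.
h = normalize(tuple(e + li for e, li in zip(eye, l)))
blinn = max(dot(n, h), 0.0) ** shininess
```

Both terms evaluate to 1.0 here because the eye sits exactly in the mirror direction; tilting the eye vector makes the Phong term fall off faster than Blinn-Phong for the same shininess exponent.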

edit1: Just a quick note… because you are passing the lightDir vector as a uniform variable to the vertex shader, you have to transform it into eye-space coordinates manually (do it in the main application; it is just one matrix × vector multiplication). If you do not transform it, the light will always point from the same direction relative to the camera, no matter where the camera is in the scene.
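That application-side multiplication can be sketched as follows (plain Python, with a hypothetical camera rotation standing in for the modelview matrix). A direction uses only the rotation part, i.e. the upper-left 3×3 block, with no translation:

```python
import math

def mat3_mul_vec3(m, v):
    # m is a row-major 3x3 matrix (the upper-left block of the modelview
    # matrix); directions ignore the translation column entirely.
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical camera rotated 90 degrees about the Y axis.
a = math.radians(90.0)
modelview_rot = (
    ( math.cos(a), 0.0, math.sin(a)),
    ( 0.0,         1.0, 0.0        ),
    (-math.sin(a), 0.0, math.cos(a)),
)

world_light_dir = (0.0, 0.0, -1.0)   # sunlight along -Z in world space
eye_light_dir = mat3_mul_vec3(modelview_rot, world_light_dir)
# eye_light_dir is what would be uploaded as the ucLightDir uniform
```

After the rotation the sunlight arrives along -X in eye space, which is exactly why the uniform must be refreshed whenever the camera moves.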

First off, thank you a lot for the time you have spent with me. The Phong shader works now!! I will post a screenshot. Sorry, but I have a question (I'm a newbie :slight_smile: ).

Of course you're right, but I don't understand why I need to transform the light direction into eye coordinates. In the real world I am the camera and there is a cube lit by the sun. The sun lights the cube along its own direction. If I move, the light direction wouldn't change; if I rotate (camera rotation), the light direction from the sun to the cube wouldn't change. Can you tell me why I need that?

Thanks

Another question: for applying the texture, which is better, modulate or decal? In one example I got a better result with modulate, but in my FPS demo I get a better result with decal (though I manually set the fragment alpha to 1 and the texture alpha to 0.5). Why?

Originally posted by tiger:
[b]

Of course you're right, but I don't understand why I need to transform the light direction into eye coordinates. In the real world I am the camera and there is a cube lit by the sun. The sun lights the cube along its own direction. If I move, the light direction wouldn't change; if I rotate (camera rotation), the light direction from the sun to the cube wouldn't change. Can you tell me why I need that?

Thanks [/b]
Yes, the light direction would not change relative to the cube when you rotate the camera. However, you are transforming the cube into eye-space coordinates
(using gl_NormalMatrix * gl_Normal to transform the normals and gl_ModelViewMatrix to transform the vertices).

Eye space is defined by the position and orientation of the camera (that's why it is called eye space). The light direction does change relative to the camera when you rotate it, so you have to transform the light direction into eye-space coordinates as well: both the cube and the light direction have to be in the same coordinate system for the lighting model to be computed correctly.
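The requirement can be shown numerically: a dot product is only meaningful when both vectors live in the same space. A plain-Python sketch with hypothetical vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_y(v, angle):
    # Rotate a vector about the Y axis, standing in for a camera rotation.
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2])

normal = (0.0, 0.0, 1.0)   # surface facing the light head-on
light = (0.0, 0.0, 1.0)    # from surface toward the light

a = math.radians(45.0)

# Transform BOTH vectors into the rotated (eye) space: the Lambert
# term is unchanged, as it should be.
consistent = dot(rotate_y(normal, a), rotate_y(light, a))

# Transform only the normal (the original bug): the term silently drops.
inconsistent = dot(rotate_y(normal, a), light)
```

The mixed-space result shrinks by cos 45°, which is exactly the "light direction glued to the camera" symptom described above.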

Hope it is clear.

To your second question (about the textures)… I have to admit I'm not sure what exactly you mean. The decal and modulate modes exist only in the fixed-function pipeline; they determine how individual texels are combined with a given fragment. When you use fragment shaders you can do whatever you want with the texels, so those modes have no meaning there.

And IMHO neither mode is superior to the other; they are just different and should be used in different cases, so it really depends on what you want :slight_smile:

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.