# Thread: My 'view' direction in glsl

1. gl_Position isn't equal to anything in the vertex shader unless you set it equal to something.
gl_Vertex is the vertex passed in.

2. I am already setting it:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

Doesn't it give me the real world coordinates?

3. No, that is in screen space. You want it in world space, so it is just gl_Vertex (or worldMatrix * gl_Vertex, if you are transforming your vertices by matrices besides just view and projection).
OpenGL combines world and view matrices into one (modelview), so if you want to use stuff like glTranslatef on things besides just camera setup (like if you want to move stuff around in your scene), you need to pass in the world matrix separately as a uniform.

So it should look like this:

Code :
uniform mat4 worldMatrix;

varying vec3 worldPos;

void main()
{
worldPos = (worldMatrix * gl_Vertex).xyz;

gl_Position = ftransform(); // Same thing as gl_ModelViewProjectionMatrix * gl_Vertex

gl_FrontColor = gl_Color;
}

Alternatively, you can transform the camera position into view space (on the CPU) and then, instead of passing in the world matrix, use the worldView (aka modelView) matrix that OpenGL already supplies to calculate the view-space position of the vertex, and use that in your calculations instead.
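As a sketch of that view-space alternative (assuming the stock fixed-function camera, so the built-in modelview matrix is all you need):

```glsl
// Sketch of the view-space alternative described above.
// The modelview matrix takes gl_Vertex straight to view-space,
// where the camera sits at the origin, so no extra uniform is
// needed for the camera position at all.

varying vec3 viewPos;

void main()
{
    viewPos = (gl_ModelViewMatrix * gl_Vertex).xyz; // view-space vertex position

    gl_Position = ftransform();

    gl_FrontColor = gl_Color;
}
```

In the fragment shader, the unit vector from the fragment toward the camera is then simply normalize(-viewPos).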

4. It seems I lack too much information about OpenGL. Since every example and tutorial is REALLY old and out of date with current techniques, I can't seem to find correct lessons for OpenGL.

5. Originally Posted by cireneikual
No, that is in screen space.
No, it's in [-w, w] for each of x, y, and z: the so-called clip-space. When it's divided by w, the coordinates are transformed to normalized device coordinates, which are in [-1, 1] for each of x, y, and z. You go to screen coordinates during viewport mapping, from [-1, 1] in x and y to x in [0, 1, .., width - 1] and y in [0, 1, .., height - 1].
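To make those stages concrete, here is the sequence written out as a hypothetical GLSL sketch plus comments; note that the divide and the viewport mapping are performed by fixed-function hardware after the vertex shader, not by your shader code:

```glsl
// Illustration of the coordinate spaces described above.
void main()
{
    // Clip-space: visible points satisfy -w <= x, y, z <= w.
    vec4 clipPos = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = clipPos;

    // The GPU then performs the perspective divide:
    //   ndc = clipPos.xyz / clipPos.w;          // each component in [-1, 1]
    // followed by the viewport mapping (for a viewport at (0, 0)):
    //   windowX = (ndc.x * 0.5 + 0.5) * width;
    //   windowY = (ndc.y * 0.5 + 0.5) * height;
    //   depth   = ndc.z * 0.5 + 0.5;            // with the default depth range
}
```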

Originally Posted by cireneikual
You want it in world space, so it is just gl_Vertex (or worldMatrix * gl_Vertex, if you are transforming your vertices by matrices besides just view and projection).
I'd like to elaborate a bit on this one. If you define your model's coordinates in world-space that's ok, but in many cases a model is defined in a local or object-space. Even if object-space happens to coincide with world-space, they are conceptually different spaces. It's not called model-view-matrix for no reason. The model-matrix transforms vertices from object-space to world-space, and the view-matrix transforms the resulting world-space coordinates to eye-space. A good example where a single model is not defined in world-space is when you do instancing: you have a single model which is defined in object-space, and for n instances you'll have n model-matrices which transform the coordinates into n (generally mostly disjoint) sets of world-space coordinates.
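A hedged sketch of that instancing case (all uniform and attribute names here are made up for illustration, and gl_InstanceID requires GLSL 1.40 or the ARB_draw_instanced extension):

```glsl
#version 140

// Hypothetical instancing sketch: one mesh defined in object-space,
// one model matrix per instance.
uniform mat4 modelMatrices[64]; // n model matrices, one per instance
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

in vec4 position; // object-space vertex

void main()
{
    // object-space -> world-space: a different matrix per instance
    vec4 worldPos = modelMatrices[gl_InstanceID] * position;

    // world-space -> eye-space -> clip-space
    gl_Position = projectionMatrix * viewMatrix * worldPos;
}
```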

Originally Posted by artariel
it seems i lack too much information about opengl.. since every example and tutorial is REALLY old and out of current techniques, i can't seem to find a correct lessons for opengl..
What you're doing there is legacy OpenGL, and you shouldn't trust any tutorial telling you that this is a good thing; the NeHe tutorials especially are a source of confusion. You should go with something like http://www.arcsynthesis.org/gltut/ instead.

6. Originally Posted by artariel
What I can't understand is why gl_Position doesn't work correctly, or at least not as I wanted it to work.
You haven't given us any details to help you out with that.

I am passing the camera position to GLSL and subtracting it from gl_Position, then normalizing and taking the dot product with the terrain normals. I also tried it the other way around, but everywhere looks black.

However, I'm a little puzzled by your question, because following the GL convention, the "camera" point (aka "eye" point) is at the origin in EYE-SPACE (0,0,0). Lighting is typically done in EYE-SPACE. So you transform your vertex position to EYE-SPACE, subtract it from the eyepoint (0,0,0) -- i.e. negate it -- normalize, and there you have a unit vector from the vertex (or fragment) toward the eyepoint, ready to use for stock Phong specular.
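The fragment-shader side of that recipe can be sketched like this (assuming the vertex shader wrote the eye-space position into a varying called eyePos, as in post 3's world-space example but with gl_ModelViewMatrix instead):

```glsl
// Fragment-shader side of the recipe above: the camera is at the
// origin in eye-space, so (0,0,0) - eyePos is just -eyePos.

varying vec3 eyePos; // eye-space position, interpolated from the vertex shader

void main()
{
    vec3 toEye = normalize(-eyePos); // unit vector toward the eyepoint

    // ... toEye is now ready for the stock Phong specular term, e.g.
    //   pow(max(dot(reflect(-lightDir, n), toEye), 0.0), shininess)
    gl_FragColor = vec4(1.0); // placeholder output
}
```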

And now I think I understand your vague gl_Position reference. No, this is not the EYE-SPACE vertex position. This is what you write the CLIP-SPACE vertex position to in the vertex shader. Check it out in the GLSL Spec.

7. Now, with this code, everything is black:

[vert]

#version 120

uniform float tilingFactor;
uniform vec3 campos;

varying vec4 normal;
varying vec3 poi;

void main()
{
normal.xyz = normalize(gl_NormalMatrix * gl_Normal);

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
poi = normalize(gl_Position.xyz - campos);
gl_TexCoord[0] = gl_MultiTexCoord0 * tilingFactor;
}

[frag]

#version 120

varying vec4 normal;
varying vec3 poi;

void main()
{
vec3 n = normalize(normal.xyz);

float nDotL = max(-0.01, dot(n, normalize( poi )));

vec4 ambient = gl_FrontLightProduct[0].ambient;
vec4 diffuse = gl_FrontLightProduct[0].diffuse * nDotL;
vec4 color = gl_FrontLightModelProduct.sceneColor + ambient + diffuse;

if (nDotL < 0.10)
gl_FragColor = vec4(0.0,0.0,0.0,1.0);
else if (nDotL < 0.20)
gl_FragColor = color * vec4(0.0,0.0,0.0,1.0) * vec4(1.0,0.0,0.0,1.0);
else if (nDotL < 0.70)
gl_FragColor = color * vec4(0.15,0.15,0.15,1.0) * vec4(1.0,0.0,0.0,1.0);
else if (nDotL < 0.90)
gl_FragColor = color * vec4(0.35,0.35,0.35,1.0) * vec4(1.0,0.0,0.0,1.0);
else if (nDotL < 0.98)
gl_FragColor = color * vec4(0.57,0.57,0.57,1.0) * vec4(1.0,0.0,0.0,1.0);
else if (nDotL < 0.99)
gl_FragColor = color * vec4(0.78,0.78,0.78,1.0) * vec4(1.0,0.0,0.0,1.0);
else if (nDotL < 1)
gl_FragColor = color * vec4(0.9,0.9,0.9,1.0) * vec4(1.0,0.0,0.0,1.0);
else
gl_FragColor = color * vec4(1.0,1.0,1.0,1.0) ;

}

8. First of all, doing the normalization in the vertex shader

Code :
normal.xyz = normalize(gl_NormalMatrix * gl_Normal);

is not a good idea if you don't actually need the normalized normal in the vertex shader. You can simply interpolate the unnormalized normal (and I'm not suggesting you should use gl_NormalMatrix and gl_Normal):

Code :
normal.xyz = gl_NormalMatrix * gl_Normal;

You're wasting one normalization per vertex. If you think about what normalization does, you'll realize there is a square root involved, which is too expensive to simply throw around -- and since interpolation denormalizes the vectors anyway, you have to renormalize in the fragment shader regardless, which your fragment shader already does.

What you do with your campos is mathematically nonsense, albeit possible, since you're subtracting vectors from two completely different spaces: you take gl_Position, which is in clip-space, and subtract a camera position which is (I assume) in world-space.
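One way to make the spaces consistent (a sketch only, assuming campos is in world-space and the application supplies a world matrix as described in post 3):

```glsl
// Sketch: compare like with like by keeping everything in world-space.
uniform mat4 worldMatrix; // assumed to be supplied by the application
uniform vec3 campos;      // camera position, in world-space

varying vec3 poi;

void main()
{
    vec3 worldPos = (worldMatrix * gl_Vertex).xyz;

    // World-space vector from the camera to the vertex; normalize it
    // per-fragment rather than here, since interpolation denormalizes
    // it anyway.
    poi = worldPos - campos;

    gl_Position = ftransform();
}
```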

What exactly do you want to do? Your fragment shader suggests some Russian-roulette-style banded shading, but I don't understand your poi thing.

BTW:

Code :
float nDotL =  max(-0.01, dot(n, normalize( poi )));

Why is one operand -0.01 and not 0? Why do you need such a threshold?
