My 'view' direction in glsl

How can I calculate which direction my entire screen is looking at, as a 3D vector? I mean, what is my camera’s direction in GLSL? I’m trying to write an outline shader, so I need to dot product the terrain normals with my camera’s view direction.

If your camera uses Euler angles, try this:


Vec3f RotationToVector(float xRotRads, float yRotRads)
{
    Vec3f dir;
    float cosY = cosf(yRotRads);

    dir.x = sinf(xRotRads) * cosY;
    dir.y = -sinf(yRotRads);
    dir.z = cosf(xRotRads) * cosY;

    return dir;
}

It may be pointing in the opposite direction. If that happens, just flip it.

thanks, but isn’t there any way to calculate it in the vertex shader instead of passing variables?

Yes, you can just transform the point (1.0, 0.0, 0.0) by the modelview matrix, but why would you want to do that?

You would needlessly recalculate it for every vertex! That would be expensive!

umm, then passing variables - I mean computing it on the CPU instead of the GPU - is the best way for now?

Yes, such precomputed things should always be passed as uniforms.

I think I must find another way to do an outline effect… I had found another way, but I couldn’t make the stencil buffer work with VBOs…

You can extract the view direction from the modelview matrix
(no need to calculate anything in the shader, just take the appropriate column), look here:

Why don’t you simply maintain a forward vector with your camera implementation? You can use the basis vectors of the cam coordinate system for a ton of things. Since the basis only needs to be recomputed once per frame (in most cases, at least), you have a marginal performance impact. Furthermore, you can simply pass the camera position to a shader directly with glUniform4fv().

What I can’t understand is that gl_Position doesn’t work correctly, or at least not as I wanted it to. I am passing the camera position to GLSL and subtracting it from gl_Position, normalizing, and doing a dot product with the terrain normals; I also tried it backwards, but everywhere looks black.

gl_Position isn’t equal to anything in the vertex shader unless you set it equal to something.
gl_Vertex is the vertex passed in.

I am already setting it:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

doesn’t it give me the real world coordinates?

No, that is in screen space. You want it in world space, so it is just gl_Vertex (or worldMatrix * gl_Vertex, if you are transforming your vertices by matrices besides just view and projection).
OpenGL combines the world and view matrices into one (modelview), so if you want to use stuff like glTranslatef on things besides just camera setup (like if you want to move stuff around in your scene), you need to pass in the world matrix separately as a uniform.

So it should look like this:


uniform mat4 worldMatrix;

varying vec3 worldPos;

void main()
{
    worldPos = (worldMatrix * gl_Vertex).xyz;

    gl_Position = ftransform(); // Same thing as gl_ModelViewProjectionMatrix * gl_Vertex

    gl_FrontColor = gl_Color;
}

Alternatively, you can just transform the camera position into view space (on the CPU), and then instead of passing in the world matrix, use the worldView (aka modelView) matrix that OpenGL already supplies to calculate the view-space position of the vertex, and use that in your calculations instead.

it seems I lack too much information about OpenGL… since every example and tutorial is REALLY old and behind current techniques, I can’t seem to find correct lessons for OpenGL…

No, it’s in [-w, w] for each of x, y and z - the so-called clip-space. When it’s divided by w, the coordinates are transformed to normalized device coordinates, which are in [-1, 1] for each of x, y and z. You get to screen coordinates during viewport mapping, from [-1, 1] in x and y to x in [0, 1, …, width - 1] and y in [0, 1, …, height - 1] (and z to the depth range).

I’d like to elaborate a bit on this one. If you define your model’s coordinates in world-space, that’s ok, but in many cases a model is defined in a local or object-space. Even if object-space happens to be identical to world-space, they are conceptually different spaces. It’s not called the model-view-matrix for no reason. The model-matrix transforms vertices from object-space to world-space, and the view-matrix transforms the resulting world-space coordinates to eye-space. A good example where a single model is not defined in world-space is instancing: you have a single model which is defined in object-space, and for n instances you’ll have n model-matrices which transform the coordinates into n (generally mostly disjoint) sets of world-space coordinates.

What you do there is legacy OpenGL and you shouldn’t trust any tutorial telling you that doing this is a good thing - especially the NeHe tutorials are a source of confusion. You should go with something like http://www.arcsynthesis.org/gltut/ .

You haven’t given us any details to help you out with that.

I am passing the camera position to GLSL and subtracting it from gl_Position, normalizing, and doing a dot product with the terrain normals; I also tried it backwards, but everywhere looks black.

You got a bug in your code somewhere then. Post some and we’ll help you out.

However, I’m a little puzzled by your question, because following the GL convention, the “camera” point (aka “eye” point) is at the origin in EYE-SPACE (0,0,0). Lighting is typically done in EYE-SPACE. So you transform your vertex position to EYE-SPACE, subtract it from the eyepoint (0,0,0) – i.e. negate it – normalize, and there you have a unit vector from the vertex (or fragment) toward the eyepoint, ready and raring to use for stock Phong specular.

And now I think I understand your vague gl_Position reference. No, this is not the EYE-SPACE vertex position. This is what you write the CLIP-SPACE vertex position to in the vertex shader. Check it out in the GLSL Spec.

Now, in this code, everything is black:

[vert]

#version 120

uniform float tilingFactor;
uniform vec3 campos;

varying vec4 normal;
varying vec3 poi;

void main()
{
    normal.xyz = normalize(gl_NormalMatrix * gl_Normal);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    poi = normalize(gl_Position.xyz - campos);
    gl_TexCoord[0] = gl_MultiTexCoord0 * tilingFactor;
}

[frag]

#version 120

varying vec4 normal;
varying vec3 poi;

void main()
{
    vec3 n = normalize(normal.xyz);

    float nDotL = max(-0.01, dot(n, normalize(poi)));

    vec4 ambient = gl_FrontLightProduct[0].ambient;
    vec4 diffuse = gl_FrontLightProduct[0].diffuse * nDotL;
    vec4 color = gl_FrontLightModelProduct.sceneColor + ambient + diffuse;

    if (nDotL < 0.10)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    else if (nDotL < 0.20)
        gl_FragColor = color * vec4(0.0, 0.0, 0.0, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else if (nDotL < 0.70)
        gl_FragColor = color * vec4(0.15, 0.15, 0.15, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else if (nDotL < 0.90)
        gl_FragColor = color * vec4(0.35, 0.35, 0.35, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else if (nDotL < 0.98)
        gl_FragColor = color * vec4(0.57, 0.57, 0.57, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else if (nDotL < 0.99)
        gl_FragColor = color * vec4(0.78, 0.78, 0.78, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else if (nDotL < 1.0)
        gl_FragColor = color * vec4(0.9, 0.9, 0.9, 1.0) * vec4(1.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = color * vec4(1.0, 1.0, 1.0, 1.0);
}

First of all, doing the normalization in the vertex shader,

normal.xyz = normalize(gl_NormalMatrix * gl_Normal);

is not a good idea if you don’t actually need the normalized normal in the vertex shader. Since you renormalize in the fragment shader anyway, you can simply interpolate the unnormalized normal with (and I’m not suggesting you should use gl_NormalMatrix and gl_Normal)

 normal.xyz = gl_NormalMatrix * gl_Normal; 

You’re wasting one normalization per vertex - if you think about what normalization does, you realize that there is a square root involved, which is too heavy to simply throw around.

What you do with your campos is mathematically nonsense, albeit possible, since you’re subtracting vectors from two completely different spaces: you take gl_Position, which is in clip-space, and subtract a (I assume) camera position which is in world-space.

What exactly do you want to do? Your fragment shader suggests some kind of banded, toon-style shading, but I don’t understand your poi thing.

BTW:

float nDotL =  max(-0.01, dot(n, normalize( poi )));

Why is one operand -0.01 and not 0? Why do you need such a threshold?