Getting camera position relative to vertex

Hi,

I'm having trouble getting the camera position. For a given camera position (x, y, z), I want to get the direction from it to every vertex in the vertex shader.

I notice that ftransform() gives the vertex position relative to the view. How can I get the untransformed x, y, z position of the vertex? E.g., if my vertex is (0, 1, 1), I want to get exactly that coordinate in the vertex shader, not the one transformed by the view.

Thanks in advance

In EYE SPACE, your eyepoint (camera) is at the origin.

So you simply transform your vertex to EYE SPACE in the shader, and then your transformed vertex position vector is a vector from the eyepoint to the vertex.

ftransform() * gl_Vertex is one way to get the EYE SPACE vertex position. gl_ModelViewProjectionMatrix * gl_Vertex is another, roughly equivalent, method.

Hi,

Why do you multiply ftransform() by gl_Vertex? As I read somewhere, ftransform() already includes gl_Vertex, right?

So here is my problem: I want to find the vertex position relative to the camera position (which is basically eye space) and convert it to spherical coordinates. This is my code:


// declarations needed for the identifiers used below
uniform float cameraPosX;
uniform float cameraPosY;
uniform float cameraPosZ;

varying float sphIndexY;
varying float sphIndexZ;

void main()
{
    vec3 spherical;

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();

    // clip-space position from ftransform()
    vec3 v = gl_Position.xyz;

    vec3 cameraPos = vec3(cameraPosX, cameraPosY, cameraPosZ); // set but never used below

    // direction from the vertex back toward the origin
    vec3 cameraDir = normalize(-1.0 * v);

    // Cartesian -> spherical conversion
    spherical.x = sqrt((cameraDir.x * cameraDir.x) + (cameraDir.y * cameraDir.y) + (cameraDir.z * cameraDir.z));
    spherical.y = acos(cameraDir.y / spherical.x);
    spherical.z = atan(cameraDir.z, cameraDir.x);

    // radians -> degrees
    sphIndexY = spherical.y * 180.0 / 3.14159;
    sphIndexZ = spherical.z * 180.0 / 3.14159;

    if (sphIndexY < 0.0) {
        sphIndexY = sphIndexY * -1.0;
    }
    if (sphIndexZ < 0.0) {
        sphIndexZ = sphIndexZ * -1.0;
    }
}


But I'm not getting the right spherical values. Where did I go wrong in the code?

Thanks in advance

I want to find the vertex position relative to the camera position (which is basically eye space) and convert it to spherical coordinates.

Spherical coordinates in what space? Camera (eye) space? World space? Model space?

The spherical coordinates are in world space.

ftransform() * gl_Vertex is one way to get the EYE SPACE vertex position. gl_ModelViewProjectionMatrix * gl_Vertex is another, roughly equivalent, method.

Neither method transforms to eye space. Instead, you end up in clip space.

gl_ModelViewMatrix * gl_Vertex will create an eye space position.
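For example, a minimal sketch using the legacy built-ins: in eye space the camera sits at the origin, so the transformed position is itself the vector from the camera to the vertex.

vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;  // OBJECT -> EYE
vec3 dirToCamera = normalize(-eyePos.xyz);     // vertex -> camera direction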

The spherical coordinates are in world space.

What exactly do you mean by this?

Spherical coordinates are generally specified relative to the space's origin, using the space's axes to define what rho and theta are measured against.

If you want the spherical coordinates of a point in world space, that’s one thing. But you seem to want the spherical coordinates of a point, with the spherical coordinate system being relative to the world-to-camera transform rather than the world coordinate origin and axes.

Is this true? If you move the camera, do you want the spherical coordinates “in world space” to change, even though the vertex has not moved in world space? Or do you want something else?
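For reference, a standard Cartesian-to-spherical conversion in GLSL, relative to whatever space's origin and axes p is expressed in (y used as the inclination axis, matching the shader code above):

vec3 toSpherical(vec3 p)
{
    float rho   = length(p);        // radial distance from the origin
    float theta = acos(p.y / rho);  // inclination from the +y axis
    float phi   = atan(p.z, p.x);   // azimuth in the xz plane
    return vec3(rho, theta, phi);
}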

Thanks for the correction! Ouch. I was obviously going way too fast last night. Sorry about that. Major goof.

Jos, to clarify: ftransform() and gl_ModelViewProjectionMatrix take you from OBJECT SPACE to CLIP SPACE. Not what you want. You want OBJECT SPACE to EYE SPACE, so you want to use gl_ModelViewMatrix.
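Side by side:

vec4 clipPos = gl_ModelViewProjectionMatrix * gl_Vertex; // OBJECT -> CLIP (what ftransform() computes)
vec4 eyePos  = gl_ModelViewMatrix * gl_Vertex;           // OBJECT -> EYE  (what you want here)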

OK, I guess I'll just explain my intention briefly:

So I capture textures from a set of cameras. For each vertex (which is defined by a texture pixel) I create a hemisphere surrounding it, and capture the value of the pixel based on the light position in the scene. So for example at RHO 0 THETA 0 I put rgb(0, 0.5, 1), at RHO 20 THETA 0 I put rgb(0, 0.5, 0.5), and so on. I save those results in a set of textures, so for example texture index 0 contains the pixels captured from a camera at RHO 0 THETA 0 RELATIVE to the vertex/texel.

Now when I render the result, I upload the textures to GPU memory, and if for example cam1 is at RHO 0 THETA 0 from a vertex, I apply texture index 0 to that vertex (through the fragment shader). Then I move to another vertex; if I find that the spherical position of the camera (the direction of the camera from the texel) is RHO 10 THETA 20, then I take the texel value from, e.g., texture index 4, and so on.
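For illustration only, the angle-to-texture-index mapping could be quantized like this (the 10-degree step and the N_THETA layout are assumptions, not taken from the capture setup described above):

// hypothetical: snap the two angles (in degrees) to the nearest captured
// sample, assuming a 10-degree step and N_THETA samples per rho ring
const float STEP = 10.0;
const int N_THETA = 36;
int iRho   = int(floor(sphIndexY / STEP + 0.5));
int iTheta = int(floor(sphIndexZ / STEP + 0.5));
int texIndex = iRho * N_THETA + iTheta;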

So basically what I want to find in the vertex shader is the spherical coordinate of the camera relative to the vertex.

In world space, I think I can do that by simply finding the camera direction (camPosition - vertexPosition), normalizing it, and converting from Cartesian to spherical coordinates. Now I wonder how I can do the same thing in GLSL. Any ideas?
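A sketch of that approach as a GLSL vertex shader. The legacy built-ins only provide the combined modelview matrix, so the application has to supply the object-to-world matrix and the world-space camera position itself; modelMatrix and cameraPosWorld below are hypothetical uniform names, not built-ins:

uniform mat4 modelMatrix;     // OBJECT -> WORLD, supplied by the application
uniform vec3 cameraPosWorld;  // camera position in world space

varying vec3 sphericalCam;    // (rho, theta, phi) of the camera direction

void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();

    // world-space vertex position and direction toward the camera
    vec3 worldPos = (modelMatrix * gl_Vertex).xyz;
    vec3 dir = normalize(cameraPosWorld - worldPos);

    // Cartesian -> spherical; rho is always 1.0 since dir is normalized
    float rho   = length(dir);
    float theta = acos(dir.y / rho);  // inclination from +y
    float phi   = atan(dir.z, dir.x); // azimuth in the xz plane
    sphericalCam = vec3(rho, theta, phi);
}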

Thanks in advance
