mynameisjohn

08-02-2015, 11:26 AM

Hello All,

I've got a bit of a problem. In my OpenGL scene (3.0 core, GLSL 1.4, context from SDL2) I've got objects being rendered from the point of view of a moving camera (meaning it can be translated and rotated). As such, in order to get my eye vector I transform the vertex position from model space to world space (using the model matrix, which I call MV below) and then to eye space (using the camera transform). The negated eye-space vertex position works as the eye vector for my lighting computations, provided I do everything else in eye space.
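For reference, here's a sketch of that eye-space setup; the uniform and attribute names (u_MV, u_C, a_Pos) are just placeholders of mine, not anything official:

```glsl
#version 140

uniform mat4 u_MV; // model space -> world space
uniform mat4 u_C;  // world space -> eye space

in vec3 a_Pos;
out vec3 v_EyeVec;

void main() {
    vec4 e_Pos = u_C * u_MV * vec4(a_Pos, 1.0);
    // In eye space the camera sits at the origin, so the vector
    // from the vertex to the camera is just the negated position.
    v_EyeVec = -e_Pos.xyz;
    // gl_Position would be u_Proj * e_Pos in a full shader.
}
```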

That's all well and good, but I'm moving on to environment mapping and have generated a cube map that I want to compute reflections from. To sample the cube map I need the reflection of the eye vector about the surface normal, and I believe that reflection needs to be in world space. I could compute the reflection in eye space instead (by transforming the vertex normal to eye space), but I already have a reason to compute my model's vertex normals in world space (using inverse(transpose(MV))), so I'd prefer to find the eye vector in world space as well.

Given:

Model View matrix (model space -> world space) MV

Camera matrix (world space -> eye space) C

Model space position m_Pos

How can I get the world space eye vector?

One shoddy solution I have is to compute the eye-space eye vector and then invert the camera transform:

vec4 w_Pos = MV * m_Pos;         // world-space position
vec4 e_Eye = -(C * w_Pos);       // eye-space eye vector
vec4 w_Eye = inverse(C) * e_Eye; // world-space eye vector (?)

but obviously inverse(C) * C cancels, so that just gives me the world-space position, negated, and nothing changes when I move the camera. For some reason, though, this "sort of" works:

vec4 w_Pos = MV * m_Pos;                   // world-space position
vec4 e_Eye = -(C * w_Pos);                 // eye-space eye vector
vec3 w_Eye = mat3(inverse(C)) * e_Eye.xyz; // world-space eye vector

My guess is that casting the inverse camera matrix to mat3 discards its translation, so I'm rotating the eye-space eye vector back to world space without undoing the camera's translation. The whole thing still feels convoluted and wasteful, though. Is there something obvious I'm missing? Ideally I'd pick one basis (e.g. transform my light positions to eye space) and work exclusively in it, but I don't know if I can get around needing this reflection vector in world space. Is it possible to transform the cube OpenGL uses during the texture lookup?
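The more direct route I've been weighing is to skip inverse(C) in the shader entirely and pass the camera's world-space position in as a uniform (it's just the translation column of inverse(C), computed once on the CPU). A sketch, with u_CamPosWorld and the other names being my own placeholders:

```glsl
#version 140

uniform mat4 u_MV;          // model space -> world space
uniform vec3 u_CamPosWorld; // camera position in world space

in vec3 a_Pos;
in vec3 a_Norm;
out vec3 v_Reflect;

void main() {
    vec4 w_Pos  = u_MV * vec4(a_Pos, 1.0);
    // World-space eye vector: from the vertex toward the camera,
    // with no per-vertex inverse(C) needed.
    vec3 w_Eye  = u_CamPosWorld - w_Pos.xyz;
    // World-space normal via the normal matrix.
    vec3 w_Norm = normalize(mat3(inverse(transpose(u_MV))) * a_Norm);
    // reflect() expects the incident vector pointing *toward* the
    // surface, hence the negation.
    v_Reflect = reflect(-normalize(w_Eye), w_Norm);
}
```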

Thanks for any input,

John