Hmm… I thought the ‘camera’ should always be assumed to sit at (0.0, 0.0, 0.0), even though I know there’s no concept of an actual camera in OpenGL.
However, if I set the CameraPosition variable in the fragment shader to vec3(0.0, 0.0, 2.0), I get pretty much the effect I’m looking for. I guess the rim-lighting effect only works for normals that are behind the camera position in Eye Space. I’m very hazy on all these basic concepts though, so if anyone could explain whether I have this right, I’d be very grateful.
Also, does moving the camera position in this way make directional lighting like this inaccurate?
I guess the rim-lighting effect only works for normals that are behind the camera position in Eye Space
What do you mean? oÔ
I don’t understand what you do here:
vec3 V = normalize(CameraPosition - (ecPos.xyz / ecPos.z));
The vertex position after the modelview transformation is in eye space, so ecPos is already expressed relative to the camera. After this transformation, the camera is considered to be at (0,0,0), looking toward negative z.
So, what you are doing in the last line of code is useless. And what do you want to do by dividing ecPos.xyz by ecPos.z?
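To make that concrete, here is a minimal sketch of the view vector in eye space (using the legacy built-in gl_ModelViewMatrix, as in code of this era; ecPos as named in the thread):

```glsl
// Eye-space position of the vertex: the camera sits at the origin here.
vec4 ecPos = gl_ModelViewMatrix * gl_Vertex;

// View vector from the fragment toward the camera at (0,0,0):
// normalize(vec3(0.0) - ecPos.xyz) is the same as normalize(-ecPos.xyz),
// so no CameraPosition uniform is needed at all.
vec3 V = normalize(-ecPos.xyz);
```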
I guess the rim-lighting effect only works for normals that are behind the camera position in Eye Space
I meant that normals that point away from the camera position aren’t properly dealt with. Seemed to make sense to me at the time, but it’s probably based on an incomplete understanding of how the shader works…
I see. So, in fact, I don’t need the CameraPosition variable at all, and V can be just
normalize(-ecPos.xyz)
Makes sense.
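Putting it together, a rim-lighting fragment shader along these lines might look like the sketch below. Note this is my own minimal version, not the original code: the names RimColor and RimPower (and passing ecNormal/ecPos as varyings) are assumptions.

```glsl
varying vec3 ecNormal;   // eye-space normal, passed from the vertex shader
varying vec4 ecPos;      // eye-space position, passed from the vertex shader

uniform vec3 RimColor;   // assumed uniform, e.g. vec3(0.2, 0.4, 1.0)
uniform float RimPower;  // assumed uniform, e.g. 3.0

void main()
{
    vec3 N = normalize(ecNormal);

    // Camera is at the eye-space origin, so no CameraPosition uniform.
    vec3 V = normalize(-ecPos.xyz);

    // Rim term: strongest where the surface turns away from the viewer
    // (N nearly perpendicular to V), zero where it faces the viewer head-on.
    float rim = pow(1.0 - max(dot(N, V), 0.0), RimPower);

    gl_FragColor = vec4(RimColor * rim, 1.0);
}
```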
So, what you are doing in the last line of code is useless. And what do you want to do by dividing ecPos.xyz by ecPos.z?
Ah, that was a typo. It should have been
ecPos.xyz / ecPos.w
which I thought was the correct way of turning a vec4 into a vec3 (rather than just using the xyz components directly).
You may have spotted the root of the problem there, I think.
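For what it’s worth, dividing by w only changes anything when w can differ from 1. A quick sketch of the distinction (variable names from the thread; clipPos is my own):

```glsl
// After the modelview matrix (an affine transform), w stays 1.0,
// so the divide is harmless but unnecessary:
vec4 ecPos = gl_ModelViewMatrix * gl_Vertex;
vec3 eyePos = ecPos.xyz / ecPos.w;   // identical to ecPos.xyz here

// After a projective transform, w is generally not 1.0, and the
// divide (the perspective divide) is what yields normalized
// device coordinates:
vec4 clipPos = gl_ModelViewProjectionMatrix * gl_Vertex;
vec3 ndc = clipPos.xyz / clipPos.w;
```

Dividing by ecPos.z instead, as in the original line, would shear the position and give a wrong view vector, which would explain the odd rim behaviour.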