I'm using the following vertex and fragment shader to do parallax mapping. Everything works fine as long as I don't move my test cube to another location (glTranslatef)… when I translate the position, the parallax mapping looks distorted and not right… what am I doing wrong and how can I fix that?
I assume you are doing tangent-space parallax mapping. When you only rotate, the camera position doesn't change, so I suspect the source of your problem is in the calculation of the view vector. More specifically, I think the transformations you apply in your application to compute the camPos vector are wrong.
The camPos must be in object space for your code to work properly. Make sure your application is doing this first. If it's not, I -think- (that is, I'm not sure I'm right) that you can calculate viewVec as gl_ModelViewMatrixInverse * (-gl_Vertex).
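For what it's worth, one common way to get the camera position in object space entirely inside the vertex shader (a sketch, assuming legacy GLSL with the built-in matrix uniforms available) is to transform the eye-space origin back through the inverse model-view matrix:

```glsl
// The camera sits at the origin in eye space, so pulling the origin
// back through the inverse model-view matrix yields the camera
// position in object space. No camPos uniform needed.
vec3 camPosObj = (gl_ModelViewMatrixInverse * vec4(0.0, 0.0, 0.0, 1.0)).xyz;
vec3 viewVec   = camPosObj - gl_Vertex.xyz;  // object-space view vector
```

Because this uses only built-in uniforms, it stays correct no matter how the object is translated or rotated.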
Yes, the camera position changes; I use a first-person camera to move around the object. When the object is at (0, 0, 0) it's OK… but when I put the object at, say, (10, 10, 0), I get some really strange results…
I tried your suggestion, but I'm afraid it's not working…
It does sound right to me that the operation should be in object space, but I still don't get why the effect is all distorted when I move the object…
I think that's part of the problem. When viewing an object you always assume the camera is at the origin and the object is at ModelView * Vertex (hence the name model-view: in the end it makes no difference whether you move the object or the camera). In your code, however, it does make a difference, because you supply the camera position via camPos.
So, to make a long story (relatively) short: the view vector is calculated as -(gl_ModelViewMatrix * gl_Vertex) (don't supply a uniform camPos). This vector is in eye space, so you must apply a transformation to convert it to object space (I thought gl_ModelViewMatrixInverse did that, but apparently I was wrong). Another proposal I will make (more likely to be correct than the previous one, but less optimized mathematically)
is to transform the object-space basis vectors to eye space, then project viewVec onto that basis. This can be accomplished by:
// create a basis matrix from the tangent frame
mat3 objectBase = mat3(Tangent, Binormal, Normal);
/*
The basis vectors are direction vectors, so multiply by the
normal matrix to obtain the eye-space object-basis matrix.
Then multiply viewVec by this basis to get the projection
of viewVec in object space.
*/
vVec = gl_NormalMatrix * objectBase * viewVec;
(assuming matrices multiply left to right; I don't remember whether that's the case in GLSL)
I really hope this works…
Keep us informed!
(I would try this at home to be sure, but I only have a humble TNT2 to play with…)
Yes, just yesterday I was thinking that my second post was incorrect… You have to multiply by the transpose of the matrix I proposed for it to be correct. Sorry for any wasted effort I may have caused!
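In legacy-GLSL terms, that transpose correction can be sketched like this (a sketch under assumptions: Tangent, Binormal and Normal are this thread's object-space per-vertex vectors; also note that multiplying a vector on the left, v * M, is the same as transpose(M) * v in GLSL, which avoids needing a transpose() call on old compilers):

```glsl
// Sketch of the transpose fix: take the TBN basis into eye space with
// gl_NormalMatrix, then use its transpose to bring the eye-space view
// vector into tangent space.
vec3 eyeVec = -(gl_ModelViewMatrix * gl_Vertex).xyz;     // eye-space view vector
mat3 base   = gl_NormalMatrix * mat3(Tangent, Binormal, Normal);
vec3 vVec   = eyeVec * base;  // v * M == transpose(M) * v in GLSL
```

Each component of vVec is then the dot product of the view vector with one eye-space basis vector, i.e. the view vector expressed in tangent space, which is what the parallax offset needs.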
yooyo: Thanks for the tip. I'm already calculating the tangent / face normal in my exporter and computing the binormal on the fly… I don't think one little cross product will change performance drastically, but I agree that the binormal could also be precalculated outside the shader…
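That on-the-fly reconstruction amounts to a single line in the vertex shader (a minimal sketch; Tangent is assumed to be the object-space per-vertex attribute written by the exporter):

```glsl
// Vertex-shader sketch: rebuild the binormal from the exported
// normal and tangent with one cross product.
attribute vec3 Tangent;  // object-space tangent from the exporter

vec3 Binormal = cross(gl_Normal, Tangent);
```

One caveat: if the exporter produces mirrored UVs anywhere, it usually also needs to export a per-vertex handedness sign so the reconstructed binormal can be flipped accordingly.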