Rotating and scaling an object using gluLookAt (Mathematical conversion)
I'm having trouble wrapping my head around how I can use the gluLookAt() function to rotate and scale an object. This is not a question about how gluLookAt() works, which is why I'm not posting it in the "Toolkits" section. The question is more about mathematically converting values so that I can use gluLookAt() unconventionally.
Trying to rotate and scale an object using gluLookAt() might sound weird, because this function is normally used when we want to simulate moving the camera around a scene. Since OpenGL always has the camera at the origin looking down the negative z-axis, we give gluLookAt() the values and it transforms the scene to achieve that effect (Correct?).
I'm being asked to use gluLookAt() to achieve a scene transformation that rotates and scales an object (instead of using the transformation functions). This requires me to use gluLookAt() the opposite way it was intended (for learning purposes). To achieve that, I'm converting the values I'm given from Cartesian coordinates to spherical coordinates. I got almost everything right.
The problem I'm having is with the scaling, which is basically done by moving the camera closer to the object or further away from it. Making the object bigger works only up to a point: once the camera crosses the object's center, it turns around to target the center again, so the object starts shrinking instead of growing. Although this effect is correct given my approach, it is not correct based on what I'm being asked.
Maybe someone can help me figure out another approach or tweaks to my approach that will fix this issue.
Values I'm given:
-Distance from center of object to camera (X, Y and Z of object's center)
-Angle of rotation around X and Y axis for object
Scene and object description:
-Object rotates around its own center based on angle values given above.
-Object is centered at screen
How I'm converting the coordinate values:
Let EYE be the camera position, TARGET the point the camera looks at, and UP the vector from the camera position that orients it "upwards".
-I assume the object is at the origin of the space ( Its center is at (0,0,0) )
-I always look at the center of the object, which in my case is the same as the origin. ( TARGET = (0, 0, 0) )
-Camera moves around the object in a spherical orbit ( EYE and UP change, using the distance as radius and the angles of rotation as theta and phi )
//Theta and phi in radians
radius = objectCenter.x + objectCenter.y + objectCenter.z; //Not using Pythagorean theorem on purpose

//EYE sits on a sphere of the given radius, parameterized by theta and phi
EYE.x = radius * sin(theta) * sin(phi);
EYE.y = radius * cos(theta);
EYE.z = radius * sin(theta) * cos(phi);

//UP points from EYE toward a point 1 radian "up" the same orbit (theta - 1)
UP.x = (radius * sin(theta - 1) * sin(phi)) - EYE.x;
UP.y = (radius * cos(theta - 1)) - EYE.y;
UP.z = (radius * sin(theta - 1) * cos(phi)) - EYE.z;
Thanks for your attention.