
I can't understand how to use gluLookAt with glm. Can anybody help me?



kdepirate
11-15-2016, 02:28 AM
The code I want to write is based on the tutorial at the site below:
http://nehe.gamedev.net/article/camera_class_tutorial/18010/

There are two modes.

In the first mode, the camera's target is rotated.
The second mode is an orbit mode.

I recently found GLM, which is a very good library written in C++.

You may already know it...

Well... I have obtained two rotation angles (about the x-axis and y-axis) from the mouse event (OnMouseMove) in MFC.

Of the two modes I mentioned above, I want to implement the orbit mode.

I usually use the code below for 3D scenes:

glMatrixMode(GL_PROJECTION);
glPushMatrix();
{
    glLoadIdentity();

    float fAspect = float(m_win_w) / m_win_h;
    gluPerspective(80.0f, fAspect, 1.0, 1000.0);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    {
        const float DISTANCE = 100.f; // eye is expected to lie DISTANCE units from the target
        glLoadIdentity();
        gluLookAt(eye.x, eye.y, eye.z,
                  0.f, 0.f, 0.f,
                  0.f, 1.f, 0.f);
        DrawSomeObject();
        glPopMatrix(); // restore the modelview matrix
    }
    glMatrixMode(GL_PROJECTION);
    glPopMatrix(); // restore the projection matrix
}

I think the function gluLookAt() is the key.

For orbit mode, I need to know how to handle the eye parameters, i.e. how to apply matrix and vector rotations to them.

However, I can't work out how to do this with glm.

I have tried to make an orbit-mode camera with the code below:

glm::mat4 mat = glm::mat4();
glm::vec4 pos(0.f, 0.f, 100.f, 0.f);
glm::vec4 target(0.f, 0.f, 0.f, 0.f);
glm::vec4 up(0.f, 1.f, 0.f, 0.f);

mat[0] = pos;

mat = glm::rotate(mat, glm::radians(m_fRotYaw), glm::vec3(1.f, 0.f, 0.f));
mat = glm::rotate(mat, glm::radians(m_fRotPitch), glm::vec3(0.f, 1.f, 0.f));
gluLookAt(mat[0].x, mat[0].y, mat[0].z,
          target.x, target.y, target.z,
          up.x, up.y, up.z);

But I don't know what I have to fix.

You may think that I don't understand matrices and vectors.

That's right. My knowledge of these is not enough.

So, can anybody help me understand the concepts and what I am misunderstanding?

Once I grasp these concepts, I could write and complete this program with Euler angles and quaternions.

Any reference site/URL would help me.

Thank you in advance.

BBeck1
11-15-2016, 03:39 AM
I might start by pointing out that GLM has a LookAt function as well. Keep in mind that there are many ways to create cameras. Or really, the view matrix is the camera, but there are many ways to manipulate it so that it controls and operates in various ways.

The orbit camera is a bit harder to understand than the first person camera. With the first person camera, you just tell the view matrix to stay parallel to the ground and a certain height above it. Then you manipulate the view matrix to move through the scene. Here's my OGL 4.5 code to do that:



bool Game::Initialize()
{
    bool GameObjectInitializedProperly = false; //Must be set to true to keep the program from closing.

    CameraHeight = 1.68f; //Roughly the average eye height in meters of an average man. Our camera will stay at this level to make it feel like we are in the scene.
    CameraTilt = 0.0f; //Will tilt the camera up and down.

    GameObjectInitializedProperly = true; //This should probably be set by error checking, but we don't have any error checking here.
    View = glm::lookAt(glm::vec3(0.0f, CameraHeight, 2.0f), glm::vec3(0.0f, CameraHeight, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f)); //1.68 meters is roughly the height of the average man's eyes.
    Projection = glm::perspective(0.96f, OperatingSystem.AspectRatio(), 0.1f, 700.0f); //0.96 radians is 55 degrees and 1.7708333 is the width to height ratio on my computer.

    DiffuseLightDirection = glm::normalize(glm::vec3(1.0f, -1.0f, -1.0f)); //Direction that the primary light of the scene is "shining" in.
    AmbientLightColor = glm::vec4(0.05f, 0.05f, 0.1f, 1.0f); //Light color in the "shadows".
    DiffuseLightColor = glm::vec4(1.0f, 1.0f, 0.9f, 1.0f); //Direct light color.

    return GameObjectInitializedProperly;
}
//====================================================================================================================

void Game::Update()
{
    const float MaxTiltAngle = glm::radians(45.0);
    const unsigned char* Buttons;
    int JoyStick1Present = false;
    int NumberOfJoyStickAxes = 0;
    int NumberOfJoyStickButtons = 0;
    const float* AxesArray = nullptr;
    float LeftThumbStickY = 0.0f;
    float LeftThumbStickX = 0.0f;
    float Triggers = 0.0f; //XBox 360 controller triggers are a single axis for both triggers. Positive = Left. Negative = Right.
    float RightThumbStickY = 0.0f;
    float RightThumbStickX = 0.0f;
    bool AButton = false;
    bool BButton = false;


    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_ESCAPE && OperatingSystem.Keyboard.ActionPressed == GLFW_PRESS) OperatingSystem.ShutDown();

    JoyStick1Present = glfwJoystickPresent(GLFW_JOYSTICK_1);
    if (JoyStick1Present)
    {
        AxesArray = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &NumberOfJoyStickAxes);
        Buttons = glfwGetJoystickButtons(GLFW_JOYSTICK_1, &NumberOfJoyStickButtons);
        LeftThumbStickY = AxesArray[0];
        LeftThumbStickX = AxesArray[1];
        Triggers = AxesArray[2];
        RightThumbStickY = AxesArray[3];
        RightThumbStickX = AxesArray[4];

        //Camera Controls with XBox 360 controller.
        if (RightThumbStickX > 0.2 || RightThumbStickX < -0.2) View = glm::rotate(glm::mat4(), RightThumbStickX * 0.06f, glm::vec3(0.0f, 1.0f, 0.0f)) * View;
        if (LeftThumbStickX > 0.2 || LeftThumbStickX < -0.2) View = glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, -LeftThumbStickX * 0.1f)) * View; //*0.1f to slow it down. Negative to flip the axis. -0.2 for deadzone.

        if (RightThumbStickY > 0.2 || RightThumbStickY < -0.2) CameraTilt += 0.03 * RightThumbStickY;
        if (LeftThumbStickY > 0.2 || LeftThumbStickY < -0.2) View = glm::translate(glm::mat4(), glm::vec3(-LeftThumbStickY * 0.1f, 0.0f, 0.0f)) * View;

        if (Triggers > 0.2 || Triggers < -0.2) View = glm::translate(glm::mat4(), glm::vec3(0.0f, Triggers * 0.1f, 0.0f)) * View;

        if (Buttons[0] == '\x1') AButton = true;
        if (Buttons[1] == '\x1') BButton = true;
        if (Buttons[6] == '\x1') OperatingSystem.ShutDown();
    }


    //Camera Controls with keyboard.
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_W && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        View = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.05f)) * View;
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_S && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        View = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -0.05f)) * View;
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_E && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        CameraTilt += 0.1;
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_Q && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        CameraTilt -= 0.1;
    if (OperatingSystem.Keyboard.ModePressed == GLFW_MOD_SHIFT)
    {
        //Keys while Shift keys are also held down.
        if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_A && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
            View = glm::translate(glm::mat4(), glm::vec3(0.1f, 0.0f, 0.0f)) * View;
        if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_D && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
            View = glm::translate(glm::mat4(), glm::vec3(-0.1f, 0.0f, 0.0f)) * View;
    }
    else
    {
        //Keys when shift keys are not being held down.
        if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_D && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
            View = glm::rotate(glm::mat4(1.0f), 0.05f, glm::vec3(0.0f, 1.0f, 0.0f)) * View;
        if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_A && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
            View = glm::rotate(glm::mat4(1.0f), -0.05f, glm::vec3(0.0f, 1.0f, 0.0f)) * View;
    }


    //Yellow cube controls.
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_I && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        Cube.Transform(glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, 0.05f)));
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_K && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        Cube.Transform(glm::translate(glm::mat4(), glm::vec3(0.0f, 0.0f, -0.05f)));
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_L && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        Cube.Transform(glm::rotate(glm::mat4(), glm::radians<float>(-1), glm::vec3(0.0f, 1.0f, 0.0f)));
    if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_J && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
        Cube.Transform(glm::rotate(glm::mat4(), glm::radians<float>(1), glm::vec3(0.0f, 1.0f, 0.0f)));


    if (CameraTilt > MaxTiltAngle) CameraTilt = MaxTiltAngle;
    if (CameraTilt < -MaxTiltAngle) CameraTilt = -MaxTiltAngle;
}
//====================================================================================================================

void Game::Draw()
{
    glm::mat4 TiltedView = glm::rotate(glm::mat4(), CameraTilt, glm::vec3(1.0, 0.0, 0.0)) * View;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    Triangle.Draw(TiltedView, Projection, &Shader, DiffuseLightDirection, AmbientLightColor, DiffuseLightColor);
    Cube.Draw(TiltedView, Projection, &Shader, DiffuseLightDirection, AmbientLightColor, DiffuseLightColor);
    Ground.Draw(TiltedView, Projection, &Shader, DiffuseLightDirection, AmbientLightColor, DiffuseLightColor);
}
//====================================================================================================================


In this code, I use LookAt in the Initialize method to create the original camera. You could use an empty identity matrix and then translate the view matrix up to the "eye" height as well as about 2 units back and invert it to get pretty much the same results. After I initially set it, I never use LookAt again in this example. Then from frame to frame I let the View matrix hold the camera position and orientation information.

You may notice that I don't allow the view matrix to pitch up or down. Yet, I want the user to be able to look up and down as if they are tilting their head. I used to build the View matrix every frame using LookAt and I would have all different kinds of ways of storing the position and orientation of the camera in order to allow for the head to pitch up and down but limit it. I finally came up with the solution presented here which is to simply not allow the view matrix to pitch up and down but rather keep it perfectly parallel with the x/z plane at all times. Then, last minute before drawing, I take a pitch angle that I store as a float called "CameraTilt" and apply it to the View matrix to give a "TiltedView" matrix each frame right before drawing. By keeping it as a separate value, it's very easy to limit it between 45 degrees and negative 45 degrees of pitch. This allows the View matrix to hold the position and orientation and still maintain the "eye level" height over the terrain.

Anyway, unfortunately I do not have an example of an orbit camera in OpenGL. I've been meaning to put together a video explaining rotations and the orbit camera is a good example of that. I'm just too busy with too many things these days.

But conceptually it's pretty simple if you understand matrix algebra or how matrices work. In the example above, you move the camera side to side with the following code:


//Keys when shift keys are not being held down.
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_D && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    View = glm::rotate(glm::mat4(1.0f), 0.05f, glm::vec3(0.0f, 1.0f, 0.0f)) * View;
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_A && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    View = glm::rotate(glm::mat4(1.0f), -0.05f, glm::vec3(0.0f, 1.0f, 0.0f)) * View;


If I simply modify the code to reverse the order of multiplication like this, it turns into an orbit camera:


//Keys when shift keys are not being held down.
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_D && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    View = View * glm::rotate(glm::mat4(1.0f), 0.05f, glm::vec3(0.0f, 1.0f, 0.0f));
if (OperatingSystem.Keyboard.KeyPressed == GLFW_KEY_A && OperatingSystem.Keyboard.ActionPressed != GLFW_RELEASE)
    View = View * glm::rotate(glm::mat4(1.0f), -0.05f, glm::vec3(0.0f, 1.0f, 0.0f));


So, that's the simple answer to your question. I have the complete code posted on my website (http://virtuallyprogramming.com/OGL45/OGL45.html), if you want to see a working example of this. It's under BaseEngine.zip (https://files.secureserver.net/0sgDy5cNTBolEV). There you can try it out. The code has a rotating triangle in the scene (among other things) and because the camera was initially created with a 2 unit offset and the triangle is above the origin, this simple code change of merely reversing the multiplication order when modifying the view matrix causes it to orbit the triangle.

For a proper "orbit" camera, you actually want to take this a step further. This will make the camera orbit the origin, but it's the world origin in this case. In other words, this orbit camera cannot move through the scene. This two-line change causes the camera to behave entirely differently. Now forward and back (without modifying any other lines of code) zoom in and out towards the origin rather than letting you move forward and backwards through the scene. This is largely because we eliminated the ability to turn around in favor of the ability to orbit the origin. Strafing right and left still works but has really weird results; it kind of turns the camera now (again without changing those lines of code).

If what you really want is a "chase" camera that orbits the character or some object, what you need is some sort of "pivot" matrix. That matrix acts as the origin that the camera rotates around if you combine it with the view matrix each frame. So, the way that works is that you move the camera forward and side to side (or even up and down) using the pivot matrix. The pivot matrix can be the world/object matrix of an object in your scene. For example, if you have a character model for the player, that model's world/object matrix can be used as the pivot matrix. Or without an object in the scene, you can just create a matrix for your pivot and use that. It doesn't matter. Either way it will serve as the origin your View matrix rotates around.

And the way to make it work is to convert it before drawing each frame much like I did with the head tilt thing. By maintaining them separately, you can combine them right before drawing to give you something like Camera = Pivot * View; where Camera is the matrix you send to the shader and actually use as your View matrix, View is what stores the camera orientation and position relative to the pivot point, and Pivot is a matrix that moves the pivot point through the scene which allows you to move forwards/backwards and side to side.

All this matrix stuff takes a bit of getting used to. If you are new to matrices, I might recommend you watch my videos on my YouTube channel VirtuallyProgramming.com related to vectors and matrices. Watch the Vector video (https://www.youtube.com/watch?v=56v9BgwSzsg) first and then the Matrix video (https://www.youtube.com/watch?v=T7sb4yKKzFg). They are designed to kind of build on one another. Also, there's a lot of vector math in matrices if you get into how they work internally. I cover the math without teaching too much Matrix algebra that you don't need to know. Most of the math is actually done for you in GLM, or whatever you are using. So, most of what you need to know is that when you multiply matrices you are combining the information in them much like you combine things with addition.

Another thing you really need to know for cameras is that the View matrix is an inverse matrix. It works just like an object's world matrix except it is inverted. This is because the View matrix actually moves the entire scene to "simulate" a camera moving through the scene. There is no actual camera; it's all just math. The projection matrix looks straight down the Z axis and projects whatever is in front of it onto a 2D plane so it can be drawn to your 2D computer screen. It can't move. The View matrix moves the entire scene around the origin and Z axis to make it look like the camera is moving through the scene. Because of this, everything it does is backwards. If you want the "camera" to rotate clockwise, the scene has to be rotated counter-clockwise. If you want the camera to move forward through the scene, the scene has to be moved back by the same amount. Everything is reversed. That's what they mean by "Inverse". There's an inverse function you can use. So, if you use the Pivot/View/Camera formula above and the "Camera" becomes the "view matrix" that you actually use to draw with, then I believe the code would look something like "Camera = glm::inverse(Pivot * View);". Keep in mind that multiplication order makes all the difference in the world. And I have to admit that I'm not sure which comes first, the View matrix or the Pivot matrix. But if it's wrong just reverse it.

Anyway, in general - changing the order of multiplication is how you control whether a rotation orbits or rotates. I never memorize which direction is which. If it's wrong, I just reverse it. And I've seen places where it changes. Like the order of rotation for quaternions seems opposite the order used for matrices for some reason.

john_connor
11-15-2016, 03:57 AM
(quoting kdepirate:) "I usually use below code for 3D space. [...] I think the function, gluLookAt(), is the key. [...] I have tried to make orbit mode camera with below code."

You are using legacy ("fixed-function pipeline") OpenGL functions for rendering; my advice would be to learn the modern OpenGL API.

glm has its own functions to create the camera matrices. Instead of ...
gluPerspective(80.0f, fAspect, 1.0, 1000.0);
... glm has
mat4 result = glm::perspective(glm::radians(80.0f), fAspect, 1.0f, 1000.0f); // note: recent GLM versions take the field of view in radians

Instead of pushing view matrices onto the internally managed matrix stack ...
glMatrixMode(GL_MODELVIEW);
... glm has
mat4 result = glm::lookAt(position, target, up); // each argument is a vec3

And instead of
glBegin(...); ... glEnd();
shaders are used to render.

Here I've made an example camera (with the modern GL API):
https://sites.google.com/site/john87connor/home/tutorial-06-1-example-camera

kdepirate
11-15-2016, 04:45 AM
(quoting john_connor:) "you are using legacy ("fixed-function pipeline") OpenGL funcion for rendering, my advice would be to learn the modern OpenGL API [...] here i've made an example camera (with modern GL API)"



I already know about modern OpenGL and have the book "OpenGL SuperBible, 6th Edition", but I have to read and write legacy code at my workplace.

I understand what you mean.

I really appreciate your tip.

I haven't read your code yet, but it will be helpful to me.

Thank you again.

GClements
11-15-2016, 01:06 PM
but I have to read and write code with legacy code in my workplace
You can still use GLM to construct the matrices then just use glLoadMatrix() or glMultMatrix() to upload matrices.

If you can get to the point where the only legacy matrix function you use is glLoadMatrix(), then you'll never need to read matrices back from the hardware with glGetDoublev() (which can be slow).

kdepirate
11-15-2016, 06:56 PM
You can still use GLM to construct the matrices then just use glLoadMatrix() or glMultMatrix() to upload matrices.

If you can get to the point where the only legacy matrix function you use is glLoadMatrix(), then you'll never need to read matrices back from the hardware with glGetDoublev() (which can be slow).



I roughly know that modern GL is more efficient than legacy GL.

My final goal is to master modern GL, but legacy GL is also needed at my current workplace.

I appreciate your attention to my question.

Thank you.

kdepirate
11-15-2016, 07:55 PM
I would like to thank you for your time and help.

Your site and videos will be very helpful to me, and I plan to watch these videos whenever I have time.

I have studied both vectors and matrices, but I didn't quite understand them.

So I should study them with some books and your videos.

These days lots of people seem to use modern GL instead of the legacy one.

I also think that modern GL is much better than legacy GL in terms of features such as the pipeline and direct manipulation of matrices.

I will use it eventually.

Thanks to your explanation of handling matrices for the orbit camera, I understand it better now.



I really thank you again for your long explanation and attention.