How to limit x-axis rotation

I tried building a first-person 3D camera class in C++. My problem is that while I can rotate the camera, if I rotate far on the x-axis by looking up or down, the camera flips around and the y-axis ends up inverted. How can I solve this problem?

I’ve been trying to follow the learnopengl.com tutorial to do this, but I decided to learn to use quaternions instead of Euler angles for my camera class.

Here is the code:


#ifndef CAMERA_H
#define CAMERA_H

#include <GL/glew.h>

#include <GLFW/glfw3.h>

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/rotate_vector.hpp>

#define WORLD_UP glm::vec3(0.0f, 1.0f, 0.0f)

#include <iostream>


enum CamDirection {
    CAM_FORWARD,
    CAM_BACKWARD,
    CAM_LEFT,
    CAM_RIGHT
};


class Camera {
public:
    void cameraUpdate();

    glm::mat4 getViewMatrix();

    Camera();

    Camera(glm::vec3 startPosition);

    void move(CamDirection dir, GLfloat deltaTime);

    void look(double xOffset, double yOffset);

    void update();

private:

    glm::vec3 camPos;
    glm::vec3 camFront;
    glm::vec3 camUp;
    glm::vec3 camRight;

    glm::quat orientation;

    const GLfloat camSpeed = 5.05f;

};

glm::mat4 Camera::getViewMatrix() {
    return glm::lookAt(camPos, camPos + camFront, camUp);
}

Camera::Camera():
    camPos  (glm::vec3(0.0f, 0.0f,  0.0f)),
    camFront(glm::vec3(0.0f, 0.0f, -1.0f)),
    camUp   (WORLD_UP)
{}

Camera::Camera(glm::vec3 startPos):
    camPos   (startPos),
    camFront (glm::vec3(0.0f, 0.0f, -1.0f)),
    camUp    (WORLD_UP)
{}

void Camera::move(CamDirection dir, GLfloat deltaTime) {
    const GLfloat v = camSpeed * deltaTime;
    if (dir == CAM_FORWARD)
        camPos += v * camFront;
    else if (dir == CAM_BACKWARD)
        camPos -= v * camFront;
    else if (dir == CAM_RIGHT)
        camPos += v * camRight;
    else
        camPos -= v * camRight;
}
void Camera::look(double xOffset, double yOffset) {
    glm::quat rotation = glm::angleAxis((GLfloat)xOffset, camUp);
    orientation = orientation * rotation;

    rotation = glm::angleAxis((GLfloat)yOffset, glm::vec3(-1.0f, 0.0f, 0.0f));
    orientation = orientation * rotation;

    camFront = camFront * orientation;
    orientation = {1, 0, 0, 0};
}

void Camera::update() {

}
#endif // CAMERA_H

Thanks in advance.


class Camera {
public:

...

private:

    glm::vec3 camPos;
    glm::vec3 camFront;
    glm::vec3 camUp;
    glm::vec3 camRight;

    glm::quat orientation;

    const GLfloat camSpeed = 5.05f;

};

your camera has too much data; these components can later lead to contradicting results, depending on which data you use to calculate the result. your camera ONLY needs a position at which it is located, and a rotation / orientation

position = vec3
rotation = quat

everything else is not necessary

you can calculate the “front” / “back” / “left” / “right” / “up” / “down” directions using only the “quat rotation;”
the camera’s “view matrix” can also be calculated from position + rotation

another way would be to store the view matrix only, and read from it the necessary parts you need (pos / directions / etc)

example:
https://sites.google.com/site/john87connor/basics/tutorial-06-1-example-camera
john.87.connor - Tutorial 01: 3D Scene (Camera) // the “Orientation” class

by the way:
an “update()” method for the camera isn’t a good idea; it’s better to update the camera’s location (depending on input) where you update everything else (like objects / environments / lights / etc). or do you also have an “update()” method for each object / light / environmental item? you can easily merge all of them into 1 global “update()” function

More than likely your problem is “camUp”:
glm::lookAt(camPos, camPos + camFront, camUp);

I only glanced over your code rather than actually reading it, but you are probably not setting this properly. That parameter is a normalized vector that points “above” the camera. As long as your camera never rotates more than 179 degrees, you can probably get away with defining this as something like glm::vec3(0.0f, 1.0f, 0.0f) as a cheat. But if the camera ever rotates beyond that, you are guaranteed to have the problems you describe.

The best solution is not to rebuild the View Matrix every frame but to learn to maintain it from frame to frame. In that scenario, you set it once to an identity matrix or use LookAt() once and then never rebuild it again, letting it carry the camera data from frame to frame. Then this whole issue goes away because Up will always be correct.

But if you want to keep rebuilding it every frame like this, then you have to figure out a way to provide the correct Up parameter. Up is the area above the camera regardless of how the camera is oriented, not the world’s up: when the camera is upside down, Up points at the floor; when the camera is looking at the floor, Up could be pointing at one of the walls. The cheat of passing a constant only works as long as the world’s up basically matches the camera’s up.

And because of the way the math works, it’s binary: either wrong or “right enough”. The constant can be off by almost 180 degrees and still be considered right, but the moment the area above the camera actually points below it, the hard-coded Up forces the camera right side up very violently, which is exactly the flip you describe. You’ve defined “above the camera” to always match the world’s up, so upside down becomes an impossibility, and the code enforces that the instant you attempt it. I would go with the first solution.

I know this because I wrestled with this problem for several days and posted on forums (not this forum, because I wasn’t doing OGL back then) looking for the answer. I finally solved it myself after someone incorrectly told me it was “gimbal lock”. At some point I also wrote my own LookAt() function from scratch and learned how it works mathematically. What it’s actually doing is using the fact that the vector cross product of two sides of a triangle gives you a vector that points perpendicular straight out of that plane. So Up is used together with the look-at vector to define a plane, and a cross product produces a third vector that points out of that plane. It’s building a private axis of 3 mutually perpendicular normals, which is what a 3 by 3 matrix is. You can store a 3 by 3 matrix in a 4 by 4 matrix, which also lets you store position, so it can hold both position and orientation. (Cameras are inverted compared to world objects, by the way.) The catch is that if Up is relative to the world rather than the camera and ends up pointing the wrong way, the cross product points in the opposite direction you intended, and the resulting orientation is upside down.

Thank you BBeck and john_connor. I made a new camera class that I hope addresses all of the problems you mentioned.

Here is the code:


#ifndef CAMERA_H
#define CAMERA_H

#include <GL/glew.h>

#include <GLFW/glfw3.h>

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/rotate_vector.hpp>

#define WORLD_UP vec3(0.0f, 1.0f, 0.0f)

#include <iostream>

using glm::vec3;
using glm::mat4;
using glm::quat;

enum CamDirection {
    CAM_FORWARD,
    CAM_BACKWARD,
    CAM_LEFT,
    CAM_RIGHT
};


class Camera {
public:
    void cameraUpdate();

    mat4 getViewMatrix();

    Camera();

    Camera(vec3 startPosition);

    void move(CamDirection dir, GLfloat deltaTime);

    void look(double xOffset, double yOffset);

    void update();

private:
    mat4 viewMatrix;

    vec3 camForward;
    vec3 camRight;

    const GLfloat camSpeed = 5.05f;

};

mat4 Camera::getViewMatrix() {
    return viewMatrix;
}

Camera::Camera():
    camForward(0.0f, 0.0f, 1.0f),
    camRight  (1.0f, 0.0f, 0.0f)
{}


Camera::Camera(vec3 startPos):
    viewMatrix(glm::lookAt(startPos, vec3(0.0f, 0.0f, 0.0f), vec3(0.0f, 1.0f, 0.0f))),
    camForward(0.0f, 0.0f, 1.0f),
    camRight  (1.0f, 0.0f, 0.0f)
{}

void Camera::move(CamDirection dir, GLfloat deltaTime) {
    mat4 trans;
    if (dir == CAM_FORWARD)
        trans = glm::translate(trans,      camSpeed * deltaTime * camForward);
    else if (dir == CAM_BACKWARD)
        trans = glm::translate(trans, -1 * camSpeed * deltaTime * camForward);
    else if (dir == CAM_RIGHT)
        trans = glm::translate(trans, -1 * camSpeed * deltaTime * camRight);
    else
        trans = glm::translate(trans,      camSpeed * deltaTime * camRight);
    viewMatrix *= trans;
}

void Camera::look(double xOffset, double yOffset) {
    quat rotation;
    rotation = glm::angleAxis((GLfloat)xOffset, vec3(0.0f, 1.0f, 0.0f));
    rotation = rotation * glm::angleAxis((GLfloat)yOffset, vec3(-1.0f, 0.0f, 0.0f));

    viewMatrix = glm::mat4_cast(rotation) * viewMatrix;
    camForward = camForward * rotation;
}

void Camera::update() {

}
#endif // CAMERA_H

Still, I don’t know how to limit x-axis rotation. What should I do?

You can try something similar to this (adapt the angles; they’re given in degrees here for simplicity):


if (angle.x > 89) angle.x = 89;
else if (angle.x < -89) angle.x = -89;

what do you mean by “x-axis rotation”?

there are 3 types of rotations you can apply to the camera (or any other item): pitch, yaw and roll
– pitch rotates the camera around its “right” direction
– yaw rotates the camera around its “up” direction
– roll rotates the camera around its “forward” direction


I mean rotation around the x-axis of the camera, or pitch.

exactly like “Silence” said:

if (pitch > pitchlimit)
    pitch = pitchlimit;

i assume that you don’t want to be able to “yaw” and “roll” the camera (like a spaceship); instead you want a camera that is only allowed to rotate around the global “up” direction (y-axis) and to “pitch” (like a “turret camera”)

if so, then i would use a class like this:


class Camera
{
public:

private:

vec3 position;
float pitch;
float yrotation;

};

that class’s job is to make sure that “pitch” always lies within the interval (−90°, +90°), without (!!!) the borders
“yrotation” isn’t limited in any way

how to feed the “glm::lookAt(…)” function:
– position is clear
– up = vec3(0, 1, 0), the global “up” direction
– target: ?

target = position + forward, but how to get the camera’s forward direction?
using spherical coordinates:

forward.x = cos(yrotation) * cos(pitch)
forward.y = sin(pitch)
forward.z = sin(yrotation) * cos(pitch)

As to how I personally would address limiting rotation on a given axis, check out this thread:
https://www.opengl.org/discussion_boards/showthread.php/199316-camera-problems?p=1285922#post1285922

My solution thus far has been to store that axis orientation separate from the main storage. So, for the camera I store the position and orientation in the View matrix. But I store the X tilt as a separate variable and build a “Tilted Matrix” right before I draw, each frame, and use that instead of the View Matrix. I’m looking for a better solution that uses the View matrix, but I haven’t found it yet.

The problem is that the X axis is the local X axis and thus spins in world space. I really don’t like that I’m storing orientation information outside of the View matrix, but it works. I would love to figure out a way to limit the X axis rotation of the View matrix itself. That would probably involve determining the X axis rotation of the matrix itself after rotation and then resetting it to what it was before if it exceeds a certain angle. Seems like a lot of math to basically accomplish what I’m already accomplishing just storing the tilt outside of the View matrix and applying that tilt rotation right before drawing.

There’s a more complete listing of the code in the other thread but it basically boils down to this:


// inside Update():
if (CameraTilt > MaxTiltAngle) CameraTilt = MaxTiltAngle;
if (CameraTilt < -MaxTiltAngle) CameraTilt = -MaxTiltAngle;

// inside Draw():
glm::mat4 TiltedView = glm::rotate(glm::mat4(), CameraTilt, glm::vec3(1.0, 0.0, 0.0)) * View;

Then I use TiltedView to draw everything rather than the View matrix.

Now you’ve got me trying to think up a way to do this all within the View matrix. The 3rd column (or is it a row? I never can remember if it’s row major or column major) in a 4 by 4 matrix should be the local Y (up) axis vector. You might be able to measure the angle between that and a global Y axis vector and, if it exceeds the max tilt angle, reset the matrix to what it was in the previous frame when it did not exceed the angle. This could maybe be a simple vector dot product calculation. What worries me is that the difference between frames could be too severe. I would rather reset the matrix to a specific angle, but I’m thinking the math on that would be too ugly to make it a better solution than storing CameraTilt as a separate value.

EDIT: Note that I sometimes get my axes confused. I originally said Z is up. Well, it is in Blender. :) I move between so many environments that I literally can’t remember which direction is up. I also get confused between left-handed and right-handed coordinate systems, or worse, OGL vs. DirectX coordinates. I believe I wrote all my DirectX code to use the same axes as OGL, knowing that one day I would want to transition to OGL, not to mention that I started in XNA, which I believe uses the same axes as OGL. (It’s a matter of whether Z moving into the screen is positive or negative.)

I finally got it! I used BBeck’s solution of storing the pitch separately and applying it at the last possible moment.

In case you are wondering, here is the final camera class:


#ifndef CAMERA_HPP
#define CAMERA_HPP

#include <GL/glew.h>

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtx/transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtx/quaternion.hpp>
#include <glm/gtx/rotate_vector.hpp>
#include <glm/gtx/string_cast.hpp>

#include <iostream>

static const float PI = 3.1415926f;

using glm::vec3;
using glm::mat4;
using glm::quat;

enum CamDirection {
    CAM_FORWARD,
    CAM_BACKWARD,
    CAM_LEFT,
    CAM_RIGHT
};

class Camera {
public:
    Camera(vec3 pos);


    mat4 getViewMatrix();

    void move(CamDirection, GLfloat deltaTime);

    void look(double xOffset, double yOffset);

    void setPosition();


private:
    vec3 position;

    vec3 camRight = vec3(1.0f, 0.0f, 0.0f);
    vec3 camForward = vec3(0.0f, 0.0f, -1.0f);

    quat orientation;
    GLfloat pitch;

    const float CAM_SPEED = 5.f;

};

Camera::Camera (vec3 pos):
    position(vec3(pos.x, pos.y, pos.z))
{}

mat4 Camera::getViewMatrix() {
    return glm::rotate(mat4{}, pitch, glm::vec3(1.0f, 0.0f, 0.0f))
         * mat4_cast(orientation)
         * glm::translate(vec3(-1 * position.x, -1 * position.y, -1 * position.z));
}

void Camera::look(double xOffset, double yOffset) {
    quat yaw = quat(vec3(0.f, xOffset, 0.f));

    camForward = vec3(0.0f, 0.0f, -1.0f) * orientation;
    camRight   = vec3(1.0f, 0.0f,  0.0f) * orientation;

    orientation = orientation * yaw;

    pitch -= yOffset;
    if (pitch > PI/2)
        pitch = PI/2;
    else if (pitch < -PI/2)
        pitch = -PI/2;

}

void Camera::move(CamDirection dir, GLfloat deltaTime) {
    GLfloat distance = deltaTime * CAM_SPEED;
    switch (dir) {
        case CAM_FORWARD:
            position = position + (camForward * distance);
            break;
        case CAM_BACKWARD:
            position = position - (camForward * distance);
            break;
        case CAM_RIGHT:
            position = position + (camRight * distance);
            break;
        case CAM_LEFT:
            position = position - (camRight * distance);
            break;
    }
}

#endif // CAMERA_HPP