Alternative to a matrix stack

All the tutorials I’ve found for OpenGL 3.3+ re-implement a matrix stack in one way or another in order to transform objects relative to each other. Is this really necessary? I think I have a way to do it without a stack, by creating a class like this:


#include <functional>
#include <list>
#include <glm/glm.hpp>

class Node
{
protected:
    std::list<Node*> Children;              // child nodes drawn relative to this one
    std::function<void(glm::mat4)> OnDraw;  // user-supplied draw callback
    glm::mat4 Transform;                    // this node's transform

public:
    // Methods for setting the OnDraw function, setting the transform,
    // and adding child nodes
};

That way, the nodes could be traversed starting with a call to something like RootNode.Draw(). The root node would invoke its OnDraw function (set by the object’s creator), passing in its current transformation matrix so the callback can calculate uniforms and whatnot, and then recurse into all of its children, and so on. Would this be feasible? I’m trying to make my code as object-oriented as I can, but I don’t want to jump into this without knowing whether I’ve overlooked something important.

Yes, that would work fine.

In a way you have reimplemented the matrix stack: during each function call, the parameters (Transform among them) are pushed onto the call stack. You will probably want another matrix (as an attribute of the class) to define the relative transformation, i.e. a matrix that is multiplied with the passed-in parent matrix, so you don’t have to recalculate it in every draw call.

Yeah, true. I’m refining the idea a bit more now that I’ve actually slept :smiley: I also read in a tutorial that it’s a good idea to do all your matrix calculations in double precision inside the program, to avoid accumulating precision errors, and then convert to single precision when passing to a glUniform call. Is this reasonable, or unnecessary? I assume that if you have a very large “world space” it’s a good idea.

It is not so much about a large world as about what level of precision you need. A small world with millimeter precision has rounding problems just like a 1000 km world with meter precision; both would benefit from working in double precision. But a 100 km world that only needs 10 km precision would not.
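To see why the ratio is what matters, here is a quick standalone snippet (just standard C++; the magnitudes are arbitrary examples of mine) that prints the gap between adjacent single-precision values. The gap grows with magnitude, so it is the ratio of world extent to required precision (roughly 2^24 for float) that decides whether you need doubles:

#include <cmath>
#include <cstdio>

int main()
{
    // Print the spacing of the float grid at several magnitudes.
    for (float x : {1.0f, 1000.0f, 1e6f, 1e9f})
    {
        float gap = std::nextafter(x, 2.0f * x) - x;
        std::printf("near %.0e the float grid spacing is %g\n", x, gap);
    }
    // Near 1e9 (a 1000 km world measured in millimeters) the spacing is
    // already ~64, so millimeter precision is unrepresentable in float.
}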

One thing to note is that working exclusively in double precision on a CPU will not cause a performance hit (it only costs a little extra space). But try to avoid mixing floats and doubles, as the conversions do cost.
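For example (a sketch only; the function name and uniform location are made up for illustration), you can keep the whole chain in glm::dmat4 and convert to float exactly once, right at the point of upload:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// plus your GL loader of choice for GLint/glUniformMatrix4fv

void UploadTransform(GLint MatrixLoc, const glm::dmat4 &NetTransform)
{
    // All the math stays in doubles; convert to single precision once,
    // immediately before handing the matrix to OpenGL.
    glm::mat4 Single(NetTransform);
    glUniformMatrix4fv(MatrixLoc, 1, GL_FALSE, glm::value_ptr(Single));
}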

With the caveat that they eat more memory, generate more cache misses, and add memory fetch/store latency. So if you’re crunching a lot of computation on relatively little input data, you’re good. But if you’re doing a few simple ops across a lot of input data, you’re more likely to be bound by memory latency, particularly if you don’t use a cache-friendly access pattern.

One comment I forgot to make regarding object-oriented coding: it is one of the better ways to write maintainable code, but it can cost you in performance, so be prepared to make some compromises at some point.

Dark Photon:
What is your recommendation then? Do you think it is a better idea all around to do matrix computations in single or double precision?

tonyo_au:
True. Right now, I’m basically building a “framework” for myself for learning OpenGL and graphics development concepts, so I’m not TOO worried about performance (though I may be in the future!). However, I have been trying to do basic optimizations (writing all classes and methods in header files to encourage inlining, avoiding computations that don’t need to be done, etc.). It probably doesn’t help that I’m relying pretty heavily on C++11 features like lambdas, though.

For those interested, here’s my current working copy of my node code (haha):


#include <forward_list>
#include <functional>
#include <glm/glm.hpp>

class _NodeBase
{
protected:
    glm::dmat4 RelativeTransform;
    bool RelativeTransformChanged; // currently unused; reserved for caching later

public:
    _NodeBase(): RelativeTransform(1.0), RelativeTransformChanged(true) {}
    virtual ~_NodeBase() {}

    virtual void Draw() {}
    virtual void Draw(const glm::dmat4 &ParentTransform) {}
};

class _Node : public _NodeBase
{
protected:
    std::function<void(const glm::mat4&)> DrawFunc;
    std::forward_list<_NodeBase*> Children;

public:
    _Node() {}
    virtual ~_Node() {}

    void SetDrawFunc(std::function<void(const glm::mat4&)> DrawFunc_in)
    {
        DrawFunc = DrawFunc_in;
    }
    void SetRelativeTransform(const glm::dmat4 &RelativeTransform_in)
    {
        RelativeTransform = RelativeTransform_in;
    }
    void AddChild(_NodeBase *Child)
    {
        Children.push_front(Child);
    }
    void RemoveChild(_NodeBase *Child)
    {
        Children.remove(Child);
    }
    void Draw() override
    {
        // Root call: the relative transform is also the net transform.
        if(DrawFunc)
            DrawFunc(glm::mat4(RelativeTransform));
        for(auto Child : Children)
        {
            Child->Draw(RelativeTransform);
        }
    }
    void Draw(const glm::dmat4 &ParentTransform) override
    {
        // Accumulate the parent's transform, draw, then recurse into children.
        glm::dmat4 NetTransform = ParentTransform * RelativeTransform;
        if(DrawFunc)
            DrawFunc(glm::mat4(NetTransform));
        for(auto Child : Children)
        {
            Child->Draw(NetTransform);
        }
    }
};

I split them up so I can create other pseudo-node classes in the future, like lights and stuff.
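And here’s a minimal usage sketch, in case the call pattern isn’t obvious (the node names and lambda bodies are just placeholders, not part of the class):

#include <glm/gtc/matrix_transform.hpp>

int main()
{
    _Node Root, Arm;

    // Both transforms are relative; the arm sits one unit above the root.
    Root.SetRelativeTransform(glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -5.0)));
    Arm.SetRelativeTransform(glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 1.0, 0.0)));

    // Each callback receives the accumulated (net) transform in single precision.
    Root.SetDrawFunc([](const glm::mat4 &Net) { /* upload Net as a uniform, draw body */ });
    Arm.SetDrawFunc([](const glm::mat4 &Net) { /* upload Net as a uniform, draw arm */ });

    Root.AddChild(&Arm);
    Root.Draw(); // Arm is drawn with Root's transform times its own
}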

chbaker0:
What is your recommendation then? Do you think it is a better idea all around to do matrix computations in single or double precision?

If you need the extra precision you get from doubles, use them. If your world space is very large, you probably do. Just be aware that there may be cons depending on your usage.

OK, I’ll keep it in mind, thank you.

BTW, I love your avatar and screen name :slight_smile: That movie is amazing

How many matrices are you going to multiply to get the final result that is passed to the shader? As a rough guide, assume that each matrix multiply will cost you two bits of precision; by that guide, a chain of eight multiplies in single precision (24 significand bits) leaves you with roughly 24 − 16 = 8 good bits.

Also, if you update matrices incrementally (moving/rotating objects by multiplying their current transformation by a “delta” matrix each tick), the matrices will decay over time (i.e. orthonormal matrices will slowly cease being so). They will decay more slowly if you use double precision, but if their lifetimes can last for hours, you will need to re-normalise periodically.
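If you do need to re-normalise, one standard approach (this Gram–Schmidt sketch is my own illustration, not the only way) is to rebuild the rotation columns from their drifted values:

#include <glm/glm.hpp>

// Re-orthonormalise the 3x3 rotation part of a transform whose axes have
// drifted; the translation column is left untouched.
void ReOrthonormalise(glm::dmat4 &M)
{
    glm::dvec3 X = glm::normalize(glm::dvec3(M[0]));
    glm::dvec3 Y = glm::normalize(glm::dvec3(M[1]) - X * glm::dot(glm::dvec3(M[1]), X));
    glm::dvec3 Z = glm::cross(X, Y); // already unit length and orthogonal
    M[0] = glm::dvec4(X, 0.0);
    M[1] = glm::dvec4(Y, 0.0);
    M[2] = glm::dvec4(Z, 0.0);
}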

I’m not updating incrementally, so that won’t be a problem for me. However, losing precision with every matrix multiplication could conceivably be an issue, since each node’s relative transform gets multiplied down through all of its children. Perhaps it would be a good idea for me to use double precision for that.

Never hesitate to use double precision for the calculations on the CPU.