Why different behavior using global vs global const

I’m just learning OpenGL and managed to make a rectangle move around with arrow keys and jump (move up then back down) with the spacebar.

The initial rectangle vertices (2 triangles):
GLfloat vertices[] = {
    -0.9, 0.7,
    -0.7, 0.7,
    -0.9, 0.9,

    -0.9, 0.9,
    -0.7, 0.9,
    -0.7, 0.7,
};

My display function calls keyFunc():
void display() {
    keyFunc();
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glFlush();
}

The following code works because jumpFrameMax is defined as a local variable within the function. However, I was previously pulling my hair out because jumpFrameMax was declared as a global, and the jump increased in height infinitely without ever falling back down. After much frustration I changed the jumpFrameMax global to a const and that resolved the issue, but I don’t understand why. The behavior seemed to suggest that jumpFrameMax didn’t hold the value it was initialized with until it was made const. Is the significance of using a const specific to the way OpenGL works? Does it have to do with the way the display callback references global variables, or is it independent of OpenGL and I’m misunderstanding how C++ global variables vs. global consts work?


void keyFunc() {
    glClear(GL_COLOR_BUFFER_BIT);
    int jumpFrameMax = 1000;
    int down = GLUT_KEY_DOWN;
    int up = GLUT_KEY_UP;
    int left = GLUT_KEY_LEFT;
    int right = GLUT_KEY_RIGHT;

    int index = 1;
    int increment = 2;
    if (keys[left] || keys[right]) {
        index = 0;
    }
    if ((keys[left] || keys[right]) && (keys[up] || keys[down] || jumpFrame > 0)) {
        increment = 1;
    }

    float add = 0.001;
    for (int i = index; i < sizeof(vertices); i = i + increment) {
        if (i % 2 == 1) {//handle y axis
            if (keys[up]) {
                vertices[i] += add;
            } else if (keys[down]){
                vertices[i] -= add;
            } else if ((keys[32] && jumpFrame == 0) || jumpFrame > jumpFrameMax / 2) {//handle jump up
                if (jumpFrame == 0) {
                    jumpFrame = jumpFrameMax;
                }
                vertices[i] += add;
            } else if (jumpFrame <= jumpFrameMax / 2 && jumpFrame > 0) {//handle jump down
                vertices[i] -= add;
            }
        } else if (i % 2 == 0) {//handle x axis
            if (keys[right]) {
                vertices[i] += add;
            } else if (keys[left]) {
                vertices[i] -= add;
            }
        }
    }
    if (jumpFrame > 0) {
        jumpFrame -= 1;
    }
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}

That sounds interesting, like it might be a compiler glitch? What compiler are you using?

Thank goodness there is usually more than one way to do something…!

Jeff

[QUOTE=OceanJeff40;1290224]That sounds interesting, like it might be a compiler glitch? What compiler are you using?

Thank goodness there is usually more than one way to do something…!

Jeff[/QUOTE]

Jeff,

IKR. I’m using gcc in Eclipse on Ubuntu (gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609). I just wanted to make sure I learned something if it turned out to be an OpenGL issue, but it sounds like it probably isn’t.

It’s probably because of a bug in your program. Memory corruption seems likely. E.g. overrunning the end of a global array or passing a pointer to an int where pointer-to-long is required will typically modify a global variable, and it will do so consistently. Making the global variable “const” or making it a local variable will result in the corruption affecting something else (possibly something which isn’t being used).
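
To make that concrete, here is a minimal, deliberately buggy sketch (the names are made up for illustration; the overrun is undefined behavior, so the exact outcome depends on the compiler and how it lays out globals) of how writing past the end of one global can silently rewrite its neighbor:

#include <cstdio>

// Globals defined together often end up adjacent in memory.
int heights[4] = {1, 2, 3, 4};
int jumpFrameMax = 1000;    // may sit directly after heights

int main() {
    // Off-by-one: writes heights[4], one element past the end.
    // Undefined behavior; on many builds it silently clobbers
    // jumpFrameMax instead of crashing.
    for (int i = 0; i <= 4; ++i) {
        heights[i] = 0;
    }
    std::printf("jumpFrameMax = %d\n", jumpFrameMax);    // often prints 0, not 1000
    return 0;
}

Declare jumpFrameMax as const and the compiler will typically place it in a read-only section instead, so the stray write lands on something else entirely, which matches the “fix” you observed.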

As GClements suggested, you’ve got memory problems. You should run valgrind which will point out some of the memory bugs for you, and retrace your program carefully to find the rest.
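
For example (assuming g++ on Ubuntu as you mentioned, with made-up file names; note that valgrind’s memcheck mostly catches heap errors, while gcc’s AddressSanitizer also flags overruns of global arrays like the one in question):

# valgrind: heap errors, uninitialized reads, leaks
g++ -g main.cpp -o game -lGL -lglut
valgrind --track-origins=yes ./game

# AddressSanitizer (gcc 4.8 and later): also global/stack buffer overruns
g++ -g -fsanitize=address main.cpp -o game -lGL -lglut
./game    # aborts with a global-buffer-overflow report at the bad write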

I’ll give you a hint as to one of them:


GLfloat vertices[] = {
    -0.9, 0.7,
    -0.7, 0.7,
    ...
};
...
for (int i = index; i < sizeof(vertices); i = i + increment) {
    // Do stuff with vertices[i]
}

I hope you see a problem with this.

sizeof(vertices) gives you the size “in bytes” of a C++ sized array. For your 12-element GLfloat array that is 48, so the loop indexes far past the last valid element, which is exactly the kind of overrun GClements described. It appears that you want the size in elements. Might I suggest:


#define ARRAY_SIZE(a) (sizeof(a)/sizeof((a)[0]))
...
for (int i = index; i < ARRAY_SIZE(vertices); i = i + increment) {

Alternatively, use a std::vector to hold your vertex data, and then query vector.size() to get the currently-used size in elements.
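
A minimal sketch of that approach (not a complete program; index and increment stand in for the values computed in your keyFunc, and the brace initializer needs -std=c++11 or later):

#include <cstddef>
#include <vector>
#include <GL/glut.h>    // GLfloat and the GL calls

std::vector<GLfloat> vertices = {
    -0.9f, 0.7f,  -0.7f, 0.7f,  -0.9f, 0.9f,    // first triangle
    -0.9f, 0.9f,  -0.7f, 0.9f,  -0.7f, 0.7f,    // second triangle
};

void updateVertices(std::size_t index, std::size_t increment) {
    // size() is an element count, so this loop stays in bounds.
    for (std::size_t i = index; i < vertices.size(); i += increment) {
        // Do stuff with vertices[i]
    }
    // glBufferData still wants a size in bytes (call this after binding
    // vertexBuffer, as in your keyFunc):
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(GLfloat),
                 vertices.data(),
                 GL_STATIC_DRAW);
}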