Small memory leak, minimal example

Recently I was trying to fix a big memory leak that ended up being caused by me forgetting to override operator= to deallocate some OpenGL objects.

However, I noticed that a small amount of memory still seems to be leaking, unrelated to the first problem.

The following is a minimal example of what I mean.

If you #define TEST_LEAK (see the main loop), the OpenGL functions will be called continually, and the program seems to keep increasing its memory usage over time.
The leak is very small, but it could pose a problem if the program runs for hours.

If you comment out the #define TEST_LEAK, there is no noticeable leak.

I’m using OpenGL 4.3 on an Nvidia GT 555M under Windows 7, with the OpenGL context created by SFML 2.2 (not a core context).

If anyone has any ideas or has experienced anything similar to this, I’d greatly appreciate any feedback!

[code=cpp]
#include <GL/glew.h>
#include <SFML/Graphics.hpp>
#include <iostream>
#include <vector>
int main()
{
// Setting up window/glew just for context/functions:
sf::Window window(sf::VideoMode(800, 600), "Test",
sf::Style::Default, sf::ContextSettings(32, 8, 0, 3, 3));
init_glew(true); // calls glewInit and checks for some errors

// Vertex is simply glm::vec3 in this case
const int N = 3000;
std::vector<Vertex> vertices(N);
std::vector<Vertex> colors(N);

while (true)
{
    static int i = 0; std::cout << "I'm still working! " << i++ << std::endl;

    #define TEST_LEAK

    #ifdef TEST_LEAK
     GLuint vao_ID[1];
     GLuint vbo_ID[2];

     // Allocate:
     GLsizeiptr vbo_size = vertices.size() * 3 * sizeof(GLfloat);

     glGenVertexArrays(2, &vao_ID[0]); // Create VAO
     glBindVertexArray(vao_ID[0]);     // Bind VAO so we can use it
     glGenBuffers(2, &vbo_ID[0]);      // Generate VBO

     glBindBuffer(GL_ARRAY_BUFFER, vbo_ID[0]); // Bind VBO

     // Set size and data of VBO (and rendering style):
     glBufferData(GL_ARRAY_BUFFER, vbo_size, vertices.data(), GL_STATIC_DRAW);
     glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
     glEnableVertexAttribArray(0); // Enable the first vertex attribute array

     glBindBuffer(GL_ARRAY_BUFFER, vbo_ID[1]);
     glBufferData(GL_ARRAY_BUFFER, vbo_size, colors.data(), GL_STATIC_DRAW);
     glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0); // Set up our vertex attributes pointer
     glEnableVertexAttribArray(1); // Enable the second vertex attribute array


     // Clean up:
     glDisableVertexAttribArray(0);
     glDisableVertexAttribArray(1);
     glDeleteBuffers(2, vbo_ID);
     glDeleteVertexArrays(1, vao_ID);
    #endif // TEST_LEAK
}

}
[/code]

First of all, since OpenGL® is a specification and not a library, memory leaks are a problem of the specific implementation (e.g. the Nvidia® or AMD drivers). How did you measure the increased memory usage? The implementation might keep memory objects around and reuse them instead of deleting them immediately, for better performance (possibly broken, or simply not optimized, for your usage pattern of continuously allocating and freeing a lot of objects).

In my experience, running even a simple OpenGL® program that creates a window via Xlib and GLX through valgrind yields wildly different results depending on the driver. With Mesa3D I usually experience next to no issues, whereas with the proprietary drivers I usually get flooded with error messages and a LOT of memory leaks.

You gen two VAOs but only delete one. This should be obvious if you look at the VAO IDs you get. Note also that vao_ID only has room for one name, so glGenVertexArrays(2, &vao_ID[0]) writes the second name past the end of the array, which is undefined behavior on top of the leak.

Thanks, I knew I had to have made some dumb mistake :)

When I change the code to
glGenVertexArrays(1, &vao_ID[0]);
it doesn’t seem to leak at all. It does allocate more memory from the start, but I guess I shouldn’t worry about that as long as it doesn’t leak.

@Agent D
I didn’t mean to imply that it was the fault of OpenGL (the spec) itself, or that I was blaming anyone besides myself, but the problem seems to be gone anyway.