memory leak? while creating simple textures

I have a very simple OpenGL test program that, on my machine, appears to eat a bunch of memory erroneously.

All I do is create 50 1024x1024 textures very quickly, and then delete them all. I would expect my memory usage to spike while the textures are in memory, and then retreat to normal after I’ve deleted them.

I measured the memory usage of the process with ‘top’, KDE System Monitor, ‘htop’, and /proc/PID/status, and they all report 200MB+ of used memory AFTER the deletions!

When I run the program with valgrind’s massif tool, it reports no errors, and shows all of the memory that I allocate in the beginning being correctly freed.
Similarly, valgrind’s leak-check tool does not report any issues.

I let the program sit for a while, and the 200MB of used memory does not decrease.

My system is:
Scientific Linux 6.1
Lenovo W520 Laptop
NVIDIA Quadro 1000M
NVIDIA 325.15 Drivers (tested also with 319.32 and 319.49)

The simple test program is below.

Can anyone else repeat this? Unless I’ve done something obviously wrong in the code, my only guess is that it’s a driver bug, or perhaps an intended feature.
Either way, why would my massif results look fine, but the mem usage in ‘top’ be incorrect?


// COMPILE:
//  g++ main_bug.cpp -lglut -lGLU -lpthread


// RUN: ./a.out


#include <GL/freeglut.h>
#include <vector>
#include <iostream>


using namespace std;


#define IMAGE_SIZE 1024
#define FRAMES_PER_SECOND 10


int mainWindowHD = 0;


// create a new RGBA texture from pData
GLuint manualTex(const unsigned char* pData)
{
  GLuint texname;
  glGenTextures(1, &texname);


  glBindTexture(GL_TEXTURE_2D, texname);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);


  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
               IMAGE_SIZE, IMAGE_SIZE, 0, GL_RGBA,
               GL_UNSIGNED_BYTE, pData);


  return texname;
}


int renderLoopCounter = 0; // cycles 0..FRAMES_PER_SECOND
int cnt = 0; // one-up counter
std::vector<GLuint> mTexts; // list of textures i've created
bool done = false;


void Display (int t)
{


  cnt++;
  if (renderLoopCounter == FRAMES_PER_SECOND)
  {
    renderLoopCounter = 0;
  }


  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glColor3f(1.0, 1.0, 1.0);


  // until i've created 50 textures
  if (!done && cnt > FRAMES_PER_SECOND*2 && mTexts.size() < 50)
  {
    // create 10 textures at a time
    for (int i=0;i<10;i++)
    {
      unsigned char* img = new unsigned char[IMAGE_SIZE*IMAGE_SIZE*4];
      GLuint tex_id = manualTex(img);


      // @note: if i do this right here, there is no apparent memory issue:
      // glDeleteTextures(1, &tex_id); tex_id = 0;


      // but instead, i add to a list of textures to be deleted later
      if (tex_id != 0)
      {
        mTexts.push_back(tex_id);
        cout << "creating texture id=" << tex_id << endl;
      }
      delete[] img;


    }
  }


  // once i've created 50, delete all the textures at once
  if (renderLoopCounter == 0 && mTexts.size() >= 50)
  {
    for (size_t i = 0; i < mTexts.size(); i++)
    {
      GLuint tex_id = mTexts[i];
      cout << "deleting: " << tex_id << endl;
      glDeleteTextures(1, &tex_id);
    }
    mTexts.clear();
    done = true;
  }


  renderLoopCounter++;
  
  glutSwapBuffers();
  glutPostRedisplay();
  glutTimerFunc(1000/FRAMES_PER_SECOND, Display, 0);
}


int main(int argc, char *argv[])
{
  glutInit(&argc, argv);
  glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
 
  glutInitWindowSize(1024, 768);
  mainWindowHD = glutCreateWindow("Testing screen");
 
  glutTimerFunc(1000/FRAMES_PER_SECOND, Display, 0);
 
  glShadeModel(GL_SMOOTH);
  glEnable(GL_BLEND);
  glEnable(GL_NORMALIZE);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
  glCullFace(GL_FRONT);
 
  glutPostRedisplay();
  glutMainLoop();
}

No.

All that glDeleteTextures is specified to do is make the texture names available for reuse (in another subsequent glGenTextures call); it doesn’t say anything about when the memory used by the textures is released, or even if it is released at all. Your expectation is therefore wrong - we’re not talking about malloc and free here.
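
To make that concrete, here’s a minimal sketch (assuming a current GL context; the reuse of the old name is typical behaviour, not something the spec guarantees) of what is and isn’t promised:

// glDeleteTextures only releases the *name*; when (or whether) the storage
// behind it goes back to the OS is entirely the implementation's business.
GLuint first = 0, second = 0;

glGenTextures(1, &first);
glBindTexture(GL_TEXTURE_2D, first);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);   // driver allocates storage

glDeleteTextures(1, &first);   // 'first' is now available for reuse...
glGenTextures(1, &second);     // ...and may well come back as the same value,
                               // with the driver recycling the old storage
                               // instead of freeing and reallocating it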

GL implementations are completely free to manage this memory themselves, and one potentially useful thing they can do is keep the memory allocated and hand it back to you the next time you ask for a block of memory, rather than having to do a new allocation.
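
As a purely illustrative sketch (an analogy, not the driver’s actual code), a pooling allocator behaves exactly like what you’re seeing: “freed” blocks stay with the process for later reuse, so any tool that counts process memory keeps reporting them:

#include <cstddef>
#include <vector>

// Toy block pool: release() puts a block on a free list instead of
// returning it to the OS, so the process footprint stays high even
// though, from the pool's point of view, the block is "free".
class BlockPool {
public:
  explicit BlockPool(std::size_t blockSize) : mBlockSize(blockSize) {}

  void* acquire()
  {
    if (!mFree.empty()) {               // reuse a previously released block
      void* p = mFree.back();
      mFree.pop_back();
      return p;
    }
    return ::operator new(mBlockSize);  // otherwise grow the pool
  }

  void release(void* p) { mFree.push_back(p); } // keep it for next time

private:
  std::size_t mBlockSize;
  std::vector<void*> mFree;
};

A GL implementation that stages texture uploads through system memory can do the same thing with its staging buffers, which alone would explain ‘top’ sitting at 200MB+ after the deletes.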

Also note that we’re talking in the main about video RAM here, not system memory, so you shouldn’t expect tools that work with system memory to give you any kind of useful info.

Thanks for the insight.

I monitored the video RAM usage using nvidia-settings and it behaved as expected – gradual increase followed by a sharp decrease.

I guess what I’m confused about is why ‘top’ et al – tools used to monitor system memory – are showing such high memory usage after the glDeleteTextures, while valgrind/massif (also measuring system memory) and nvidia-settings (measuring video memory) do not.

Could the GL implementation be maintaining a memory pool on the system side? But then why would massif not detect that?

This concerns me because I typically use ‘top’ as a “sniff test” for memory leaks, and if I see something suspicious I’ll dig deeper with valgrind. If I can’t trust the Linux system memory reporting tools when OpenGL is involved, that’s good to know.

The GL implementation could in theory be doing anything, including maintaining its own system-memory copies of resources (even if only for temporary staging purposes). That memory may be allocated in user space or in kernel space, and in general it’s not something you need to worry about. Pre-emptively sniffing for memory leaks with such fundamental usage is really a form of “premature optimization”, if you think about it.
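
One tooling note, which may explain the massif/‘top’ discrepancy (this is a guess about how massif is being run, not about anything NVIDIA-specific): by default massif only tracks allocations made through malloc/new, whereas drivers commonly obtain large buffers via mmap, which that mode never sees. Running it as, say,

valgrind --tool=massif --pages-as-heap=yes ./a.out

profiles at the page level instead, so mmap’d memory is included and the massif and ‘top’ numbers should agree more closely.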

What’s important to realize here is the old saw that OpenGL is not software: a typical OpenGL implementation is not just a software library, it’s an interface between your program and your graphics hardware. Your graphics hardware is what actually does all the work; the OpenGL implementation just provides a means for your program to tell the graphics hardware what work to do. Exceptions do exist (such as pure software-only implementations, and parts of OpenGL that may be software emulated rather than hardware accelerated), so it’s a fuzzy general rule, but it remains a good one to work by.

So if you see something that looks as if it would be abnormal in a software-only environment, remember that you’re no longer in a software-only environment, shift your expectations and assumptions to take account of that, and see if it still looks abnormal from the new perspective.