Thread: Memory leak using glTexSubImage2D

  1. #1
    Junior Member Newbie · Joined Mar 2011 · 17 posts

    Memory leak using glTexSubImage2D

    In the course of my application I create a 2048x2048 texture which then gets updated periodically. Every few updates of this texture via a glTexSubImage2D call, there is a significant drop in the system's available memory which never appears to be released.

    I've attached a complete application that demonstrates how I'm using this texture in my real application; having tested it, I find it shows the same memory issue as my real application. Can anyone see something I'm not doing properly that might account for this memory growth?

    Code :
    #define GL_GLEXT_PROTOTYPES
     
    #include <GL/freeglut.h>
    #include <GL/glext.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <memory.h>
    #include <string.h>
    #include <iostream>
    #include <fstream>
    using namespace std;
     
    GLuint texid(0);
    unsigned char rawTex[2048*2048*3];
    int totalMem(0), freeMem(0);
     
    void GetMemUsage (void)
    {
      unsigned char rawData[512];
     
      memset(rawData, 0, sizeof(rawData));
      fstream f;
      f.open("/proc/meminfo", ios::in|ios::binary);
     
      if (f.is_open()) {
        f.read((char*) rawData, sizeof(rawData) - 1);   // leave the final byte as a NUL terminator for strstr()
        f.close();
     
        char *ptr(0);
     
        ptr = strstr((char*)rawData, (const char*)"MemTotal:");
        if (ptr != 0) {
          totalMem = atoi(ptr+strlen("MemTotal:"));
        }
        ptr = 0;
        ptr = strstr((char*) rawData, (const char*)"MemFree:");
        if (ptr != 0) {
          freeMem = atoi(ptr+strlen("MemFree:"));
        }    
      }
    }
     
    void SetBitmap(void)
    {
      static int init(0);
     
      if (!init) {
        memset(rawTex, 0xA0, sizeof(rawTex));
      } else {
        memset(rawTex, 0xFF, sizeof(rawTex));
      }
      init ^= 1;
    }
     
    void Display (void)
    {
      char txtString[120];
     
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glEnable(GL_TEXTURE_2D);
     
      glActiveTexture(GL_TEXTURE0);
      glBindTexture(GL_TEXTURE_2D, texid);
      glBegin(GL_TRIANGLES);
     
      glTexCoord2f(0.0, 0.0); 
      glVertex2f(-1, 0);
      glTexCoord2f(0.0, 1.0); 
      glVertex2f(-1, 1);
      glTexCoord2f(1.0, 0.0); 
      glVertex2f(0, 0);
     
      glTexCoord2f(0.0, 1.0); 
      glVertex2f(-1, 1);
      glTexCoord2f(1.0, 0.0); 
      glVertex2f(0, 0);
      glTexCoord2f(1.0, 1.0); 
      glVertex2f(0, 1);
     
      glEnd();
      glDisable(GL_TEXTURE_2D);
     
      GetMemUsage();
      snprintf(txtString, 120, "Total(%d) Free(%d) Per(%f)", totalMem, freeMem, freeMem / ((float) totalMem));
      glColor4f(1.0f, 0.0f, 1.0f, 1.0f);   // set the text colour before glRasterPos so it is latched with the raster position
      glRasterPos2i(0, 0);
      glutBitmapString(GLUT_BITMAP_HELVETICA_18, (const unsigned char*) txtString);
     
      glutSwapBuffers();
      glutPostRedisplay();
      return;
    }
     
    void Keyboard(unsigned char key, int x, int y)
    {
      switch(key) {
      case 'n':
        SetBitmap();
        glBindTexture(GL_TEXTURE_2D, texid);    // Bind the texture
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048, GL_RGB, GL_UNSIGNED_BYTE, rawTex);
        break;
      default:
        break;
      }
      return;
    }
     
    void InitTexture (void)
    {
        // Initially Load the texture
        glGenTextures(1, &texid);               // Creates the texture
        glBindTexture(GL_TEXTURE_2D, texid);    // Bind the texture
        /*
         *  Set edge handling
         */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        /*
         * Set filtering
         */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     
        /*
         * Upload the texture to GL
         */
        glTexImage2D(GL_TEXTURE_2D,             // 2D Texture
                     0,                         // level
                     GL_RGB,                    // internal format
                     2048,                      // width
                     2048,                      // height
                     0,                         // border
                     GL_RGB,                    // format
                     GL_UNSIGNED_BYTE,          // type
                     rawTex);                   // data
    }
     
    int main (int argc, char *argv[])
    {
      GLint mainWindow;
     
      glutInit (&argc, argv);
      SetBitmap();
     
      glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_GLUTMAINLOOP_RETURNS);
      glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
     
      glutInitWindowSize (1024, 768);
      mainWindow = glutCreateWindow("Texture Test");
     
      glutDisplayFunc(Display);
      glutKeyboardFunc(Keyboard);
     
      glShadeModel(GL_SMOOTH);
      glEnable(GL_BLEND);
      glEnable(GL_NORMALIZE);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
      glCullFace(GL_FRONT);
      InitTexture();
     
      glutPostRedisplay();
      glutMainLoop();
     
      cout << "Freeing Memory!" << endl;
      glDeleteTextures(1, &texid);
     
      return 0;    
    }

  2. #2
    Super Moderator OpenGL Guru · Joined Feb 2000 · Montreal, Canada · 4,264 posts
    You are using glTexSubImage2D correctly. I don't know whether /proc/meminfo reports driver memory usage, but it's possible. How much memory is leaked each time you call it?
    Is your source data 2048x2048x3 or 2048x2048x4?
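
    If you want to see whether the growth is on the GPU side rather than in system RAM, you could also query free video memory directly. This is only a minimal sketch, assuming the driver exposes the GL_ATI_meminfo extension (NVIDIA has a similar GL_NVX_gpu_memory_info extension); the enum value and the four kB values it returns come from that extension's spec, and GetGpuMemUsage is just an illustrative helper name:

    Code :
    /*
     * Sketch: query free GPU texture memory via GL_ATI_meminfo.
     * glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, ...) fills four GLints (all in kB):
     * [0] total free pool, [1] largest free block,
     * [2] total free auxiliary memory, [3] largest auxiliary block.
     */
    #ifndef GL_TEXTURE_FREE_MEMORY_ATI
    #define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
    #endif
     
    void GetGpuMemUsage (void)
    {
      const char *ext = (const char*) glGetString(GL_EXTENSIONS);
     
      if (ext != 0 && strstr(ext, "GL_ATI_meminfo") != 0) {
        GLint info[4] = {0, 0, 0, 0};
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
        printf("GPU texture pool free: %d kB (largest block %d kB)\n", info[0], info[1]);
      }
    }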

  3. #3
    Junior Member Newbie · Joined Mar 2011 · 17 posts
    It's ranging between 0 and 16,772 bytes, so it doesn't appear to be correlated with the size of the texture. <edit: I just saw it jump by a little over 1 MB on an 'n' press, so I really don't know what determines the size of the leak>

    It's looking like the memory leak stems from the ATI drivers used by the device my app is running on. I'm examining some valgrind results at the moment to learn more about it while I work on a workaround. So far, deleting the texture and then recreating it from scratch doesn't clear up the problem.
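
    For reference, the delete-and-recreate attempt described above amounts to something like the sketch below, reusing texid and the InitTexture helper from the first post (RecreateTexture is just an illustrative name):

    Code :
    /*
     * Workaround attempt: throw the texture object away and rebuild it
     * from scratch with the same parameters and a fresh upload.
     */
    void RecreateTexture (void)
    {
      glDeleteTextures(1, &texid);   // release the old texture object
      InitTexture();                 // generate, configure and re-upload it
    }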

  4. #4
    Senior Member OpenGL Pro · Joined Jan 2007 · 1,183 posts
    Using GL_RGB as the format for a TexSubImage call, your driver must allocate a new buffer, expand (and possibly reswizzle) your data to 32-bit in that new buffer, then transfer the new buffer to your GPU. Making a wild guess, your driver is failing to properly free that buffer when done; it may not even be leaking - it may just be keeping the buffer around in case it can reuse it later, to save the cost of a fresh allocation in the future.

    In general, you should use GL_BGRA for the format parameter of TexSubImage calls; this is more likely to match what the driver and hardware actually use internally, giving you a direct transfer without the intermediate software step. You may or may not also need to use GL_UNSIGNED_INT_8_8_8_8_REV for the type parameter. It's worth making this switch to see whether the observed leak goes away, and as a bonus it will make the data transfer itself faster; a sketch of the change is at the end of this post.

    See further here: http://www.opengl.org/wiki/Common_Mi...nd_pixel_reads

    With that combination the driver will likely not have to perform any CPU-based conversion and can DMA the data directly to the video card.
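
    As a concrete illustration of the switch suggested above, here is a sketch of how the allocation and update in the posted test program could look with a sized 32-bit internal format and a GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV client format. The rawTex32 buffer and the two helper names are illustrative, not part of the original program; the source data would need to be 4 bytes per texel in BGRA order:

    Code :
    /*
     * Sketch: 32-bit texel path so the driver can DMA the data without a
     * CPU-side expansion pass. Requires GL 1.2 (or GL_EXT_bgra).
     */
    unsigned char rawTex32[2048*2048*4];        // 4 bytes per texel, BGRA order
     
    void InitTexture32 (void)
    {
        glGenTextures(1, &texid);
        glBindTexture(GL_TEXTURE_2D, texid);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     
        // Sized internal format: ask for exactly 8 bits per channel.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, rawTex32);
    }
     
    void UpdateTexture32 (void)
    {
        glBindTexture(GL_TEXTURE_2D, texid);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048,
                        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, rawTex32);
    }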
