Memory Leak - can't find it

Somewhere in the following code I have a leak:

void UpdateGLBuffers()
{
    if( g_GLdone ) {
        return;
    }
    g_GLdone = true;
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, 1, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, g_Image[1-g_iCounter].GetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

int DrawGLScene(GLvoid)
{
    QueryPerformanceCounter( &thisFrame );
    int sleeptime = (1000 / 30) - (1000*(thisFrame.QuadPart - lastFrame.QuadPart) / tps.QuadPart); // exchange 30 with new FPS
    if( sleeptime > 0 )
        Sleep( sleeptime );

    UpdateGLBuffers();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
    glLoadIdentity();                                   // Reset The View
    glTranslatef(0.0f, 0.0f, -5.0f);

    glBindTexture(GL_TEXTURE_2D, texture);

    glColor4f(1.0, 1.0, 1.0, 1.0);

    glBegin(GL_QUADS);
        // Front Face
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, -1.0f,  0.5f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, -1.0f,  0.5f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  0.5f);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  0.5f);
    glEnd();

    QueryPerformanceCounter( &lastFrame );
    return TRUE; // Keep Going
}

and I just can’t pinpoint it. Even the glTexParameteri() lines alone already seem to leak. Are there any known issues, or what is happening here?

What do function calls such as

g_Image[1-g_iCounter].GetData()

do? Do they just return a pointer to existing data, or do they do anything extra?
You could try a constant width/height plus a null data pointer and see if there’s still a problem.
Another guess: the array index [1-g_iCounter] looks a bit suspicious too; are you sure it isn’t meant to be [g_iCounter-1]?
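To make that test concrete, here is a minimal sketch of the null-data variant (the 512x512 size is an arbitrary placeholder, not something taken from your code):

// Hypothetical test: fixed size, no client data, so the camera API is not involved at all.
// If memory still grows with this, the leak is not in GetData()/GetCols()/GetRows().
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, 1, 512, 512, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);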

Er, sorry, I forgot to mention: those are the camera API’s functions, which return a pointer to the image data in memory. I have already tried disabling image capture so that it is always the same pointer / data in memory.
The functions are:

Image:
virtual unsigned int GetRows() const;
virtual unsigned int GetCols() const;
virtual unsigned char* GetData();

which have so far proven not to leak in command-line applications.

Edit: the memory even leaks if I comment the glTexImage2D call out, but if I comment out DrawGLScene there is no leak. Image acquisition itself doesn’t seem to be causing this.
Edit 2: the 1-g_iCounter is okay; there are 2 images and this is some sort of image-buffer hack. I’ve tested with constant rows/cols and a NULL pointer, and it’s still leaking.

Okay, found the error.

The memory usage is constantly rising and gains about 1 MB of RAM within 5 minutes, at which point it just stops growing and stays constant.

I have absolutely no idea where this is coming from, but apparently it is somewhere inside the nVidia driver stack or their OpenGL implementation.

Sorry for the confusion here; I never let it run for 5 minutes, since it was just continuously acquiring more RAM :(

  1. Could you try commenting out the UpdateGLBuffers call and see if you still get this? You are creating a texture every frame; you could try using an FBO instead (see the allocate-once sketch further below).
  2. The third parameter is the internal format. Shouldn’t it be GL_INTENSITY?

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, g_Image[1-g_iCounter].GetData());

“1” is an allowed value for the internal format in the compatibility profile (http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml), but it has been removed from core.

So if you want to use core, you should really use GL_RED.
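On point 1 above (creating a texture every frame): if the aim is simply to avoid re-allocating texture storage each frame, one common alternative is to create the texture once and only replace its contents afterwards with glTexSubImage2D. A minimal sketch, assuming the image size is fixed and reusing the texture / g_Image / g_iCounter globals from the posted code (cols/rows stand for GetCols()/GetRows(); this is a suggestion, not what the original code does):

// One-time setup, e.g. in the GL init code: allocate storage only, no pixel data yet.
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, cols, rows, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Per frame: upload new pixels into the existing storage instead of re-creating it.
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, cols, rows, GL_LUMINANCE, GL_UNSIGNED_BYTE, g_Image[1-g_iCounter].GetData());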

Thanks for the input. After an hour of debugging I noticed that the leaking actually stops after a few minutes, so this is something else.

Nonetheless I have one additional question:

The cameras here have 8-bit, 12-bit or 16-bit monochrome output, and I want to make the application work with all three pixel formats. I was able to get 8-bit and 12-bit running properly via

8-bit:
glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, cols, rows, 0, GL_RED, GL_UNSIGNED_BYTE, data);

12-bit:
glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY12, cols, rows, 0, GL_LUMINANCE12, GL_UNSIGNED_BYTE, data);

but with 16-bit I am failing:

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY16, cols, rows, 0, GL_LUMINANCE16, GL_UNSIGNED_SHORT, data);

I get a white image, or rather no image at all. The camera does work with 16-bit; the manufacturer’s software displays an image in 16-bit mode.

Any idea what I am doing wrong? This is on Win7 x64 with a GeForce 9600 GT.

This 12-bit call

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY12, cols, rows, 0, GL_LUMINANCE12, GL_UNSIGNED_BYTE, data);

should use GL_UNSIGNED_SHORT:

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY12, cols, rows, 0, GL_LUMINANCE12, GL_UNSIGNED_SHORT, data);

Have you called glEnable(GL_TEXTURE_2D) before you call glBindTexture and glTexImage2D? I don’t see this call in the code snippet you posted earlier.
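For reference, with the fixed-function pipeline that call typically sits in the one-time GL setup; a minimal sketch, assuming texture is the same global GLuint used in your posted code:

// Hypothetical init snippet: enable texturing once and create the texture object.
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);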

Okay, I replaced it with UNSIGNED_SHORT and it works fine as well. I’m not sure why it should be short or byte, since it’s a MONO12 image, which means 1.5 bytes per pixel (neither short nor byte).

The complete code actually looks like this:

void UpdateGLBuffers()
{
    if( g_GLdone ) {
        return;
    }
    g_GLdone = true;
    glBindTexture(GL_TEXTURE_2D, texture);
    if( g_Color ) {
        g_Image[1-g_iCounter].Convert(PIXEL_FORMAT_RGB, &g_tempImage);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_RGB, GL_UNSIGNED_BYTE, g_tempImage.GetData());
    }
    else {
        switch( g_Image[1-g_iCounter].GetPixelFormat() ) {
        case PIXEL_FORMAT_MONO8:
            glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_RED, GL_UNSIGNED_BYTE, g_Image[1-g_iCounter].GetData());
            break;
        case PIXEL_FORMAT_MONO12:
            glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY12, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_LUMINANCE12, GL_UNSIGNED_SHORT, g_Image[1-g_iCounter].GetData());
            break;
        case PIXEL_FORMAT_MONO16:
            glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY16, g_Image[1-g_iCounter].GetCols(), g_Image[1-g_iCounter].GetRows(), 0, GL_LUMINANCE16, GL_UNSIGNED_SHORT, g_Image[1-g_iCounter].GetData());
            break;
        default:
            break;
        }
    }
}

And it works for every pixel format except MONO16.

Thanks a lot!

If that’s the case, I would look closely at the data array (g_Image[1-g_iCounter].GetData()) that is passed to glTexImage2D. Are you sure that the data is correct? Maybe it’s padded with zeros? You could also try adding the following call before the glTexImage2D call:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

See if it makes any difference.

Nope, that did not help. The data itself is valid. I think there’s something wrong with the way nVidia’s OpenGL implementation handles 16-bit textures (at least I read something like that while searching).

Since 16-bit image display is not essential in the first place, I guess this one is resolved.

Thanks for all the input.

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY16, width, height, 0, GL_LUMINANCE16, GL_UNSIGNED_SHORT, Data());

should generate GL_INVALID_ENUM, because GL_LUMINANCE16 is not a legal value for the format parameter (the sized 16 variant only exists as an internal format). You need

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY16, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, Data());
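A quick way to catch this kind of mistake is to check glGetError right after the upload; a minimal sketch (the error printing is just an example, not from the original code):

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY16, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, Data());
GLenum err = glGetError();
if( err != GL_NO_ERROR ) {
    // With GL_LUMINANCE16 as the format this would have reported 0x0500 (GL_INVALID_ENUM).
    printf("glTexImage2D error: 0x%04X\n", err);
}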

Thanks for that; with GL_LUMINANCE it’s working now for all three monochrome modes.
One last question though: why does it work with MONO12? I specify GL_UNSIGNED_BYTE, which would be too small, while GL_UNSIGNED_SHORT would be too big. As internal format I set GL_INTENSITY12, so my guess is that this overrides the other value, but according to the documentation the third parameter describes how the texture should be stored, not what the source data looks like?

Edit: it turns out an error in my code used the default conversion to BGR, so the Mono12 display actually never worked.