OpenGL driver "memory leak"

Ok, I don’t know whether the driver leaks, but I had to find a thread title.

Here’s the problem:

glTexImage2D allocates a hell of a lot of system RAM, and I have no clue why this happens.

Here’s the texture loading code:


int CTexture::Load (ubyte* buffer)
{
OglGenTextures (1, reinterpret_cast<GLuint*> (&m_info.handle));
// residency priority: mipmapped square textures highest, mipmapped non-square medium, everything else low
m_info.prio = m_info.bMipMaps ? (m_info.h == m_info.w) ? 1.0f : 0.5f : 0.1f;
glPrioritizeTextures (1, (GLuint *) &m_info.handle, &m_info.prio);
Bind (); // basically calling glBindTexture
// set up automatic mipmap generation
glTexParameteri (GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GLint (m_info.bMipMaps && gameStates.ogl.bNeedMipMaps));
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
if (m_info.bSmoothe) {
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, gameStates.ogl.texMagFilter);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, gameStates.ogl.texMinFilter);
	}
else {
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	}
// m_info.internalFormat: component count (e.g. 4 for RGBA)
// m_info.format: texture format (e.g. GL_RGBA)
// m_info.tw, m_info.th: texture dimensions (always powers of two)
glTexImage2D (GL_TEXTURE_2D, 0, m_info.internalFormat, m_info.tw, m_info.th, 0, m_info.format, GL_UNSIGNED_BYTE, buffer);
return 0; // success
}

The weirdest thing is that although Process Explorer shows almost 2 GB of available memory, I get an out of memory error when loading the third or fourth level in a row (load level 1, use a cheat to jump to level 2, then level 3, … BAM!). Memory fragmentation? I have checked my own texture/bitmap management: it nicely returns all the memory it has used.

It’s also weird that when loading level 2, available memory first goes back up to about 2 GB (after level 1’s data has been freed), but then drops below what had been available after loading level 1. When loading level 3, available memory goes up again and then ends up below what was available after loading level 2.

I have made very sure that all textures allocated in the OpenGL driver are released when unloading a level’s data (glDeleteTextures).

I can say for sure that my program doesn’t directly allocate all that memory. It takes about 530 MB for level 1, but another 530 MB are taken while the textures are uploaded to the OpenGL driver. I have checked this in the debugger: when execution reaches the code that sends the texture data to the driver, almost all texture data has already been loaded from disk into memory (the program will still compute a few transparency masks consuming about 10 MB). When I turn off mipmap generation, the memory loss is significantly lower, but still present (it looks like glTexImage2D consumes RAM).

Before sending the textures to the driver I have 1533 MB of free RAM and 523 MB allocated for texture data. After sending them, I have 1 GB of free RAM and 533 MB allocated for texture data. It looks like the entire texture data gets duplicated by the driver. When I disable mipmap generation and the texture upload entirely (by simply not calling CTexture::Load()), no extra memory gets consumed.
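
For what it’s worth, the numbers above come from Process Explorer and the debugger. If I wanted to log them from inside the program instead, a minimal Win32 sketch like this should do (GetProcessMemoryInfo lives in psapi, so psapi.lib has to be linked):

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Log the calling process' memory usage, e.g. right before and after glTexImage2D.
static void LogProcessMemory (const char* tag)
{
PROCESS_MEMORY_COUNTERS pmc;
if (GetProcessMemoryInfo (GetCurrentProcess (), &pmc, sizeof (pmc)))
	printf ("%s: working set %u MB, pagefile usage %u MB\n",
	        tag, unsigned (pmc.WorkingSetSize >> 20), unsigned (pmc.PagefileUsage >> 20));
}

// usage:
//	LogProcessMemory ("before glTexImage2D");
//	glTexImage2D (...);
//	LogProcessMemory ("after glTexImage2D");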

The problem doesn’t depend on the mipmap generation settings: it still occurs with mipmap generation turned off, as long as the textures are uploaded to the driver.

glGetError() after glTexImage2D always returns zero.
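
The check is nothing fancy, roughly along these lines (glGetError can have several errors queued up, so it gets drained in a loop), and it never reports anything after the upload:

// Drain the GL error queue and log anything found; called right after glTexImage2D.
static void CheckGLError (const char* where)
{
GLenum err;
while ((err = glGetError ()) != GL_NO_ERROR)
	fprintf (stderr, "GL error 0x%04x after %s\n", unsigned (err), where);
}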

After having freed up all memory used for the level, and before loading the next level, free memory goes back up to about 2 GB.

Hardware/driver: ATI Radeon HD 4870 1 GB, Catalyst 9.2. Other people with different hardware (also NVidia) have the same problem though.

WTF?!

As far as I’ve understood it this is normal behavior - the driver must maintain a copy for situations where the data on the GPU has to be paged out and back in. Free the memory in your application if it’s not needed, and do the same for resources allocated by GL on your behalf. However, someone else in this forum probably knows more about the memory management details than I do. :)

The weird thing is that the previous version of the program doesn’t exhibit this behavior. I just cannot detect any difference in the calls I deem relevant (setting up OpenGL and the render context).

You are right, OpenGL drivers keep a copy of your texture image data.
That’s necessary for a simple reason: you are free to allocate more textures than you have texture memory for.
So in order to have all currently bound textures in memory accessible to your GPU at any given time (be it on-card memory or AGP/PCIe accessible system RAM), your driver may need to swap textures in and out repeatedly.

Since OpenGL has no way to ask your application to re-upload texture image data, it needs to keep a copy of all texture images, even in situations where you don’t exceed the available texture memory, because OpenGL cannot know whether that might happen at a later time.
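
The practical upshot for your loader: once glTexImage2D has returned, the driver holds its own copy of the pixels, so you can release (or recycle) the application-side buffer right away if nothing else needs it. Roughly like this, where FreeTextureBuffer is just a placeholder for whatever your bitmap manager provides:

glTexImage2D (GL_TEXTURE_2D, 0, m_info.internalFormat, m_info.tw, m_info.th, 0, m_info.format, GL_UNSIGNED_BYTE, buffer);
// the driver now holds its own copy, so the CPU-side pixels are redundant
// (unless you still need them, e.g. for the transparency masks you mentioned)
FreeTextureBuffer (buffer);	// placeholder: release or recycle the buffer
buffer = NULL;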

By the way, I noticed you are using glPrioritizeTextures(), which is in fact a way to hint to OpenGL which textures it should try to keep in texture memory, or, put the other way round, which ones should be considered to be replaced first.

AFAIK glPrioritizeTextures() only comes into play when the driver actually needs to free up memory. Given that my gfx card has 1 GB of RAM, it should never have to swap out any data for my application on my system.

And why doesn’t this happen with the previous program version?

I wonder whether I have a subtle difference in setting up OpenGL I just fail to detect.

Edit:

Looks like the previous version exhibits the same behaviour.

Depends on your usage ;)

You might know that your particular application will never exceed the amount of memory of your gfx card. But there is no way your driver can tell.

A driver might employ a strategy other than keeping texture copies in every situation. One that comes to mind is deferring the creation of those copies until an out-of-texture-memory situation actually occurs, and then reading the texture data back.
Or it might use some other criterion, for example the texture size, or a threshold of used memory, or …

That way, changing your application (allocating more and/or bigger textures) can trigger a change in your driver’s memory management.

I think the OpenGL doc states that texture priorities are only considered when actual swapping has to take place.

You are right, the driver cannot tell whether graphics memory will suffice for an app’s requirements or not, and I have understood why it keeps texture buffer copies in system RAM.

Is there a way to suppress this behavior of the driver? I.e. when it runs out of graphics memory, have it just return an error code?

I’d rather control this myself, e.g. by decreasing texture size on the fly.

If I only could tell the driver to use my application’s texture buffers as RAM backup …
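
One thing that might at least help with the “decrease texture size on the fly” part is the old proxy texture mechanism, although as far as I can tell it only validates format and dimensions against the implementation’s limits, not against the memory that is actually free. A sketch:

// Ask the implementation whether a texture of this size/format could be created at all.
// GL_PROXY_TEXTURE_2D does not allocate anything; it only updates the proxy state.
bool CanCreateTexture (GLint internalFormat, GLsizei w, GLsizei h, GLenum format)
{
glTexImage2D (GL_PROXY_TEXTURE_2D, 0, internalFormat, w, h, 0, format, GL_UNSIGNED_BYTE, NULL);
GLint texW = 0;
glGetTexLevelParameteriv (GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &texW);
return texW != 0;	// width reads back as 0 if the request cannot be satisfied
}

// e.g. halve w and h and retry until CanCreateTexture () succeeds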

There was talk of an optional backing store of sorts for GL3 but it rode off into the sunset with many other good ideas put forward at the time.

Though now, with everything virtualized to the extent that it is, such a feature probably makes less sense going forward.

Btw, PrioritizeTextures, AreTexturesResident, etc. are deprecated.

My entire program is deprecated :)

It’s an age old shooter classic I am trying to keep alive enough for it to be playable on modern operating systems.

Well, I will remove that call then.

If this is due to (system) memory fragmentation, the driver writers could fix it, and quite easily too.

Instead of just failing when there isn’t a contiguous block of memory large enough to hold the whole texture in system memory, even though the total free memory in the process’ address space would suffice, they could introduce a proxy (not in the OpenGL sense, but as a programming pattern) that splits the allocation into chunks.

karx11erx:
But, as that doesn’t seem to be done by the driver (yet? :) ), could you split the texture(s) into smaller textures (basically the reverse of an atlas)? A rough sketch of what I mean follows below.
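
Something along these lines (untested; tileX/tileY/tileW/tileH and the surrounding loop are left as an exercise):

// Upload one tile of a larger w x h RGBA image as its own texture,
// without copying the tile into a separate buffer first.
glPixelStorei (GL_UNPACK_ROW_LENGTH, w);		// row stride of the big image
glPixelStorei (GL_UNPACK_SKIP_PIXELS, tileX);	// left edge of the tile
glPixelStorei (GL_UNPACK_SKIP_ROWS, tileY);		// top edge of the tile
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, tileW, tileH, 0, GL_RGBA, GL_UNSIGNED_BYTE, bigImage);
// reset the unpack state so later uploads aren't affected
glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei (GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei (GL_UNPACK_SKIP_ROWS, 0);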

However, I can’t quite put the pieces together: “an age old shooter classic I am trying to keep alive” and “530 MB” don’t seem to fit.

Is “age old shooter classic” this one : http://www.descent2.de/d2x.html ?

ZBuffer,

yes, it is. I have understood the nature of my problem though, and have already worked around it.

tamlin,

fans of the game have created a lot of 512x512 high-res textures for it. I could apply texture compression, but then I would have to rewrite the animated texture handling, and from looking into that I can tell it would be a PITA.
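
If it were only the upload itself, letting the driver compress would be a one-line change, something like the sketch below; whether that would also shrink the driver’s backing copy in system RAM, I don’t know. The real pain is all the animated texture handling around it.

// Let the driver compress on upload by only changing the internal format
// (requires GL 1.3 / ARB_texture_compression).
glTexImage2D (GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, m_info.tw, m_info.th, 0, m_info.format, GL_UNSIGNED_BYTE, buffer);

GLint compressed = 0;
glGetTexLevelParameteriv (GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);
// compressed == GL_TRUE if the driver actually stored a compressed format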