Note that the leak still occurs if I replace the immediate-mode rendering with rendering from a VBO, and if I manually detach the shaders from the program before deleting it. I've also tried just about every other cleanup step before deleting the program, as well as calling glFinish / glFlush afterwards.
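For reference, the per-frame cycle I'm describing looks roughly like this (a simplified sketch, not the exact code posted above; vsSource / fsSource are placeholders for the shader strings):

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vs, 1, &vsSource, NULL);
glShaderSource(fs, 1, &fsSource, NULL);
glCompileShader(vs);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

glUseProgram(prog);
// ... draw ...

// Tear-down, tried in every combination mentioned above:
glUseProgram(0);
glDetachShader(prog, vs);   // manual detach before deletion
glDetachShader(prog, fs);
glDeleteShader(vs);
glDeleteShader(fs);
glDeleteProgram(prog);
glFinish();                 // also tried glFlush()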
This behavior occurs on my desktop’s Radeon 6850 with the 5/9/2011 drivers, as well as on my laptop’s Radeon Mobility HD 3200 with the 5/9/2011 drivers. I gave a test program to my friend, and it didn’t cause a leak on her non-AMD card.
I hope I’ve given enough information here. I can upload the test application if anyone wants me to.
How do you know that a leak occurred? How are you testing your memory usage? Are you sure it’s a leak and not just the driver holding on to some memory in case it might need it again?
I’m observing my program’s memory usage through Windows’ task manager. I’m aware that this doesn’t report the exact amount of memory actually used by the program, but my test application started with 20 MB allocated at initialization and rose to 700 MB over time with no sign of stopping, and this was basically just running the few lines posted above. If I change the program so that any of the criteria I mentioned is not met, memory usage stabilizes very quickly.
I considered the possibility that the driver is just holding on to memory, but I don’t see why it would. The only time such memory would be useful is if I recreate the exact same shader program, but that’s exactly what I’m doing, and the driver clearly isn’t taking advantage of it.
If you have some more enlightening tests I could do, I’m willing to try them.
Are you calling glLinkProgram every time? The correct way is to link the program once and then reuse it wherever it’s needed. Another point: some memory is only released when the context is destroyed.
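To illustrate the suggestion (a rough sketch of the intended structure, not code from the original post; vs, fs, and rendering are placeholders):

// Compile, attach, and link the program once at initialization.
GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

// Then reuse the same program object every frame instead of recreating it.
while (rendering) {
    glUseProgram(prog);
    // ... draw ...
}

// Delete it once, at shutdown.
glDeleteProgram(prog);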
Your sample code seems simple, and I don’t think it will be easy to reproduce the problem from it alone. Could you please send a small test project to me at frank.li@amd.com? Thanks.