View Full Version : glMapBufferRange causes out of memory exception

01-29-2016, 05:39 AM
Hi there,

I've written an OpenGL application for playing high-res videos from 4K up to 8K, and it is mostly working fine.
When playing HEVC-coded 4K videos, I've encountered a very strange problem:

I use a ping-pong PBO scheme to update the texture that holds the frames of the video.
I use two PBOs of the maximum size the graphics card supports; currently that limit is 4096x4096xRGBA.
I've been testing some HEVC-coded videos in 4K resolution, and now I have some examples that cause trouble:
glMapBufferRange returns a null pointer, and glGetError reports 1285 -> out of memory.
Well, I thought this couldn't happen, as I've allocated enough memory for both buffers. My understanding was that once I allocate the memory, no other application can take it, so I expected glMapBufferRange to never fail this way.
I also suspected the problem might occur before this call, so I checked glGetError right before the map – no error at all.
The strange thing is that it only happens sometimes, but mostly at the same position in the video. GPU-Z shows that I'm not using more than 600 MB on my NVIDIA GT 640, so I suspect the error code is misleading.
I then tried to narrow the problem down and unchecked the “use CUDA to decode” option on the video decoder – with that, the error never seems to occur.
To me this means another component is “stealing” the memory I've allocated for my video frames – which actually shouldn't be possible, right?
Here is how I've created the PBOs:

glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, _hPixelBuffer[0]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, GetVideoDimensionSize(), NULL, GL_DYNAMIC_DRAW);

-> Where GetVideoDimensionSize() returns 67,108,864 bytes (4096x4096x4).
I then reset the buffer:

unsigned char * pPixelsPBO =
    static_cast<unsigned char*>(glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB));
memset(pPixelsPBO, 0, m_bufferSize);
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);

And as soon as I get a new video frame:

unsigned char * pPixelData = static_cast<unsigned char*>(glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, textureSize, GL_MAP_WRITE_BIT));

And exactly here the error happens.
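For completeness, here is how I could at least detect and classify the failure and retry once after orphaning the buffer (hypothetical helper, all names are mine; 0x0505 is GL_OUT_OF_MEMORY):

```cpp
// Hypothetical retry policy for a failed glMapBufferRange (names are mine).
// On a null mapping: if the error is GL_OUT_OF_MEMORY (0x0505) and no retry
// has happened yet, orphan the buffer (glBufferData with NULL data) and map
// again; otherwise give up.
enum class MapAction { Use, OrphanAndRetry, Fail };

MapAction classifyMapResult(const void* mapped, unsigned glError, bool alreadyRetried) {
    if (mapped != nullptr)
        return MapAction::Use;                 // mapping succeeded
    if (glError == 0x0505 && !alreadyRetried)  // GL_OUT_OF_MEMORY
        return MapAction::OrphanAndRetry;
    return MapAction::Fail;                    // e.g. GL_INVALID_OPERATION
}
```

This only papers over the failure, of course – I'd still like to know the root cause.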

Although I've found a workaround – not using the CUDA decoder – I want to understand why the error happens inside my application. This looks like a memory violation by a third-party decoder that actually has no access to my render context.

This is very strange, especially because it sometimes works: in my tests, around 10% of the runs worked and 90% did not, and of those most returned an out-of-memory error while some returned an invalid operation instead.

I did not change the code, and the video is the same every time.

Any ideas what is happening and how to fix it?