OUT_OF_MEMORY during glDrawElements

This is a relatively weird one, I believe. I’m rendering a scene using VBOs and GLSL shaders, and after rendering properly for some time (which seems to be arbitrary) the window goes black.

Now I’ve tracked the problem down to the glDrawElements call generating an OUT_OF_MEMORY error and, of course, not drawing anything. I’ve also made sure the GLSL shaders are not the problem by rendering with the fixed-function path and also by trying to render using the shaders with immediate-mode primitives instead of VBOs (basically drawing a quad with glVertex* calls instead of calling glDrawElements).

So everything seems to point to the VBOs, but how is it possible that they initially work correctly with no GL errors and then suddenly break down for seemingly no reason? Nothing special happens, no change in the scene at all, except for the modelview matrix. If the modelview matrix (the camera) doesn’t change, all is well…

Anybody have any ideas? I’d consider this a driver bug, but a driver update didn’t help. I’m using an NVIDIA card with the latest drivers on Linux.

Sounds strange. Maybe you should post all your VBO-related code, just to make sure you are not overlooking something :)

kind regards,
Nicolai de Haan Brøgger

This is weird. Are you absolutely sure glDrawElements is the offending call? Did you check this with glIntercept or some other debugging tool?
Maybe you have a memory leak elsewhere in the code that uses up all the memory, so that no more is available by the time this call is made?
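
For example, a small helper like this (only a sketch; the CHECK_GL name is my own, not from any particular library) wrapped around every GL call would tell you exactly which call raises the first error:

    #include <stdio.h>
    #include <GL/gl.h>

    /* Report the first GL error raised by a call, together with the
       file and line where it happened. */
    #define CHECK_GL(call)                                              \
        do {                                                            \
            call;                                                       \
            GLenum err_ = glGetError();                                 \
            if (err_ != GL_NO_ERROR)                                    \
                fprintf(stderr, "%s:%d: GL error 0x%x after %s\n",      \
                        __FILE__, __LINE__, err_, #call);               \
        } while (0)

    /* e.g. CHECK_GL(glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0)); */

That way you can also rule out the error being left over from some earlier call.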

[ www.trenki.net | vector_math (3d math library) | software renderer ]

I’ll post the code later when I’m at home, but I doubt there’s anything wrong with it, as it works initially.

Trenki, I have only tried to sandwich the glDrawElements call between two glGetError() calls, but I’m pretty sure that’s the problem. If I remove all VBOs from the scene, or just comment out the glDrawElements() call, everything works fine.

This is the code to create the buffers, where n[0] holds the number of vertices, n[1] the number of indices (not triangles), and b[0] and b[1] receive the vertex and index buffer names respectively. The vertices consist of interleaved spatial, normal and texture coordinates.


    /* Create one vertex buffer and one index buffer. */
    glGenBuffers(2, b);

    /* Interleaved vertex data: 8 floats per vertex (3 position, 3 normal, 2 texcoord). */
    glBindBuffer(GL_ARRAY_BUFFER, b[0]);
    glBufferData(GL_ARRAY_BUFFER, n[0] * 8 * sizeof(GLfloat),
                 vertices, GL_STATIC_DRAW);

    /* 16-bit index data. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, n[1] * sizeof(GLushort),
                 indices, GL_STATIC_DRAW);

    self->size[0] = n[0];
    self->size[1] = n[1];

    self->buffers[0] = b[0];
    self->buffers[1] = b[1];

Then, to draw the buffers, I do this:


    glClientActiveTexture (GL_TEXTURE0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, self->buffers[0]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self->buffers[1]);

    /* Interleaved layout: stride = 8 * sizeof(GLfloat) = 32 bytes. */
    glVertexPointer(3, GL_FLOAT, 32, (void *)0);
    glNormalPointer(GL_FLOAT, 32, (void *)(3 * sizeof(GLfloat)));
    glTexCoordPointer(2, GL_FLOAT, 32, (void *)(6 * sizeof(GLfloat)));

    glDrawRangeElements(GL_TRIANGLES, 0, 0, self->size[1], GL_UNSIGNED_SHORT, (void *)0);
   
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

I can’t find anything that might explain this behaviour though. And as I’ve said, replacing the glDrawElements call with:


    glBegin(GL_TRIANGLES);
    glVertex2f(0, 0);
    glVertex2f(1, 0);
    glVertex2f(1, 1);
    glVertex2f(0, 0);
    glVertex2f(1, 1);
    glVertex2f(0, 1);
    glEnd();

draws the quad with no problems at all. So the problem should be caused by the VBOs, right?

How large is your VBO data and how much memory does your graphics card have?

The whole scene consists of fewer than 10,000 faces in all, but I get the same problem if I leave only one model with a few hundred faces in the scene.

I’ve also tried using simple vertex arrays. In this case I get no OUT_OF_MEMORY errors, but at the same point where the VBOs would stop working, the whole scene starts to “blow up”. I mean the models gradually turn to garbage. I doubt it’s some bug in my code corrupting the vertex arrays themselves, because then the VBO approach should still work, since that data resides in VRAM and can’t be altered.

I’m more and more inclined to consider this a driver bug but how is it possible that such a serious bug has remained undetected and unresolved?

And why does this only happen when the viewpoint moves?

Actually, I have a similar problem with an application. There, when I load a mesh that consists of a few million triangles and I enable shadow mapping, the mesh becomes corrupted. But only when I move. When I disable shadow mapping right after application startup, before I move the camera, the mesh stays intact, although shadow mapping has been (correctly) applied already.

I never managed to solve the problem; it only appeared with very detailed models. I don’t use one VBO, but a few hundred partitions (so that I can use 16-bit indices).

This happened on my ATI Radeon X1600 Mobility with 128 MB VRAM, Catalyst 7.6 (I haven’t checked more recently). What is your configuration?

Jan.

AFAIK, glDrawElements cannot report GL_OUT_OF_MEMORY. Maybe you got the error in some other part.

According to the VBO spec:

OUT_OF_MEMORY may be generated if the data store of a buffer object cannot be allocated because the <size> argument of BufferDataARB is too large.

OUT_OF_MEMORY may be generated when MapBufferARB is called if the data store of the buffer object in question cannot be mapped. This may occur for a variety of system-specific reasons, such as the absence of sufficient remaining virtual memory.
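
In other words, the spec expects OUT_OF_MEMORY to show up at the allocation itself, so it might be worth checking right there. A minimal sketch, reusing the names from the creation code you posted above:

    /* Check whether the buffer's data store could actually be allocated. */
    glBindBuffer(GL_ARRAY_BUFFER, b[0]);
    glBufferData(GL_ARRAY_BUFFER, n[0] * 8 * sizeof(GLfloat),
                 vertices, GL_STATIC_DRAW);
    if (glGetError() == GL_OUT_OF_MEMORY) {
        /* Allocation failed; drawing from this buffer will not work. */
    }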

According to GL_EXT_draw_range_elements, it is very strange that your example works at all.

glDrawRangeElementsEXT is a restricted form of glDrawElements. All
vertices referenced by indices must lie between start and end inclusive.

Not all vertices between start and end must be referenced, however
unreferenced vertices may be sent through some of the vertex pipeline
before being discarded, reducing performance from what could be achieved
by an optimal index set. Index values which lie outside the range will
cause implementation-dependent results.

In your example start and end are both 0, and you are drawing geometry which probably uses indices outside of the range [0, 0]. Try modifying that call with proper values for start and end.
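
For example, assuming every index falls inside the vertex buffer (self->size[0] vertices, as in the creation code above), the call would look something like:

    /* start = lowest index used, end = highest index used; with a tightly
       packed vertex buffer that is 0 .. vertex count - 1. */
    glDrawRangeElements(GL_TRIANGLES, 0, self->size[0] - 1, self->size[1],
                        GL_UNSIGNED_SHORT, (void *)0);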

That [0, 0] range is another mystery. I noticed that the range didn’t seem to matter; in fact the models show up just fine with this zero range, although it’s not what I normally use.

Jan, I’m running this on a GeForce Go 7300 with an AMD64 CPU. The interesting part here is that both our setups seem to be laptops. Googling around, I’ve come across other occurrences of this sort of behaviour. Those were on laptops as well, and they were attributed to GPU overheating or some such. The cure was supposed to be to underclock the GPU and memory slightly, but I haven’t been able to do this.

Other than that, I’m also using shadow maps but disabling them doesn’t seem to solve the problem.

Hi Zen. It’s a shot in the dark, but have you tried disabling TurboCache? The recent GPU you are running on makes me suspect a driver issue with memory allocation. Maybe this can help?

http://howtotroubleshoot.blogspot.com/2007/05/how-to-disable-turbocache.html

kind regards,
Nicolai de Haan Brøgger

Well, as I’ve said, I’m running this on Linux, so this method of turning off TurboCache is not applicable. AFAIK the pre-9xxx drivers didn’t use this feature, so I’ve tried reverting to 8766, and the bug is still there. I’ve also tried it on another laptop with a Radeon, and it seems to be there as well. I say “seems” because it didn’t work well enough to be sure. Anyway, it seems not to be a driver issue and probably not a hardware bug. On the other hand, the glDrawElements call shouldn’t normally return an OUT_OF_MEMORY error code, which it does. That is, doing this


    glFinish();
    glGetError();    /* clear any pending error */
    glDrawElements(GL_TRIANGLES, self->size[1], GL_UNSIGNED_SHORT, (void *)0);
    puts(gluErrorString(glGetError()));    /* gluErrorString comes from GLU */

prints “out of memory” continuously after the screen goes blank. So what can this be then? And how the heck do I go about debugging this?
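
(Admittedly, glGetError only clears one error flag per call, so to rule out a stale error from some earlier call I could drain all pending errors before the draw, something like:

    /* Drain errors left over from earlier calls so they don't get
       blamed on the draw call below. */
    while (glGetError() != GL_NO_ERROR)
        ;

    glDrawElements(GL_TRIANGLES, self->size[1], GL_UNSIGNED_SHORT, (void *)0);
    puts(gluErrorString(glGetError()));

but I doubt that changes the outcome here.)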

To the author of the post: did you finally manage to solve this problem?
I’m having exactly the same problem as you: I get a GL_OUT_OF_MEMORY inside a glDrawElements call (drawing only 6 tris). I am able to work around this problem by slightly reducing the size of a texture (3 channel / RGB32 / FLOAT / 4096x4000) I am using, but it doesn’t make sense.

The memory consumed by my texture is less than 100 MB, and my graphics card is an NVIDIA GeForce 6800 Ultra with 256 MB.
The texture is uploaded to OpenGL flawlessly.

Any ideas?

It sounds correct that you’re running out of video memory: RGB float formats are typically padded to RGBA internally, so a 4096x4096 RGB32F texture consumes 4096 x 4096 x 4 channels x 4 bytes = 256 MB of video memory.

Unless you’re pre-rendering with the texture, it’s most likely only being stored on the CPU when you “upload” it.

Textures often aren’t pushed to the GPU until you actually tell OpenGL to render with them, so your out-of-(GPU)-memory condition may be deferred until rendering (see Ysaneya’s post).

And even once they’re on the GPU, they can be silently shuffled off again afterwards based on internal driver logic you have no control over. But that’s another discussion…
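
If you want to know earlier whether a texture of that size and format can be handled at all, one option is the proxy texture mechanism. A sketch below, using the 4096x4000 RGB32F dimensions mentioned above (it requires ARB_texture_float for the internal format, and note it only checks size/format limits, not how much memory is actually free at render time):

    /* Ask the implementation whether a 4096x4000 RGB32F texture could be
       accommodated at all; a proxy target allocates no storage. */
    GLint width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGB32F_ARB, 4096, 4000, 0,
                 GL_RGB, GL_FLOAT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    if (width == 0) {
        /* This size/format combination is not supported. */
    }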