glDrawElements / Buffer size limit

Hello there!

I’m currently trying to display a large model (without speed considerations; I’m aware I’ll need additional algorithms to draw the models at a reasonable speed), and I seem to have hit some kind of limit with glDrawElements or the buffer size. My vertex buffer contains 600k vertices and my index buffer about 35 million elements.

I expected to get some kind of feedback (e.g. an error, a failed buffer creation), but all the buffers are created successfully and their values are correct. Even so, nothing is drawn at all; with around 30 million elements in the index buffer it still works.
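Roughly how I checked the buffer creation (a minimal sketch, not my exact code; indexBytes and indexData are just placeholder names, and the element buffer is assumed to be bound):

[code]
/* Sketch only: allocate the index buffer and verify the allocation. */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBytes, indexData, GL_STATIC_DRAW);

if (glGetError() == GL_OUT_OF_MEMORY) {
    /* allocation failed - this never triggers for me */
}

GLint64 allocatedBytes = 0;
glGetBufferParameteri64v(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &allocatedBytes);
/* allocatedBytes matches indexBytes, so the storage itself looks fine */
[/code]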

My current graphics card is an Intel HD 4000 (the only graphics card currently available to me). If I use a Mesa software driver, it at least draws everything correctly, although at a horrible speed.

My questions are:
[ul]
[li]Is the driver/hardware to blame for this?[/li]
[li]If yes, is there any way to find out limitations regarding buffer size/glDrawElements calls?[/li][/ul]

Edit:
Same problems on a GeForce GT 630M

OpenGL doesn’t impose any limit on the number of vertices or indices beyond any limit imposed by total memory availability. Consequently, there’s no way to query such a limit (in OpenGL 4.3 and later, glGet() accepts GL_MAX_ELEMENT_INDEX for compatibility with OpenGL ES, but the result is required to be 2^32-1).
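For completeness, the query looks like this (a sketch, assuming a 4.3+ context and a loader that exposes the enum):

[code]
/* OpenGL 4.3+: query GL_MAX_ELEMENT_INDEX; the spec pins the result to
   2^32-1, so this only matters for OpenGL ES compatibility. */
GLint64 maxElementIndex = 0;
glGetInteger64v(GL_MAX_ELEMENT_INDEX, &maxElementIndex);
[/code]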

There are “recommended limits” to the size of the vertex and index arrays for glDrawRangeElements(), which can be queried using glGet() with GL_MAX_ELEMENTS_INDICES and GL_MAX_ELEMENTS_VERTICES. But exceeding those limits should only affect performance, not the final result.
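Those are ordinary glGet() queries, e.g. (sketch, assuming a current context):

[code]
/* Query the "recommended" limits used by glDrawRangeElements().
   Exceeding them only costs performance; it doesn't change the result. */
GLint maxIndices = 0, maxVertices = 0;
glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndices);
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVertices);
[/code]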

If possible, I suggest splitting the glDrawElements() call into multiple calls, each drawing a subset of the data. For disconnected primitives, this is straightforward. For strips, it just requires one or two vertices of overlap between the calls. Line loops, triangle fans and polygons can’t be split so easily.
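Something along these lines for disconnected triangles (a sketch only; the function name and chunkSize are made up, 32-bit indices in a bound element array buffer are assumed, and chunkSize should be a multiple of 3):

[code]
/* Sketch: split one huge glDrawElements() call into several smaller ones.
   Assumes GL_TRIANGLES with 32-bit indices stored in a bound
   GL_ELEMENT_ARRAY_BUFFER; chunkSize must be a multiple of 3. */
static void drawElementsInChunks(GLsizei totalIndices, GLsizei chunkSize)
{
    for (GLsizei first = 0; first < totalIndices; first += chunkSize) {
        GLsizei count = totalIndices - first;
        if (count > chunkSize)
            count = chunkSize;

        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT,
                       (const void *)(sizeof(GLuint) * (size_t)first));
    }
}
[/code]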

Thank you very much, I’ll try splitting it up for now. Are there any recommendations for maximum buffer size?

OpenGL doesn’t specify limits like this. The best way to play it safe is to use 16-bit (GL_UNSIGNED_SHORT) indices and therefore a maximum of 65536 vertices per draw call (the extra one may or may not have special meaning; again, OpenGL doesn’t specify this). That will be guaranteed to work, and to be hardware accelerated, on absolutely everything.
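i.e. each batch would be drawn roughly like this (sketch; indexCount is a placeholder, and every index in the batch is assumed to fit in a GLushort):

[code]
/* Sketch: one batch drawn with 16-bit indices.  Every index in this batch
   must be < 65536, i.e. the batch can reference at most 65536 distinct
   vertices; indexCount is just the number of indices in the batch. */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const void *)0);
[/code]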

The issue isn’t the range of the indices, but their number: the element array has 35 million entries and is being processed with a single glDrawElements() call.

Also, 16-bit indices can’t be used because he has 600k vertices, and splitting up the element array won’t change that.

In order to reduce the range of the indices, the data would have to be split into disconnected subsets. Even if that’s possible (it may or may not be, depending upon the data), it could be far from straightforward.
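For what it’s worth, if someone did go that route, the usual approach would be roughly this: take a chunk of the 32-bit element array, copy out only the vertices it references, and remap the chunk’s indices into that smaller subset. A sketch under assumptions (the Vertex struct and all names are placeholders; it only works if each chunk touches at most 65536 distinct vertices):

[code]
#include <stdint.h>
#include <stdlib.h>

typedef struct { float position[3]; } Vertex;   /* placeholder vertex layout */

typedef struct {
    Vertex   *vertices;     /* compacted vertex subset used by this chunk */
    uint16_t *indices;      /* the chunk's indices remapped into that subset */
    size_t    vertexCount;
    size_t    indexCount;
} Chunk;

/* Remap chunkIndices[0..indexCount) (global 32-bit indices into allVertices)
   into a self-contained Chunk with local 16-bit indices.
   remapTable needs one entry per global vertex and must be reset to
   UINT32_MAX before every call. */
static Chunk remapChunk(const Vertex *allVertices,
                        const uint32_t *chunkIndices, size_t indexCount,
                        uint32_t *remapTable)
{
    Chunk c;
    c.vertices    = malloc(indexCount * sizeof(Vertex));   /* worst case size */
    c.indices     = malloc(indexCount * sizeof(uint16_t));
    c.vertexCount = 0;
    c.indexCount  = indexCount;

    for (size_t i = 0; i < indexCount; ++i) {
        uint32_t global = chunkIndices[i];
        if (remapTable[global] == UINT32_MAX) {   /* first time this vertex is seen */
            remapTable[global] = (uint32_t)c.vertexCount;
            c.vertices[c.vertexCount++] = allVertices[global];
        }
        c.indices[i] = (uint16_t)remapTable[global];
    }
    return c;
}
[/code]

Choosing the chunk boundaries so that each chunk stays under that vertex count is the part that depends on the data.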