PDA

View Full Version : glDrawElements / Buffer size limit



cluosh
08-05-2015, 01:44 AM
Hello there!

I'm currently trying to display a large model (without speed considerations; I'm aware I'll need additional algorithms to draw the model at a reasonable speed), and I seem to have hit some kind of limit with glDrawElements or buffer size. My vertex buffer contains 600k vertices and my index buffer about 35 million elements.

I expected some kind of feedback (an error, a failed buffer creation, etc.), but all the buffers are created successfully and their contents are correct. Despite that, nothing is drawn at all. With around 30 million elements in the index buffer, drawing still works.

My current graphics card is an Intel HD Graphics 4000 (the only one currently available to me). With a Mesa software driver it at least draws everything correctly, although at a horrible speed.

My questions are:

Is the driver/hardware to blame for this?
If so, is there any way to query limits on buffer size or glDrawElements calls?


Edit:
Same problems on a GeForce GT 630M

GClements
08-05-2015, 03:44 AM
OpenGL doesn't impose any limit on the number of vertices or indices beyond whatever total memory availability imposes. Consequently, there's no way to query such a limit (in OpenGL 4.3 and later, glGet() accepts GL_MAX_ELEMENT_INDEX for compatibility with OpenGL ES, but the result is required to be 2^32-1).

There are "recommended limits" to the size of the vertex and index arrays for glDrawRangeElements(), which can be queried using glGet() with GL_MAX_ELEMENTS_INDICES and GL_MAX_ELEMENTS_VERTICES. But exceeding those limits should only affect performance, not the final result.

If possible, I suggest splitting the glDrawElements() call into multiple calls, each drawing a subset of the data. For disconnected primitives, this is straightforward. For strips, it just requires one or two vertices of overlap between the calls. Line loops, triangle fans and polygons can't be split so easily.
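For disconnected triangles, the split only has to keep each call's index count a multiple of 3. A minimal sketch of that batching arithmetic (draw_in_chunks and the draw_fn callback are hypothetical names, not from the thread; the callback stands in for the real glDrawElements() call so the loop can be shown without a GL context):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: split one huge indexed draw into several smaller
 * ones.  The draw callback stands in for the real glDrawElements() call. */
typedef void (*draw_fn)(size_t first_index, size_t index_count);

static size_t draw_in_chunks(size_t total_indices, size_t max_per_call,
                             draw_fn draw)
{
    /* For GL_TRIANGLES, each call must receive a multiple of 3 indices
     * so no triangle is split across two calls. */
    size_t per_call = max_per_call - (max_per_call % 3);
    size_t calls = 0;

    for (size_t first = 0; first < total_indices; first += per_call) {
        size_t count = total_indices - first;
        if (count > per_call)
            count = per_call;
        /* With the element buffer bound, real code would issue:
         *   glDrawElements(GL_TRIANGLES, (GLsizei)count, GL_UNSIGNED_INT,
         *                  (const void *)(first * sizeof(GLuint)));
         */
        draw(first, count);
        calls++;
    }
    return calls;
}
```

In a real renderer the commented-out glDrawElements() line replaces the callback; the last argument is the byte offset of the chunk's first index within the bound GL_ELEMENT_ARRAY_BUFFER.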

cluosh
08-05-2015, 04:13 AM
Thank you very much, I'll try splitting it up for now. Are there any recommendations for maximum buffer size?

mhagain
08-05-2015, 04:10 PM
OpenGL doesn't specify limits like this. The best way to play it safe is to use 16-bit (GL_UNSIGNED_SHORT) indices and therefore a maximum of 65536 vertices per draw call (the highest index value may or may not have special meaning; again, OpenGL doesn't specify this). That will be guaranteed to work, and to be hardware accelerated, on absolutely everything.

GClements
08-05-2015, 05:57 PM
OpenGL doesn't specify limits like this. The best way to play it safe is to use 16-bit (GL_UNSIGNED_SHORT) indices and therefore a maximum of 65536
The issue isn't the range of the indices, but their number. The element array has 35 million entries and is being processed with a single glDrawElements() call.

Also, 16-bit indices can't be used because he has 600k vertices, and splitting up the element array won't change that.

In order to reduce the range of the indices, the data would have to be split into disconnected subsets. Even if that's possible (it may or may not be, depending upon the data), it could be far from straightforward.
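As a rough illustration of why that's non-trivial: to draw a chunk with 16-bit indices, every distinct vertex the chunk touches has to be remapped into a local 0–65535 range, and the referenced vertices copied into a matching per-chunk vertex buffer. A hypothetical sketch (the function name and layout are illustrative, not from the thread) of just the index-remapping step:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch: remap one chunk of 32-bit indices into a local
 * 16-bit index space.  Each distinct vertex the chunk touches gets a
 * small local id; a real implementation would also copy those vertices,
 * in the same order, into a per-chunk vertex buffer.
 *
 * Returns the number of distinct vertices used, or 0 if the chunk
 * touches more than 65536 distinct vertices (or allocation fails) and
 * therefore can't be drawn with GL_UNSIGNED_SHORT indices. */
static size_t rebase_chunk(const uint32_t *indices, size_t count,
                           size_t total_vertices, uint16_t *out_indices)
{
    /* map[v] == local id + 1; 0 means "vertex not seen yet". */
    uint32_t *map = calloc(total_vertices, sizeof *map);
    size_t used = 0;

    if (map == NULL)
        return 0;
    for (size_t i = 0; i < count; i++) {
        uint32_t v = indices[i];
        if (map[v] == 0) {
            if (used == 65536) {   /* chunk doesn't fit 16-bit indices */
                free(map);
                return 0;
            }
            map[v] = (uint32_t)(++used);
        }
        out_indices[i] = (uint16_t)(map[v] - 1u);
    }
    free(map);
    return used;
}
```

Whether chunks with fewer than 65536 distinct vertices exist at all depends on the mesh's connectivity, which is exactly the "may or may not be possible" caveat above.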