glDrawElements not drawing end of large buffer

I have a very large array of GL_TRIANGLES that I am giving to gl with glBindBuffer/glBufferData. I then draw them with glDrawElements. The last 20,000 or so triangles are not drawn.

Here is some pseudo-code:

glGenBuffers(1, &posName);
glBindBuffer(GL_ARRAY_BUFFER, posName);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*posDataLen, posData, GL_STATIC_DRAW);

glGenBuffers(1, &idxName);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, idxName);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*idxDataLen, idxData, GL_STATIC_DRAW);

glEnableVertexAttribArray(AttributePosition);
glBindBuffer(GL_ARRAY_BUFFER, posName);
glVertexAttribPointer(AttributePosition, 3, GL_FLOAT, GL_FALSE, 0, 0);

glDrawElements(GL_TRIANGLES, idxDataLen, GL_UNSIGNED_INT, 0); 

In the position data there are 1,300,556 vertices.
In the index data there are 2,522,028 indices.
This makes for 840,676 triangles.

If I split the data in two it draws fine.
If I don’t generate the first 20,000 triangles into the list, the extra 20,000 triangles at the end of the list draw fine.

Is there a size constraint associated with glBindBuffer, glDrawElements or somewhere else I cannot find?

glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*posDataLen, posData, GL_STATIC_DRAW);

Is posDataLen = #vertices x 3? Otherwise, you may be only getting a 1/3 of your vertices actually uploaded.

The h/w implementation does define some limits on vertex arrays and other GL features. The limit you’re after can be queried with:

GLint value;
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &value);

What h/w are you using? ATI, Nvidia, Intel?

On my ATI Radeon 4580, these are the max values returned from GL:
GL_MAX_ELEMENTS_INDICES: 16777215
GL_MAX_ELEMENTS_VERTICES: 2147483647
On my Geforce 8600m, these are the max values returned from GL:
GL_MAX_ELEMENTS_INDICES: 1048576
GL_MAX_ELEMENTS_VERTICES: 1048576

Therefore on Nvidia h/w you may have broken the limits.

Breaking the limits generally just means that you’ll fall back to software emulation though; at least on a conformant driver. Also note that glDrawElements doesn’t define a max, but glDrawRangeElements does - also assuming a conformant driver.

I’d actually recommend splitting into multiple draw calls for precisely this reason; if you do get a software emulation fallback it will likely run much slower than if you just use multiple draw calls.

The h/w implementation does define some limits on vertex arrays and other GL features. The limit you’re after can be queried with:

No. This limit is for glDrawRangeElements, and it isn’t a hard limit even for that. It is simply the maximum suggested size; OpenGL will still carry out the rendering command even if you exceed it.

I would also point out that 16,777,215 indices is a lot. And 2147483647 is 2^31 − 1 vertices: fully half the addressable space with 32-bit pointers (and I don’t think GPUs are yet capable of 64-bit addressing).

Basically, what that means is that glDrawRangeElements probably means nothing on ATI hardware. At least, nothing different from glDrawElements.

On my Geforce 8600m, these are the max returned values from GL:

I get those from my GT 250 as well.

Across a large variety of Nvidia drivers (185 - present) and hardware (7900, 8800, 9500GT, GTX260, GTX460, QuadroFX 3800 and Quadro 4000), the max vertices and elements are all 1048576.

On OSX, however, they’re an odd 2048 (verts) and 150,000 (elements) for both Nvidia and AMD cards. I’d love to see what kind of model has 50,000 triangles with only 2048 unique vertices. :confused:

They’re just hints. The numbers don’t mean that your performance will certainly degrade if these limits are exceeded, and specifically, they’re hints for glDrawRangeElements.

I’m aware of that, but it doesn’t make the hint values any less odd (IMO).