VBO/IBO use plus issues on Nvidia

Hi,

I have searched the forums but could not find any definitive answer to either of these questions.

#1 VBO Issue.
Is there a maximum size to the buffer data supported by VBOs? And if there is, how are you supposed to find out the maximum each driver supports?

The issue:

  • Everything works fine on every ATI-based computer I have tried; even large buffer sizes of 256k vertices work fine.
    However, it fails on every Nvidia-based computer I have tried when I use buffers with more than approximately 64k vertices’ worth of vertex attribute data.

  • I know for sure that my code and array data are correct.
    Also, if I render the same data with plain vertex arrays (VAs), it works fine on both platforms, apart from the performance difference between VAs and VBOs.

  • I am generating two VBOs: one for the index array, and a second for the interleaved vertex attribute (vertex, normal, color) array.

  • On every Nvidia card, as long as the buffer data is 64k vertices or less, it works fine. As soon as I try to bind buffer data with more than about 64k vertices, it hangs completely inside glDrawElements and never returns to my code. Because of the hang, I can’t even get an error return to tell me the buffer size is too large, which makes it very difficult to write any error handler for this (stupid Nvidia).

  • I have tried multiple computers with different Nvidia cards, and multiple different driver versions from 191 to 275, all with the same Nvidia hang results.

What I am doing is basically:

- glGenBuffers at app startup.
- allocating and filling the arrays with the index and vertex attribute data.
- glBindBuffer and glBufferData on the index array.
- glBindBuffer and glBufferData on the vertex attributes array.
- glBindBuffer and glVertexPointer, glNormalPointer, glColorPointer with the array interleave offsets.
- glBindBuffer and glDrawElements with the index VBO.

I am rendering the entire contents of the index and vertex attribute arrays; no sub-ranges are being used or updated.
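Roughly, in code (a minimal sketch only; vbo, indices, verts, num_indices and num_verts are placeholder names, and the 3/3/4 float interleave is an assumption):

/* assumes the usual GL headers; fixed-function client-state path */
GLuint vbo[2];
glGenBuffers(2, vbo);                                     /* at app startup */

/* upload the index array into its own VBO */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, num_indices * sizeof(GLuint), indices, GL_STATIC_DRAW);

/* upload the interleaved vertex attributes (3 position, 3 normal, 4 color floats) */
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, num_verts * 10 * sizeof(GLfloat), verts, GL_STATIC_DRAW);

/* set the attribute pointers with the interleave offsets into the bound buffer */
GLsizei stride = 10 * sizeof(GLfloat);
glVertexPointer(3, GL_FLOAT, stride, (void *)0);
glNormalPointer(GL_FLOAT, stride, (void *)(3 * sizeof(GLfloat)));
glColorPointer(4, GL_FLOAT, stride, (void *)(6 * sizeof(GLfloat)));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

/* draw the whole mesh from the index VBO */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[0]);
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, (void *)0);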

On Nvidia, if the vertex attribute data is around 64k vertices or less, it works (i.e. a 256x256 plane); with about 148k vertices it hangs 80% of the time (i.e. a 384x384 plane); with 256k vertices it always hangs (i.e. a 512x512 plane).

Am I going to have to split every mesh I deal with into no more than 64k vertices per VBO array set just to get it to work on Nvidia?

And please no comments like “Nvidia is doing it correct and ATI is wrong”. :)

#2 VBO Question.
Is it necessary to set glBufferData to a null buffer before updating the buffer data on a VBO with STREAM or DYNAMIC usage?
i.e. in order to let OpenGL know that the current data can be discarded because new data is coming.

i.e.

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof_array, NULL, GL_STREAM_DRAW);  // tell OpenGL to discard the current data in the VBO -- or is this just an error or a wasteful call?
glBufferData(GL_ARRAY_BUFFER, sizeof_array, array_ptr, GL_STREAM_DRAW);  // send the new data

I would assume that OpenGL knows the buffer usage is not STATIC and that the data can change, so it should already expect to discard the old data when a new glBufferData update occurs, shouldn’t it?

Thanks.

Is there a maximum size to the buffer data supported by VBOs?

No.

Is it necessary to set glBufferData to a null buffer before updating the buffer data on a VBO with STREAM or DYNAMIC usage?

No.

Thanks for the reply.

Would you have any idea why every Nvidia based system would be hanging on the DrawElements call on buffer sizes larger than 64k vertex attributes?

My code is way too long to post it all here, but the general bits used by the VBOs are pretty much plain and standard.
The fact that it works on every ATI-based system that I try, and that it works on Nvidia with smaller buffer array sizes, should mean that my code is correct. :P

Would you have any idea why every Nvidia based system would be hanging on the DrawElements call on buffer sizes larger than 64k vertex attributes?

Because you’re doing something wrong. I’ve never heard of any rendering problem with regard to large buffers from NVIDIA renderers. Performance issues, perhaps, but not basic functionality.

My code is way too long to post it all here, but the general bits used by the VBOs are pretty much plain and standard.

Then distill it down to just those “plain and standard” bits that render from large buffers. Just fill a buffer object with 70,000 copies of a single quad and render it.
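Something along these lines would do it (a rough sketch under assumed names; the quad layout and counts are arbitrary, and it needs the usual GL and stdlib headers):

/* Build one VBO holding 70,000 quads (280,000 vertices) and draw it. */
enum { NUM_QUADS = 70000, NUM_VERTS = NUM_QUADS * 4 };
GLfloat *verts = (GLfloat *)malloc(NUM_VERTS * 3 * sizeof(GLfloat));
for (int q = 0; q < NUM_QUADS; ++q) {
    GLfloat x = (GLfloat)(q % 256), y = (GLfloat)(q / 256);
    GLfloat *v = verts + q * 4 * 3;
    v[0] = x;        v[1]  = y;        v[2]  = 0.0f;
    v[3] = x + 1.0f; v[4]  = y;        v[5]  = 0.0f;
    v[6] = x + 1.0f; v[7]  = y + 1.0f; v[8]  = 0.0f;
    v[9] = x;        v[10] = y + 1.0f; v[11] = 0.0f;
}

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, NUM_VERTS * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW);
free(verts);                               /* the driver now owns its own copy */

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_QUADS, 0, NUM_VERTS);      /* well past the 64k-vertex mark */

If a stripped-down case like that renders fine, the problem is likely in the surrounding code rather than in the buffer size itself.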

I have never had a problem with this, performance or otherwise. In one program I use two buffers with 6 million vertices each to ping-pong between using transform feedback, with no problems.
Just remember to use long instead of int, since ints have that 64k limit.

Just remember to use long instead of int, since ints have that 64k limit.

What? An “int” is 32 bits on pretty much every 32-bit compiler. And “long” is only 64 bits on some 64-bit compilers, not 32-bit ones. So I’m not sure what it is you’re recommending here.

FYI, the issue has been fixed. The cause was a thread race.
It only appeared to be an issue with array size because the time required to manage the array data depends on that size.

I was not doing anything wrong; all of the VA and VBO code was technically correct.
Helpful pointers and a short list of things to check would have been better than condescension.

Alfonse is correct on ints. I think zeo is confusing the names for short/int with int/long.

Technically, an “integer” is the natural data size for the processor, so a 16-bit processor has a 16-bit integer, a 32-bit processor has a 32-bit integer, and so on.
You will also find definitions such as Int16, Int32 and Int64 on newer compilers to pin down the integer size for processors that can operate in multiple modes, such as Intel x86 and x64.

However, glDrawElements only supports unsigned byte, unsigned short, and unsigned int indices; there is no unsigned long support.
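To make the practical consequence concrete, here is a small illustration (client-side index arrays with arbitrary values, purely for the sake of example): the type you pass to glDrawElements caps how many distinct vertices the indices can address, and GL_UNSIGNED_SHORT tops out at 65,535, which is another classic source of an apparent “64k vertex” ceiling.

/* The three index types glDrawElements accepts. */
GLubyte  idx8[]  = { 0, 1, 2 };         /* GL_UNSIGNED_BYTE:  can address up to 256 vertices    */
GLushort idx16[] = { 0, 1, 65535 };     /* GL_UNSIGNED_SHORT: can address up to 65,536 vertices */
GLuint   idx32[] = { 0, 1, 100000 };    /* GL_UNSIGNED_INT:   can address ~4.29 billion         */

glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE,  idx8);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, idx16);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT,   idx32);
/* There is no GL_UNSIGNED_LONG: meshes with more than 65,536 vertices
   need GL_UNSIGNED_INT indices. */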

True, but int size varies depending on the compiler and system, so sometimes you only get a 16-bit int (especially on older compilers); using long pretty much forces it to 32 bits or higher.
I have had problems that were fixed by going to long in certain cases.