Regarding VAR and buffer batching

11-07-2002, 01:25 AM

I have a couple of questions regarding the nvidia VAR extension.

After reading Nvidia's VertexArrayRange.pdf (http://developer.nvidia.com/docs/IO/1323/ATT/VertexArrayRange.pdf) it seems that there's a limit on the number of indices sent to the video card with a single draw elements call. The paper states that, "GeForce products require indices to not exceed 65535. If more indices are necessary, break the object into smaller parts."

A very informative post (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/001892.html) states that the 65535 index limit also applies to vertices (sent using glVertexPointer).

I am attempting to render a heightfield with the VAR extension. If my heightfield is 512x512, can I safely assume that allocating one large buffer of size (512^2 * 3 * sizeof(float)), assuming 3 floats per vertex, and then splitting it into 4 buffer pointers, each sent independently, will satisfy the GeForce?

Am I correct in assuming that 'fencing' is only needed for dynamic geometry?

Thanks for the time!


11-11-2002, 11:31 PM
Only the GeForce 256 and GeForce 2 are limited to 65535. GeForce 3 and 4 can go much higher (see NVIDIA's OpenGL Spec. PDF).

This limit only applies with VAR, it doesn't apply when you use the default OpenGL mechanism.

Yes, you should allocate only one VAR buffer and hand that one pointer to the VAR extension.
That just tells the driver that that memory range can be fetched by DMA.
If you split it into different parts, do it logically with glVertexPointer, not with the VAR extension.
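In code form, that pattern looks roughly like this. This is only a sketch, assuming a current GL context on an NVIDIA driver that exposes NV_vertex_array_range; the function pointers would normally be fetched with wglGetProcAddress, and index_count/index_data are stand-ins for your own index arrays:

```c
/* Sketch only: error checking and extension setup omitted. */
GLsizei size = 512 * 512 * 3 * sizeof(GLfloat);

/* One big AGP/video-memory allocation for the whole heightfield. */
GLfloat *base = (GLfloat *)wglAllocateMemoryNV(size, 0.0f, 0.0f, 0.5f);

/* Tell the driver this whole range may be fetched by DMA -- once. */
glVertexArrayRangeNV(size, base);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

/* ... fill 'base' with vertex data ... */

/* Split logically: one glVertexPointer per 65536-vertex chunk, so each
   chunk's 16-bit indices run 0..65535 relative to its own base. */
for (int chunk = 0; chunk < 4; ++chunk) {
    glVertexPointer(3, GL_FLOAT, 0, base + chunk * 65536 * 3);
    glDrawElements(GL_TRIANGLES, index_count[chunk],
                   GL_UNSIGNED_SHORT, index_data[chunk]);
}
```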

Fencing is needed when dealing with dynamic data and/or when dealing with arrays that are larger than the VAR buffer.
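For the dynamic case, the usual approach is to double-buffer inside the VAR allocation with NV_fence, so the CPU never overwrites vertices the GPU may still be reading. A rough sketch, assuming a current context with NV_fence; update_vertices and the half[] pointers are hypothetical placeholders for your own data:

```c
/* Sketch only: double-buffered dynamic data inside one VAR allocation. */
GLuint fence[2];
glGenFencesNV(2, fence);

/* Prime both fences so the first FinishFence returns immediately. */
glSetFenceNV(fence[0], GL_ALL_COMPLETED_NV);
glSetFenceNV(fence[1], GL_ALL_COMPLETED_NV);

int cur = 0;
for (;;) {                       /* per-frame loop */
    /* Block until the GPU has finished reading this half. */
    glFinishFenceNV(fence[cur]);

    /* Safe to rewrite this half of the VAR buffer now. */
    update_vertices(half[cur]);  /* hypothetical helper */

    glVertexPointer(3, GL_FLOAT, 0, half[cur]);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, indices);

    /* Mark the point the GPU must pass before we touch this half again. */
    glSetFenceNV(fence[cur], GL_ALL_COMPLETED_NV);
    cur = 1 - cur;
}
```

Static geometry that fits in the buffer never needs this, since it is written once and only read afterwards.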