
View Full Version : VBO indexing / non-indexing performance



mr somboon
03-03-2007, 09:19 PM
I am currently using VBOs with indexed rendering (glDrawElements). I have seen an old DirectX 8.0 demo (the skinned-mesh one) in which non-indexed mode performed much better than indexed mode.

Is it the same in OpenGL? Will my application gain performance if I use glDrawArrays?

I also have another question about VBOs.
Does anyone know how to use a VBO for dynamic data (a skinned mesh)?
Do I have to call glBufferData every frame, or are VBOs suitable for static data only?


Thanks
somboon

Korval
03-03-2007, 10:57 PM
> Is it the same in OpenGL? Will my application gain performance if I use glDrawArrays?

Not only is that unlikely, it's unlikely that the DX8 demo gained performance in non-indexed mode due solely to that fact; its index data was probably unoptimized. Optimized index data will almost always beat straight arrays, at least for normal mesh data.


> Does anyone know how to use a VBO for dynamic data (a skinned mesh)?

Are you doing your skinning on the CPU? If you're using a shader (and you should be), then a skinned mesh isn't dynamic data.

If you need to update your buffer on a frame-by-frame basis, you'll either have to live with calling glBufferSubData (don't forget the "Sub") or map the buffer, though mapping may not always work out.

Also, don't forget to use the appropriate usage hints when creating the buffer object if you're going to update it every frame.
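As a sketch, the per-frame update pattern might look like this (assuming a current GL context with VBO support; `vbo`, `verts`, `numVerts`, and `Vertex` are placeholder names):

```c
/* At creation time: allocate storage with a "stream" usage hint,
   telling the driver the contents will be respecified every frame. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(Vertex),
             NULL, GL_STREAM_DRAW);

/* Each frame: orphan the old storage by passing NULL again, then
   upload the new data. Orphaning lets the driver hand you fresh
   memory instead of stalling until the GPU finishes with the old. */
glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(Vertex),
             NULL, GL_STREAM_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * sizeof(Vertex), verts);
```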

mr somboon
03-03-2007, 11:40 PM
Thanks for your answer. Yes, I use shaders, but I never thought about GPU skinning before.

If my skinned mesh has 4000 vertices, does that mean I have to upload 4000 vertex attributes containing transformation matrices to the GPU and transform each vertex in the shader?

Korval
03-03-2007, 11:48 PM
> If my skinned mesh has 4000 vertices, does that mean I have to upload 4000 vertex attributes containing transformation matrices to the GPU and transform each vertex in the shader?

No.

You can probably find out how to do this with Google.

The basic idea is that you load a uniform array with the matrices for each bone. For each vertex, you have two attributes: one that holds the bone weights and one that holds the indices of the bones those weights apply to.

If you have a vertex that gets a 30% blend from bone 4 and a 70% blend from bone 35, then the first attribute is a 4-vector that looks like this: (0.3, 0.7, 0.0, 0.0). The second attribute is: (4.0, 35.0, 0.0, 0.0).

The only limit is that an individual vertex cannot be affected by more than 4 bones. But that's a fairly trivial limit and is no problem in practice.

Then all you do is set up a loop in the shader (or an unrolled loop) in which, for each influence, you fetch the bone index, use it to index into the uniform matrix array, transform the vertex position (and normal), and blend the result with the others by the appropriate weight. It's a simple construct and works pretty well.
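A minimal GLSL vertex-shader sketch of that loop (the uniform and attribute names are made up for illustration; the array size depends on your hardware's uniform limits):

```glsl
uniform mat4 boneMatrices[30];  // one matrix per bone in the batch
attribute vec4 boneWeights;     // e.g. (0.3, 0.7, 0.0, 0.0)
attribute vec4 boneIndices;     // e.g. (4.0, 35.0, 0.0, 0.0)

void main()
{
    vec4 skinned = vec4(0.0);
    for (int i = 0; i < 4; ++i)
    {
        // Fetch this influence's matrix and accumulate its weighted result.
        mat4 m = boneMatrices[int(boneIndices[i])];
        skinned += boneWeights[i] * (m * gl_Vertex);
    }
    gl_Position = gl_ModelViewProjectionMatrix * skinned;
}
```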

The other limitation is that the number of bones per batch cannot exceed the hardware limit on uniforms (and you'll probably need uniforms for other things too). So on more modern hardware, that generally means no more than about 100 matrices, and even that requires using quaternion+position pairs instead of actual 4x3 matrices; it saves room. Fortunately, a position can be rotated by a quaternion in two shader instructions using clever swizzling.