Special Vertex Array use...possible?



Cirrus
11-29-2002, 05:33 AM
I've been implementing a terrain engine in the past couple of weeks using Nvidia's wonderful VAR extension.

All index buffers are sent to the card via DrawElements. Right now I'm dividing the terrain mesh into a bunch of vertex buffers which get sent to AGP memory with VAR. This obviously causes vertex repetition as edge vertices are shared by multiple blocks.

I preferred doing this and wasting some AGP memory, since one index buffer then suits all blocks (there are actually multiple index buffers for different LODs, but still only one of each type is needed).

What would be ideal would be to send the entire mesh as a single block and tell DrawElements to add an offset to each index buffer value. This would keep memory use low, overhead low, and leave all incremental calculations to the graphics card.

For example I could pass an index buffer which suits a block and tell the card to add 256 to each value in the index buffer to render the second block (assuming each block has 256 vertices). Even better would be to tell the card that each block has 256 vertices per block once and then simply specify a multiplier for each DrawElements call.

Is this currently possible? Can it be done with vertex shaders or something of the like?

I hope I've been clear enough; if you have any questions or if something is unclear, please ask.

I'd really appreciate it if someone could 'fill me in'.

Thanks for the read!

___________
Luigi Rosso
Lead Developer
RealitySlip
http://www.RealitySlip.com


[This message has been edited by Cirrus (edited 11-29-2002).]

jwatte
11-29-2002, 01:31 PM
Cirrus,

You can already do that by just calling glVertexPointer() with a different value. The call itself to glVertexPointer() is essentially free; it causes no data copying or state change (except for the vertex pointer value :-).

Think of it like this:

void glVertexPointer( const void * ptr, ... ) {
    gVertexPointer = (const char *) ptr;  /* keep a byte pointer so the stride math works */
    gVertexStride = ...;
}

void glArrayElement( unsigned int ix ) {
    glVertexNfv( gVertexPointer + ix * gVertexStride );
}

DrawElements() is defined in terms of ArrayElement() (although there are significant potential optimizations inside the implementation of DrawElements()).

Thus, if you have:

struct { float x, y, z; } vertexArray[ 100000 ];


And you want to draw:

unsigned int indexArray[ 10 ] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };


at both offset 0, and offset 200, you'd do this:

void DrawAtBase( unsigned int base ) {
    glVertexPointer( 3, GL_FLOAT, 0, &vertexArray[ base ] );
    glDrawElements( GL_TRIANGLE_STRIP, 10, GL_UNSIGNED_INT, indexArray );
}


Of course, you wouldn't actually use globals quite like that in real code, but it should show the general gist of it :-)

Cirrus
11-29-2002, 01:53 PM
Hello jwatte,

Thanks for replying! Very sensible explanation, but my vertices aren't ordered in such a way. I'm terrible with words, so let's see if I can explain it 'graphically'. Imagine your typical heightmap as a matrix stored in a one-dimensional array, indexed as follows:

0 1 2
3 4 5
6 7 8

So this is a heightmap with 4 quads in it and 9 vertices. My index buffer for one 'block', which will be just one quad for this example, is { 0, 1, 3, 4 }. Now if I wanted to use this same index buffer for the second block, I would need to add 1 to each value of the base index buffer, so it becomes { 1, 2, 4, 5 }. For the third you would add 3 (row * verts_per_row): { 3, 4, 6, 7 }.

- REVELATION -

... oh man ...

I'm not deleting what I typed, as maybe it'll help someone else in the future, but thinking it through made me see you're absolutely right: it can simply be done by shifting the base in the vertex pointer. Gosh, and I was complicating things so uselessly! A price to pay for not having enough hands-on experience in the field!

Thanks for taking the time to write that up; it really made me laugh when the light shone!

:)

Cirrus