
VAR to VBO



un
09-25-2003, 01:08 PM
To render a CLOD terrain, I have successfully implemented the VAR extension. I am now trying to use the VBO extension.

I've read the spec and numerous posts at various forums, but I am still uncertain as to how to implement some VAR functionality in VBO.

For initializing VAR, I allocate one large physical buffer. I then define several logical buffers and assign a fence to each logical buffer.

When rendering, I start with the first buffer and fill it with a patch of terrain. I then call glDrawElements(). Typically, that patch does not completely fill the buffer, so I add another patch and call glDrawElements() again. Note that the patches are of varying sizes. I repeat this process of adding patches until the buffer is filled. I then set a fence, move to the next buffer, test its fence, and begin writing patches as before. Through testing, I have allocated enough memory such that the fence test always succeeds, but you never know.

So, OpenGL renders with each glDrawElements() call, while I am still filling the same or next logical buffer with a patch. Fences
prevent me from overwriting data that is still being used. The frame rate and the triangles/sec is great.
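The fill-until-full scheme above can be sketched as plain bookkeeping. This is a minimal sketch, not the poster's actual code: the struct and names are invented, the sizes are illustrative, and the NV_fence calls are marked as comments where they would go.

```c
#include <stddef.h>

/* One big VAR buffer carved into logical buffers, filled patch by patch. */
typedef struct {
    size_t logical_size;   /* bytes per logical buffer           */
    int    num_buffers;    /* number of logical buffers          */
    int    current;        /* logical buffer currently filling   */
    size_t used;           /* bytes already filled in it         */
} VarCursor;

/* Reserve room for a patch of `bytes` bytes; returns its byte offset
   into the big buffer.  When the patch no longer fits, advance to the
   next logical buffer -- this is where the fence on the filled buffer
   would be set, and the next buffer's fence waited on. */
static size_t reserve_patch(VarCursor *c, size_t bytes)
{
    if (c->used + bytes > c->logical_size) {
        /* glSetFenceNV(fence[c->current], GL_ALL_COMPLETED_NV); */
        c->current = (c->current + 1) % c->num_buffers;
        /* glFinishFenceNV(fence[c->current]); */
        c->used = 0;
    }
    size_t offset = (size_t)c->current * c->logical_size + c->used;
    c->used += bytes;
    return offset;
}
```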

How do I implement this using the VBO extension?

As I understand it, after I write to a buffer object, I must then render that buffer object, and I cannot write to it again until OpenGL is finished rendering. So, if that buffer object was allocated as 4 MB and I only used 1 MB, the remaining memory goes unused. I would like to keep filling that buffer, as I did with VAR. With VAR, I minimized the amount of wasted memory using logical buffers.

It seems that I must allocate many, many buffer objects to mimic the same thing in VBO. Based on my VAR testing, if I only allocate a few buffer objects, I'm sure that OpenGL would not be finished rendering the first buffer object by the time I've filled the last. The performance would drop, as I would have to wait until the first buffer object is finished before I can write to it again. Is allocating a large number of buffer objects as bad as I suspect?

I think VBO is going to hurt the performance of my terrain renderer.

Hopefully I'm misunderstanding VBO; otherwise, perhaps someone has suggestions to correct this problem.

Korval
09-25-2003, 02:56 PM
I think VBO is going to hurt the performance of my terrain renderer.

Your terrain renderer was built around using VAR. You need to rebuild the engine around using VBO. Or, put another way, you need to build the engine around how you would have rendered with regular vertex arrays, since VBO mimics that to a great extent.

Are you streaming vertices? If so, each patch of terrain should have a vertex buffer associated with it that is set up for streaming, rather than static. I don't know that you can define the size of the streamed buffer at stream time, so each buffer may have to be the maximum size that a patch can be.

How to give VBO your vertex data is a different question. You can map the buffer, but mapping isn't guaranteed to succeed (if it fails, you have to try again). Or, you can build an internal working buffer and just give VBO a pointer to it. The latter has the drawback of not being the most efficient way of sending vertex data; it'd be better to map the buffer.
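A sketch of the mapping path with the fallback just described. Assumptions: the GL entry points below are replaced by local stand-in stubs so the fragment compiles on its own (in a real program they come from the extension loader), and the buffer is already bound and sized with glBufferDataARB.

```c
#include <stddef.h>
#include <string.h>

typedef unsigned int  GLenum;
typedef unsigned char GLboolean;
typedef ptrdiff_t     GLintptrARB;
typedef ptrdiff_t     GLsizeiptrARB;

#define GL_ARRAY_BUFFER_ARB 0x8892
#define GL_WRITE_ONLY_ARB   0x88B9

/* --- stand-in stubs for the real ARB_vertex_buffer_object entry points --- */
static unsigned char vbo_storage[4096];          /* pretend VBO memory */
static void *glMapBufferARB(GLenum target, GLenum access)
{ (void)target; (void)access; return vbo_storage; }
static GLboolean glUnmapBufferARB(GLenum target)
{ (void)target; return 1; }
static void glBufferSubDataARB(GLenum target, GLintptrARB offset,
                               GLsizeiptrARB size, const void *data)
{ (void)target; memcpy(vbo_storage + offset, data, (size_t)size); }

/* Upload a patch, preferring the map path; fall back to BufferSubData
   if the map fails (the spec allows glMapBufferARB to return NULL). */
static int upload_patch(const float *verts, size_t bytes)
{
    void *dst = glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
    if (dst) {
        memcpy(dst, verts, bytes);                    /* write straight in */
        return glUnmapBufferARB(GL_ARRAY_BUFFER_ARB); /* 0 => must redo    */
    }
    glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, 0, (GLsizeiptrARB)bytes, verts);
    return 1;
}
```

Note the unmap return value: if glUnmapBufferARB returns GL_FALSE, the buffer contents became undefined while mapped and the data has to be rewritten.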

BTW, what are you doing that you need to regenerate your vertices every frame?

If the concern is saving memory, I wouldn't bother; you lose too much performance on it. The driver is given flexibility in terms of where VBOs are placed, so in theory, infrequently used VBOs can be transferred to system memory.

If you're doing a streamed system, where you couldn't possibly make a VBO for every possible patch, then I'd suggest creating enough VBOs for the visible region, plus one patch beyond that. When the player moves to a new patch, you can replace the data in the VBOs that are too far away with data from new patches that have now moved in range.

un
09-25-2003, 03:52 PM
Originally posted by Korval:
BTW, what are you doing that you need to regenerate your vertices every frame?

Korval,

Thanks for your reply.

This is for a scientific application for rendering the terrain for an entire planet.

Terrain patches are read from a file as needed, and the vertex positions are stored in RAM as double precision values. The terrain is drawn relative to the viewer for accuracy. The float precision vertices that I send OpenGL are the difference between the viewer position and the double precision vertices. Each time the user moves, the OpenGL vertices change.
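The viewer-relative trick described above can be sketched in a few lines. The struct and function names are invented for illustration, not from the actual code:

```c
typedef struct { double x, y, z; } Vec3d;
typedef struct { float  x, y, z; } Vec3f;

/* Convert one double-precision vertex to a viewer-relative float vertex.
   The difference is small, so it survives the cast to float with far
   less error than the absolute position would. */
static Vec3f to_viewer_relative(Vec3d vertex, Vec3d viewer)
{
    Vec3f out;
    out.x = (float)(vertex.x - viewer.x);
    out.y = (float)(vertex.y - viewer.y);
    out.z = (float)(vertex.z - viewer.z);
    return out;
}
```

At an Earth-like radius (about 6.4e6 m) a float's resolution is 0.5 m, so absolute positions cast to float jitter visibly; the small viewer-relative differences keep sub-millimetre precision near the eye.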

Also, I will be implementing vertex morphing in the future.

I added VBO support to my code, based on my understanding. I specified the buffer objects as GL_DYNAMIC_DRAW_ARB and wrote to the buffers as a circular array: I map a buffer, write a patch, unmap the buffer, and call glDrawElements().
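The circular-array bookkeeping amounts to something like the sketch below. The struct is invented, the buffer count is illustrative, and the per-buffer GL work (bind, map, write, unmap, draw) is marked as a comment where it would go:

```c
#define NUM_VBOS 50   /* illustrative; matches the count tested above */

typedef struct {
    unsigned int ids[NUM_VBOS];  /* names from glGenBuffersARB        */
    int next;                    /* index of the buffer to fill next  */
} VboRing;

/* Pick the buffer object for the next patch and advance the ring. */
static unsigned int next_buffer(VboRing *r)
{
    unsigned int id = r->ids[r->next];
    /* glBindBufferARB(GL_ARRAY_BUFFER_ARB, id);
       ptr = glMapBufferARB(...); write patch; glUnmapBufferARB(...);
       glDrawElements(...); */
    r->next = (r->next + 1) % NUM_VBOS;
    return id;
}
```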

I did some performance tests and found that at about 50 buffers, my frame rate maxed out.

For VAR, I allocated 10 logical buffers of 1 MB each.

The VAR code gave me better performance, but not by that much. For example, VAR at 59 fps, and VBO at 55 fps. Or, VAR at 17 fps, and VBO at 16 fps. Hopefully, future drivers for my Quadro4 will make the frame rate the same.

I am concerned about the memory, though. I needed 50 MB for VBO and only 10 MB for VAR.

Korval
09-25-2003, 08:45 PM
The float precision vertices that I send OpenGL are the difference between the viewer position and the double precision vertices. Each time the user moves, the OpenGL vertices change.

But, technically, you don't have to recompute the vertices every frame. What you can do is recompute the vertices when the user moves "too far" from the last position. I would suggest doing this when you're near the boundary between drawing patches. By doing it this way, you're not constantly streaming vertex data to the card, so you'll get better overall performance.
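The "too far" check is a cheap per-frame test. A sketch, with an invented threshold parameter:

```c
typedef struct { double x, y, z; } Vec3d;

/* Nonzero when the viewer has drifted farther than `threshold` from the
   origin used for the last vertex rebuild.  The threshold is a tuning
   parameter, e.g. some fraction of a patch's width. */
static int needs_rebase(Vec3d viewer, Vec3d last_origin, double threshold)
{
    double dx = viewer.x - last_origin.x;
    double dy = viewer.y - last_origin.y;
    double dz = viewer.z - last_origin.z;
    return dx * dx + dy * dy + dz * dz > threshold * threshold;
}
```

Comparing squared distances avoids a square root per frame; while the test stays false, the previously uploaded vertices can be reused untouched.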

You do have to take steps to not upload everything in one frame, though.