
Rendering the same polygon at different locations



jas511
04-17-2011, 05:55 PM
I have an application in which I'm trying to render several thousand polygons in 2D (I set up the 2D projection using glOrtho). Each of the polygons is the same, but at a different location/scale. For example, I have a "base" triangle whose vertices are at (0.0f, 1.0f), (-1.0f, -1.0f), and (1.0f, -1.0f). Currently the way I render is:

Step 1: create a display list for the triangle
Step 2:
for each polygon:
    translate to the correct location
    scale the polygon
    call the display list
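In actual GL calls, that loop is roughly the following (`polys` and `num_polys` are placeholder names for my polygon data):

```
/* Step 1 (once): record the base triangle in a display list. */
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
glBegin(GL_TRIANGLES);
glVertex2f( 0.0f,  1.0f);
glVertex2f(-1.0f, -1.0f);
glVertex2f( 1.0f, -1.0f);
glEnd();
glEndList();

/* Step 2 (every frame): one transform + one list call per polygon. */
for (int i = 0; i < num_polys; i++) {
    glPushMatrix();
    glTranslatef(polys[i].x, polys[i].y, 0.0f);
    glScalef(polys[i].scale, polys[i].scale, 1.0f);
    glCallList(list);
    glPopMatrix();
}
```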

This is really slow when drawing several thousand polygons (my screen slows to a crawl). I have to imagine there is a faster way to do this, since 10k polygons really isn't that many.

Can anyone provide a way to improve this?

Thanks,
Jeff

Alfonse Reinheart
04-17-2011, 06:05 PM
Each of the polygons is the same, but at a different location/scale.

So what exactly is "the same" about these triangles?

If you apply a transform to any triangle, you can make it into any other triangle. Therefore by this logic, all triangles could be considered "the same".

Instead of considering these triangles to be "the same" but with different transforms, consider them to be different triangles with different vertex positions. That is, compute the vertex positions of all of these triangles, then put all of those in a single display list. Then, when you need to render the triangles, render that one list.

This means that you have a single "glCallList" call instead of 10,000. And you also don't have 10,000 "glScale" and "glTranslate" calls.

jas511
04-17-2011, 06:28 PM
Alfonse,

Thank you for the quick reply. I see your point about them not really being the same.
One issue is that since I am drawing in 2D, I am drawing in screen coordinates. So wouldn't I need to compute the translation (and then rebuild the display list) on each rendering pass?

Thanks,
Jeff

elanthis
04-17-2011, 09:11 PM
Don't use display lists. Those are ancient, crufty, slow, deprecated, and (in newer core profile versions of the spec) flat out gone.

What you want to be doing is using Buffer Objects.

So if you have 10,000 triangles, you need a buffer object that stores 30,000 vertices (each likely carrying position and color information, possibly texture coordinates, depending on what you're doing).

So the basic program flow is this:

(1) Create a buffer object of the necessary size, as a DYNAMIC_DRAW buffer [you will be updating it frequently and using it only for drawing; if the triangles move every single frame, use STREAM_DRAW instead]. If you need a 2D position and a simple RGBA color, you can define a struct { float x; float y; unsigned int color; }; you're then essentially creating an array of 30,000 of those structs, allocated in the GPU's memory instead of your application's memory.

This is going to require calls to glGenBuffers, glBindBuffer, and glBufferData.
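A minimal version of step 1 looks like this, assuming the struct above and a hypothetical `num_triangles` count (this needs a live GL context, so treat it as a sketch rather than a runnable program):

```
typedef struct { float x, y; unsigned int color; } Vertex2D;

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* Allocate GPU storage; passing NULL means "reserve the space,
   fill it in later" (which step 2 does via mapping). */
glBufferData(GL_ARRAY_BUFFER,
             num_triangles * 3 * sizeof(Vertex2D),
             NULL, GL_DYNAMIC_DRAW);
```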

(2) In your update logic, map the buffer into the application's address space, then iterate over the triangles and write their new locations into the mapped memory region. Remember you can only write to this region, so if you are using iterative updates [that is, you need last frame's position to calculate this frame's position], you need to keep a copy of the positions in your application code. Unmap the buffer at the end of the update logic.

This is going to require calls to glBindBuffer, glMapBuffer, and glUnmapBuffer.

(3) Set up the rendering context by binding the Buffer Object to the proper vertex attributes. "Modern correct code" requires you to write some very simple vertex and fragment shaders, but if you don't mind using a little bit of the compatibility profile, you can just use some of the OpenGL 2.1 features. I'm going to assume you're doing that as it makes this a bit simpler. So you're going to need to call glVertexPointer, glColorPointer, etc. to specify the offsets and strides in your Buffer Object. You need to use glEnableClientState to tell GL that you're using the vertex and color pointers.

This will require calls to glBindBuffer, glVertexPointer, glColorPointer, and glEnableClientState.

(4) Render the scene. This is the easiest part. :)

This will require a call to glDrawArrays, plus any other rendering state management you need.
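Steps 3 and 4 together look roughly like this, assuming the interleaved Vertex2D struct from step 1 and a bound GL context (again a sketch, not a standalone program; offsetof comes from <stddef.h>):

```
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
/* Stride is the size of one interleaved vertex; the last argument is
   the byte offset of each attribute within the buffer. */
glVertexPointer(2, GL_FLOAT, sizeof(Vertex2D),
                (void *)offsetof(Vertex2D, x));
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex2D),
               (void *)offsetof(Vertex2D, color));

/* Step 4: one call draws all 30,000 vertices. */
glDrawArrays(GL_TRIANGLES, 0, num_triangles * 3);
```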

That is the simplest way to get what you want and get decent performance. If you delve into the more advanced uses of OpenGL 3.x and GLSL, you can probably squeeze out even better performance with a little more effort and using a few more advanced features of modern GL/hardware.

Also, you didn't really specify what the context of these triangles is. If they are connected or part of a 2D mesh [that is, if the triangles share vertices with each other], then you can get an easy performance boost on top of what I've already outlined by using an index buffer and glDrawElements.
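In that case the draw call changes to something like the following, where `ibo` is a hypothetical element buffer of unsigned ints created the same way as the vertex buffer:

```
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
/* Indices dereference into the shared vertex buffer, so each shared
   vertex is stored (and transformed) only once. */
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, (void *)0);
```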

There are more in-depth tutorials on using Buffer Objects and indexed drawing online, of course. Google will find them for you.

jas511
04-17-2011, 09:44 PM
I've started using glDrawArrays and I'm seeing a HUGE improvement. Thanks.

Alfonse Reinheart
04-17-2011, 10:21 PM
Don't use display lists. Those are ancient, crufty, slow, deprecated, and (in newer core profile versions of the spec) flat out gone.

Well, the last two of those are true, but the rest are... misleading.

Display lists have no guaranteed performance, but one can reasonably expect DLs to provide at least good performance. On ATI cards, they tend to be about the same as buffer objects. On NVIDIA cards, they tend to perform faster than static buffer objects.

His main problem was drawing 10,000 triangles one at a time, with matrix changes between each one. By using glDrawArrays, he was able to draw all of them at once. But he could have done the same thing with a display list.