
Overlapping creates massive slow down



Jeffg
12-04-2011, 06:19 AM
I'm not sure if it's due to using alpha or blending or what, but if I have 10,000 textures or 10,000 lines spread out, performance is great. If those textures or lines get bunched up so that they overlap significantly, performance drops to a crawl. What is happening? I use alpha on all my objects, so they're semitransparent. Is it getting caught in some massive depth recursion? Can you set a limit?

Thanks, Jeff

V-man
12-04-2011, 07:23 AM
You have 10,000 textures getting used at the same time? That would be a lot of glBindTexture calls and a lot of texture moving in and out of the texture cache.

If you are using blending, then there is the cost of reading and writing the framebuffer for every fragment. That problem exists on all GPUs, past and present.
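To make that cost concrete, here's a back-of-envelope sketch in Python (the byte and fragment counts are illustrative assumptions, not measurements of any particular GPU): with blending enabled, every fragment costs a framebuffer read plus a write instead of just a write, so the framebuffer traffic roughly doubles.

```python
BYTES_PER_PIXEL = 4  # assuming an RGBA8 framebuffer

def framebuffer_traffic(fragments, blending):
    """Approximate bytes moved to/from the framebuffer for a batch of
    fragments. Without blending a fragment only writes its pixel; with
    blending the GPU must also read the existing pixel first
    (read-modify-write)."""
    per_fragment = BYTES_PER_PIXEL * (2 if blending else 1)
    return fragments * per_fragment

# e.g. 10,000 lines at ~500 fragments each (made-up numbers):
fragments = 10_000 * 500
print(framebuffer_traffic(fragments, blending=False))  # 20000000 bytes
print(framebuffer_traffic(fragments, blending=True))   # 40000000 bytes
```

Bandwidth alone rarely doubles frame time, but it shows why blended overdraw is not free even when the geometry count stays the same.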

What is depth recursion? Are you doing some technique? Shaders?

Jeffg
12-04-2011, 07:52 AM
Most of the textures are the same texture applied to many sprites in a VBO, so there are few bind operations. "Depth recursion" is a term I made up to describe the possible process of resolving all the layers of transparency to the maximum degree, perhaps accumulating into some O(N²) operation. I don't know; that's part of what I'm trying to understand. Drawing the textures and lines where they don't overlap to any large degree causes no performance problems, but if I have 10,000 lines radiating from a single point, it slows to a crawl, not because it's drawing 10,000 lines but because all those lines are close together, turning into a mass of brightness.
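The radiating-lines case can be modeled offline. The sketch below (plain Python with a crude DDA rasterizer; the grid size and line count are made up for illustration) counts how many times each pixel gets touched: the total fragment count is about the same as for spread-out lines, but the pixel at the shared origin is written once per line.

```python
import math
from collections import Counter

def rasterize_line(x0, y0, x1, y1):
    """Crude DDA rasterizer: yields the integer pixels along a line.
    The dominant axis advances one pixel per step, so a given line
    never yields the same pixel twice."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        t = i / steps
        yield (round(x0 + (x1 - x0) * t), round(y0 + (y1 - y0) * t))

N = 360  # lines radiating from the origin (illustrative count)
overdraw = Counter()
for k in range(N):
    a = 2 * math.pi * k / N
    for pixel in rasterize_line(0, 0, round(100 * math.cos(a)),
                                round(100 * math.sin(a))):
        overdraw[pixel] += 1

print(overdraw[(0, 0)])  # 360: every line writes the shared origin
```

The blended result of overlapping semitransparent lines has to accumulate all of those per-pixel writes, so the hot pixels near the origin see O(N) work each even though the scene as a whole is still only N lines.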

Ludde
12-05-2011, 01:01 AM
What's the performance when you disable blending?

Jeffg
12-05-2011, 07:46 AM
No meaningful change in performance when disabling blending, turning off alpha, disabling line smooth, or decreasing line width.

aqnuep
12-05-2011, 10:55 AM
The problem is most probably that you keep updating the same small piece of the framebuffer. No matter how many shader cores your GPU has, if all the drawing you perform is clustered in the same small tile of the framebuffer, the GPU has to serialize the work, because multiple shader core groups cannot process fragments for the same little screen tile at the same time.

Maybe your problem is totally different from what I think it is, but from what you described, this is my best guess.
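The serialization effect can be illustrated with a toy Python model (the 16-pixel tile size, screen size, and fragment counts are arbitrary assumptions, not the behavior of any specific GPU): for the same number of fragments, wall time tracks the busiest tile rather than the total work divided by the core count.

```python
from collections import Counter

TILE = 16  # assumed tile edge in pixels

def busiest_tile(fragments):
    """fragments: iterable of (x, y) pixel coordinates. Returns the
    largest number of fragments landing in any one TILE x TILE tile,
    a rough proxy for the serialized work on that tile."""
    per_tile = Counter((x // TILE, y // TILE) for x, y in fragments)
    return max(per_tile.values())

# 40,000 fragments scattered over a 2000x2000 screen ...
spread = [(i % 2000, (i * 37) % 2000) for i in range(40_000)]
# ... versus the same 40,000 fragments crammed into a single tile.
clustered = [(i % TILE, (i // TILE) % TILE) for i in range(40_000)]

print(busiest_tile(spread))     # 20: work can spread across many tiles
print(busiest_tile(clustered))  # 40000: one tile holds everything
```

The total fragment work is identical in both lists; only the distribution differs, which matches the symptom of spread-out lines being fast while bunched-up lines crawl.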