PDA

View Full Version : Separating Update and Render in threads



tobiaso
09-01-2002, 10:37 PM
My basic game loop looks something like this

Update
Feed Graphic Engine With Stuff To Render
Render

My idea is that while the gfx engine is rendering you could continue updating the next frame.

Do you think that there will be a performance increase on single processor systems?

One could argue that the Update thread could do some work while you feed a large texture or a bunch of vertices etc. to the gfx card.

This will probably depend on how the driver is implemented, right?

Any ideas anyone?

Does anyone know of any good tools to visually see how the threads in a process behave? It would be interesting to see how they behave at run time, that is, whether the update thread ever runs while the render thread is doing something on the gfx card...

I saw some tests by Marcus Geelnard using a particle engine. There it seems that there will be no speed increase on single processor systems.

regards

/hObbE

MattS
09-02-2002, 03:46 AM
I don't think you need to use threads to achieve the goal you are looking for. Your aim should be to balance the work of the CPU and the GPU, i.e. keeping them both busy. If you view your loop as:

SwapBuffers
Send rendering calls
Update data

then after you have sent the rendering calls the GPU will be busy. You can then use the CPU to generate the next frame of data, hence both are working together. This is a simple example of how to better balance the CPU-GPU work as certain GL calls will cause the CPU to wait, so it is probably better to feed the GPU for a bit, then do some CPU work, then feed the GPU and so on.

You're right, though, that how much of a win this is comes down to the drivers. Take for example the GeForce-class nVidia boards. On these you can use VAR (NV_vertex_array_range) to store your vertex information. This allows the GPU to pull vertices through rather than having the CPU feed them. Using this in conjunction with NV_fence gives you much greater control for balancing the work. What I am unfortunately not clear on is how much use this CPU-GPU balancing is without VAR (or ATI's VAO).

Hope that's of some help.

Matt

ToolTech
09-02-2002, 12:50 PM
What I have seen working with Gizmo3D is that I do benefit from using two threads if I do the following. Best performance comes from doing the clever sorting etc. first, then sending all data to the GFX card, and then using another thread to prepare the animations etc. for the next frame and do loading/unloading of data while the flush or SwapBuffers is in flight.

Working with two processors, the problem is that the threads do not share the same fast memory, so the parallel work must operate on different memory areas.

With more processors, I allocate all gfx work + sorting to one processor, animations and scene traversals to another, and all other app work to the remaining processors.

tobiaso
09-03-2002, 12:10 AM
Thanx for the replies.

After some testing it seems that performance does not increase on a single-processor machine. :(

regards
/hObbE

Robbo
09-03-2002, 01:07 AM
someone tested this with a demo particle application which ran either multi- or single-threaded. There was some minor increase in speed on dual-processor machines, but in general the single-processor machines didn't gain. It is such an architectural issue that I doubt it would be worth your while designing your application this way, given that what works on one setup may not on another.

I think in general, though, you should batch your primitives so that you can toggle between feeding the GPU and doing work on the CPU.

jide
09-03-2002, 04:14 AM
It could work, if you are careful, using mutexes and/or semaphores... and if your CPU has APIC capabilities. The new Athlon XP does have the APIC that all SMP systems have, but only a few single-CPU parts do.

But I think it could be dangerous and not gain you much, since in one thread you update your data (i.e. change it), and in the other (which is theoretically running simultaneously) you read it. So you must make sure at all times that you are reading complete data and not data that is mid-update. That's why you may not gain a lot.

If you look at the OpenGL rendering pipeline, you could have enough time to get all your data updated before it enters a new cycle, provided you can update in time.

I tried something like that: one thread for rendering only, and another that produces the new data to render. It worked better, but there were other factors involved: no more GLUT, using GLX directly.

I hope this helps, or prompts more discussion.

tobiaso
09-03-2002, 09:56 PM
What I actually did was to use Event Objects for synchronization. I had two event objects: one that signalled that a frame had been produced and one that signalled that a frame had been consumed.

Update thread looked something like this

Update
Wait for consumed event
UpdateGfx
Signal produced event

Rendering thread looked something like this

wait for produced event
render
Signal consumed event


This way I had synchronization overhead only when the gfx actually needed updating (I hope ;-)

Unfortunately there was no performance increase, and the implementation became a pain, especially when the game entered a new state in the Update thread and needed to load a lot of new stuff (textures).

regards

/hObbE