How to do optimized segmented rendering of background and foreground objects?

Hi,

I have a scenario with a large, complex background object that largely stays the same, with no transformation applied to it, and foreground objects that constantly move/fly around while the BG objects stay put.

I would like to know the best way to keep my application from re-rendering the background every frame when there is no background transformation or view transformation. The foreground objects, however, are constantly transforming, so they need to go through the GPU every frame.

The approach I can think of is to draw the BG objects once to a framebuffer, copy that to a static buffer, render the foreground objects to another framebuffer continuously, and composite it with the static buffer for display.
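Roughly, the first part of that would look something like the sketch below (just a rough desktop-GL sketch to illustrate the idea; drawBackground() and the window dimensions winWidth/winHeight are placeholders for my own code):

[code]
// One-time (or on-view-change) pass: render the static background into a
// cached FBO so it can be reused every frame.
GLuint bgFBO = 0, bgColorTex = 0, bgDepthRB = 0;

// Color attachment as a texture, so it can later be blitted or sampled.
glGenTextures(1, &bgColorTex);
glBindTexture(GL_TEXTURE_2D, bgColorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winWidth, winHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Depth attachment for the background pass itself.
glGenRenderbuffers(1, &bgDepthRB);
glBindRenderbuffer(GL_RENDERBUFFER, bgDepthRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, winWidth, winHeight);

glGenFramebuffers(1, &bgFBO);
glBindFramebuffer(GL_FRAMEBUFFER, bgFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, bgColorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, bgDepthRB);
// (verify glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)

// Render the background once; repeat only when the view/background changes.
glViewport(0, 0, winWidth, winHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawBackground();                        // placeholder: my background pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/code]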

It would be nice to hear whether any of you have implemented something similar (keeping BG objects from being rendered every frame and just reusing a copy of the already-rendered buffer), and whether there is any significant improvement in render rates to be had. Any pointers/suggestions on how to implement this efficiently?

Thanks in advance!

Is your background object expensive to render? Have you profiled how much time it takes to render? If it’s not that expensive, perhaps you don’t need to do anything special for it. That’s the best case. I would first make very sure that there is something real here to gain, rather than just add needless code complexity to your application.

On the other hand, if (let’s just say) the background is very expensive to render for some reason, you could consider the pre-render-the-background-once approach, and then use that result to seed your framebuffer each frame before rendering the foreground.
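A minimal sketch of the per-frame part of that idea (untested; bgFBO is assumed to be an FBO you already filled with the background once, and drawForeground() / winWidth / winHeight stand in for your own code):

[code]
// Each frame: seed the window framebuffer from the cached background, then
// draw only the moving foreground objects on top of it.
glBindFramebuffer(GL_READ_FRAMEBUFFER, bgFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);        // default (window) framebuffer
glBlitFramebuffer(0, 0, winWidth, winHeight,
                  0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_DEPTH_BUFFER_BIT);     // fresh depth, if you don't need the background's
drawForeground();                 // placeholder: your foreground pass
[/code]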

A question: Can the foreground objects interpenetrate or render behind the background object? If so, then you may need to save off not only the color buffer for your background object but also its depth buffer. Depending on what you need here, there are a number of impostor techniques which make use of depth that you could consider.
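If you do need that, the seeding blit above could copy depth as well, something like the sketch below (again just a sketch; note that a depth blit requires GL_NEAREST filtering and the read/draw depth formats to match, so it’s often easier to composite into an FBO whose depth format you control than into the default framebuffer; sceneFBO is assumed to be such a compositing FBO):

[code]
// Seed both color and depth from the cached background so the foreground
// pass depth-tests against (and can go behind) the background geometry.
glBindFramebuffer(GL_READ_FRAMEBUFFER, bgFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, sceneFBO);   // assumed: your compositing FBO
glBlitFramebuffer(0, 0, winWidth, winHeight,
                  0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
[/code]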

Another question: Do you need the background/foreground color buffer result to be multisampled (e.g. for AA or multisample alpha reasons)? You’ll want to consider the answer to this question carefully, together with the previous one, to determine what capabilities you’re going to require from your OpenGL/OpenGL ES implementation.
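For instance, if your final render target is multisampled then (at least in core desktop GL) you can’t blit a single-sample cached background into it, so the cached background FBO itself generally needs multisample attachments with a matching sample count. A rough sketch of that variant (4x MSAA assumed; names as in the earlier sketches):

[code]
// Multisampled variant of the cached-background FBO (sketch, 4x MSAA assumed).
const GLsizei samples = 4;
GLuint bgMSColorRB = 0, bgMSDepthRB = 0;

glGenRenderbuffers(1, &bgMSColorRB);
glBindRenderbuffer(GL_RENDERBUFFER, bgMSColorRB);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8,
                                 winWidth, winHeight);

glGenRenderbuffers(1, &bgMSDepthRB);
glBindRenderbuffer(GL_RENDERBUFFER, bgMSDepthRB);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24,
                                 winWidth, winHeight);

glBindFramebuffer(GL_FRAMEBUFFER, bgFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, bgMSColorRB);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, bgMSDepthRB);
[/code]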

Thanks for your reply! First of all, to give a brief intro to the context of my question: I am developing a multi-view lightfield renderer that generates hundreds of views within a view volume. Hence we are trying to make some basic optimizations so the renderer doesn’t brute-force render all objects, and doesn’t send all of them through the GPU unless a view/transform change happens.

[QUOTE=Dark Photon;1289104]Is your background object expensive to render? Have you profiled how much time it takes to render? If it’s not that expensive, perhaps you don’t need to do anything special for it. That’s the best case. I would first make very sure that there is something real here to gain, rather than just add needless code complexity to your application.

On the other hand, if (let’s just say) the background is very expensive to render for some reason, you could consider the pre-render-the-background-once approach, and then use that result to seed your framebuffer each frame before rendering the foreground. [/QUOTE]

I haven’t profiled it in complete detail yet, but I can see the frame rate drop when using similar models of varying geometric complexity! Hence I believe there would be at least something to gain, as we are rendering hundreds of views just to generate one light field display update.

[QUOTE=Dark Photon;1289104]
A question: Can the foreground objects interpenetrate or render behind the background object? If so, then you may need to save off not only the color buffer for your background object but also its depth buffer. Depending on what you need here, there are a number of impostor techniques which make use of depth that you could consider. [/QUOTE]

Yes, they may or may not. So I want to have that as an option: either discard the depth or keep it. But that is more of a refinement at this point, so I might just discard depth if I get significant mileage from doing so.

[QUOTE=Dark Photon;1289104]
Another question: Do you need the background/foreground color buffer result to be multisampled (e.g. for AA or multisample alpha reasons)? You’ll want to consider the answer to this question carefully, together with the previous one, to determine what capabilities you’re going to require from your OpenGL/OpenGL ES implementation. [/QUOTE]

I would like to test the results produced without multisampling first and make a decision based on that. I may not need it much, at least to start with.

I found/read an interesting option somewhere for writing the background buffer into the final image: render it onto a static, orthographically projected quad, and then superimpose the foreground objects with the proper perspective and view transformations. I’m just wondering whether that would save or cost me more compared to a direct copy of the background buffer into the final buffer.
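For reference, the quad variant I read about would look roughly like this (a compatibility-profile sketch only, to illustrate; bgColorTex is the cached background color texture and drawForeground() is a placeholder; in core profile one would instead draw a fullscreen triangle with a trivial texturing shader):

[code]
// Draw the cached background texture as a screen-aligned quad under an
// orthographic projection, then draw the foreground with its usual
// perspective/view transforms on top.
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glDisable(GL_DEPTH_TEST);                 // the quad is a pure color fill
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, bgColorTex);
glBegin(GL_QUADS);
glTexCoord2f(0.f, 0.f); glVertex2f(0.f, 0.f);
glTexCoord2f(1.f, 0.f); glVertex2f(1.f, 0.f);
glTexCoord2f(1.f, 1.f); glVertex2f(1.f, 1.f);
glTexCoord2f(0.f, 1.f); glVertex2f(0.f, 1.f);
glEnd();
glDisable(GL_TEXTURE_2D);

glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

// Restore state and draw the dynamic foreground normally.
glEnable(GL_DEPTH_TEST);
glClear(GL_DEPTH_BUFFER_BIT);
drawForeground();                         // placeholder: foreground pass
[/code]

My understanding is that either way the cost is roughly one full-screen copy of the background per view, so I’d expect both to be cheap compared with re-rendering the background geometry; the blit just avoids the extra draw-call/state setup.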