Hi there,

I have a question about an optimization to my Deferred Rendering setup.

So I've built a game, and for the majority of the time the camera stays locked in the same position. As a result, a large portion of the meshes on screen are redrawn every frame to exactly the same locations, with no changes to their data in the gBuffers.

My thought was to keep two sets of gBuffers: a "static" set containing everything that is not animated and does not move, and a "final" set. Then, whenever the camera doesn't move, I could skip rendering the static models entirely and just copy the cached "static" gBuffers into the "final" gBuffers before rendering the moving meshes, hopefully improving performance in the common case where the camera is still.
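Here's a rough sketch of the per-frame logic I have in mind, in C++ (the FBO handles and helper functions are made-up names, just to illustrate):

```cpp
#include <GL/glew.h> // or whichever loader you use

// All of these are stand-ins for things that would exist elsewhere.
extern GLuint staticGBufferFBO, finalGBufferFBO;
extern bool   cameraMoved, staticSceneDirty;
extern void   renderStaticMeshes();
extern void   renderDynamicMeshes();
extern void   doLightingPass();
extern void   copyGBuffer(GLuint srcFBO, GLuint dstFBO);

void renderFrame()
{
    if (cameraMoved || staticSceneDirty)
    {
        // The camera (or a static mesh) changed, so refresh the cached
        // static gBuffer by actually drawing the static geometry.
        glBindFramebuffer(GL_FRAMEBUFFER, staticGBufferFBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderStaticMeshes();
        staticSceneDirty = false;
    }

    // Every frame: copy the cached static results into the final
    // gBuffer, then draw only the moving meshes on top of them.
    copyGBuffer(staticGBufferFBO, finalGBufferFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, finalGBufferFBO);
    renderDynamicMeshes();

    doLightingPass(); // reads the final gBuffer as usual
}
```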

When I tried a rough implementation, I was disappointed to find the performance approximately the same. The cost seems to come largely from the copy itself: every frame I do one blit per color attachment (positions, normals, AlbedoSpec). (I know I still need to handle depth, but I wanted to see whether the idea was worthwhile before continuing.) I'm currently on OpenGL 3.3; I know that if I upgraded to 4.3 I could use glCopyImageSubData, which would probably be faster.
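For reference, my copy step is essentially one glBlitFramebuffer per attachment, roughly like this (screenWidth/screenHeight and the FBO handles are stand-ins):

```cpp
#include <GL/glew.h>

extern int screenWidth, screenHeight;

// One blit per color attachment, plus the depth blit I still need so
// that dynamic meshes depth-test correctly against the static scene.
void copyGBuffer(GLuint srcFBO, GLuint dstFBO)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFBO);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFBO);

    // positions, normals, AlbedoSpec
    for (int i = 0; i < 3; ++i)
    {
        glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
        glDrawBuffer(GL_COLOR_ATTACHMENT0 + i);
        glBlitFramebuffer(0, 0, screenWidth, screenHeight,
                          0, 0, screenWidth, screenHeight,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }

    // Depth blits require GL_NEAREST and identical depth formats.
    glBlitFramebuffer(0, 0, screenWidth, screenHeight,
                      0, 0, screenWidth, screenHeight,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);

    // (All draw buffers need restoring with glDrawBuffers() before
    // rendering geometry into dstFBO afterwards.)
}
```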

Does anybody have any thoughts on this? Are there tricks I could try, or is it simply the case that copying those large screen-sized textures costs more than rendering the actual meshes? I know I could (and probably should) replace my positions buffer with a depth buffer and reconstruct position in the lighting pass, which would mean one fewer attachment to copy, but on its own that doesn't seem like it would tip the scales much.
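For completeness, the setup change I mean is attaching a depth texture instead of a renderbuffer, so the lighting pass can sample depth and reconstruct position, letting me drop the positions attachment entirely; a rough sketch (assuming the gBuffer FBO is currently bound):

```cpp
#include <GL/glew.h>

// Create a sampleable depth texture and attach it to the currently
// bound framebuffer in place of a depth renderbuffer.
GLuint createDepthTexture(int width, int height)
{
    GLuint depthTex = 0;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                 width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    return depthTex;
}
```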

Thanks in advance and sorry if any of this is foolish or obvious,
John