VBO/FBO design questions

VBO related

1.a)
Assuming a static VBO with a 1:n (update:draw) ratio, what is the ideal byte count for a vertex struct? Old documentation mentions 16 bytes as sort of the holy grail, but I’m looking for the fetch block size that DirectX 10-class GPUs actually read.

1.b)
Is it a good idea to create a simplified copy of a mesh that contains only vertex positions plus padding, for the purpose of rendering to a shadow map? I’ve been told this would be a good optimization, but I doubt it has any noticeable impact for low-poly meshes.

1.c)
I’d also like to hear whether anyone has spent serious time investigating the use of quaternions instead of passing normals/tangents as separate vertex attributes. Is this a good tradeoff, or not worth looking into?

FBO related

2.a)
Assume a scenario where you want to use two shadow maps, so one can be read from while the second is already being written to again. Is it preferable to use two separate FBOs with one attachment each, or one FBO and switch between draw buffers?

2.b)
Is there any other difference between using a texture and a renderbuffer as an FBO attachment, besides that renderbuffers cannot have mipmaps and are limited to 2D? Unfortunately there’s a user on these forums named Renderbuffer, and searching for topics on this yields mostly posts completely unrelated to FBOs.

Thanks!

1.a) Actually, that would be 32 bytes. I don’t know what DX10 GPUs prefer.
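For reference, a minimal sketch of what a 32-byte interleaved vertex might look like; the particular fields are just an example, the point is that the stride lands exactly on 32:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One possible 32-byte interleaved layout (the field choice is illustrative).
   Keeping the stride at a power of two means a vertex never straddles a
   cache-line boundary unnecessarily. */
typedef struct Vertex {
    float   position[3];  /* 12 bytes */
    float   normal[3];    /* 12 bytes */
    int16_t texcoord[2];  /*  4 bytes, e.g. normalized shorts */
    uint8_t pad[4];       /*  4 bytes of explicit padding to reach 32 */
} Vertex;
```

The explicit `pad` field makes the size independent of compiler-inserted tail padding; it could also be repurposed later (e.g. a packed color).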

b) Most of the performance loss will come from binding an FBO, rendering to it, and then trying to use the resulting texture.

2.a) If both textures are the same size and format, then using one FBO could yield better performance.

b) A renderbuffer is an offscreen surface. For example, if I want to render to a color texture, I make a depth renderbuffer and attach it to the FBO so that depth testing works.
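That setup might look like this sketch (modern core entry points; the EXT_framebuffer_object-era calls carry an EXT suffix instead — `colorTex`, `width`, and `height` are assumed to exist already, and a current GL context is required):

```c
GLuint fbo, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Color target: an ordinary 2D texture we want to sample later. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* Depth target: a renderbuffer, since the depth is never sampled. */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

/* Always worth checking before rendering: */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}
```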

Another example is if I want to take a screenshot, I would make a color RB and depth RB, and render, then glReadPixels.

Thanks for the reply, but does it imply that a Texture2D with depth-component format bound to the FBO’s depth attachment is not accepted by depth testing? Or do you mean “attach any depth buffer at all to the FBO”? <confused>

But I believe I got your point: a depth renderbuffer is basically the same as a wgl/glx-provided depth buffer, but attachable to FBOs other than the default framebuffer 0.

Any takers for 1.c? The only lead I found on this was the Lumina GLSL IDE, but it does not seem to like my graphics card and won’t start up properly. It uses its own .lum file format, so taking a quick look at the sources isn’t possible either if you cannot launch the application :expressionless:

A renderbuffer cannot be accessed as a texture later. You can use glBlitFramebuffer to copy the contents of a renderbuffer, but you cannot sample from one, for instance.

You would use renderbuffers for multisampled FBOs, then do a blit to resolve the multisamples. The target FBO of the blit could contain textures, and you could then use those textures via samplers in later rendering operations.

So basically, ask yourself: “Do I need to sample this as a texture later?” If so, use a texture attachment; otherwise use a renderbuffer attachment.
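A sketch of that multisample-resolve path (GL 3.0-style entry points; `msaaFbo` with renderbuffer attachments, `resolveFbo` with texture attachments, and the scene-drawing call are all assumed to exist):

```c
/* Render the scene into the multisampled FBO (renderbuffer attachments). */
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
drawScene();

/* Resolve: blit into an FBO whose color attachment is a plain texture. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* The resolve texture can now be bound to a sampler in a later pass. */
```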

but does it imply that a Texture2D with depth-component format bound to the FBO's depth attachment is not accepted by depth testing? Or do you mean "attach *any* depth buffer at all to the FBO"? 

You can attach a texture as the framebuffer’s depth attachment, as long as the texture format is supported.
Then just think as Keith said to decide whether you need a renderbuffer or a texture.

EDIT:

About quaternions, you mean this?

Reading the conclusion, this does not look very convincing about performance gain on current hardware.

I’ve already implemented basic shadow mapping using a depth-component Texture2D; I was just curious about renderbuffer objects, since an FBO with a depth renderbuffer would not become framebuffer complete on a GeForce FX and I was forced to use Texture2D. Since the depth buffer is needed as a sampler for shadow mapping, Texture2D is obviously the better choice anyway, as it avoids copying from the renderbuffer into a texture afterwards. Thanks again for clearing this up.

Thanks for the link! Not sure why my web searches didn’t bring this up.