deferred rendering and framebuffer bandwidth

Hi,
I want to implement deferred rendering for a university course.

They continue with a somewhat elaborate attachment scheme where, for instance, they split the normal across two attachments.

I understand that bandwidth can be valuable, but does it really matter whether I have two 32-bit * 4-channel attachments or, let's say, eight 8-bit * 4-channel attachments? Both add up to 256 bits of output per fragment.

Or do the graphics card vendors implement the attachments in a way that they aren't packed nicely, so 8 bits would be extended to 32? Or maybe it's only about the number of channels per attachment, and a vec3 would be extended to a vec4?
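
To make that concrete, here is a rough sketch of the two layouts I mean (the internal format names are just examples I picked, not anything from the course material; width/height are the G-buffer resolution):

[CODE]
/* Layout A: two RGBA attachments with 32-bit float channels
 * -> 2 * 4 * 32 = 256 bits per fragment */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);

/* Layout B: eight RGBA attachments with 8-bit channels
 * -> 8 * 4 * 8 = 256 bits per fragment */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
[/CODE]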

Thanks,
adam

[QUOTE=adamce;1262012]I want to implement deferred rendering for a university course.

They continue with a somewhat elaborate attachment scheme where, for instance, they split the normal across two attachments.

I understand that bandwidth can be valuable, but does it really matter whether I have two 32-bit * 4-channel attachments or, let's say, eight 8-bit * 4-channel attachments? Both add up to 256 bits of output per fragment.[/QUOTE]

What you have to remember is that what is actually stored in the G-buffer is not necessarily the format that your fragment shader outputs at its tail end. The GPU does run-time format conversion to map the float/vec* outputs of the fragment shader in your G-buffer rasterization pass to the format(s) of your FBO attachments. That converted data is what actually gets written to memory, and, just as importantly, it is the format that gets read back from memory later when you apply your lighting pass(es). So reducing how many bits you use for each component in your G-buffer saves you both GPU write and GPU read bandwidth.
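
As a minimal sketch of what that looks like in practice (this is just an example layout I'm making up, not anything specific to your course): the fragment shader keeps writing ordinary float vectors, while the FBO attachments use narrow internal formats, and the GPU converts on write:

[CODE]
/* Assumes a current GL context and a function loader (e.g. GLEW/glad).
 * The fragment shader writes plain vec4/vec2 outputs; the internal
 * formats below are what actually hits memory. */
const GLsizei width = 1280, height = 720;  /* G-buffer resolution, pick your own */

GLuint gbufTex[3], fbo;
glGenTextures(3, gbufTex);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Attachment 0: albedo, 8 bits per channel (32 bits per fragment). */
glBindTexture(GL_TEXTURE_2D, gbufTex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gbufTex[0], 0);

/* Attachment 1: normal x/y in 16-bit floats (reconstruct z in the lighting pass). */
glBindTexture(GL_TEXTURE_2D, gbufTex[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gbufTex[1], 0);

/* Attachment 2: specular color + roughness, 8 bits per channel. */
glBindTexture(GL_TEXTURE_2D, gbufTex[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gbufTex[2], 0);

GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs);
[/CODE]

With a layout like this the G-buffer costs 32 + 32 + 32 = 96 bits per fragment instead of 256, and both the writes in the geometry pass and the reads in the lighting pass shrink accordingly.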