I've been looking into various lighting techniques lately and am considering deferred shading (I haven't implemented anything yet, though). One of the downsides of deferred shading is that it requires so many extra buffers to store data: a G-buffer holding normals, depth, albedo, and so on. On this point I have to sit back and ask: why?
We already have a back buffer, and if you use Z-buffering (which I am) there's a depth buffer attached to it. Why can't I simply use the back buffer instead of additional render targets? Or is it impossible to read from the back buffer in a fragment shader while rendering to it? If it's impossible, then I'd probably want the additional buffers anyway, since I'd need them for any sort of post-processing effects...
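
To make the question concrete, here's roughly what I understand the extra render targets to look like in practice. This is just a sketch assuming an OpenGL 3.3+ context with a loader like GLEW; the texture formats and attachment layout are placeholder choices on my part, not anything canonical:

    // Sketch: creating a G-buffer as a framebuffer with multiple render
    // targets (MRT). Formats are illustrative; real engines pack
    // attributes much more tightly.
    #include <GL/glew.h>

    struct GBuffer {
        GLuint fbo = 0;
        GLuint albedo = 0;   // RGBA8: surface color
        GLuint normal = 0;   // RGBA16F: view-space normals
        GLuint depth = 0;    // depth attachment, sampled again later
    };

    GBuffer createGBuffer(int width, int height) {
        GBuffer g;
        glGenFramebuffers(1, &g.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, g.fbo);

        auto makeTexture = [&](GLuint& tex, GLint internalFormat,
                               GLenum format, GLenum type) {
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height,
                         0, format, type, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        };

        makeTexture(g.albedo, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, g.albedo, 0);

        makeTexture(g.normal, GL_RGBA16F, GL_RGBA, GL_FLOAT);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                               GL_TEXTURE_2D, g.normal, 0);

        makeTexture(g.depth, GL_DEPTH_COMPONENT24,
                    GL_DEPTH_COMPONENT, GL_FLOAT);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, g.depth, 0);

        // The geometry pass writes to both color attachments at once.
        const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0,
                                       GL_COLOR_ATTACHMENT1 };
        glDrawBuffers(2, drawBuffers);

        glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
        return g;
    }

The sticking point, as I understand it, is that the default framebuffer's back buffer can't also be bound as a texture while it's the render target, which is presumably why deferred renderers draw into their own FBO and then sample those textures in the lighting pass. Please correct me if that's wrong.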
Also, since I'm here: frankly, I'm not sure whether deferred shading is right for me. Pre-baked shadow/light maps almost always look better than anything you can do even with deferred shading, but I'm not sure I can take advantage of them. Most of my significant lights are dynamic (flickering flames, the sun moving across the sky, the player carrying a lantern, etc.), and I'm going to want to simulate radiosity from those lights somehow. How efficient are lightmaps when dynamic lights are cast on dynamic objects? My guess is: not very. And how difficult is it to calculate light positions for radiosity in a deferred shader?
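
For reference, here's my rough mental model of how a deferred point-light pass would consume those buffers; the GLSL below (stored as a C++ string for loading) is only a sketch, and the uniform names like uAlbedo, uNormal, uDepth, and uLightPos are my own placeholders:

    // Sketch of a deferred point-light pass: reconstruct view-space
    // position from depth, then shade one light from the G-buffer.
    const char* kPointLightFrag = R"(
    #version 330 core
    uniform sampler2D uAlbedo;        // G-buffer color
    uniform sampler2D uNormal;        // G-buffer view-space normals
    uniform sampler2D uDepth;         // G-buffer depth
    uniform mat4 uInvProjection;      // to reconstruct view-space position
    uniform vec3 uLightPos;           // light position in view space
    uniform vec3 uLightColor;
    uniform float uLightRadius;

    in vec2 vUV;
    out vec4 fragColor;

    void main() {
        // Rebuild the view-space position of this pixel from its depth.
        float depth = texture(uDepth, vUV).r;
        vec4 clip = vec4(vUV * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 viewPos4 = uInvProjection * clip;
        vec3 viewPos = viewPos4.xyz / viewPos4.w;

        vec3 normal = normalize(texture(uNormal, vUV).xyz);
        vec3 albedo = texture(uAlbedo, vUV).rgb;

        // Lambert term, attenuated linearly by distance to the light.
        vec3 toLight = uLightPos - viewPos;
        float dist = length(toLight);
        float atten = clamp(1.0 - dist / uLightRadius, 0.0, 1.0);
        float ndotl = max(dot(normal, toLight / dist), 0.0);

        fragColor = vec4(albedo * uLightColor * ndotl * atten, 1.0);
    }
    )";

If that model is right, dynamic lights just mean updating uLightPos each frame and drawing a cheap quad or light volume per light, which is exactly what makes deferred tempting to me. Whether that extends cleanly to radiosity-style bounce lighting is the part I'm unsure about.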
Transparency is an issue too, though it's also an issue in a forward renderer (just less so), and it'll probably be another five years before there's an efficient order-independent transparency technique that works in a deferred renderer (yes, I know about dual depth peeling; the keyword is "efficient").
Anti-aliasing isn't an issue for me; I'd just as soon use a post-process technique like FXAA or SMAA.
Also: Baked Ambient Occlusion maps, or SSAO?