
View Full Version : deferred render: depth



ravage
12-02-2010, 06:13 PM
Is storing depth as pos.z/pos.w good enough to reconstruct the position (in world space)? I'm wondering if there is another way, or if someone can give me an idea why this is happening.

If I'm not too close everything seems fine, but the closer I get, the more the position gets messed up.

In the pictures below, in my point-light shader, I reconstructed the position from the depth and output it as the color (it's not a position RT).


correct:
http://img507.imageshack.us/i/weirdpos0.jpg/

bad:
http://img507.imageshack.us/i/bweirdpos1.jpg/
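For reference, the reconstruction I'm attempting boils down to roughly this math (a pure-Python sketch with a made-up projection and test point, not my actual shader code):

```python
import math

def perspective(fovy_deg, aspect, zn, zf):
    # Right-handed OpenGL-style projection (camera looks down -Z).
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (zf + zn) / (zn - zf), 2.0 * zf * zn / (zn - zf)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert(m):
    # Plain Gauss-Jordan elimination on an augmented 4x8 matrix.
    a = [row[:] + [1.0 if r == c else 0.0 for c in range(4)]
         for r, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

P = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

# "G-buffer pass": project a view-space point and store depth = pos.z / pos.w.
view = [1.5, -0.75, -10.0, 1.0]       # a point 10 units in front of the camera
clip = mat_vec(P, view)
ndc = [c / clip[3] for c in clip]     # ndc[2] is the z/w depth value stored

# "Lighting pass": rebuild the position from screen position + stored depth.
rebuilt = mat_vec(invert(P), [ndc[0], ndc[1], ndc[2], 1.0])
rebuilt = [c / rebuilt[3] for c in rebuilt]

print(rebuilt[:3])  # ~= [1.5, -0.75, -10.0]
```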

Dark Photon
12-03-2010, 11:51 AM
Is storing depth as pos.z/pos.w good enough to reconstruct the position (in world space)?
Here's an excellent blog post that may answer your question:

* Attack of the Depth Buffer (Pettineo, 3/2010) (http://mynameismjp.wordpress.com/2010/03/22/attack-of-the-depth-buffer/)

ravage
12-03-2010, 04:55 PM
Thanks (you can never have too many resources on this topic). I ended up fixing my problem, too: I was actually doing the z/w in the vertex shader when I should have been doing it in the fragment shader.
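To illustrate why that mattered, as best I understand it: varyings get perspective-correct interpolation, but z/w depth interpolates linearly in screen space, so dividing per vertex and letting the interpolator do the rest gives the wrong value. A quick numeric sketch with made-up vertex values:

```python
# Two vertices of a triangle edge in clip space: (z, w) pairs.
z0, w0 = 8.0, 10.0   # far-ish vertex
z1, w1 = 0.2, 1.0    # near vertex
s = 0.5              # halfway across the edge in *screen space*

def persp_interp(v0, v1):
    # How the hardware interpolates a varying: lerp v/w and 1/w
    # linearly in screen space, then divide.
    num = (1 - s) * v0 / w0 + s * v1 / w1
    den = (1 - s) / w0 + s / w1
    return num / den

# Correct depth: z/w (NDC depth) is linear in screen space.
correct = (1 - s) * (z0 / w0) + s * (z1 / w1)

# Wrong: compute z/w per vertex and hand it to the interpolator as a varying.
per_vertex = persp_interp(z0 / w0, z1 / w1)

# Right: pass clip-space z and w down as varyings, divide per fragment.
per_fragment = persp_interp(z0, z1) / persp_interp(w0, w1)

# per_fragment matches `correct`; per_vertex is noticeably off.
print(correct, per_vertex, per_fragment)
```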

Dark Photon
12-03-2010, 06:10 PM
Ah! Right! :)

Do you plan to go MSAA with your deferred renderer? If so, consider that while you can re-do the work the pipeline is already doing for depth and store it off as an explicit fragment shader output in the G-buffer, this probably doesn't give you per-sample depth values, only per-pixel ones (since the fragment shader only runs once per pixel during rasterization).

Alternatively, consider just writing the Z-buffer to a depth texture and feeding that back in for potentially per-sample depth info. This also avoids duplicating the depth work the GPU is already doing during rasterization, and will reduce write bandwidth.

BionicBytes
12-03-2010, 06:19 PM
Alternatively, consider just writing the Z-buffer to a depth texture and feeding that back in for per-sample depth info. This also avoids duplicating the depth work the GPU is already doing during rasterization, and will reduce write bandwidth.


Could you expand on this a little more, please? I'm a little slow ;-)

Dark Photon
12-03-2010, 08:01 PM
Could you expand on this a little more please.
Sure thing. Not sure which part was unclear so if I don't hit it, let me know. And let me caveat: much of this is based on reading and inference not first-hand experience, so I can't tell you for sure that this is all 100% right. If anyone knows different, please do follow up!

With traditional MSAA, you have multiple samples per pixel (each of which is color, depth, and stencil), but you get one frag shader execution per pixel to populate them. So very likely an explicit G-buffer "depth channel" you write is going to have the same depth value assigned to every sample in a pixel touched by a given triangle.

However, I've seen it at least implied a few times that the actual fragment depth (for Z-buffering purposes) written to the real DEPTH buffer for a given triangle may vary per sample within a pixel. I don't think the spec either requires or disallows this. I'm skeptical, though, as I haven't validated it yet, and it seems it would fight against MSAA bandwidth compression and the concept of CSAA.

Even if that's true, I don't know whether the depth variation across subsamples within a pixel would ever make a "significant" difference in the resolved image, but in most cases I doubt it.

I'd suspect the real advantage of just reusing the actual depth buffer generated while rendering the G-buffer is that you avoid the bandwidth of writing a new 24+ bit depth value to the G-buffer (and the cycles spent computing it) when the pipeline is already computing and storing one in the DEPTH target for Z-buffer testing during G-buffer rasterization.
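And to tie it back to reconstruction: if you do feed the depth texture back in, the sampled value is window-space depth in [0,1] rather than NDC z, so it needs a remap first. A rough sketch of just that math (assumes the default glDepthRange(0,1) and a standard perspective projection; the near/far and sample values are made up):

```python
# Sampled value from the depth texture, window space in [0, 1]
# (assumes the default glDepthRange(0, 1)).
d = 0.991

# Remap window-space depth back to the NDC z the projection produced...
z_ndc = 2.0 * d - 1.0

# ...then it can go through the same inverse-projection reconstruction
# that a hand-written z/w G-buffer channel would. For a standard
# perspective matrix with near n and far f, view-space z comes out as:
n, f = 0.1, 100.0
z_view = 2.0 * f * n / (z_ndc * (f - n) - (f + n))

print(z_view)  # negative, since the camera looks down -Z
```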

ravage
12-04-2010, 05:48 PM
Actually, my framebuffer class already does that. I looked for a difference and didn't really see one.

Dark Photon
12-04-2010, 07:55 PM
I looked for a difference and I didn't really see one.
Interesting - thanks for the info.