Deferred render: depth

Is writing the depth as pos.z / pos.w good enough to reconstruct the position (in world space)? I guess I’m wondering if there is another way, or if someone can maybe give me an idea why this is happening.

If I’m not too close everything seems fine, but the closer I get, the more the position gets messed up.

In the pictures below, in my point-light shader I reconstructed the position from the depth and output the position as the color; it’s not a position RT.

correct:

bad:

Here’s an excellent blog post that may answer your question:

Thanks (you can never have too many resources on this topic), and I ended up fixing my problem too. I was actually doing the z/w division in the vertex shader when I should have been doing it in the frag shader.
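For anyone who hits the same thing, here’s a minimal GLSL sketch of the fix (the varying name is mine, purely illustrative). The key point is that z/w is non-linear across a triangle, so dividing per-vertex and letting the result interpolate gives increasingly wrong depths as geometry gets close to the camera; divide per-fragment instead:

```glsl
// Vertex shader: compute the clip-space position, but do NOT divide by w here.
varying vec4 vClipPos;   // illustrative name, not from the original shaders

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vClipPos = gl_Position;   // pass the raw clip-space coords along
}
```

```glsl
// Fragment shader: do the perspective division per-fragment, after interpolation.
varying vec4 vClipPos;

void main() {
    float depth = vClipPos.z / vClipPos.w;           // correct place for z/w
    gl_FragColor = vec4(depth, depth, depth, 1.0);   // depth into the G-buffer
}
```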

Ah! Right! :slight_smile:

Do you plan to go MSAA with your deferred render? If so, consider that while you can re-do the work the pipeline is already doing for depth and store it off as an explicit frag shader output in the G-buffer, this probably doesn’t give you per-sample depth values, only per-pixel (as the frag shader only runs once per pixel during rasterization).

Alternatively, consider just writing the Z-buffer to a depth texture and feeding that back in for potentially per-sample depth info. This also avoids duplicating the depth work the GPU is already doing during rasterization, and it will reduce write bandwidth.
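In shader terms, the lighting pass would then sample the depth attachment from the G-buffer pass directly, instead of a hand-written depth channel. A rough sketch, with uniform names made up for illustration:

```glsl
// Lighting-pass fragment shader: read back the depth the G-buffer pass
// already produced, instead of a separately written depth channel.
uniform sampler2D u_depthTex;   // the G-buffer pass's depth attachment (assumed name)
uniform vec2 u_screenSize;      // viewport size in pixels (assumed name)

void main() {
    vec2 uv = gl_FragCoord.xy / u_screenSize;
    float depth = texture2D(u_depthTex, uv).r;   // hardware depth in [0,1]
    // ... reconstruct the position from 'depth' and shade the light ...
    gl_FragColor = vec4(vec3(depth), 1.0);       // placeholder visualization
}
```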

Could you expand on this a little more, please? I’m a little slow :wink:

Sure thing. Not sure which part was unclear, so if I don’t hit it, let me know. And let me add a caveat: much of this is based on reading and inference, not first-hand experience, so I can’t tell you for sure that this is all 100% right. If anyone knows differently, please do follow up!

With traditional MSAA, you have multiple samples per pixel (each of which holds color, depth, and stencil), but you get one frag shader execution per pixel to populate them. So very likely an explicit G-buffer “depth channel” you write is going to have the same depth value assigned to every sample in a pixel touched by a given triangle.

However, I’ve seen it at least implied a few times that the actual fragment depth (for Z-buffering purposes) written to the real DEPTH buffer for a given triangle may vary per sample within a pixel. I don’t think the spec disallows or requires this. I’m skeptical, though, as I haven’t validated it yet, and it seems like it would work against MSAA bandwidth compression and the concept of CSAA.

Even if it’s true, I don’t know whether those per-sample depth differences within a pixel would ever make a “significant” difference in the resulting resolved image, but in most cases I doubt it.

I’d suspect the real advantage of just using the actual depth buffer generated when rendering the G-buffer is not adding the bandwidth of writing a new 24+ bit depth value to the G-buffer (and not wasting the cycles computing it) when the pipeline is already computing and storing it in the DEPTH target for Z-buffer depth testing during G-buffer rasterization.
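For completeness, reconstructing the world-space position from that hardware depth might look something like this. This is a sketch only: the inverse view-projection matrix and the other uniform names are assumptions, not taken from anyone’s actual code, and it assumes OpenGL’s default [0,1] depth range:

```glsl
// Reconstruct the world-space position from the hardware depth buffer.
uniform sampler2D u_depthTex;    // depth attachment from the G-buffer pass (assumed)
uniform mat4 u_invViewProj;      // inverse of the view-projection matrix (assumed)
uniform vec2 u_screenSize;       // viewport size in pixels (assumed)

void main() {
    vec2 uv = gl_FragCoord.xy / u_screenSize;
    float depth = texture2D(u_depthTex, uv).r;        // depth in [0,1]

    // Map screen UV and depth back to normalized device coordinates in [-1,1].
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // Undo the projection and view transforms, then the perspective division.
    vec4 worldPos = u_invViewProj * ndc;
    worldPos.xyz /= worldPos.w;

    gl_FragColor = vec4(worldPos.xyz, 1.0);           // visualize as color
}
```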

Actually, my framebuffer class already does that. I looked for a difference and didn’t really see one.

Interesting - thanks for the info.