Blending with depthbuffer?

I remember back in the old days when I used Glide that you could use the depth buffer as a blend factor when blending. I never got much use out of it back then, but I recently had the thought that it might be useful for volumetric fogging. Say you want to do fog along the floor: you'd only need to draw a quad and blend it with the background, using the depth value of the quad minus the depth value in the Z-buffer as the blend factor.
This way you could do very complex fog volumes too, just draw the outlines of the volume.
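Roughly, the blend I have in mind would look something like this (just a sketch; k is a made-up constant that scales the depth difference into a fog density, Cfog is the fog color, Cdst is whatever is already in the framebuffer, and no such blend factor exists in OpenGL today):

    f = clamp(k(Zquad - Zbuffer), 0, 1)
    Cout = fCfog + (1-f)Cdst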

Maybe useful for shadow maps too?

Could perhaps be, though I'm not sure how. What did you have in mind?

I thought the depth values weren't linear, so they only exist to check whether one fragment is in front of another. If that's not the case, it would definitely be cool. But this would mean concurrent reads and writes to the Z-buffer, multiple times per pixel, wouldn't it? I've heard the bandwidth is already too tight for that.

Well, they could be converted back to a linear form.
The depth value is going to be read anyway (if you do depth testing), so using it shouldn't impact performance. However, the way many graphics chips are designed today, you'd perhaps need to rearrange the pixel pipeline so that the depth value is read at an earlier stage, instead of doing the depth test at the end.

Suffice it to say that it would definitely have major pipeline implications.

  • Matt

Originally posted by Humus:
I remember back in the old days when I used Glide that you could use the depthbuffer as a blend factor when blending.

Glide does not allow you to use the depth buffer as a blend factor.
But source alpha can be calculated from fragment Z (the high 8 bits of it).

Indeed, Glide has some useful stuff that is not available in OpenGL.
It has some support for detail textures, LOD blending in the texture combiner, …
It has better support for multipass rendering with fog:

  1. It can use the pre-fog color (the fragment color before fog is applied) as the destination blending factor.
  2. The fog mode control allows you to apply the full fog equation or either of its two components; this can be emulated with register combiners (see the example after the equations):
    Cout = (1-f)Cfr + fCfog
    Cout = (1-f)Cfr
    Cout = fCfog
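Presumably the point of the two component modes is multipass fog: render the scene once with the (1-f)Cfr mode, then render it again with the fCfog mode and additive framebuffer blending, and the two passes add up to the full fog equation:

    pass 1:  framebuffer = (1-f)Cfr
    pass 2:  framebuffer = framebuffer + fCfog = (1-f)Cfr + fCfog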

Originally posted by mcraighead:
Suffice it to say that it would definitely have major pipeline implications.

  • Matt

You're essentially saying that it would be hard to implement in hardware? I have never fully understood why Z testing must, or should, be at the end of the pipeline. To me it would make much more sense to have it before texturing and so on. Could you give me any insight into this?

Originally posted by Serge K:
Glide does not allow you to use the depth buffer as a blend factor.
But source alpha can be calculated from fragment Z (the high 8 bits of it).

Indeed, Glide has some useful stuff that is not available in OpenGL.
It has some support for detail textures, LOD blending in the texture combiner, …
It has better support for multipass rendering with fog:

  1. It can use the pre-fog color (the fragment color before fog is applied) as the destination blending factor.
  2. The fog mode control allows you to apply the full fog equation or either of its two components (this can be emulated with register combiners):
    Cout = (1-f)Cfr + fCfog
    Cout = (1-f)Cfr
    Cout = fCfog

Yes, you’re right, I got it wrong.
I still think blending with the depth buffer would be very useful, though.

I didn’t say hard, I just said that it would constrain how the pipeline is set up – force you into making a certain set of design decisions.

  • Matt

Well, that could be said about any feature. Would adding support for blending with the depth buffer add more constraints than features generally do?

Yes, early Z reads sound great at first but have a large problem in practice: synchronizing Z reads with Z writes.

  • Matt

I see. I guess that's the reason you can't use the framebuffer as an input to the texturing units; it would be about the same thing, right?

Anyway, if we dropped the ability to blend with the Z-buffer in the texturing units, and could only blend with it the way we do with the framebuffer, it would work quite well, right?
I looked at this model of a pixel pipeline (well, it's a Radeon, but I guess it's quite similar between vendors), and as it looks there, the depth value would be available by the time you reach the blending stage: http://www.ati.com/na/pages/resource_centre/dev_rel/sdk/RadeonSDK/Html/Info/RadeonPixelPipeline.html

Well, you can texture from the framebuffer, but only if the region you are writing to is nonoverlapping with the region you are texturing from. (It’s actually worse for texturing from the framebuffer because if the regions are overlapping, you are actually destroying the input data as you render; it’s more than just a synchronization hazard.)

I doubt you actually want the depth value to show up in the blender. After all, if you want, say, volumetric fog, what you really want is a blending factor of e^(-d*(z_fragment - z_buffer)), right? Grafting something like this onto the blenders is rather hackish; it would make a lot more sense to simply make the fragment Z and the buffer Z available in a pixel shader and do the calculation yourself.

It’s worse, however, because the Z’s are at minimum 16-bit and often 24-bit values.

And then it gets even worse: that formula I gave is wrong, because it assumes that changes in Z correspond linearly to distances from the eye; in fact, they correspond nonlinearly to depths (not radial distances) from the eye.
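For what it's worth: assuming a standard perspective projection with near and far plane distances Znear and Zfar, and a window-space depth Zwin in [0,1], eye-space depth can be recovered as

    Zeye = Znear*Zfar / (Zfar - Zwin*(Zfar - Znear))

so any formula like the one above would have to be evaluated on depths linearized this way, not on the raw Z-buffer values.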

In any case, the diagram you pointed to is definitely massively oversimplified; it’s just a standard OpenGL pixel pipeline diagram. In the real world, it’s far messier, because a naive implementation of the OpenGL pixel pipeline doesn’t scale well to a high-performance renderer.

We could very well provide you a way to bind your depth buffer as a texture, and you could get Z’s inside the combiners (screen-space texgen). However, for the previously mentioned synchronization reasons, those Z’s could not in any way be guaranteed to be correct.

  • Matt

Hmm … I realize that there are quite a few problems to solve to get it to work.

Anyway, if there is a way to get a workable solution where you could effectively get the difference in distance between the current and the previous pass, I think such an approach would be far better for volumetric fogging than the current techniques of dynamically creating fog textures or projecting static fog textures onto surfaces, especially as polygon counts increase. You could then also do much more lifelike animations of smoke, and you could add an animated texture on top of the fog to enhance it further.

If you want to do volumetric fog using the depth buffer, you can simulate it.

Render everything with an alpha texture using the linear texgen mode. The alpha will correspond to the distance from the camera. Then render a textured quad over the screen, using the destination alpha as the blending factor.

It uses an extra pass, but it would work.

You can even do it without destination alpha, but it would take yet another pass.
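Roughly, the destination-alpha version could be set up like this (just a sketch, not tested; rampTex, fogFar, drawScene() and drawFullscreenQuad() are placeholder names, and it assumes a pixel format with destination alpha plus a 1D GL_ALPHA texture whose values ramp linearly from 0 to 255):

/* Pass 1: write "distance from camera" into destination alpha.
   Set the eye plane with an identity modelview so it really is in eye space. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);   /* only touch alpha */
glBindTexture(GL_TEXTURE_1D, rampTex);
glEnable(GL_TEXTURE_1D);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
GLfloat plane[4] = { 0.0f, 0.0f, -1.0f / fogFar, 0.0f };  /* s = -z_eye / fogFar */
glTexGenfv(GL_S, GL_EYE_PLANE, plane);
glEnable(GL_TEXTURE_GEN_S);
drawScene();                       /* same geometry as the normal color pass */
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_1D);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Pass 2: blend a fog-colored quad over the whole screen,
   weighted by the distance stored in destination alpha. */
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
glDepthMask(GL_FALSE);
drawFullscreenQuad();              /* flat quad in the fog color */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);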

j

Well, that wouldn’t give the same effect.
I've thought about using alpha as depth, but I'd need a blending mode along the lines of glBlendFunc(GL_DST_ALPHA_MINUS_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA_PLUS_SRC_ALPHA), which doesn't exist.
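With the scene's depth written into destination alpha on an earlier pass and the fog surface's depth arriving in source alpha, that hypothetical mode would work out to

    f = Adst - Asrc
    Cout = fCsrc + (1-f)Cdst

so the weight of the fog color would be the depth difference between the fog surface and whatever is behind it, which is what the volumetric effect needs.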

You are right, it wouldn’t be able to handle differences in depth. So it wouldn’t be able to do true volumetric fog.

What it would be good for is linear fog that looks a bit more exciting than standard, single color fog.

j

I really don’t think there’s any good enough reason not to implement such functionality. I consider it a very basic, if not essential, feature that all hardware lacks at present.
If anything sounds like a hack, it's the ridiculous methods we're stuck with now.
Fog aside, the limitations of blending in general are absurd.

I’m certain it could be implemented without hurting any other process in the pipeline.

HVs, GET IT TOGETHER!!! WAKE UP!!!