Is it possible to modify the Z-buffer structure?

Hello,

I would like to write a real-time subsurface scattering algorithm.
With this method I need to keep the color and depth information of every fragment that arrives at the same screen coordinates. So, instead of discarding a fragment when its depth is greater than the stored one, I would like to keep it, as in a stack.
Moreover, I would like to store other information for each fragment in addition to its color and depth.

Do you think it is possible to do that?

Thank you.

It is not possible to modify the structure of the framebuffer in that way.

It is not the framebuffer, it is the depth buffer! ^^
And if it is not possible, why?

The depth buffer is part of the framebuffer. As you mentioned in your first post, you want to store multiple color & depth values.

Something similar was discussed recently. You want to create a stack for each pixel on the screen? No GPU is constructed this way.
It would be possible to create a GPU that supports a fixed-size stack of depths and colors. For a stack depth of 8, such a GPU would work 4x slower, since in the average case it would have to move half of the stack. I don’t see a reason why vendors should turn everything upside down in their GPUs to support just this one feature, which would be rarely used, especially since it is possible to achieve similar functionality on current hardware.

On current hardware you can do this:

  1. Render everything to the first layer (a layer is a color texture + a depth texture)
  2. Render everything again to the second layer, but discard fragments whose depth is less than or equal to the depth stored in the first layer (see the shader sketch after this list)
  3. Render to further layers by repeating step 2 until you have “enough” layers
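
Here is a minimal sketch of the discard test from step 2, as an old-style GLSL fragment shader stored in a C string (the uniform names prevDepth and screenSize are illustrative, not from any particular source):

  // GLSL fragment shader for step 2: keep only fragments strictly behind
  // the depth recorded in the previous layer.
  const char* peelFragmentShader =
      "uniform sampler2D prevDepth;  // depth texture of the previous layer\n"
      "uniform vec2 screenSize;      // viewport size in pixels\n"
      "void main()\n"
      "{\n"
      "    float prev = texture2D(prevDepth, gl_FragCoord.xy / screenSize).r;\n"
      "    if (gl_FragCoord.z <= prev)\n"
      "        discard;              // already captured by an earlier layer\n"
      "    gl_FragColor = gl_Color;  // or whatever shading you need\n"
      "}\n";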

You could use an occlusion query in step 2 to find out how many pixels were drawn to a layer.
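
For illustration, a minimal sketch of such a query using the core GL 1.5 entry points (renderLayer is a hypothetical placeholder for drawing one peel pass):

  // Count how many samples passed the depth test while drawing a layer.
  GLuint query;
  glGenQueries(1, &query);

  glBeginQuery(GL_SAMPLES_PASSED, query);
  renderLayer();                        // hypothetical: draw one peel pass
  glEndQuery(GL_SAMPLES_PASSED);

  GLuint samplesPassed = 0;
  glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
  if (samplesPassed == 0) {
      // Nothing reached this layer - all layers have been peeled.
  }
  glDeleteQueries(1, &query);

Note that reading GL_QUERY_RESULT blocks until the result is available, so in practice you would check GL_QUERY_RESULT_AVAILABLE first or read the result a frame later.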

k_szczech, thanks for your reply.

I don’t see why it would be 4x slower with a stack depth of 8 for each Z-buffer pixel. Instead of discarding the information, we keep it in another memory location, allocated beforehand, whose size is defined by the stack depth…

I didn’t know we could render a separate layer according to depth buffer values.

But I don’t see how it could work without a stack…
Assuming we want to render to the third layer, how will OpenGL find the third depth value if the fragments don’t arrive sorted from nearest to furthest? It is necessary to save all depth values in order to know which of them is the third one… So it is this kind of stack that I need…

Moreover, I don’t know what an occlusion query is.

Thank you.

I don’t see why it would be 4x slower with a stack depth of 8 for each Z-buffer pixel
Most likely because the depth of every fragment you render would have to be compared with the values currently in its depth stack. And if you already have 8 values on the stack, and the new fragment turns out to be on top, then everything on the stack must be moved down by one position. The GPU would need to make log2(stack_depth) depth tests, and copy stack_depth/2 elements on average, for every fragment drawn.
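
To make that cost concrete, here is a minimal CPU-side sketch of such a sorted, fixed-size per-pixel depth stack (the DepthStack type is purely hypothetical; no GPU exposes anything like it):

  #include <algorithm>

  // Hypothetical per-pixel depth stack, kept sorted nearest-first.
  // Inserting costs a binary search (log2(capacity) tests) plus a shift
  // of everything behind the insertion point: capacity/2 copies on average.
  struct DepthStack {
      static const int capacity = 8;
      float depth[capacity];
      int count = 0;

      void insert(float d) {
          // Binary search for the insertion point: log2(capacity) tests.
          float* pos = std::lower_bound(depth, depth + count, d);
          int tail = static_cast<int>(depth + count - pos);
          if (count == capacity) {
              if (pos == depth + capacity) return;  // furthest of all: drop it
              --tail;                               // drop current furthest value
          } else {
              ++count;
          }
          // Shift the tail down one slot -- this is the expensive part.
          std::copy_backward(pos, pos + tail, pos + tail + 1);
          *pos = d;
      }
  };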

But I don’t see how it could work without a stack…
Assuming we want to render to the third layer, how will OpenGL find the third depth value if the fragments don’t arrive sorted from nearest to furthest? It is necessary to save all depth values in order to know which of them is the third one… So it is this kind of stack that I need…
I described this in my previous post - each layer contains a color texture and a depth texture. Perhaps I should add that in steps 1 and 2 you clear the depth buffer.
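
To be explicit, a rough sketch of that per-layer loop using EXT_framebuffer_object (renderScene and the texture setup are hypothetical placeholders; only the GL calls are real):

  // One FBO, re-targeted at each layer's color and depth textures.
  const int numLayers = 4;              // however many is "enough"
  GLuint fbo, colorTex[numLayers], depthTex[numLayers];
  // ... create fbo and the textures here ...

  for (int i = 0; i < numLayers; ++i) {
      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
      glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                GL_TEXTURE_2D, colorTex[i], 0);
      glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                GL_TEXTURE_2D, depthTex[i], 0);

      // Clear this layer's depth (and color) before the pass.
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

      if (i > 0) {
          // Expose the previous layer's depth to the peeling shader.
          glBindTexture(GL_TEXTURE_2D, depthTex[i - 1]);
      }
      renderScene();                    // hypothetical scene draw
  }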

Moreover, I don’t know what an occlusion query is.
So why don’t you have a look at the specs for GL_EXT_occlusion_query?

You could also read some articles on order-independent transparency, because these deal with the same type of problem. Perhaps some solutions would give you some new ideas.

Ok thank you for this reply.

In fact I don’t want to make a real stack for each pixel; I just want to store any new fragment at the end of the stack without moving down the other elements.
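
In other words, an unsorted, append-only list per pixel. A tiny sketch of that idea, with hypothetical names (nothing on current hardware implements this for you):

  // Append-only per-pixel fragment list: new fragments go at the end and
  // nothing is shifted; any sorting happens later, in a resolve pass.
  struct Fragment { float depth; unsigned color; };

  struct FragmentList {
      static const int capacity = 8;
      Fragment frags[capacity];
      int count = 0;

      bool append(const Fragment& f) {
          if (count == capacity) return false;  // list full: fragment dropped
          frags[count++] = f;                   // O(1), nothing moves
          return true;
      }
  };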

I am going to take a look at GL_EXT_occlusion_query and order-independent transparency.

As far as I understand, I think you can still use the fragment shader to do the job and store all the needed depth values in a texture.

As far as I understand, I think you can still use the fragment shader to do the job and store all the needed depth values in a texture.
Not really - implementing a custom depth buffer would require reading from and writing to the same texture in a shader, which is not supported. Even if the shader didn’t do any kind of depth testing, it would still have to know which slots in the stacks are already used.

Thanks for the clarification, I didn’t know that.