Order Independent Transparency (framebuffer alpha)

I have a scene with a lot of transparent polys. I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to compute transparency. glBlendFunc(GL_SRC_ALPHA, GL_ONE) just adds colors, so it doesn’t look good.

In order to use blending correctly I should draw polygons in back-to-front order. But my scene is huge, so sorting polys eats my CPU time. I also can’t use BSP trees, because the scene is dynamic and the position of every poly changes each frame.

Is there any way to draw such a scene without sorting polys? Why not use framebuffer alpha in the blending equation?

On www.nvidia.com there is an article about “order independent transparency”. They use a “depth peeling” technique which is based on pixel shaders. I only have a GeForce2 MX, so I simply can’t use it.

I’ve also heard about the “ordering table” technique used to sort polygons (used on the PS2), but it’s not precise and sometimes polys are sorted incorrectly.

Please tell me how you implement blending in your projects.

Thanx, and sorry for my English, I write from Poland.

Damn, best post ever. As I read, every suggestion I thought of was explored by you in your next sentence and rejected. You pretty much understand the situation as it exists.

There is the saturate alpha blend that might interest you, or alpha-to-coverage masking (less interesting until MSAA is ubiquitous and nearly free at high sample counts).

But pretty much I think you’re screwed. Pesky zbuffer.
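For reference, the saturate blend is a one-liner to set up; here’s a minimal sketch, assuming a framebuffer that actually has destination alpha (an RGBA visual):

  #include <GL/gl.h>

  /* Saturate alpha blending: the source factor is min(As, 1 - Ad), so a
   * pixel keeps accepting color only until its destination alpha saturates
   * at 1. It approximates best with roughly front-to-back drawing, but it
   * degrades gracefully when the order is wrong. */
  void setupSaturateBlend(void) {
      glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  /* start with dst alpha = 0 */
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
  }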

How many of your polys are transparent? All of them, or just a part?

I’ve thought about all of those approaches for quite a while, too, and haven’t found a good way.

hehe, and congrats, you got dorbie!

Thanx. I’m not as good as it looks; the problem is just a good one.

Try these links:

Order Independent Transparency : http://developer.nvidia.com/attach/1209
Polygon sorting : http://www.gamedev.net/reference/articles/article675.asp

The way our project works is thus:

a) All non-transparent objects are rendered. All transparent objects are cached until later.

b) When rendering transparent objects, disable z-writes (but leave z-testing on), enable blending, and do this (a rough code sketch follows the list):

  1. Sort all objects (not polygons) by distance from the camera. (Should be a max of 500 for an average scene.) Objects are rendered from furthest to nearest according to the following steps.

  2. Objects really far away are rendered “as-is”. (i.e. they are not the main focus of the scene, so bad polygon ordering will not matter much.)

  3. Objects that are nearer are rendered by sorting each object’s polygons internally (i.e. rearranging the index array of each object). This works really well assuming no transparent objects intersect. (Or, if they do, the rendering error is not noticeable.)

  4. All objects that are REALLY close to the camera are dumped into a polygon soup that is sorted and rendered one polygon at a time. (VERY bad for state changes.)
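Here is a rough C++ sketch of those three tiers; everything in it (TransparentObject, the cutoff distances, the draw helpers) is hypothetical, engine-specific stuff:

  #include <algorithm>
  #include <cmath>
  #include <vector>

  struct Vec3 { float x, y, z; };

  struct TransparentObject {
      Vec3  center;     // world-space bounding-sphere center
      float viewDepth;  // distance to the eye, refreshed every frame
      // ... vertex/index buffers, material, etc.
  };

  // Hypothetical engine routines, assumed to exist elsewhere.
  void drawAsIs(TransparentObject*);                   // submit unchanged
  void sortIndicesBackToFront(TransparentObject*, const Vec3& eye);
  void addToPolygonSoup(TransparentObject*);           // defer to the soup
  void drawSortedPolygonSoup(const Vec3& eye);         // sort + draw soup

  static float distanceTo(const Vec3& a, const Vec3& b) {
      float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      return std::sqrt(dx * dx + dy * dy + dz * dz);
  }

  void renderTransparent(std::vector<TransparentObject*>& objs,
                         const Vec3& eye) {
      for (TransparentObject* o : objs)
          o->viewDepth = distanceTo(o->center, eye);

      // Step 1: sort whole objects, not polygons, furthest first.
      std::sort(objs.begin(), objs.end(),
                [](const TransparentObject* a, const TransparentObject* b) {
                    return a->viewDepth > b->viewDepth;  // back to front
                });

      const float FAR_CUTOFF  = 100.0f;  // assumed, scene-specific tuning
      const float NEAR_CUTOFF = 10.0f;

      for (TransparentObject* o : objs) {
          if (o->viewDepth > FAR_CUTOFF) {
              drawAsIs(o);                    // step 2: far, errors hidden
          } else if (o->viewDepth > NEAR_CUTOFF) {
              sortIndicesBackToFront(o, eye); // step 3: per-object index sort
              drawAsIs(o);
          } else {
              addToPolygonSoup(o);            // step 4: per-polygon sorting
          }
      }
      drawSortedPolygonSoup(eye);             // expensive: one poly at a time
  }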

It is possible to do order-independent transparency on a GeForce2 (i.e. using the stencil buffer), up to a maximum of 4 peels. However, this method is so expensive fill-rate-wise that it is probably not practical. (I think it takes 9 render passes for 4 depth peels.)

So until we have dual depth buffers or an A-buffer (I think that is the name?) in hardware, you are stuck with these hacks.

Thanx again!
Sorting objects is a really good idea.

Rendering all opaque geometry first, then depth-peeling only the transparent layers, works well for depth-peeling-based OIT as well. As far as I’m aware, though, that optimization hasn’t been demonstrated anywhere yet.
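A hedged sketch of what that could look like per frame, assuming a card with depth textures and a peel fragment program; every routine named here is a hypothetical placeholder, not a real API:

  // Hypothetical engine routines -- placeholders only.
  void renderOpaque();                     // normal pass: fills color + depth
  void saveDepthBuffer();                  // keep the opaque depth around
  void restoreDepthBuffer();               // reload opaque depth as z-buffer
  void clearPeelDepthToNear(int slot);     // "nothing peeled yet"
  void bindDepthTexture(int slot);         // previous layer's depth texture
  void renderTransparentWithPeelProgram(); // discards frags at/前 prev depth
  void copyDepthToTexture(int slot);
  void saveColorLayer(int layer);          // store this layer's RGBA
  void compositeLayersBackToFront();       // blend layers over the opaque

  const int NUM_PEELS = 4;

  void drawFrame() {
      renderOpaque();
      saveDepthBuffer();
      clearPeelDepthToNear(0);

      for (int i = 1; i <= NUM_PEELS; ++i) {
          restoreDepthBuffer();       // z-test kills frags hidden by opaque
          bindDepthTexture(i - 1);
          // The peel program discards fragments at or in front of the
          // previous layer; the ordinary depth test then keeps the nearest
          // survivor, i.e. exactly the i-th transparent layer.
          renderTransparentWithPeelProgram();
          copyDepthToTexture(i);
          saveColorLayer(i);
      }
      compositeLayersBackToFront();
  }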

LOL cass, having intersecting transparent and opaque objects, with the opaque rendered as standard and the transparent rendered with depth peeling, was going to be a Cg entry of mine last year.

However, while implementing it I decided I wasn’t really using Cg that much (just a depth texture comparison), so I was not likely to win and scrapped the entry.

It could be that you don’t need to sort polys every time the scene is rendered.
The classic way to speed things up is to skip the re-sort for a few frames.

Another way would be to sort polys per object and hold them in their own arrays. If the viewer moves enough, or if the object moves enough, you re-sort its polys.

You may have to combine multiple methods to get the desired FPS.
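One possible way to decide “moves enough”: keep the eye-to-object direction used for the last sort and re-sort only once it drifts past some angle. A sketch (the names and the ~10-degree figure are just assumptions to tune):

  #include <cmath>

  struct Vec3 { float x, y, z; };

  static Vec3 normalize(Vec3 v) {
      float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return { v.x / len, v.y / len, v.z / len };
  }

  static float dot(const Vec3& a, const Vec3& b) {
      return a.x * b.x + a.y * b.y + a.z * b.z;
  }

  // Re-sort only when the view direction toward the object has rotated
  // more than ~10 degrees since the polys were last sorted.
  bool needsResort(const Vec3& lastSortDir, const Vec3& currentDir) {
      const float COS_THRESHOLD = 0.985f;  // ~cos(10 deg), tune to taste
      return dot(normalize(lastSortDir), normalize(currentDir)) < COS_THRESHOLD;
  }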

You can usually sort the polys in an object in a “good enough” fashion, as a pre-process step, so they are mostly back-to-front no matter what the viewing direction. For convex objects, this can even be done perfectly.
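For the convex case you can even skip the sort at runtime entirely: draw the back faces first, then the front faces, and the order is right from any viewpoint. A minimal sketch (drawObject() is a hypothetical placeholder):

  #include <GL/gl.h>

  void drawObject(void);  /* hypothetical: submits the convex object */

  void drawConvexTransparent(void) {
      glEnable(GL_CULL_FACE);
      glEnable(GL_BLEND);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

      glCullFace(GL_FRONT);  /* pass 1: back-facing polys (farther half)  */
      drawObject();
      glCullFace(GL_BACK);   /* pass 2: front-facing polys (nearer half)  */
      drawObject();
  }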

That won’t solve inter-object merging though.

Of course, if your physics system is good, you wouldn’t let objects penetrate, so you wouldn’t have that problem except for globally very concave, transparent objects. So don’t make your transparent objects very concave :)

Edit: yes, Cass, I meant “very concave” :)

[This message has been edited by jwatte (edited 09-11-2003).]

jwatte,

Do you mean “don’t make your transparent objects very concave”?

Cass

Originally posted by sqrt[-1]:
LOL cass, having intersecting transparent and opaque objects, with the opaque rendered as standard and the transparent rendered with depth peeling, was going to be a Cg entry of mine last year.

However, while implementing it I decided I wasn’t really using Cg that much (just a depth texture comparison), so I was not likely to win and scrapped the entry.

I would have voted for your demo, sqrt[-1]!

Here’s a completely insane idea. You render and store the scene 256 times, once for each possible alpha value, using a pixel shader that treats alpha values at or above the current render index as completely opaque and values below it as completely transparent. You then average the pixels of all 256 renderings together to yield the final image to write to the screen. Obviously, this is about 256 times slower than any other technique, but would it work?
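For what it’s worth, the per-threshold cutoff wouldn’t even need a pixel shader; the fixed-function alpha test can do it, and the accumulation buffer can do the averaging. A sketch of that variant (drawScene() is a hypothetical placeholder):

  #include <GL/gl.h>

  void drawScene(void);  /* hypothetical: draws everything, alpha intact */

  void renderBy256Thresholds(void) {
      glDisable(GL_BLEND);
      glEnable(GL_ALPHA_TEST);
      glClear(GL_ACCUM_BUFFER_BIT);

      for (int i = 0; i < 256; ++i) {
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
          /* alpha >= threshold is fully opaque, everything else is dropped */
          glAlphaFunc(GL_GEQUAL, (i + 0.5f) / 256.0f);
          drawScene();
          glAccum(GL_ACCUM, 1.0f / 256.0f);  /* add 1/256 of this image */
      }
      glAccum(GL_RETURN, 1.0f);  /* write the averaged result to the screen */
  }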

>>>Here’s a completely insane idea. You render and store the scene 256 times, once for each possible alpha value,<<<

The alpha value of what? An object? A polygon? A vertex?

Also, you are assuming an integer (byte) value for alpha.

spacedog’s idea works perfectly… if you have 256 × 60 fps to spare.