Order-independent transparency

I am using OIT from NVIDIA’s code example (http://download.developer.nvidia.com/developer/SDK/Individual_Samples/DEMOS/OpenGL/oit_3x.zip).

For each layer, a shadow map stores the depth values if the GL_TEXTURE_COMPARE_FUNC_ARB GL_GREATER comparison returns true. That means fragments which are occluded by the previously peeled layer are drawn to the current layer. I use two layers, but the problem I have is independent of the layer count.
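For reference, the comparison setup looks roughly like this (a minimal sketch, not the demo's exact code; ztex_id is a placeholder for the depth texture's GL name, and a GL_GREATER compare func needs EXT_shadow_funcs on top of ARB_shadow):

// Depth texture holding the previously peeled layer. With compare mode
// on, a lookup returns the result of (fragment depth GREATER stored
// depth), so only fragments behind the previous layer pass.
glBindTexture(GL_TEXTURE_2D, ztex_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_GREATER);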

My problem is, I think, an inaccuracy. When I view the transparent texture from a distance of 10 units, it looks perfect. But when I come nearer, artifacts show up in the transparent area. I think these wrongly calculated pixels are of the background color (glClearColor, here black) and sometimes of the third layer, which I don't even want to render (the blue ones). Here are some screenshots, from far to near:

[img]http://www.woizischke.de/zische/far.jpg[/img]

[img]http://www.woizischke.de/zische/near.jpg[/img]

[img]http://www.woizischke.de/zische/more_near.jpg[/img]

[img]http://www.woizischke.de/zische/nearest.jpg[/img]

Has anyone experienced the same problem, or do you know how to fix it? There must be a way to get around that nasty triangle-sorting method without these inaccuracy errors.

Looks a lot like the classic depth error in shadow mapping. Try adding a bias to shift away from these imprecisions.

How do you add a bias to the depth values?
I tried to do it like this:

void cOrderIndependentTransparency::render_scene(bool peel, bool update_ztex, int l)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    glLoadIdentity();

    // Depth texture holding the previously peeled layer.
    m_ztex.bind();

    if (peel)
    {
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
        m_fp_peel.bind();
    }
    else
    {
        glDisable(GL_FRAGMENT_PROGRAM_ARB);
    }

    glActiveTextureARB(GL_TEXTURE0_ARB);

    // Render the scene via the registered callback.
    m_fpRender();

    glActiveTextureARB(GL_TEXTURE1_ARB);

    glDisable(GL_FRAGMENT_PROGRAM_ARB);

    glPopMatrix();

    glFinish();

    // Grab the color buffer into this layer's RGBA texture.
    m_rgba_layer[l].bind();
    glCopyTexSubImage2D(m_rgba_layer[l].target, 0, 0, 0, 0, 0, m_ScreenWidth, m_ScreenHeight);

    if (update_ztex)
    {
        // Copy the depth buffer into the depth texture, applying the bias
        // on the pixel-transfer path.
        m_ztex.bind();
        glPixelTransferf(GL_DEPTH_BIAS, m_bias);
        glCopyTexSubImage2D(m_ztex.target, 0, 0, 0, 0, 0, m_ScreenWidth, m_ScreenHeight);
    }

    // Note: this grows unconditionally every call, so the bias keeps
    // increasing across frames.
    m_bias += 0.06f;
}

But it doesn't work.

My first try was to render twice per layer: once for the color of the layer, then I translated the camera a bit backwards and rendered to the depth texture. The errors were gone, but since the viewer's position had changed, the transparent polygons were "smaller" on the screen, so the depth values were no longer at the same positions as the fragments. Hard to explain, but this didn't work either.

Originally posted by Zische:
How do you add a bias to the depth values?
GL_POLYGON_OFFSET_FILL?

How can I use glPolygonOffset for that?

EDIT: I found it out. It works great. Thank you!
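For reference, the basic pattern is something like this (a sketch; the factor/units values are placeholders to tune, not the exact ones I ended up with):

// Offset the depth values of the peeled geometry away from the eye so
// the GL_GREATER comparison doesn't fail on nearly equal depths.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(0.0f, 4.0f);   // placeholder values; tune per scene
// ... render the geometry whose depth goes into the depth texture ...
glDisable(GL_POLYGON_OFFSET_FILL);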

I've just loaded another scene, and now there is a problem at bigger distances to the transparent polygons. The greater the distance from the polygon to the camera, the bigger a clipped space appears just behind the polygon. It's hard to explain because English is not my native language, so I uploaded some screenshots, sorted from near to far:

This is how I currently add the bias:

 
for each triangle:
    [apply material]

    [calculate distance between the triangle and the camera's position]

    if (DistanceToTriangle < 20.0f)
        glPolygonOffset(0.0f, (20.0f - DistanceToTriangle) * 10.0f);
    else
        glPolygonOffset(0.0f, 0.0f);

    [render triangle]
 

I tried to add or subtract some bias on the non-transparent polygons via glPolygonOffset, but it either makes the transparent fragments black or doesn't change anything.

bump

Seems you have a problem with depth-buffer scale. Keep the depth range as small as possible, and the near plane as far away from the viewer as possible.
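For example (a sketch; the exact planes depend on the scene, the point is just that pushing the near plane out buys far more precision than pulling the far plane in):

// Depth precision is roughly proportional to znear/z, so a near plane
// at 1.0 instead of 0.01 gives about 100x more usable resolution.
gluPerspective(60.0,     // vertical field of view, degrees
               aspect,   // viewport width / height
               1.0,      // near plane: push this out as far as you can
               1000.0);  // far plane: pull this in as much as you can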

Personally, I use polygon offset only to add decals/details to models, where the "detail" geometry exactly coincides with the "general" geometry.
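Something like this (a sketch; drawWall/drawDecal stand in for your own rendering code):

// Base ("general") geometry first, drawn normally.
drawWall();

// The coplanar decal is pulled slightly toward the viewer so it wins
// the depth test without needing a real geometric offset.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
drawDecal();
glDisable(GL_POLYGON_OFFSET_FILL);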

So how do you do transparency?

Sorry for a very late addition.

Looking at those screenshots, and assuming (perhaps falsely) that the "net" is a mipmapped texture with an alpha channel, my guess is this is simply automatic mipmap generation.

If you have a texture with an alpha channel like this (with "holes" defined by alpha in the texture), you generally must do your own minification to get the expected result, and pay close attention to the alpha channel at every minification step.
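For example, one 2x2 minification step that weights the color by alpha might look like this (a CPU-side sketch with a made-up helper name; src and dst are tightly packed RGBA8):

// Average a 2x2 block into one texel, weighting RGB by alpha so that
// fully transparent "hole" texels don't bleed their color in.
void downsampleAlphaWeighted(const unsigned char* src, int srcW, int srcH,
                             unsigned char* dst)
{
    const int dstW = srcW / 2, dstH = srcH / 2;
    for (int y = 0; y < dstH; ++y)
    for (int x = 0; x < dstW; ++x)
    {
        float r = 0, g = 0, b = 0, a = 0;
        for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
        {
            const unsigned char* p = src + ((2*y + dy) * srcW + (2*x + dx)) * 4;
            const float pa = p[3] / 255.0f;
            r += p[0] * pa;  g += p[1] * pa;  b += p[2] * pa;
            a += pa;
        }
        unsigned char* q = dst + (y * dstW + x) * 4;
        // Undo the alpha weighting for the color; average the alpha itself.
        q[0] = (unsigned char)(a > 0.0f ? r / a : 0.0f);
        q[1] = (unsigned char)(a > 0.0f ? g / a : 0.0f);
        q[2] = (unsigned char)(a > 0.0f ? b / a : 0.0f);
        q[3] = (unsigned char)(a / 4.0f * 255.0f + 0.5f);
    }
}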

[EDIT: I'm an idiot :) ]

It seems the "net" is drawn before the terrain; otherwise the terrain, not the clear color, should take precedence.

Just draw all opaque stuff before non-opaque surfaces. When you draw the translucent surfaces, do not write to the depth buffer (you might even want to turn off depth testing, but that depends on your application). In general you must, however, z-sort them manually, so you're drawing back-to-front.
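In code the skeleton is roughly this (a sketch; the draw* functions stand in for your own rendering):

// 1) All opaque geometry first, with normal depth writes.
drawOpaqueGeometry();

// 2) Translucent surfaces afterwards: still depth-tested against the
//    opaque scene, but not writing depth themselves, and manually
//    sorted back-to-front.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTranslucentBackToFront();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);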

Originally posted by ZbuffeR:
Looks a lot like the classic depth error in shadow mapping. Try adding a bias to shift away from these imprecisions.
If you're using depth peeling on NVIDIA hardware, you should not need any bias. The depth calculation in the texture/shader is exactly the same as the one done for z. However, you cannot use multi-sampled render targets.

Zische, for surfaces like those in your screenshots (alpha-masked with antialiased outlines), you might be able to get away with a two-pass approach (sketched at the end of this post). For example, use the alpha test in a first pass to draw all (mostly) opaque pixels, writing depth, and then follow with the transparent pixels, not writing depth. The pixels considered opaque will be correctly depth-tested, and the lack of depth testing on the transparent pixels will likely not be noticeable. Obviously, you need to draw all opaque geometry and first passes before the second passes. Whether it looks good enough may depend on the nature of the texture; also important are the mipmap generation method and the threshold for the alpha test (you may want artists to fine-tune this for each texture).

This may not solve your problem, but it is a useful trick for antialiasing the alpha outlines of masked surfaces.
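A minimal sketch of that two-pass idea (drawMaskedSurfaces stands in for your own rendering, and 0.9 is just an example threshold):

// Pass 1: the (mostly) opaque texels. The alpha test keeps pixels whose
// alpha clears the threshold; these write depth like normal geometry.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GEQUAL, 0.9f);
glDepthMask(GL_TRUE);
drawMaskedSurfaces();

// Pass 2: the translucent fringe. The inverted alpha test skips the
// pass-1 pixels; blend the rest in without writing depth.
glAlphaFunc(GL_LESS, 0.9f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawMaskedSurfaces();

// Restore state.
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);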