Writing directly to the depth buffer..!

Hey all,

First of all, I don’t know whether this question should go in the “beginner” or “advanced” forum section, but I just put it here. Hope it’s OK.

I’ve been struggling with a little problem today.
I have this depth map which ranges from white (nearest to the screen) to black (deepest into the screen).
That is, the values range from 0 to 255.

So, what I want to achieve is to get this depth info into the “real” OGL depth buffer. As a start I just want to pretend that 255 (i.e. white) equals 0.0f units in OGL and 0 (i.e. black) equals 255.0f.

But I have some problems making that depth info go into the real depth buffer.
I’m trying to use the function glDrawPixels() but I don’t know how I should submit my pixels.
Currently I’m reading the depth map’s pixels using the glGetTexImage() function with the GL_RED flag specified (since R, G and B all equal the same value/color).
I’m then converting those values so that 256 becomes 0 and 0 becomes -256 (since I guess the depth buffer handles negative (-) values as into the screen… as usual).

Before posting anymore stuff of what I’ve tried (and thereby confusing you more than necessary), I’d like to hear what you would do in order to make this work!
I also need to use glRasterPos() in order to make glDrawPixels() work correctly, but I don’t know whether I should use glOrtho() mode when submitting the depth map or what…

Just to clear up:
Imagine an adventure game such as Monkey Island. Say I have a pre-rendered 2D image of a scene. In order to move my 3D models around in that scene without them being clipped incorrectly (since the 2D pre-rendered image is drawn in ortho mode “in front”), I’ve exported a depth map from 3dsmax containing info about how deep stuff is into the screen.
So, all I need to achieve is a way to get that depth map into the OGL depth buffer so e.g. the 3D models are clipped correctly.
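For illustration (not from the thread), a minimal sketch of the conversion step being described, assuming an 8-bit single-channel depth map where 255 is nearest. Note that OpenGL’s window-space depth runs from 0.0 (near) to 1.0 (far) under the default glDepthRange, so white has to map to a small value, not a negative one:

```c
#include <assert.h>
#include <stddef.h>

/* Convert an 8-bit depth map (255 = nearest, 0 = deepest) into the
 * [0,1] float range glDrawPixels expects for GL_DEPTH_COMPONENT,
 * where 0.0 is the near plane and 1.0 the far plane. */
void depth_map_to_floats(const unsigned char *map, float *out, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        out[i] = 1.0f - (float)map[i] / 255.0f;  /* white -> 0.0, black -> 1.0 */
}

/* The result would then be uploaded with something like:
 *   glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, out);
 * Caveat: this linear mapping ignores the perspective non-linearity
 * of the depth buffer, which comes up later in the thread. */
```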

Please post all your ideas and, hopefully, solutions to this problem of mine.
It’s GREATLY appreciated!

Thanks in advance,

Try these threads:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010024.html
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/010267.html

OH… damn… sorry for not searching the board first

And HUGE thanks for those links… looks just like what I’m interested in!

So I should actually use the GL_POINTS method presented by zeckensack?
Or should I go for that ARB_buffer_region extension??

Thanks a bunch so far man…heh

[This message has been edited by Halloko (edited 09-29-2003).]

If you’re not going to be updating all of your depth buffer very often (as your monkey island example suggests), then glDrawPixels will be your best option for the following reasons:

  1. It uses less memory than that vertex array malarkey.
  2. It will be much faster on older cards than that vertex array malarkey.
  3. It will be more compatible with older cards than that vertex array malarkey (glDrawPixels is supported in GL 1.0, while vertex arrays only became core in GL 1.1).

I guarantee that the 100MHz-Matrox brigade will not thank you for drawing each pixel as a transformed point.

ARB_buffer_region is actually only supported on Detonator cards (TNT, TNT2, GF, GF2, GF3, GF4, GFFX).
ATI does not support it, so you have to use a pbuffer together with the WGL_ARB_make_current_read extension (but it’s only fast enough on Radeon 9500 and up).
KTX_buffer_region is the most unstable extension around… don’t touch it.

For all other cards I suggest the glDrawPixels method. This works fine as long as your objects are not too many and not too big on the screen.

Hi guys… and thanks for your answers.

Originally posted by knackered:
If you’re not going to be updating all of your depth buffer very often (as your monkey island example suggests), then glDrawPixels will be your best option…

Well… I guess I will be updating it every frame because I plan on having e.g. 3D models moving around in the pre-rendered scene.
Would glDrawPixels() then be ok?

If so, do you happen to have a link or doc on how to write to the depth buffer? I actually tried it before posting this question, but I just get some weird lines on my screen instead.

Hope you’re able to help me… thanks :smiley:

Well, you don’t have to clear the depth buffer every frame - your models can be drawn with the depth test enabled, but with depth writes disabled. That way, your models won’t disturb your prerendered depth buffer. Of course, you then won’t get the benefit of having a depth test between your models, but just make sure they’re convex and don’t intersect each other, sort them by depth in your app, and you should be fine.

Another approach that would allow you to have depth tests between models would be to calculate the projected rectangular region your models will take up on screen, save a copy of the depth buffer in that region using glReadPixels, draw your model, then restore the region with glDrawPixels. Like the old days of 8-bit sprite animation.
So long as your models don’t take up much screen space, it would work out much faster than restoring the whole depth buffer each frame.
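A sketch of that save/restore idea (not from the thread): the GL calls themselves need a live context, so they appear only as comments, and the clamping helper is an assumption about how the projected region might be kept legal for glReadPixels/glDrawPixels:

```c
#include <assert.h>

/* A screen-space rectangle, in pixels. */
typedef struct { int x, y, w, h; } Rect;

/* Clamp a projected model rectangle to the viewport so the depth
 * save/restore region never reads or writes outside the framebuffer. */
Rect clamp_to_viewport(int x, int y, int w, int h, int vp_w, int vp_h)
{
    Rect r;
    r.x = x < 0 ? 0 : x;
    r.y = y < 0 ? 0 : y;
    int x1 = x + w > vp_w ? vp_w : x + w;
    int y1 = y + h > vp_h ? vp_h : y + h;
    r.w = x1 > r.x ? x1 - r.x : 0;
    r.h = y1 > r.y ? y1 - r.y : 0;
    return r;
}

/* Usage sketch (needs a GL context and a big enough `saved` buffer):
 *   Rect r = clamp_to_viewport(px, py, pw, ph, 800, 600);
 *   glReadPixels(r.x, r.y, r.w, r.h, GL_DEPTH_COMPONENT, GL_FLOAT, saved);
 *   ... draw the model with depth writes enabled ...
 *   ... set the raster position to (r.x, r.y) ...
 *   glDrawPixels(r.w, r.h, GL_DEPTH_COMPONENT, GL_FLOAT, saved);
 */
```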

Originally posted by knackered:
Well, you don’t have to clear the depth buffer every frame - your models can be drawn with the depth test enabled, but with depth writes disabled. That way, your models won’t disturb your prerendered depth buffer.

Ahhh… right… of course that’s what I’ll do… it heavily decreases depth buffer writes… nice

But you don’t happen to have an example of using glDrawPixels() for the depth buffer?
Should I submit the pixels in ortho or perspective mode and should I clamp the values to [0,1] before submitting the pixels??

Huge thanks so far

Originally posted by knackered:
<…>
Another approach that would allow you to have depth tests between models would be to calculate the projected rectangular region your models will take up on screen, and save a copy of the depth buffer in that region using glReadPixels, draw your model, then restore the region with glDrawPixels.

glReadPixels wouldn’t be needed. The background depth should already be known. It can just be kept in a system memory buffer.
Otherwise a good idea

But, please guys, you got any examples of the glDrawPixels() for the depth buffer??
Because I get some weird results :expressionless:

Should I use ortho or perspective when drawing? And should I clamp the values to [0,1] beforehand?

Thanks in advance…!

[This message has been edited by Halloko (edited 09-30-2003).]

Float values must be in the range [0…1]; integer values use their full numeric range (i.e. [0…65535] for ushorts).

float* stuff = <...>; // some array of floats
glDrawPixels(640, 480, GL_DEPTH_COMPONENT, GL_FLOAT, stuff);

The trickiest part is the raster position. Raster positions are transformed by the current modelview and projection matrices (just like vertices) and become invalid if the post-transform result is outside of clip space.

For depth writes, raster position z is mostly irrelevant, you only need to make sure it doesn’t fall outside the near/far planes. These are exactly the same requirements as for glDrawPixels color transfers, and they are outlined in the FAQ.

That being said, I can’t recommend a raster position for your specific needs, because that depends on your matrix stuff. A safe approach to drawing to the bottom left corner would be

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glRasterPos2f(-1.0f,-1.0f);

glDrawPixels(<...> );   //see above

glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

That way, your models won’t disturb your prerendered depth buffer. Of course, you then won’t get the benefit of having a depth test between your models, but just make sure they’re convex and don’t intersect each other, and sort them by depth in your app and you should be fine.

This will only really work provided you cull back faces. Even then, consider for example an arm in front of the chest: if the arm gets rendered first, the chest will obscure the arm, as it’s rendered on top.

You’d have to manually sort each poly for this to work, and even then it wouldn’t be pixel perfect.

Hence the ‘convex’ stipulation in my answer.

Hey guys…

First of all thanks for your great replies. It seems I’ve finally managed to write my data to the depth buffer correctly

There’s a small problem, however!
Today I did a little test on how depth values are set in the depth buffer. That is, I made a really huge quad (4 vertices) and drew it at different depths.
My perspective information was as follows:
gluPerspective(45.0f, 800.0f/600.0f, 1.0f, 1000.0f);

Now, I collected the following depth information through my tests:

Value: 1.00000000 , Depth: -1000.0f
Value: 0.99987793 , Depth: -900.0f
Value: 0.99955750 , Depth: -700.0f
Value: 0.99899292 , Depth: -500.0f
Value: 0.99098194 , Depth: -100.0f
Value: 0.90089267 , Depth: -10.0f
Value: 0.88976884 , Depth: -9.0f
Value: 0.85799956 , Depth: -7.0f
Value: 0.80079347 , Depth: -5.0f
Value: 0.66732281 , Depth: -3.0f
Value: 0.50049591 , Depth: -2.0f
Value: 0.33365378 , Depth: -1.5f
Value: 0.00000000 , Depth: -1.0f

As you’ve probably noticed, the values aren’t distributed linearly. The values close to a depth of -1.0f change a LOT from one interval to the next, while the large z values have very little effect on the stored depth buffer values.

I’m guessing it might have something to do with the perspective setup, but I don’t know right now…!?

Well, the problem with these non-linearly distributed depth values is that I can’t just take MY depth values (from the depth map described earlier) and calculate the correct depth buffer value from them. At least not until I understand how OGL calculates those values!
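For reference (not from the thread), the mapping can be written down directly from the projection. A sketch, assuming the standard gluPerspective projection matrix and the default glDepthRange(0, 1):

```c
#include <assert.h>
#include <math.h>

/* Window-space depth for an eye-space z (negative, going into the
 * screen), given the near/far planes passed to gluPerspective and
 * the default glDepthRange(0, 1):
 *   z_ndc = (f + n)/(f - n) + (1/z_eye) * 2*f*n/(f - n)
 *   z_win = (z_ndc + 1) / 2
 */
double window_depth(double z_eye, double n, double f)
{
    double z_ndc = (f + n) / (f - n) + (1.0 / z_eye) * 2.0 * f * n / (f - n);
    return (z_ndc + 1.0) / 2.0;
}
```

With n = 1 and f = 1000 this reproduces the measured values above, e.g. window_depth(-2, 1, 1000) ≈ 0.50050 and window_depth(-500, 1, 1000) ≈ 0.99899.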

I hope you guys know what I’m trying to explain here and I hope even more that you’re able to help me… once again

Huge thanks in advance!

Yes, the distribution of z-values is non-linear. Read this for more info:
http://sjbaker.org/steve/omniv/love_your_z_buffer.html

Read this:
http://www.cs.unc.edu/~hoff/techrep/perspective.doc

Hi guys… sorry for not having replied earlier but have been really busy lately!

Originally posted by dorbie:
Read this:
http://www.cs.unc.edu/~hoff/techrep/perspective.doc

I found the above link VERY useful (though I skipped the derivation and jumped straight to the equation), but I get an error in the third decimal, like

Value retrieved using glReadPixels(): 0.99899292
Value calculated: 0.99599600

As you can see, there is a slight inaccuracy at the third decimal, and I don’t think that can be tolerated when I’m going to compare depth values!

I searched the Web too after having read the two articles above and found an explanation here at opengl.org too: http://www.opengl.org/developers/faqs/technical/depthbuffer.htm (scroll down to “12.050 Why is my depth buffer precision so poor?”)

Though I’ve noticed that some of the equations look very much like the one I got from the article mentioned above, I cannot seem to understand how to use it :expressionless:
Zndc = Zc / Wc = (f+n)/(f-n) + (We/Ze) * 2fn/(f-n) <– this is the one almost identical to the one I currently use. I can’t seem to figure out We, however!
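One hedged reading (an assumption, not a confirmed answer from the thread): if We is the eye-space w coordinate, which is 1 for an ordinary untransformed vertex, the FAQ equation evaluates to the glReadPixels value quoted above once the extra [-1,1] → [0,1] window remap is applied:

```c
#include <assert.h>
#include <math.h>

/* The FAQ equation, with We assumed to be the eye-space w
 * coordinate (1 for an ordinary vertex). */
double faq_z_ndc(double Ze, double We, double n, double f)
{
    return (f + n) / (f - n) + (We / Ze) * 2.0 * f * n / (f - n);
}

/* Remapping NDC z from [-1,1] to the [0,1] window range, with
 * n = 1, f = 1000 and Ze = -500, gives roughly the measured
 * depth buffer value 0.99899292 quoted above. */
```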

Any help on this is greatly appreciated!

Huge thanks in advance

PLEASE GUYS…!!! It seems nobody except you knows about these equations!!
Please take the time to help me here… I’ve been almost everywhere in search for an answer!