Depth Problem

Hi everyone

I am drawing three groups of objects (lines, triangles and rectangles, with the latter two drawn solid and with their edge lines drawn as well), each inside a glPushMatrix/glPopMatrix block, and then I look at the whole scene from different viewpoints with gluLookAt.
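
A rough sketch of the kind of per-frame structure I mean, written in C-style for brevity (DrawLines, DrawTriangles and DrawRectangles are just stand-ins for my three groups; buffer clearing and projection setup are left out here):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(1.0, 1.0, 1.0,   /* eye position    */
          0.0, 0.0, 0.0,   /* point looked at */
          0.0, 0.0, 1.0);  /* up vector       */

glPushMatrix();
DrawLines();
glPopMatrix();

glPushMatrix();
DrawTriangles();     /* solid, edge lines drawn too */
glPopMatrix();

glPushMatrix();
DrawRectangles();    /* solid, edge lines drawn too */
glPopMatrix();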

As far as the drawing itself is concerned, everything works fine, but somehow the hidden-surface behaviour of the objects is not correct. That is, since the rectangles and triangles are solid, I expect them to cover up (i.e. hide) whatever falls behind them from my viewpoint. But everything behind them is still plainly visible!?

For your information, I have the following commands in my StartUpRC routine (which is called only once, at the beginning):

glDisable glcDepthTest
glEnable glcCullFace
glShadeModel smSmooth
glClearColor 1#, 1#, 1#, 1#

I appreciate your help very much.

You should enable depth testing, not disable it.
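
A minimal sketch in C (translate to your VB binding's names as appropriate), assuming the pixel format actually provides a depth buffer:

/* once, at setup */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);     /* pass fragments that are closer than what is stored */

/* at the start of every frame */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);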

With thanks for your help,

I have tried both glEnable glcDepthTest and commenting the line out entirely (i.e. leaving the depth test alone), but the result is still wrong!?

Even the lines (i.e. edges) that are supposed to be hidden are shown!

What is your projection matrix?

I haven’t specified any! I left it at whatever the default is.

Thanks

Bingo! That’s your problem. You need to have a valid projection matrix before you can assume the depth values are calculated correctly.

And thank you again!

But what is a “valid projection matrix”?

But the default projection matrix (the identity matrix) is a valid projection matrix, so that shouldn’t be a problem.

I agree that identity is a valid projection matrix. If I remember correctly, you are using Visual Basic? If so, I assume you create the GL context by specifying a pixel format, the same as I would in Visual C++. What is the cDepthBits field of your PIXELFORMATDESCRIPTOR set to? If it is 0, that could be your problem. Try setting it to 8 or 16. I think 16 is the maximum, but it could be as high as 32 (I dunno). And enable depth testing.
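
For reference, here is roughly what the relevant PIXELFORMATDESCRIPTOR fields look like on the C side (just the depth-related parts; ChoosePixelFormat/SetPixelFormat and the remaining fields are omitted):

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cDepthBits = 16;   /* ask for a 16-bit depth buffer */
pfd.iLayerType = PFD_MAIN_PLANE;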

With many thanks for your help, here is what I have in my pixel format setup:
pfd.nVersion = 1
pfd.dwFlags = PFD_SUPPORT_OPENGL Or PFD_DRAW_TO_WINDOW Or PFD_DOUBLEBUFFER Or PFD_TYPE_RGBA
pfd.iPixelType = PFD_TYPE_RGBA
pfd.cColorBits = 24
pfd.cDepthBits = 16
pfd.iLayerType = PFD_MAIN_PLANE

which shows cDepthBits = 16!?

If it helps, I have the following in my SetupRC routine:

glEnable glcDepthTest
glDepthFunc dFLAG 
glEnable glcCullFace
glShadeModel smSmooth

and I have tried it with dFLAG = GL_GREATER, but that clears out my entire image; if I use GL_LESS (or comment it out), it draws such that “far” objects come in front of “near” ones?!

The identity matrix is not a good projection matrix. It is nothing like a conventional orthogonal or perspective projection.

Let me clarify: the identity matrix is similar to an orthogonal projection, but there is a significant difference. Look at what the far and near planes would have to be in order for -2/(far-near) = 1. You can see that such a requirement would put the far plane closer than the near plane.
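
To spell that step out (a quick sketch of the arithmetic): the third diagonal entry of the glOrtho matrix is -2/(far-near), so making it equal to identity's 1 requires

$$ \frac{-2}{\mathrm{far} - \mathrm{near}} = 1 \;\Longrightarrow\; \mathrm{far} - \mathrm{near} = -2 \;\Longrightarrow\; \mathrm{far} = \mathrm{near} - 2 $$

i.e. the far plane would have to sit two units in front of the near plane.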

I did try some tests setting my projection matrix to identity. All I saw was black. So, DFrey could be right.

However, before you add perspective or ortho commands, try switching your dFLAG to GL_LEQUAL or GL_LESS. Basically, these say draw a fragment if it is closer than (or as close as) what is already there, whereas GL_GREATER says draw only what is farther away.
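
A minimal sketch of why GL_GREATER wipes the image, assuming the depth buffer is cleared to its default value:

glClearDepth(1.0);        /* default clear value: the far plane          */
glDepthFunc(GL_LESS);     /* pass fragments closer than the stored depth */
/* With GL_GREATER, no fragment's depth can exceed the cleared value of 1.0,
   so everything fails the depth test and the image comes out empty. */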

Edited… sorry, I didn’t see that you had already tried that.

Since you are using identity, glOrtho should give you similar results. If you have some idea what your bounds are on screen (e.g. the left side is -0.7, the top side is 0.7, etc.), use glOrtho to set those bounds, and make sure near is less than far.
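
For instance, using the bounds from that example (the near/far values here are just placeholders, but note near < far):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.7, 0.7, -0.7, 0.7, 0.1, 10.0);  /* left, right, bottom, top, near, far */
glMatrixMode(GL_MODELVIEW);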

With many thanks, I just noticed that when I use:
gluLookAt -1, -1, -1, 0, 0, 0, 0, 0, 1
I get what I expect from:
gluLookAt 1, 1, 1, 0, 0, 0, 0, 0, 1
and vice versa!?

I thought maybe this would help in solving my frustrating problem.

Thanks again!

Yeah, DFrey is right. It is better to set the matrix mode explicitly, using GL_MODELVIEW or GL_PROJECTION. I think this will solve your problem.
sikander

Originally posted by DFrey:
The identity matrix is not a good projection matrix. It is nothing like a conventional orthogonal or perspective projection.

Thank you all for the help.

As far as I know, ModelView is the DEFAULT, and since I am not specifying any MatrixMode, it should already be ModelView!?

But still, I did add the command:
glMatrixMode GL_MODELVIEW
and there was NO CHANGE in the problem!

You set a projection matrix by first setting the matrix mode to GL_PROJECTION. Then use glOrtho or gluOrtho2D to set up an orthographic projection, or glFrustum or gluPerspective to set up a perspective projection. Then set your matrix mode back to GL_MODELVIEW.
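
For example, a minimal sketch in C (width and height stand for the window's client-area size, and the field of view and clip distances are just placeholder values), typically done once at startup and again whenever the window is resized:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* perspective projection: 45-degree vertical field of view,
   near and far clip planes at 1 and 100                      */
gluPerspective(45.0, (double)width / (double)height, 1.0, 100.0);
/* or, for an orthographic projection:
   glOrtho(-2.0, 2.0, -2.0, 2.0, 1.0, 100.0);                 */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();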

With thanks: should I be doing this once, when I first set up OpenGL? And how do I work out the right parameters for glOrtho?

(I have had a very bad experience with glOrtho! Whatever values I tried, it just blanked out the entire picture!?)

I dunno. I’m running out of ideas. It could be that setting a projection matrix will work, and I agree completely that it should be used. However, identity is equivalent to glOrtho with some particular set of parameters and should therefore work without a problem.

Do you clear the depth buffer on each pass of your rendering function, e.g.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
which should be called at the start of each rendering frame?
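
A minimal per-frame sketch, assuming double buffering as in the pixel format posted above (DrawScene stands for your three push/pop blocks, hDC for the window's device context):

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* colour AND depth */
glLoadIdentity();
gluLookAt(1.0, 1.0, 1.0,  0.0, 0.0, 0.0,  0.0, 0.0, 1.0);
DrawScene();
SwapBuffers(hDC);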

If neither DFrey’s advice on setting up an orthographic or perspective projection in GL_PROJECTION matrix mode nor clearing the depth buffer works, then you will have to post your code if you want anyone to help you solve this problem.