Sawtooth problem

When I draw a cube on screen, some sawtooth artifacts appear
along its edges. I don't know why; can anybody give
me a suggestion?

The following is the result and my code.

procedure Display();
begin
  glClear(GL_COLOR_BUFFER_BIT);
  glClear(GL_DEPTH_BUFFER_BIT);
  glLoadIdentity();
  { ... some code for lighting and gluLookAt }
  DrawAxis();
  Obj1.Draw;
  Obj1.Vertexs.Draw;
  SwapBuffers(DC);
end;

procedure MyInit();
begin
  glShadeModel(GL_SMOOTH);
  glClearColor(0.0, 0.0, 0.0, 0.5);
  glClearDepth(1.0);
  glEnable(GL_DEPTH_TEST);
  glDepthFunc(GL_LESS);
  glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
  Obj1 := TMeshObj.Create;
  Obj1.LoadFromFile('sq.obj');
end;

procedure MyReshape(w, h: GLsizei);
begin
  glViewport(0, 0, w, h);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(45.0, 1, 0.1, 10000.0);
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
end;

Well, not 100% sure, but when I look at your parameters for the following call:

gluPerspective(45.0, 1, 0.1, 10000.0);

I suppose, depending on how many bits you have in the z-buffer, you are getting z-buffer artifacts. In other words, perhaps your z-far value is set too high? Try a lower value than 10000.0.

The z-buffer is 16 bits. Your suggestion is right: when I change the parameters of gluPerspective to:

gluPerspective(45.0, 1, 10.0, 1000.0);

or change the z-buffer to 32 bits, the problem is resolved.

But why do these parameters affect the image?

I’d be interested to know whether you have backface culling enabled. I suspect you don’t, and that the dark triangles in the sawtooth are actually from the “invisible” back face of the cube.

Just guessing, though.
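If that turns out to be the cause, culling is only a couple of calls to enable. A minimal sketch, assuming the same Delphi OpenGL bindings as the code above (the procedure name is mine, just for illustration):

procedure EnableBackfaceCulling;
begin
  glFrontFace(GL_CCW);      { counter-clockwise winding is the front face (the OpenGL default) }
  glCullFace(GL_BACK);      { discard triangles facing away from the camera }
  glEnable(GL_CULL_FACE);   { turn culling on; calling this once from MyInit would do }
end;

This assumes the faces in sq.obj are wound counter-clockwise; if the cube's visible faces disappear instead, switch to glFrontFace(GL_CW).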

Originally posted by liuhp:
[b]The z-buffer is 16 bits. Your suggestion is right: when I change the parameters of gluPerspective to:

gluPerspective(45.0, 1, 10.0, 1000.0);

or change the z-buffer to 32 bits, the problem is resolved.

But why do these parameters affect the image?[/b]

Once again, it has to do with z-buffer precision, and the answer comes straight out of many references and books:

When the ratio between zFar and zNear is large, the depth buffer begins to lose precision: roughly log2(zFar/zNear) bits of precision are lost.

So, this is what you were doing with your app:

16-bit z-buffer, zNear = 1, zFar = 10000.

log2(10000/1) = log2(10000) ≈ 13.3; rounding up, about 14 bits of precision from your z-buffer are lost. Effectively, you were using a 2-bit z-buffer.

Changing to a 32-bit z-buffer, you're now using an 18-bit effective z-buffer.
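To put a number on that rule, here is a tiny helper (my own sketch, not from the code above; it uses Delphi's Math unit) that estimates how many bits of depth precision a given zNear/zFar pair costs:

uses Math;

function LostDepthBits(const zNear, zFar: Double): Integer;
begin
  { roughly log2(zFar/zNear) bits of depth precision are lost, rounded up }
  Result := Ceil(Log2(zFar / zNear));
end;

{ LostDepthBits(1.0, 10000.0)  = 14  ->  16 - 14 = 2 effective bits      }
{ LostDepthBits(0.1, 10000.0)  = 17  ->  a 16-bit buffer is used up      }
{ LostDepthBits(10.0, 1000.0)  = 7   ->  16 - 7  = 9 effective bits      }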

Performance suggestion, if performance is needed…

Adjust your zFar to a smaller value, something you can easily get away with without losing too much precision, e.g. 2^3 or 2^4 (8 or 16). That way you will be able to use a 16-bit z-buffer without sacrificing precision, and it will be faster. The only drawback is that you must change the order of magnitude of your coordinate system to achieve the same results, and your field of view must change with it. You end up working with coordinates at or above 1.0 instead of fractional values below 1.0, but the scene stays consistent because it is "scaled down" in proportion.
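Concretely, that could look something like this (the numbers and the procedure name are mine, purely illustrative): divide all model coordinates and the gluLookAt eye position by, say, 100, and the same scene fits between zNear = 1 and zFar = 100, so only about 7 bits of a 16-bit z-buffer are spent covering the range.

procedure MyReshapeScaled(w, h: GLsizei);
begin
  glViewport(0, 0, w, h);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(45.0, 1, 1.0, 100.0);  { ratio 100:1 instead of 100000:1 }
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
end;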

Just a thought…

Hope this helps you out.

Siwko

— Added —

Sorry, I read the code wrong… you used 0.1 for zNear, which means you were losing log2(10000/0.1) = log2(100000) ≈ 17 bits of Z on a 16-bit z-buffer, i.e. you had NO z-buffer at all.

There you go.
