Rendering Errors on GeForce cards

I am seeing rendering errors on two GeForce cards. Front faces are being culled or clipped inappropriately. I am trying to understand whether I am doing something wrong.

I know it’s not usually good form to post C code, but this is a simple demo that most anyone on this forum should be able to compile.

= = = = = = = = = =
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

// Useful Constants
#define PI 3.14159

#define NEAR_CUTOFF 4.0
#define FAR_CUTOFF 80000.0
#define FOV_Y 60.0

void printInstructions(void){
// Print a summary of the user-interface.
printf("This program demonstrates a bug in the torus rendering.
“);
printf(” * ‘Esc’ quits the program
“);
printf(”
");
}

void display(void)
{

// Set up the view matrix.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
float viewMatrix[16] = {
-0.098, -0.882, -0.460, 0.000,
-0.995, 0.087, 0.045, 0.000,
-0.000, 0.462, -0.887, 0.000,
1312.500, 1628.035, 43550.770, 1.000};
glMultMatrixf(viewMatrix);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw an object at the origin
// glShadeModel (GL_SMOOTH);
glShadeModel (GL_FLAT);
GLfloat mat_specular[] = { 1.0, 1.0, 1.0, 1.0 };
GLfloat mat_shininess[] = { 70.0 };
GLfloat light_position[] = { 0.0, -100000.0, 100000.0, 0.0 };

glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);

glutSolidTorus(3000, 10000, 15, 42);
// glutWireTorus (3000, 10000, 15, 42);

glutSwapBuffers();
int i = glGetError();
printf ("after glutSwapBuffers(), glGetError() returned %d
", i);
}

void reshape(int w, int h)
// Called when window changes dimensions. Set up viewport & projection matrix.
//
// Reshape the display using manual vector computations instead of gluPerspective
{
glViewport(0, 0, (GLsizei)w, (GLsizei)h);

//
// Set up projection matrix.
// This is slightly non-standard, since I prefer right-handed
// view coordinates, while OpenGL defaults to a left-handed system.
//
float nearz = NEAR_CUTOFF;
float farz = FAR_CUTOFF;
float fovy = FOV_Y;
float AspectRatio = (float)h / (float)w;
float ViewAngleH = fovy * (PI / 180);
float ViewAngleV = atan(tan(ViewAngleH/2) * AspectRatio) * 2;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float m[16];
int i;
for (i = 0; i < 16; i++) m[i] = 0;
m[0] = -1.0 / tan(ViewAngleH / 2);
m[5] = -m[0] / AspectRatio;
m[10] = (farz + nearz) / (farz - nearz);
m[11] = 1;
m[14] = - 2 * farz * nearz / (farz - nearz);
glMultMatrixf(m);
// gluPerspective(fovy, (GLfloat) w/(GLfloat) h, nearz, farz);
glMatrixMode(GL_MODELVIEW);
}

void keyboard (unsigned char key, int x, int y)
{
switch (key) {
case 27:
exit(0);
break;
default:
printf ("Key pressed: %d
", key);
break;
}
}

int main(int argc, char** argv)
{
printInstructions();

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glEnable (GL_DEPTH_TEST);
glutInitWindowSize(250, 250);
glutInitWindowPosition(100, 100);
glutCreateWindow(argv[0]);
glutDisplayFunc(display);
glutReshapeFunc(reshape);
glutKeyboardFunc(keyboard);
glutMainLoop();
return 0;
}

I have a Radeon 8500 card and I see no visible errors. I then tried it with a GeForce 4 card on a different computer and nothing went wrong, so I think something must be set up wrong on your computer. Hopefully someone else will try your code, though.

Try using:

glShadeModel (GL_SMOOTH);

  • VC6-OGL

#define NEAR_CUTOFF 4.0
#define FAR_CUTOFF 80000.0

Your near:far ratio is quite large and should be reduced. In a normal application it's a little more than a 24/32-bit depth buffer can handle comfortably, and far too much for a 16-bit depth buffer. Try pushing the near clip plane out by a factor of 20-100 or so.
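To put rough numbers on that, here is a small stand-alone sketch of my own (the window_depth/depth_step helpers are just for illustration, and the 16-/24-bit buffer sizes are assumptions about the hardware) that evaluates the gluPerspective-style depth mapping and prints how much eye-space distance one depth-buffer step covers at roughly the camera-to-torus distance from the demo (about 43,550 units), for near = 4 versus near = 400:

#include <stdio.h>

/* Window-space depth produced by a gluPerspective-style projection
   for an eye-space distance d in front of the camera. */
static double window_depth(double d, double nearz, double farz)
{
    return farz * (d - nearz) / (d * (farz - nearz));
}

/* Approximate eye-space distance covered by one depth-buffer step at
   distance d, for a depth buffer with the given number of bits.
   Derived from d(window_depth)/dd = farz*nearz / (d*d*(farz - nearz)). */
static double depth_step(double d, double nearz, double farz, int bits)
{
    double steps = (double)(1u << bits);
    return d * d * (farz - nearz) / (farz * nearz * steps);
}

int main(void)
{
    double dist = 43550.0;  /* roughly the camera-to-torus distance in the demo */
    double farz = 80000.0;

    printf("near=4,   16-bit: one depth step at d=%.0f spans about %.0f units\n",
           dist, depth_step(dist, 4.0, farz, 16));
    printf("near=400, 16-bit: one depth step at d=%.0f spans about %.0f units\n",
           dist, depth_step(dist, 400.0, farz, 16));
    printf("near=4,   24-bit: one depth step at d=%.0f spans about %.1f units\n",
           dist, depth_step(dist, 4.0, farz, 24));

    /* With near=4, half of all depth values are used up between d=4 and d=8: */
    printf("near=4: window depth at d=8 is %.3f\n", window_depth(8.0, 4.0, farz));
    return 0;
}

With near = 4, a single 16-bit depth step out at the torus works out to roughly 7,000 eye-space units, which is comparable to the thickness of the torus tube, so nearby front and back surfaces can quantize to the same depth value; with near = 400 the step shrinks to roughly 70 units.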

That does the trick!

#define NEAR_CUTOFF 400.0
#define FAR_CUTOFF 80000.0

does much better.

Perhaps I am not using NEAR_CUTOFF correctly. Can you explain (or recommend reading on) the near:far ratio and its implications for depth buffering? The original ratio (20,000:1) seems to me to be well within the limits of a 16-bit buffer (65,536:1).
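For what it's worth, my understanding is that the limit is not really a ratio-versus-bit-count comparison, because the depth buffer does not store eye-space distance linearly. With a gluPerspective-style projection the stored value is roughly

window_depth(d) = far * (d - near) / (d * (far - near))

so with near = 4 and far = 80,000, the distances from d = 4 to d = 8 already consume half of the 65,536 values of a 16-bit buffer, while out at the torus (d around 43,550) a single step covers thousands of units. Pushing near out to 400 moves that precision to where the geometry actually is.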