glOrtho coordinates

Hey everyone,

I thought it's time to give up trying to find the (probably really simple) solution for this particular problem. When I started using OpenGL I had an ATI graphics card and everything was fine, until recently, when I switched to a pair of 8800 GTXs. (Interesting fact: the same primitive 2D OpenGL program ran about a hundred times faster on the old ATI card, but that's another topic…)

So here's the deal: when I draw lines with glBegin(GL_LINES), GL_LINE_STRIP, or GL_LINE_LOOP, the lines always end up one pixel to the left of and one pixel below the correct coordinates. Oddly enough, GL_QUADS, GL_POINTS, etc. work fine. I know it's not a big deal, but it seems that glRasterPos, glRect, and a few other OpenGL functions are also setting/drawing at the wrong coordinates, and all the inaccuracies from these functions add up to a rather unpleasant final render.

I’ve modified the 2nd lesson from NeHe’s OpenGL site to make sure the rest of the code is correct (at least I hope so…). Here are the changes:
In GLvoid ReSizeGLScene(GLsizei width, GLsizei height) I replaced

gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,100.0f);

with

glOrtho(0, width, 0, height, -1.0, 1.0);

and in int DrawGLScene(GLvoid) I replaced

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	// Clear Screen And Depth Buffer
	glLoadIdentity();					
{...}
		glVertex3f( 1.0f,-1.0f, 0.0f);					// Bottom Right
		glVertex3f(-1.0f,-1.0f, 0.0f);					// Bottom Left
	glEnd();

with


	glColor3f(1,1,1);
	glBegin(GL_LINE_LOOP);
		glVertex2i(10, 10);
		glVertex2i(100, 10);
		glVertex2i(100, 100);
		glVertex2i(10, 100);
	glEnd();
	
	glColor3f(1,0,0);
	glBegin(GL_QUADS);
		glVertex2i(10, 10);
		glVertex2i(100, 10);
		glVertex2i(100, 100);
		glVertex2i(10, 100);
	glEnd();

And the result:

I’m not exactly sure this is what I should be seeing, since on other graphics cards (ATI, and even an integrated Intel) you can’t see the line, because the second glBegin() paints over it.
By the way, I’m using the latest drivers, so that shouldn’t be the problem.

Thanks in advance,
Fr4G

It looks like NeHe’s second tutorial uses depth testing, so watch out for that. You’ll want to translate the lines to a greater Z value (anything >0.0 in this case) to make them appear on top of the quad.

EDIT: Oh… I see what you were trying to test now.

I believe that in this case glVertex2i(x, y) specifies the lower-left corner of the pixel at position (x, y), and since your line is 1 pixel thick it spans two half-pixels. Which of them gets filled is the hardware manufacturer’s choice (either the bottom pixel or the top pixel, and either the left pixel or the right pixel).

You can check this by using glVertex2f and offsetting the positions by some small value.

Thanks for the fast reply. By translating all coordinates by +0.01 on both axes, it turned out that NVIDIA indeed chose to draw the bottom/left pixel, but only for the lines. Drawing the red square doesn’t seem to be consistent with that, though: if the only difference between drawing lines and quads were the choice of which pixel to fill, then the red and white squares would be the same size, just one pixel apart.

Original (zoomed in with blending to see the square “under” the lines):

After translating with .01:

The translation doesn’t affect the red square at all…

Anyway in my own project calling glTranslatef(.5, .5, 0); after setting glOrtho seems to solve everything for some strange reason, so thanks for the hint.

When you think about it, it’s logical that this small translation does not affect the square. Take a look at this image

For the blue line (which has a thickness of 1), shifting it by 0.1 makes it unambiguous: either the top or the bottom pixel’s center is inside the line, so when the offset is applied the top pixels get filled. NVIDIA simply chose to fill the bottom pixels to resolve the ambiguity when there is no offset, which is why the result changes when you add one.

For the red rectangle, moving it by 0.1 does not change which pixel centers are inside the rectangle, so nothing changes.

PS: This also explains why your lines appear along only one side of the rectangle.

Oh, nice picture, thanks for taking the time to make it. Now I see why OpenGL is not drawing where I would expect it to, but then the question arises whether it’s even possible to write code that displays the same way on every card. I know this would be impossible for the latest additions/effects in OpenGL, which aren’t even supported by all cards, but hey, this is just simple 2D drawing, so I don’t think it’s too much to ask. Another interesting question: how do other cards know that, when rendering the GL_LINE_LOOP, they should draw the upper-right pixel for the left and bottom sides and the bottom-left pixel for the right and upper sides, so that rendering the red rectangle overwrites every pixel?

I guess I’m just gonna play around with various glTranslate values and different shapes until I get a decent result on every gfx card.

As long as there are ambiguities, the result will depend on how the hardware decides to resolve them. By using the offset you should get the same result on all graphics cards.

I think it’s best to keep the thickness of the lines in mind. By default, lines have a thickness of 1, but polygons have no border. So, applied to the problem above, you’d best write (without offset):

// bl = bottom left
// br = bottom right
// tl = top left
// tr = top right

glBegin(GL_QUADS);
glVertex2i(bl.x,bl.y);
glVertex2i(br.x,br.y);
glVertex2i(tr.x,tr.y);
glVertex2i(tl.x,tl.y);
glEnd();

glBegin(GL_LINE_LOOP);
glVertex2f(bl.x+0.5f,bl.y+0.5f);
glVertex2f(br.x-0.5f,br.y+0.5f);
glVertex2f(tr.x-0.5f,tr.y-0.5f);
glVertex2f(tl.x+0.5f,tl.y-0.5f);
glEnd();

or you can set the line width to 2, so that you get a 1-pixel border around the rectangle while using the original coordinates of the quad:

glBegin(GL_QUADS);
glVertex2i(bl.x,bl.y);
glVertex2i(br.x,br.y);
glVertex2i(tr.x,tr.y);
glVertex2i(tl.x,tl.y);
glEnd();

glLineWidth(2);

glBegin(GL_LINE_LOOP);
glVertex2i(bl.x,bl.y);
glVertex2i(br.x,br.y);
glVertex2i(tr.x,tr.y);
glVertex2i(tl.x,tl.y);
glEnd();

The latter might be the easiest way to draw a border around an arbitrary polygon, since, unlike for a rectangle, it is not obvious whether to apply a +0.5 or a -0.5 offset to a particular vertex… Anyway, at least I now know why OpenGL doesn’t produce the results I’d expect, so I can move on and try to figure out which part of the code slows the render down on NVIDIA cards.

One more thing that none of you considered:

Since I’ve also been dealing with glOrtho lately, one thing I had to learn is that you should NEVER use gluLookAt together with glOrtho, i.e. with orthographic projections! gluLookAt is meant for perspective projections. With an orthographic projection, you should instead just glTranslatef and glRotatef the viewpoint and view direction at the beginning:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(…);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -10); // viewer effectively at (0, 0, 10), looking along -z

Best regards,
Oliver