About GLdouble

I use GLdouble, but I can't see anything. Why?

glBegin(GL_LINE_STRIP);
glColor3d(1.0,0.0,1.0);
glVertex2d(1000000.0001,1000000.0001);
glVertex2d(1000000.0006,1000000.0006);

glVertex2d(1000000.0021,1000000.0021);
glVertex2d(1000000.0026,1000000.0026);

// glVertex2d(1000000.5026,1000000.5026);
glEnd();
glFlush();
The matrix setup:
glViewport(0, 0, 100, 100);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(1000000.0, 1000000.0010, 1000000, 1000000.0010);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

thanks

Does it work if you use floats?

If so, I'd consider it a driver bug; if not… well, then you did something wrong.

Are you trying to test precision of passing doubles to GL? Most implementations would use floats for transforms and stuff internally, so passing doubles won’t give you any advantage. And if you’re doing a corner-case like that you’re certainly likely to get problems.

But we need the precision; we are developing a CAD program.

Originally posted by Humus:
Are you trying to test precision of passing doubles to GL? Most implementations would use floats for transforms and stuff internally, so passing doubles won’t give you any advantage. And if you’re doing a corner-case like that you’re certainly likely to get problems.

Then do the transformations yourself in as much precision as needed, and use OpenGL only for rendering.

In this particular example, you could manually subtract 1000000 from all values, and then pass the values to OpenGL. This would dramatically decrease the number of significant digits needed to render the image, to a level that can be handled by a single-precision float. If you do the subtraction in high enough precision, the values will survive the transformation stage (and the problem in your original post is, as mentioned above, that most OpenGL implementations don't perform the transformation in high enough precision for your example to work).

I tried, and it does not work starting from 10000.0.

It is okay for 1000.0.

Originally posted by /* tSb */:
[b]I tried and it does not work starting from 10000.0.

it is okay for 1000.0[/b]
When it works and when it doesn't depends entirely on the precision used by the implementation and on how small the variations you use are compared to the magnitude of the numbers. Floats, as the name implies, use a floating decimal point, so precision is relative, not absolute (as with integers, for example). There's no problem using coordinates above the limit where you said it stops working, as long as the number of significant digits is small enough. Similarly, it may very well break on numbers smaller than 1, as long as the number of significant digits is large enough.

So saying it works with this number and not with that one is useless unless you tell us the variations in the coordinates relative to their magnitude.

As said, the internal precision of most (if not all) OpenGL implementations is float.
Exceeding the precision limits will result in all sorts of rendering issues.

Another issue is this: if you zoom out of your current image and render the 1M×1M area into a 100×100 pixel grid, it won't work either.

OpenGL's rasterization rules are laid out so that no pixel is touched twice when you render adjacent primitives; otherwise it would break all logic-op, transparency, or stencil algorithms.
That means that all primitives which fall between the rasterization sample points (for lines: the diamond-exit rule, to be found in the OpenGL spec) will not generate a pixel in your current setup.
If you demand that a line always generates a pixel, you need to render lines which end up shorter than a pixel after transformation as a GL_POINT. Because that is expensive to calculate in the application, really stupid CAD programs render a line as a line plus an endpoint (ouch); forget about performance then.
3D line rendering is what it is. You get the usual aliasing artifacts. You need to be smarter about your rendering setup.