Huge worlds approaches

If the world is huge, the usual floats just don’t cut it anymore. I was thinking of using integer world coordinates (to avoid the ever-increasing step sizes that floats take at big exponents) and doing the translation to the origin partly with integers. World objects would be defined with respect to some local origin. Object positions in the viewing system would then be local_object_position + (local_origin - camera_position). local_origin and camera_position would both be integers, the subtraction hopefully leaving a “small” integer to add to a floating-point local_object_position. This would then be transformed as usual. Are there other ways to deal with huge worlds?

You’ve got the basic idea. Local object-space coords in float are fine, so long as you can get an accurate object-space-to-eye-space transform (MODELVIEW) in float somehow.

If ints cut it, you can do that. If not, use bigger ints or doubles.

The key insight is that only the world-space coords are huge, so you need some way to compute the MODELVIEW accurately despite that. That is:

object-to-world-space (MODELING) transform = small-to-huge
world-to-eye-space (VIEWING) transform = huge-to-small

object-to-eye-space (MODELVIEW) transform = small-to-small

So if you can just compute that MODELVIEW with sufficient precision while you’re building it, you’re home free because you can represent the result with floats.

You could divide your objects into those that are currently near the camera and those that are far away.
The distant objects are then rendered first using a large scale such as 1 OpenGL unit = 1km, then the nearby objects are rendered over the top at a smaller scale such as 1 OpenGL unit = 1m.
The depth buffer must be cleared between each pass.
The near objects are rendered from high-detail models that use 1m units.
The distant objects use low detail models in 1km units, or imposters (a camera-facing quad containing a texture which is a picture of the object).
If the camera is not moving around very fast then the distant objects can be drawn onto a cube-map texture which is used as a sky-dome, and only updated when the camera moves far enough for the parallax error to become noticeable.
When a distant object comes closer than the threshold it is moved to the list of near objects; when something goes further away it is moved to the list of far objects.

If the camera is not moving around very fast then the distant objects can be drawn onto a cube-map texture which is used as a sky-dome, and only updated when the camera moves far enough for the parallax error to become noticeable.

What criteria are available, to determine when that happens?

I am always nervous stuffing conditional things into my rendering loop. While rendering into the cube map, there may be a slight delay in app response. After all, at least 4 renders (one per visible cube face) are necessary.

Good ideas, nice complexity, I like it.

If ints cut it, you can do that. If not, use bigger ints or doubles.

Can you give an example of an app (a game, probably) that uses, say, 64-bit ints and/or doubles? I’d really like to see what kind of world it renders.

FlightGear (a flight sim), OpenSceneGraph (FlightGear builds on OSG), ours, X3D, and various others just use doubles and put it to bed. That dispenses with all the other annoyances you’d otherwise have to deal with (“shifting the world”, higher-precision vertex attributes, etc.). Game companies don’t mail me their source code to browse :wink: so I can’t comment on specific games.

While this doesn’t cut it if you want to represent the universe with the accuracy of centimeters, for something the size of planet earth, it’s good enough.

I often wonder what the original Elite game on 8-bit computers did to render its world back in the day. Maybe modeling the universe down to 1 cm is also possible.

If the observable universe is about 46.5 billion light-years across, then that’s roughly 44,000,000,000,000,000,000,000,000,000 cm, or around 2^95.
Hence you would need 12-byte (96-bit) integers to represent exact distances in cm.
8-bit computers have a subtract-with-borrow instruction that lets you perform integer math of any size a byte at a time (i.e. you do one SUB and 11 SBB instructions to find the difference between two 96-bit integers on an Intel 8086).
So it’s certainly possible, just really, really slow, unless you have very few objects in your universe or a very good Level-Of-Detail system.