Way to render a large terrain without z-fighting

I’ve been reading the last question in http://www.opengl.org/resources/faq/technical/depthbuffer.htm without really understanding it. Does it mean I have to call gluPerspective/glFrustum every time I render a piece of the terrain, moving zNear/zFar to contain the terrain block I’m rendering?

My terrain is already partitioned into blocks that are (currently) drawn from farther to nearer, but I don’t order them “horizontally” from the viewer’s perspective, only depth-wise. Does that matter?

If your terrain is so large that a 24-bit depth buffer can’t cover the whole scene, then yes, that is one way to handle it.

The different blocks don’t need to be strictly ordered horizontally from the viewer. All that’s needed is that no block overlaps, in depth, the ones farther away.
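To make that concrete, here is a minimal sketch of the FAQ’s depth-partitioning idea: draw the ranges far-to-near, giving each one its own zNear/zFar, and clear the depth buffer between ranges. The range boundaries are made-up values, and draw_blocks_in_range() is a hypothetical helper that draws only the blocks bucketed into a given eye-space distance range:

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Hypothetical helper: draws only the terrain blocks whose
       eye-space depth falls inside [znear, zfar]. */
    void draw_blocks_in_range(double znear, double zfar);

    void draw_terrain_partitioned(double fovy, double aspect)
    {
        /* Illustrative partition boundaries, nearest first. */
        const double bounds[] = { 10.0, 1000.0, 100000.0, 10000000.0 };
        int i;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Walk the three ranges from farthest to nearest. */
        for (i = 2; i >= 0; --i) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(fovy, aspect, bounds[i], bounds[i + 1]);
            glMatrixMode(GL_MODELVIEW);

            draw_blocks_in_range(bounds[i], bounds[i + 1]);

            /* Give the next (nearer) range the full depth precision;
               nearer geometry then simply paints over farther ranges. */
            if (i > 0)
                glClear(GL_DEPTH_BUFFER_BIT);
        }
    }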

On modern hardware (GeForce 8xxx or Radeon HD class) you could try a full floating-point depth buffer. You’d still need some scheme where different layers of the scene are biased apart in depth, but it would work.
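If you go that route, the floating-point depth buffer comes from rendering into an FBO. A minimal sketch, assuming GL 3.0-class support (ARB_framebuffer_object plus ARB_depth_buffer_float) and that width/height are your viewport size:

    GLuint fbo, color_rb, depth_rb;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* Ordinary color renderbuffer. */
    glGenRenderbuffers(1, &color_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, color_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color_rb);

    /* 32-bit floating-point depth renderbuffer. */
    glGenRenderbuffers(1, &depth_rb);
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        /* fall back to the default 24-bit depth buffer */ ;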

Absolutely not! Those calls establish a PROJECTION transform for your scene. One of the jobs of the PROJECTION transform is to establish a consistent eye-space-to-clip-space mapping for depth values, and consistent depth values are required for Z-buffering to work properly. If you change PROJECTION multiple times while rendering a single scene without really knowing what you’re doing, you’re likely to end up with a mess on your display. Set it only once per frame!
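In other words, a typical frame looks like the sketch below: PROJECTION is set once up front, and only MODELVIEW changes while the scene is drawn. The clip-plane values are illustrative, and draw_terrain() is a stand-in for whatever draws all your blocks:

    void render_frame(double fovy, double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fovy, aspect, 10.0, 50000.0);  /* once per frame */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* camera transform goes here, e.g. gluLookAt(...) */

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_terrain();  /* all blocks rendered under the same projection */
    }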

My terrain is already partitioned into blocks that are (currently) drawn from farther to nearer, but I don’t order them “horizontally” from the viewer’s perspective, only depth-wise. Does that matter?

Assuming the terrain blocks are all opaque, no, it doesn’t matter at all. The Z-buffer (if properly configured) takes care of it, ensuring that at each pixel (or subsample) you only “see” the closest surface at that spot on the screen. For correct rendering you shouldn’t have to do any CPU-side sorting at all!
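For reference, “properly configured” just means you asked for a depth buffer at context creation and have the test turned on. A minimal sketch (GLUT used here only as an example of context creation):

    /* At context creation: request a depth buffer. */
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);

    /* Once at init: */
    glEnable(GL_DEPTH_TEST);   /* turn the Z test on              */
    glDepthFunc(GL_LESS);      /* keep nearer fragments (default) */
    glDepthMask(GL_TRUE);      /* allow depth writes (default)    */

    /* Every frame, before drawing: */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);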

Don’t let yourself get psyched out. The Z-buffer’s job is pretty simple:

  • Is this new thing closer? Nope … toss it.
  • Is this new thing closer? Nope … toss it.
  • Is this new thing closer? Yes! … keep it!

    This little algorithm runs in parallel for every pixel of every polygon you render, across the whole screen (see the sketch below).
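Schematically, in C, that test per fragment is just this (the hardware does it for you; it’s not code you write):

    /* Per-fragment Z test at one screen location. */
    void z_test(float new_depth, unsigned new_color,
                float *depth_px, unsigned *color_px)
    {
        if (new_depth < *depth_px) {   /* is this new thing closer? */
            *depth_px = new_depth;     /* yes: keep it              */
            *color_px = new_color;
        }
        /* no: toss it (the fragment is discarded) */
    }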

You can tweak the comparison it uses a bit (see glDepthFunc), but as a newbie you probably don’t need to care about that yet.

The default 24-bit Z-buffer is likely just fine for you. Just push your near clip plane out as far as you can stand (i.e., make the zNear parameter to gluPerspective, or nearVal to glFrustum, as large as you can get away with given your application’s needs). 10-15 meters or so and you’ll probably be just fine if your eyepoint is always airborne; less if your eyepoint is near the ground.
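Numerically, nearly all of the depth buffer’s precision is concentrated close to the near plane, so raising zNear buys you far more than lowering zFar. An illustrative call (the values are made up, not a recommendation):

    gluPerspective(60.0,      /* vertical field of view, degrees        */
                   aspect,    /* window width / height                  */
                   10.0,      /* zNear: push this out as far as you can */
                   50000.0);  /* zFar: a generous value costs little    */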

Depending on your rendering needs, you may be able to set your PROJECTION once and leave it alone across multiple frames, though re-setting it is cheap as long as you only do it once per frame.

This article is worth a read (if you haven’t read it already):

http://www.gamasutra.com/blogs/BranoKemen/20090812/2725/Logarithmic_ZBuffer.php