Creating an elevation map from the depth buffer

Hi,

I have a terrain model based on a hierarchical TIN, which is not well suited to fast elevation lookups. We need to do a lot of quick elevation calculations for placing trees and other models on the surface, so I was thinking about using an FBO and the depth buffer to make an elevation map for fast lookups.

I know the min and max elevation values in the terrain model, so I was thinking that I could render a terrain tile with an orthographic projection into an FBO and glReadPixels the depth buffer to get an elevation map.

Is this a feasible way of doing it? I know that the z values are not linearly distributed, so that might be a challenge. Any ideas how I could work around that?

Cheers

Perhaps. Depends on your performance requirements.

Everything up to the point where you say glReadPixels is standard shadow map rendering fare, so you can use tricks for rendering depth maps / shadow maps to help you here.

No question that after you have your depth map, you can glReadPixels it back to the CPU and do your placement there if you really want.

However, another thing you can do is send your tree/model positions down the rendering pipe and let the GPU look up the right depth values for you. This is what it’s made to do, so it’s very fast. You can read from the depth buffer/texture in a shader, grab the Z value, and either write it to a buffer that you read back or leave it on the GPU, where it can be used to modify the position of the model/tree accordingly.
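For illustration, a minimal sketch of the “leave it on the GPU” variant: a vertex shader that fetches each tree’s elevation straight from the depth map and displaces the position. This assumes the depth map is bound as a depth texture with compare mode off, that tile-local xy (normalized to [0,1]) can serve directly as the lookup coordinate, and that the hardware supports vertex texture fetch; all the uniform names here are invented:

 // Hypothetical sketch (GLSL as a C string, fixed-function-era style):
 // fetch the tree's elevation from the depth map and displace it.
 // elev_map, min_z, max_z and eye_z are made-up uniform names.
 const char* tree_vs =
    "uniform sampler2D elev_map;  // the depth map, bound as a texture \n"
    "uniform float min_z, max_z;  // glOrtho near/far used for the map \n"
    "uniform float eye_z;         // height the map was rendered from  \n"
    "void main() {                                                     \n"
    "   // tile-local xy in [0,1] doubles as the lookup coordinate     \n"
    "   float d = texture2D(elev_map, gl_Vertex.xy).r;                 \n"
    "   // orthographic depth is linear, so mapping back is trivial    \n"
    "   float elev = eye_z - (min_z + d * (max_z - min_z));            \n"
    "   gl_Position = gl_ModelViewProjectionMatrix *                   \n"
    "                 vec4(gl_Vertex.xy, gl_Vertex.z + elev, 1.0);     \n"
    "}                                                                 \n";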

I know that the z values are not linearly distributed, so that might be a challenge. Any ideas how I could work around that?

Z values can be non-linearly distributed, but they aren’t necessarily. If you use clip space from a standard perspective projection to build your depth map / shadow map, then yes, it’d be non-linear. However, that’s not your case: you’re going to use an orthographic projection, which does not result in a non-linear depth distribution (IIRC).

Also, keep in mind that you’re not limited to rendering to standard 24-bit fixed-point depth buffers with modern GPUs. If you need more depth precision, you can often render to 32-bit fixed point or even 32-bit floating point as well! And for even more precision with 32-bit float, to counteract the non-uniform distribution of floating point values, you can flip near and far to be 1 and 0.
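A sketch of that near/far flip on GL of this vintage (no glClipControl yet): reverse the depth range, and remember the clear value and depth test have to flip with it:

 // Sketch: "reversed" depth on a float depth buffer. Window-space depth
 // now runs from 1 at the near plane to 0 at the far plane.
 glDepthRange(1.0, 0.0);   // flip the NDC -> window depth mapping
 glClearDepth(0.0);        // "farthest" is now 0, so clear to that
 glDepthFunc(GL_GREATER);  // nearer fragments now have larger depth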

Lots of options here – it’s all up to you.

You’re going to use an orthographic projection, which does not result in a non-linear depth distribution (IIRC).

Confirmed: it is a linear distribution.
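Which makes the readback conversion a one-liner. A sketch, assuming a straight-down orthographic view with z_near/z_far as the glOrtho depth range and eye_z the height the camera was placed at:

 // Sketch: under glOrtho, window depth d in [0,1] ramps linearly from
 // the near plane (d = 0) to the far plane (d = 1), so recovering
 // meters needs no unprojection, just a linear remap.
 float depthToElevation(float d, float z_near, float z_far, float eye_z)
 {
    float eye_dist = z_near + d * (z_far - z_near); // distance from camera
    return eye_z - eye_dist;                        // straight-down view
 }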

If you need more depth precision, you can often render to 32-bit fixed point or even 32-bit floating point as well!

GL 3.0 and above actually require implementations to support GL_DEPTH_COMPONENT32F.
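With the EXT-style FBO calls used elsewhere in this thread, requesting one looks roughly like this (depth_rb, width and height as in your usual renderbuffer setup, and assuming the driver exposes GL 3.0 or ARB_depth_buffer_float):

 // Sketch: a 32-bit float depth renderbuffer instead of 24-bit fixed.
 glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
 glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
                          GL_DEPTH_COMPONENT32F, width, height);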

Thanks for the feedback! This is what I wanted to hear :slight_smile:

However, another thing you can do is send your tree/model positions down the rendering pipe and let the GPU look up the right depth values for you.

The problem with this is that the terrain model is on a spherical globe, so I would have to transform each tree from geographical coordinates to earth-centered XYZ in the shader every frame. By making the elevation map on the CPU, that transform only has to be done once, when the tree positions are calculated.
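For what it’s worth, that one-time CPU transform is the standard geodetic-to-ECEF formula; a sketch assuming WGS84 and lat/lon in radians (your Mtk library may already have an equivalent):

 #include <math.h>

 // Sketch: standard WGS84 geodetic -> earth-centered XYZ, done once per
 // tree on the CPU as described above. lat/lon in radians, h in meters.
 void geodeticToEcef(double lat, double lon, double h,
                     double* x, double* y, double* z)
 {
    const double a  = 6378137.0;        // WGS84 semi-major axis
    const double e2 = 6.69437999014e-3; // first eccentricity squared
    double sin_lat = sin(lat), cos_lat = cos(lat);
    double N = a / sqrt(1.0 - e2 * sin_lat * sin_lat);
    *x = (N + h) * cos_lat * cos(lon);
    *y = (N + h) * cos_lat * sin(lon);
    *z = (N * (1.0 - e2) + h) * sin_lat;
 }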

Now I’ve implemented it, but it’s not working quite as expected…

The problem is that I get no values from glReadPixels at all; all the float values in the destination array are untouched. Probably just a stupid mistake somewhere, maybe some of you can spot it :slight_smile:


 GLuint depth_fb;
 glGenFramebuffersEXT(1, &depth_fb);
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, depth_fb);

 GLuint depth_rb;
 glGenRenderbuffersEXT(1, &depth_rb);
 glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
 glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
                          GL_DEPTH_COMPONENT24, width, height);

 glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                              GL_DEPTH_ATTACHMENT_EXT,
                              GL_RENDERBUFFER_EXT, depth_rb);

 // Turn off rendering to the color buffer
 glDrawBuffer(GL_NONE);

 // Check framebuffer status
 GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
 if(status != GL_FRAMEBUFFER_COMPLETE_EXT)
    return false;

 glMatrixMode(GL_PROJECTION);
 glPushMatrix();
 glLoadIdentity();

 // Lat/lon coordinates on one tile run from 0 to 0xffff.
 int max_n = 0xffff;
 int max_e = 0xffff;

 // Elevation is in meters. Just testing with a tile I know is between 0 and 3000 m.
 double min_z = 1.0;
 double max_z = 1.0 + 3000.0;

 glOrtho(0.0, (GLdouble)max_e,
         0.0, (GLdouble)max_n,
         min_z, max_z);

 glMatrixMode(GL_MODELVIEW);
 glPushMatrix();

 // Look straight down at the tile from max_z.
 Transform3f m;
 m.lookAt(MtkPoint3f(0.0, 0.0, max_z),
          MtkPoint3f(0.0, 0.0, 0.0),
          MtkVector3f(0.0, 1.0, 0.0));
 glLoadMatrixf(m.val());

 // Store the screen viewport and match it to the renderbuffer size.
 glPushAttrib(GL_VIEWPORT_BIT);
 glViewport(0, 0, width, height);

 glClear(GL_DEPTH_BUFFER_BIT);

 glDepthMask(GL_TRUE);
 glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

 // Draw the tile in geographical coordinates.
 drawTileGeo();

 // Read back the depth values (seeded with a sentinel so untouched
 // values are easy to spot).
 int size = width*height;
 float* data = new float[size];
 for(int i = 0; i < size; i++)
    data[i] = 111.0f;

 glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, data);

… and here all the data values are still 111.0

If your rendering target is a texture (i.e. not a multisampled renderbuffer), you can try to narrow down the problem by rendering a single quad with texture coordinates from (0,0) to (1,1) and your depth texture mapped onto it. That way you will know whether the problem is in the FBO rendering or in the glReadPixels call.

If your rendering target is a renderbuffer, maybe you can use a regular texture temporarily to do this test.
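A minimal version of that test, assuming a depth texture depth_tex was attached in place of the renderbuffer, and identity projection/modelview so the quad fills the viewport:

 // Sketch: draw the depth attachment on a fullscreen quad to eyeball it.
 glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window
 glEnable(GL_TEXTURE_2D);
 glBindTexture(GL_TEXTURE_2D, depth_tex);
 // Sample the raw depth as luminance, not a shadow-compare result.
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
 glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);

 glBegin(GL_QUADS);
 glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
 glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
 glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
 glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
 glEnd();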

I finally figured it out; it seems I had to add a

 glReadBuffer(GL_NONE);

in addition to

 glDrawBuffer(GL_NONE);

Anyway, now it works perfectly :slight_smile: Thanks for all the help and tips!