Vertex data in a texture

Hi Guys,

What would be the best way to upload height-field data onto the GPU? I figure the elevation data could be stored as a texture, which is fine, but as for the vertex data, wouldn't it be redundant to store a large grid with x,y values for each texel? That data could easily be deduced from the texture coordinates. But we still ‘have’ to have vertex data or nothing gets drawn.

Any ideas on how a texture should be enough to store the entire 3D information?

So many views, yet no replies… hmm… probably not possible then.

That data could easily be deduced from the texture-coords.

Or, you know, you could infer the texture coordinates from the vertex positions.

In any case, I wouldn’t suggest using a texture for height data unless you specifically want interpolation. It’s fine to split XY and Z data into two vertex attributes; that would allow you to use the same XY data with different heights (and transform matrices, of course). But using a texture here isn’t necessary unless you plan to have the vertices interpolate between neighboring heights.
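For example, the split might look like this in a vertex shader (a sketch only; the attribute and uniform names are placeholders, not from any particular codebase):

```glsl
#version 150

uniform mat4 mvp;

// The XY grid is one attribute, the elevation another, so the same
// XY buffer can be shared across terrains while only the height
// buffer (and the transform) changes per draw.
in vec2 gridXY;   // reusable grid positions
in float height;  // per-terrain elevation

void main()
{
    gl_Position = mvp * vec4(gridXY, height, 1.0);
}
```

On the application side you'd bind the XY VBO once and swap only the height VBO between terrains.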

And even then, linear interpolation won’t get you a lot; you’d probably want to code your own interpolation. So you’d be generating height and normals in your vertex shader.
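Something along these lines, as a rough sketch (the smoothstep weighting is just one possible choice, and `heightMap`/`texSize` are placeholder names):

```glsl
// Fetch the four neighboring heights explicitly and blend them with
// a smoothstep weight, instead of relying on the hardware's
// fixed linear filter.
float sampleHeight(sampler2D heightMap, vec2 uv, ivec2 texSize)
{
    vec2 texel = uv * vec2(texSize) - 0.5;
    ivec2 base = ivec2(floor(texel));
    vec2 f = smoothstep(0.0, 1.0, fract(texel)); // custom weighting

    float h00 = texelFetch(heightMap, base,               0).r;
    float h10 = texelFetch(heightMap, base + ivec2(1, 0), 0).r;
    float h01 = texelFetch(heightMap, base + ivec2(0, 1), 0).r;
    float h11 = texelFetch(heightMap, base + ivec2(1, 1), 0).r;

    return mix(mix(h00, h10, f.x), mix(h01, h11, f.x), f.y);
}
```

Normals could then come from finite differences of the same function.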

There’s some nice recent work in Eurographics on GPU ray casting terrain. It stores a height map in a texture and renders just a bounding box. A ray is cast through each fragment and the intersection with the height field is found, then shaded (or discarded). There’s also plenty of nifty optimizations. See GPU Ray-Casting for Scalable Terrain Rendering.

Although far from complete, I have an implementation in C# and GL 3.2, if you need a starting point.

Regards,
Patrick

@Alfonse: The texture is basically read in the vertex shader as a uniform, so it won't interpolate unless I am sampling at non-pixel-center locations. It's basically just used as a large floating-point array.
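For what it's worth, texelFetch (GLSL 1.30+) sidesteps the pixel-center bookkeeping entirely: it takes integer texel coordinates and never filters, so the texture really does behave like a plain 2D float array. A minimal sketch (`heightMap`, `column`, and `row` are placeholder names):

```glsl
// Exact, unfiltered read of one texel; no sampling math involved.
float h = texelFetch(heightMap, ivec2(column, row), 0).r;
```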

Patrick: Thanks, that's a good find! I am avoiding ray casting, though. The method I am using works fine with forward projection. I was just wondering whether all the RAM usage was strictly necessary. It looks like with forward projection, unfortunately, it is. The GPU doesn't/can't deduce the X/Y info, even though it's redundant.

It's basically just used as a large floating-point array.

So, like I said, why are you using a texture rather than an attribute?

Ray casting is great because it requires virtually no vertex data - just the bounding box. If you’re not going that route, have you considered using just one n×n mesh for all terrain tiles? Each time you draw a tile, use a different model matrix to translate it into place and a different texture for its height map. Then just do displacement mapping in the vertex shader. I need to do a bit more research myself, but I believe this is a common technique.
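A hedged sketch of what that vertex shader could look like (uniform and attribute names are mine; textureLod is used because vertex shaders have no implicit derivatives):

```glsl
#version 150

uniform sampler2D tileHeightMap; // this tile's height map
uniform mat4 model;              // places the shared mesh at the tile's position
uniform mat4 viewProjection;

// One shared n x n mesh, reused for every tile.
in vec2 gridPos;   // flat grid positions
in vec2 texCoord;  // matching texture coordinates into the height map

void main()
{
    // Displace the flat grid vertically by the tile's height map.
    float height = textureLod(tileHeightMap, texCoord, 0.0).r;
    vec4 world = model * vec4(gridPos, height, 1.0);
    gl_Position = viewProjection * world;
}
```

Between tiles you'd only rebind `tileHeightMap` and update `model`; the mesh buffers stay put.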

Regards,
Patrick

It's an application that is meant to stream large data sets in real time. It just so happens that my source height map is stored in a texture, and ‘converting’ it to a per-pixel vertex attribute structure before sending it over, rather than just binding it straight away as a texture, is going to slow down the streaming.

@Patrick: That's a nice idea, though I'm not rendering terrains, so I can't really do it the way you suggest.