Meshes - adaptive refinement, surface subdivision, etc.

tl;dr - I have an old, inefficient algorithm and am looking to update it with something newer, such as adaptive mesh refinement or surface subdivision. I have scientific data in the form of a 5000x3000 texture that I’d like to use as a displacement map on the mesh. Can someone comment on modern techniques to accomplish this? I have newer hardware; nVidia FX cards.


I’m investigating techniques to improve an existing application. The current app renders a texture of around 5000x3000 onto a mesh of vertices, using the texture as a simple height map. The main problem with the current technique is that some parts of the mesh carry a high vertex count while the mesh as a whole still lacks the resolution to adequately visualize the details of the texture. I simply don’t have enough geometry in some spots to really do the data justice, and far too much detail in other areas where it’s not needed. I also need to light the mesh, so the new technique must provide normals - something I don’t currently support.
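
For reference, the displacement itself is the easy part - in shader terms it boils down to a single vertex-texture fetch, something like this sketch (GLSL; assuming the grid lies in the XZ plane and heights sit in the red channel; uHeightMap and uHeightScale are just illustrative names, not my actual code):

    uniform sampler2D uHeightMap;   // the ~5000x3000 data texture (illustrative)
    uniform float uHeightScale;     // vertical exaggeration (illustrative)

    void main()
    {
        // explicit LOD is required for texture reads in a vertex shader
        float h = texture2DLod(uHeightMap, gl_MultiTexCoord0.st, 0.0).r;

        // displace the flat grid vertex along the up axis
        vec4 displaced = gl_Vertex;
        displaced.y += h * uHeightScale;

        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * displaced;
    }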

A simple way to understand the application is to compare it to a terrain renderer. My data is scientific in nature, but it’s similar to a height map. The texture is of a fixed size (often ~5000x3000), and the output resembles terrain. Colours are a bit different, but you get the idea.

My current technique involves a series of patches that form a larger overall mesh. Each patch contains several levels of detail, so there’s a lot of geometry resident on the hardware that I never actually use. I’m looking for something much more efficient in terms of performance and memory - something that dynamically adds detail to a portion of the mesh based on some metric, such as camera distance.

I’ve been looking at different techniques, but I’m not quite sure what’s best. My hardware is of the newer nVidia FX class; FX4800 and above. I’ve considered an adaptive mesh technique similar to http://http.developer.nvidia.com/GPUGems3/gpugems3_ch05.html, and I’ve also looked at surface subdivision as described at http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter07.html. I’m leaning towards the adaptive mesh refinement.

Essentially what I’d like to do is define a low-resolution mesh and add detail to the portions near the camera using the GPU. And of course I’d like to use my large texture as a displacement map. As I mentioned previously, I need to light this as well.

Any guidance you can provide would be appreciated. I’m hoping to hear back from the community as to what works, what doesn’t and so on.

Thanks

My hardware is of the newer nVidia FX class; FX4800 and above.

Is that a Quadro FX 4800 or something else? Because a GeForce FX 4800 doesn’t exist.

In any case, 5000x3000 is not a very big map, in terms of the number of actual data points. How do you fill in the gaps between texels?

Yes, a Quadro FX 4800.

Fill in the gaps between the texels? I’m using the texture as a height map to displace vertices in the mesh. I’m also using the same texture to, well, texture the mesh.

Fill in the gaps between the texels?

Yes. That’s what subdivision is for. You have a heightfield at one resolution, and you want to improve the visual quality of the mesh as you get closer. So you subdivide the mesh and apply some algorithm to the new vertices to compute their positions and normals and such.
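
For the position part, note that the hardware’s bilinear filter already interpolates between texels for free; if that looks too faceted up close, a 16-tap Catmull-Rom (bicubic) height fetch is a common upgrade. A rough sketch, with illustrative uniform names:

    uniform sampler2D uHeightMap;
    uniform vec2 uTexelSize;    // 1.0 / texture dimensions (illustrative)

    // 1D Catmull-Rom spline through samples a,b,c,d at parameter t in [0,1]
    float catmullRom(float a, float b, float c, float d, float t)
    {
        return b + 0.5 * t * (c - a
             + t * (2.0*a - 5.0*b + 4.0*c - d
             + t * (3.0*(b - c) + d - a)));
    }

    // bicubic height fetch: Catmull-Rom across four rows, then down the column
    float heightBicubic(vec2 uv)
    {
        vec2 st   = uv / uTexelSize - 0.5;           // texel-space coordinate
        vec2 f    = fract(st);
        vec2 base = (floor(st) + 0.5) * uTexelSize;  // centre of texel (0,0)

        float rows[4];
        for (int j = 0; j < 4; ++j)
        {
            float y = float(j) - 1.0;
            float s0 = texture2DLod(uHeightMap, base + vec2(-1.0, y) * uTexelSize, 0.0).r;
            float s1 = texture2DLod(uHeightMap, base + vec2( 0.0, y) * uTexelSize, 0.0).r;
            float s2 = texture2DLod(uHeightMap, base + vec2( 1.0, y) * uTexelSize, 0.0).r;
            float s3 = texture2DLod(uHeightMap, base + vec2( 2.0, y) * uTexelSize, 0.0).r;
            rows[j] = catmullRom(s0, s1, s2, s3, f.x);
        }
        return catmullRom(rows[0], rows[1], rows[2], rows[3], f.y);
    }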

Unless you’re talking about a way to turn the “high” resolution (5000x3000 is not high resolution. It’s only 15 million vertices) into lower resolutions. In which case, I’m afraid not much progress has been made in this area.

Yes - right, and that’s the point of the post: what techniques can developers speak to? What have they tried, and what can they recommend?

Unless you’re talking about a way to turn the “high” resolution (5000x3000 is not high resolution. It’s only 15 million vertices) into lower resolutions. In which case, I’m afraid not much progress has been made in this area.

No, not talking about that. I’d like to start with a very coarse mesh and increase resolution in areas near the camera.

You could use hardware tessellation if you had an AMD Radeon HD 5870 or the upcoming NVIDIA GTX 480.
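
To give a flavour: on that class of hardware the displacement moves into the tessellation evaluation shader, and the tessellator generates the vertices for you. A rough GLSL 4.00 sketch, assuming quad patches with corners ordered 0-1-2-3 around the quad and illustrative uniform names:

    #version 400
    layout(quads, fractional_even_spacing, ccw) in;

    uniform sampler2D uHeightMap;           // illustrative names throughout
    uniform float     uHeightScale;
    uniform mat4      uModelViewProjection;

    in  vec2 tcTexCoord[];                  // per-corner texcoords from the TCS
    out vec2 teTexCoord;

    void main()
    {
        // bilinearly interpolate the patch corners at the generated coordinate
        vec2 uv = mix(mix(tcTexCoord[0], tcTexCoord[1], gl_TessCoord.x),
                      mix(tcTexCoord[3], tcTexCoord[2], gl_TessCoord.x),
                      gl_TessCoord.y);
        vec4 pos = mix(mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x),
                       mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x),
                       gl_TessCoord.y);

        // displace the generated vertex by the height map
        pos.y += textureLod(uHeightMap, uv, 0.0).r * uHeightScale;

        teTexCoord  = uv;
        gl_Position = uModelViewProjection * pos;
    }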

I assume the reason you’re trying to implement a LOD system is that your frame rates are really poor when rendering all 5000x3000 vertices. What are your frame rates? What is your target frame rate? How are you currently drawing the mesh? How many batches do you use to draw the mesh? Do you have to load several meshes, or just one mesh per application run? The answers to these questions will establish whether you really need a LOD system and might help determine which system to use.

Frame rates vary depending upon the resolution of the mesh. At high resolution (which is what I really need) the frame rate drops below 30 fps; the target is a steady 60 fps. I’m currently drawing with VBOs or display lists. I have to load several meshes, depending upon the number of different datasets.

The current algorithm was designed with smaller datasets in mind. It’s not batched very well.

To clarify - I’m not seeking to incrementally improve the existing algorithm. My intention is to replace it with a rewrite, and I’m considering one of the approaches I originally posted.

Most recently I’ve been looking at using something like this:
http://developer.download.nvidia.com/SDK/10.5/opengl/samples.html#instanced_tessellation

…alongside displacement maps if possible. I’m still investigating.

I’d very much like to hear from others who have experience with one of the techniques I outlined, or with other techniques they think might be helpful. It might be that I’m on the wrong path.

Well, if it can be compared with terrain rendering, why not try to adapt a GPU-friendly terrain rendering algorithm?

If you want to continue using your existing GPU, I guess something like Hoppe’s GPU-based geometry clipmaps might suit you (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter02.html), or maybe Harald Vistnes’s approach from Game Programming Gems 6. A good summary and discussion of doing Vistnes’s approach in OpenGL can be found here: http://blog.inequation.org/2008/11/gpu-terrain-rendering-in-opengl-part-1.html
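
The piece of the clipmap scheme that usually needs spelling out is the transition band: near the outer edge of each ring you morph the fine height towards the next-coarser level so the rings don’t pop as they shift. Something like this vertex-shader helper (my own sketch, not Hoppe’s code; all names illustrative):

    uniform sampler2D uFineHeight;    // this ring’s height level (illustrative)
    uniform sampler2D uCoarseHeight;  // next-coarser level
    uniform vec2  uViewerPos;         // viewer position in grid units
    uniform float uBlendStart;        // where morphing begins, in grid units
    uniform float uBlendEnd;          // outer edge of the ring

    float clipmapHeight(vec2 gridPos, vec2 fineUV, vec2 coarseUV)
    {
        float hFine   = texture2DLod(uFineHeight,   fineUV,   0.0).r;
        float hCoarse = texture2DLod(uCoarseHeight, coarseUV, 0.0).r;

        // morph factor ramps 0 -> 1 across the ring’s transition band
        float d     = max(abs(gridPos.x - uViewerPos.x),
                          abs(gridPos.y - uViewerPos.y));
        float blend = clamp((d - uBlendStart) / (uBlendEnd - uBlendStart), 0.0, 1.0);

        return mix(hFine, hCoarse, blend);
    }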

However, if you can get your hands on a DX11/OpenGL 4.0 card, tessellation is the way to go, I guess (doing tessellation at such a large scale on hardware not intended for it doesn’t really sound efficient to me, except maybe with PN-triangles).
You will probably want to take a look at “Adaptive Terrain Tessellation on the GPU” from SIGGRAPH ’08 (http://developer.nvidia.com/object/siggraph-2008-terrain.html).
Especially the bit on roughness bias on slide 7 might be of interest.
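
As I read the slides, the idea is to scale the distance-based tessellation factor by a per-patch roughness term, so flat regions don’t get over-tessellated. My paraphrase as a GLSL 4.00 tessellation control shader sketch (all names illustrative; uRoughness would be a small precomputed map, e.g. local height variance):

    #version 400
    layout(vertices = 4) out;

    uniform vec3  uEyePos;          // illustrative names throughout
    uniform float uLodScale;        // tuning constant: distance -> tess factor
    uniform sampler2D uRoughness;   // precomputed per-region roughness

    in  vec2 vTexCoord[];
    out vec2 tcTexCoord[];

    // factor for one patch edge: distance-based level biased up by roughness
    // sampled at the edge midpoint. Using only edge-shared data makes both
    // patches sharing the edge agree, which avoids cracks.
    float edgeLevel(vec4 a, vec4 b, vec2 uvA, vec2 uvB)
    {
        float dist  = distance(uEyePos, 0.5 * (a.xyz + b.xyz));
        float rough = textureLod(uRoughness, 0.5 * (uvA + uvB), 0.0).r;
        return clamp((uLodScale / dist) * (1.0 + rough), 1.0, 64.0);
    }

    void main()
    {
        tcTexCoord[gl_InvocationID] = vTexCoord[gl_InvocationID];
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

        if (gl_InvocationID == 0)
        {
            gl_TessLevelOuter[0] = edgeLevel(gl_in[3].gl_Position, gl_in[0].gl_Position,
                                             vTexCoord[3], vTexCoord[0]);
            gl_TessLevelOuter[1] = edgeLevel(gl_in[0].gl_Position, gl_in[1].gl_Position,
                                             vTexCoord[0], vTexCoord[1]);
            gl_TessLevelOuter[2] = edgeLevel(gl_in[1].gl_Position, gl_in[2].gl_Position,
                                             vTexCoord[1], vTexCoord[2]);
            gl_TessLevelOuter[3] = edgeLevel(gl_in[2].gl_Position, gl_in[3].gl_Position,
                                             vTexCoord[2], vTexCoord[3]);
            gl_TessLevelInner[0] = max(gl_TessLevelOuter[1], gl_TessLevelOuter[3]);
            gl_TessLevelInner[1] = max(gl_TessLevelOuter[0], gl_TessLevelOuter[2]);
        }
    }

It would pair with an evaluation shader like the one posted earlier in the thread that does the actual height-map displacement.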

Oh, and for your normal-generation problem: can you maybe generate them from the displacement map directly in your shader?
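
Something like this fragment-shader helper would do it via central differences (uTexelSize, uWorldTexel, uHeightScale are illustrative names; uWorldTexel is the world-space footprint of one texel):

    uniform sampler2D uHeightMap;   // same texture used for displacement
    uniform vec2  uTexelSize;       // 1.0 / texture dimensions (illustrative)
    uniform float uHeightScale;     // same vertical scale as the displacement
    uniform vec2  uWorldTexel;      // world-space size of one texel

    vec3 normalFromHeightMap(vec2 uv)
    {
        // central differences: slopes from the four neighbouring texels
        float hl = texture2D(uHeightMap, uv - vec2(uTexelSize.x, 0.0)).r;
        float hr = texture2D(uHeightMap, uv + vec2(uTexelSize.x, 0.0)).r;
        float hd = texture2D(uHeightMap, uv - vec2(0.0, uTexelSize.y)).r;
        float hu = texture2D(uHeightMap, uv + vec2(0.0, uTexelSize.y)).r;

        // the normal of a heightfield y = h(x,z) is proportional to
        // (-dh/dx, 1, -dh/dz)
        vec3 n;
        n.x = (hl - hr) * uHeightScale / (2.0 * uWorldTexel.x);
        n.z = (hd - hu) * uHeightScale / (2.0 * uWorldTexel.y);
        n.y = 1.0;
        return normalize(n);
    }

Per-pixel normals like this also sidestep the LOD problem: the lighting stays stable even when the underlying geometry density changes.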