Thesis: relief generation based on bump-mapping

Hello, I was wondering if you could give me some advice on the OpenGL application I have to implement for my thesis:

The thesis is about relief generation from a 3D model, once as a mesh and once as a bump map. The relief should be rendered for both visual and haptic output. To get sufficient detail in the haptic rendering I have to use adaptive subdivision surfaces. The haptic API uses OpenGL statements for rendering meshes and something called force mapping for the bump map, so I can basically use the same code for both renderings.

Part 1: Mesh
I start by sending rays from the observer to the model and saving the distance to the intersection point of each ray in a height field; the height field should take perspective foreshortening into account. Next, this height field is compressed using methods from the tone-mapping literature (HDR and such), which preserve detail; this step is in principle unrelated to the OpenGL code. Mapping the compressed height field onto a mesh is the last step. The result is a relief representation of the 3D model.
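Roughly what I have in mind for the ray-casting step, as a sketch only; intersect() is a stand-in for whatever ray/mesh intersection I end up using (e.g. BVH traversal over the triangles):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder: returns true and the hit distance t along the (normalised)
// ray if it hits the mesh. Replace with a real intersection test.
static bool intersect(const Vec3& origin, const Vec3& dir, float& t)
{
    (void)origin; (void)dir; (void)t;
    return false;
}

std::vector<float> buildHeightField(int width, int height,
                                    float fovY, const Vec3& eye)
{
    std::vector<float> heightField(width * height, 0.0f);
    const float aspect   = float(width) / float(height);
    const float tanHalfF = std::tan(fovY * 0.5f);

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Ray through the centre of pixel (x, y), camera looking down -z.
            float px  = (2.0f * (x + 0.5f) / width  - 1.0f) * tanHalfF * aspect;
            float py  = (2.0f * (y + 0.5f) / height - 1.0f) * tanHalfF;
            float len = std::sqrt(px * px + py * py + 1.0f);
            Vec3  dir { px / len, py / len, -1.0f / len };

            float t;
            if (intersect(eye, dir, t)) {
                // Store the depth along the view axis (camera-space z) rather
                // than the raw ray length, so the height field keeps the
                // perspective foreshortening of the projection.
                heightField[y * width + x] = t / len;
            }
        }
    }
    return heightField;
}
```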

A paper I’ve read uses the z-buffer for this: they read the z-value of each pixel into an array, run the compression, and then write the new z-values back into the buffer. However, I still need to generate the geometry for the haptic output.
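This is how I picture the z-buffer variant in compatibility-profile OpenGL; compressHeightField() is only a placeholder for the tone-mapping style compression, and zNear/zFar have to match the projection that was used to render the model:

```cpp
#include <GL/glu.h>
#include <vector>

// Placeholder: the real code applies the tone-mapping style compression
// to the linear-depth height field here.
static void compressHeightField(std::vector<float>& depth, int w, int h)
{
    (void)depth; (void)w; (void)h;
}

void compressZBuffer(int w, int h, float zNear, float zFar)
{
    std::vector<float> depth(w * h);

    // 1. Read the non-linear depth-buffer values (range [0,1]).
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    // 2. Linearise to eye-space depth so the compression works on real
    //    distances, compress, then convert back to depth-buffer range.
    for (float& d : depth) {
        float ndcZ = 2.0f * d - 1.0f;
        d = 2.0f * zNear * zFar / (zFar + zNear - ndcZ * (zFar - zNear));
    }
    compressHeightField(depth, w, h);
    for (float& d : depth) {
        float ndcZ = (zFar + zNear - 2.0f * zNear * zFar / d) / (zFar - zNear);
        d = 0.5f * (ndcZ + 1.0f);
    }

    // 3. Write the compressed values back into the depth buffer. The depth
    //    test must pass unconditionally, and the colour buffer is masked so
    //    the raster colour does not overwrite the rendered image.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthFunc(GL_ALWAYS);
    glWindowPos2i(0, 0);
    glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    glDepthFunc(GL_LESS);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```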

Part 2: Bump-Map
For the bump map I again start by sending rays from the observer to the model, but this time I save the normal at the intersection point of ray and model. I store the normals in a texture, which I use to visually render a bump map onto the mesh. For haptic rendering I use force mapping to map the normals directly as forces to the haptic device.

Evaluation:
The goal of the thesis is to compare both methods using the haptic rendering.

My questions:
How would you go about doing this?

Can I use buffers for all of this, i.e. read back both the z-value and the normal for each pixel? I would then do some kind of unproject to get the position of each pixel in world coordinates and go from there.
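For the unproject step I was thinking of something along these lines with gluUnProject, assuming the fixed-function matrix stacks still hold the modelview/projection used for rendering:

```cpp
#include <GL/glu.h>

bool pixelToWorld(int x, int y, double& wx, double& wy, double& wz)
{
    GLdouble modelview[16], projection[16];
    GLint    viewport[4];
    GLfloat  depth;

    glGetDoublev(GL_MODELVIEW_MATRIX,  modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Depth of this one pixel, in [0,1].
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    // Inverts projection and modelview for the pixel centre at that depth.
    return gluUnProject(x + 0.5, y + 0.5, depth,
                        modelview, projection, viewport,
                        &wx, &wy, &wz) == GL_TRUE;
}
```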

sending rays from the observer to the model

What do you mean? Ray tracing a high-resolution 3D mesh, or scanning a real-world object?

Oh sorry, I meant ray tracing a 3D mesh.

First off, I may have completely misunderstood what you’re trying to do here.
But if I did understand correctly:

Part 1 of your question:
I’m not sure why you would be using rasterisation for this, as it either introduces detail that isn’t there or removes detail that was there before.
Depending on the purpose (I think haptic has something to do with touch, but I don’t know how the geometry will subsequently be used), you may have a problem with rasterisation, since the perspective division projects onto a plane, not onto a sphere.
If the purpose is to simplify a rough surface, have a look at http://graphics.cs.uiuc.edu/~garland/CMU/scape/, since that sounds similar to terrain simplification.

Part 2:
Assuming a projection onto a plane is OK, I’d say you can use normal rasterisation of the original 3D object, combined with a fragment shader that outputs the interpolated vertex normals to the colour buffer.
Then read back the frame buffer to get the bump map.
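A rough sketch of that idea (shader compile/link boilerplate omitted; the GLSL 1.20 built-ins assume the compatibility profile). The normals are packed from [-1,1] into [0,1] so they survive an 8-bit colour buffer; rendering into a float FBO would avoid the quantisation:

```cpp
#include <GL/gl.h>
#include <vector>

const char* vertSrc =
    "#version 120\n"
    "varying vec3 normal;\n"
    "void main() {\n"
    "    normal      = gl_NormalMatrix * gl_Normal;   // eye-space normal\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

const char* fragSrc =
    "#version 120\n"
    "varying vec3 normal;\n"
    "void main() {\n"
    "    vec3 n = normalize(normal);\n"
    "    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);     // pack [-1,1] -> [0,1]\n"
    "}\n";

// After rendering the model with this program bound:
void readBackNormals(int w, int h, std::vector<unsigned char>& rgba)
{
    rgba.resize(size_t(w) * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    // Unpack each normal as n = rgba / 255.0 * 2.0 - 1.0 when building the bump map.
}
```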

[Edit]
PS: if you can state the issue in terms of projection of geometry onto a plane, you can use rasterisation, and thus do it in OpenGL.

Hello,

thank you for your feedback. It did clear things up for me, since I had my own doubts about rasterisation. I will try it the conventional way (without rasterisation).

Thanks again,
Christian