Volume Rendering - projecting object-aligned slices to viewpoint-aligned slices

:confused:

Hi all

I’m actually looking for some working code, or pseudocode of an algorithm.

The issue: Quite a few groups over the last 5 or so years have succeeded in getting volume rendering working using 2D textures, by stacking slices.

With 2D textures there are 3 stacks, one for each major axis.

(Ignore 3D textures for now, there are good reasons for wanting this working with 2D textures).

However, most of them seem to rely on a method of making a stack of viewpoint-aligned slices, which cut arbitrarily through the object-aligned stack (the one selected as most perpendicular to the view direction).
This results in a series of polygon strips onto which various schemes can be employed to project the corresponding texture strips from the object-aligned slices.

eg: Interactive Volume Rendering On PC Hardware (PDF)

Whilst I can understand the basics of what is going on here - cutting bounding polygons with respect to an arbitrary plane, creating an intersection polygon, mapping vertices back to corresponding texture coordinates, etc - trying to wrap my head around the algorithms required isn’t getting me very far.
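To make it concrete, here’s roughly how I picture the “cut the bounding box with an arbitrary plane” step. This is untested and only gets as far as an intersection polygon with 3D coordinates; the part I can’t see is how to split that polygon into strips and map them back onto the 2D slice stacks:

```cpp
// Untested sketch: intersect the unit volume cube [0,1]^3 with one plane
// perpendicular to the view direction.  Each intersection vertex doubles as
// a texture coordinate because the volume fills the unit cube.  Splitting
// the polygon into strips between adjacent object-aligned slices (as in the
// paper) would be the next step.
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the slice polygon (3 to 6 vertices) for the plane dot(n, p) = d,
// ordered around its centroid so it can be drawn as a triangle fan.
std::vector<Vec3> slicePolygon(const Vec3& n, float d)
{
    static const int edges[12][2] = {
        {0,1},{1,3},{3,2},{2,0},  {4,5},{5,7},{7,6},{6,4},
        {0,4},{1,5},{2,6},{3,7}
    };
    Vec3 corner[8];
    for (int i = 0; i < 8; ++i)
        corner[i] = { float(i & 1), float((i >> 1) & 1), float((i >> 2) & 1) };

    std::vector<Vec3> poly;
    for (int e = 0; e < 12; ++e) {
        const Vec3& a = corner[edges[e][0]];
        const Vec3& b = corner[edges[e][1]];
        float da = dot(n, a) - d;
        float db = dot(n, b) - d;
        if ((da < 0.0f) != (db < 0.0f)) {            // edge crosses the plane
            float t = da / (da - db);
            poly.push_back({ a.x + t*(b.x - a.x),
                             a.y + t*(b.y - a.y),
                             a.z + t*(b.z - a.z) });
        }
    }
    if (poly.size() < 3) return poly;

    // Sort the vertices by angle around the centroid so the fan is convex.
    Vec3 c{0,0,0};
    for (const Vec3& p : poly) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= poly.size(); c.y /= poly.size(); c.z /= poly.size();

    // Build a 2D basis (v, w) lying in the slice plane.
    Vec3 u = std::fabs(n.x) < 0.9f ? Vec3{1,0,0} : Vec3{0,1,0};
    Vec3 v { n.y*u.z - n.z*u.y, n.z*u.x - n.x*u.z, n.x*u.y - n.y*u.x }; // n x u
    Vec3 w { v.y*n.z - v.z*n.y, v.z*n.x - v.x*n.z, v.x*n.y - v.y*n.x }; // v x n

    std::sort(poly.begin(), poly.end(), [&](const Vec3& a, const Vec3& b) {
        Vec3 pa{a.x-c.x, a.y-c.y, a.z-c.z}, pb{b.x-c.x, b.y-c.y, b.z-c.z};
        return std::atan2(dot(pa, v), dot(pa, w)) <
               std::atan2(dot(pb, v), dot(pb, w));
    });
    return poly;
}
```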

Searching for pseudocode or working code on the web hasn’t yielded anything either - either they are all hiding their code or it’s all been taken down.

At the moment I haven’t psyched myself up for the big dive into doing this from scratch, so instead I’m looking at the lowest-cost ways to keep the stack object-aligned (getting somewhere actually, but I’d like to have the ‘more correct’ method under my belt).

Can anybody help? Does anybody have working code that can perform these operations?

Perhaps I should post this in the Advanced OpenGL list but this seems a more appropriate forum to begin with.

Thank you for your time.

Jon

Don’t take this as gospel, as I’ve never implemented it… (my volume renderer is fragment shader + 3D texture based) but I don’t think you’ll need to do any mapping at all.

What should work, and be simpler, is to just use your object-aligned slices directly. Then you add some logic to switch which stack you draw and which direction you draw it in (front to back or back to front) based on the current view angle; something like the sketch below.
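Roughly like this (untested, names made up, and check the sign convention against your own view vector):

```cpp
// Rough sketch: pick which of the three object-aligned stacks to draw and in
// which order, based on the view direction expressed in object (volume) space.
#include <cmath>

enum Axis { AXIS_X = 0, AXIS_Y = 1, AXIS_Z = 2 };

struct StackChoice {
    Axis axis;        // which stack of slices to draw
    bool reversed;    // true: step through the stack from last slice to first
};

StackChoice chooseStack(float viewDirX, float viewDirY, float viewDirZ)
{
    float ax = std::fabs(viewDirX);
    float ay = std::fabs(viewDirY);
    float az = std::fabs(viewDirZ);

    StackChoice c;
    // Take the stack whose slice normal is closest to the view direction.
    if (ax >= ay && ax >= az) { c.axis = AXIS_X; c.reversed = viewDirX > 0.0f; }
    else if (ay >= az)        { c.axis = AXIS_Y; c.reversed = viewDirY > 0.0f; }
    else                      { c.axis = AXIS_Z; c.reversed = viewDirZ > 0.0f; }

    // Assumes viewDir points from the camera toward the volume; flip the
    // 'reversed' test if your convention is the other way round.
    return c;
}
```

For standard back-to-front alpha blending you start with the slice farthest from the camera; `reversed` just tells the draw loop which end of the stack that is.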

You could, in addition to what hifuGL suggested, render all 3 object-aligned stacks from back to front, with each stack’s opacity proportional to the dot product between its slice normal and the viewing vector (rough sketch below).
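Something like this, just to sketch the weighting (not tested):

```cpp
// Each stack gets a global opacity scale equal to the absolute dot product
// of its slice normal with the normalized view direction, so the stack
// facing the camera contributes the most.
#include <cmath>

void stackWeights(float vx, float vy, float vz, float w[3])
{
    float len = std::sqrt(vx*vx + vy*vy + vz*vz);
    w[0] = std::fabs(vx / len);   // weight for the X-aligned stack
    w[1] = std::fabs(vy / len);   // weight for the Y-aligned stack
    w[2] = std::fabs(vz / len);   // weight for the Z-aligned stack
    // Each weight would then be folded into the per-slice alpha
    // (e.g. via glColor4f or a texture-environment constant)
    // before rendering that stack back to front.
}
```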

The interpolations necessary for viewpoint alignment are tantamount to 3D texture mapping, which I’d rather let the hardware do – it is certainly much faster.

Thanks a lot guys.

I’m not sure though - if you use the object-aligned slices directly you get artifacts pretty quickly as the angle changes. E.g. in theory you could use a cube of 64x64x64 texels, but in practice, once you rotate it close to 45 degrees off-axis, the gaps between the slices become quite noticeable. What you can do then is render each slice more than once (which saves texture memory!) - but therein lies the rub: that means more pixel processing, and that’s where the real slowdown is.
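To put rough numbers on the gap problem (quick throwaway calculation, untested): if the slices are spaced d apart along their own axis, the sample distance along a view ray tilted by theta from that axis grows to d / cos(theta).

```cpp
// Effective sample spacing along the view ray for a tilted slice stack.
#include <cmath>
#include <cstdio>

int main()
{
    const float d = 1.0f / 64.0f;            // spacing for a 64-slice stack
    for (float deg = 0.0f; deg <= 45.0f; deg += 15.0f) {
        float theta = deg * 3.14159265f / 180.0f;
        std::printf("theta = %2.0f deg  ->  effective spacing = %.4f (%.2f x d)\n",
                    deg, d / std::cos(theta), 1.0f / std::cos(theta));
    }
    return 0;
}
```

At 45 degrees the spacing along the ray has already grown by roughly 41%, which is about where the gaps become obvious.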
I’ve seen some object-alignment-to-viewpoint-alignment engines work, and they run in real time.

The opacity thing is, well… part of a bigger mushroom: there are tricks that can be done with anisotropic filtering and various blending modes, but yes, I wonder how useful it is.
Hmmm, still thinking object alignment might be the way to go, particularly with the speed of current cards, but I would really LIKE to know how to do the slice projection to viewpoint alignment!
:wink:

Heck, I just found their source code and it’s nightmarish.

http://gd.tuwien.ac.at/visual/OpenQVis/

Still, somewhere to start.