Visualisation for meshes

Hello

I’m currently working on an open-source tool to create 3D models of indoor scenes using the Kinect (earlier version: http://www.youtube.com/watch?v=XejNctt2Fcs ).

The Kinect produces colored 3D point clouds which we want to align and visualise. The alignment works very well, but the visualisation could be improved.

If I scan e.g. a wall, you can imagine the data as a 2D array of colored points which all lie in the same plane.
Our first approach was to mesh the points in the naive way: take a point and its nearest neighbours in the grid and generate triangles from them, roughly as in the sketch below.
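
For clarity, here is a minimal sketch of what I mean by naive meshing, assuming an organized cloud stored row-major as width*height points (the Point struct and the NaN-as-missing-depth convention are placeholders, not our actual data structures):

    // Naive grid meshing of an organized point cloud:
    // two triangles per 2x2 cell of neighbouring samples.
    #include <cmath>
    #include <vector>

    struct Point { float x, y, z; };

    void meshNaive(const std::vector<Point>& cloud, int width, int height,
                   std::vector<int>& indices)  // 3 indices per triangle
    {
        for (int v = 0; v + 1 < height; ++v) {
            for (int u = 0; u + 1 < width; ++u) {
                int i00 = v * width + u, i10 = i00 + 1;
                int i01 = i00 + width,   i11 = i01 + 1;
                // Skip cells with missing depth.
                if (std::isnan(cloud[i00].z) || std::isnan(cloud[i10].z) ||
                    std::isnan(cloud[i01].z) || std::isnan(cloud[i11].z))
                    continue;
                // Two triangles covering the grid cell.
                indices.insert(indices.end(), { i00, i01, i10 });
                indices.insert(indices.end(), { i10, i01, i11 });
            }
        }
    }
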
When I take another scan, I get another set of points in a plane. Our algorithm aligns the two planes, and this is where the problems start:
The planes are never perfectly aligned, due to sensor noise in the Kinect and positional errors in our algorithm. As you can imagine, sometimes the first plane is closer to the camera than the second, and sometimes vice versa. The result sometimes looks as though the two planes were woven into each other, and the brightness in the regions where the planes almost coincide differs from the regions where one plane is a few cm closer to the camera and completely covers the other.
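
To make the effect concrete, here is a rough, self-contained illustration of why the visible surface flips back and forth when two noisy scans of the same wall almost coincide (the 5 mm noise level and 2 m wall distance are made-up assumptions, not measured Kinect figures):

    // Two scans of the same wall, each sample perturbed by sensor noise.
    // With a plain depth test, which scan "wins" flips from sample to
    // sample, producing the woven look described above.
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<float> noise(0.0f, 0.005f); // ~5 mm (assumed)
        const float wallZ = 2.0f; // true wall distance in metres (assumed)

        int flips = 0;
        bool firstWasCloser = false;
        for (int i = 0; i < 1000; ++i) {
            float z1 = wallZ + noise(rng); // sample from scan 1
            float z2 = wallZ + noise(rng); // sample from scan 2
            bool firstCloser = z1 < z2;    // plain depth test
            if (i > 0 && firstCloser != firstWasCloser) ++flips;
            firstWasCloser = firstCloser;
        }
        printf("visible surface flipped %d times out of 1000 samples\n", flips);
        return 0;
    }
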

So… my final question: does anybody have an idea on how to visualise such point clouds?

I can provide some examples if someone wants to test an approach.