Figuring out bounding box for transformed object

Folks,

I’m writing code to create two-color anaglyph images of my 3D fractal images.

To create the anaglyph I need to select a zero parallax distance - a distance from the camera where the left and right eye views coincide. Vertexes at the zero parallax distance appear to be on the surface of the computer screen. Objects at less than the zero parallax distance appear to float in front of the screen, and objects at greater than the zero parallax distance appear to be behind the surface of the screen.

I’d like to have a slider in my app that lets the user shift the 3D object anywhere from in front of the screen, to centered on the screen, to just behind the screen.

To place my 3D object accurately, I would like to know my 3D object’s closest and furthest point from the camera, after applying the model view transformation (the user interface in my program allows the user to move the camera around the image, which applies translations and rotations to the model view matrix.)

I need to know the closest and furthest points before I draw my object, so I can calculate the zero parallax distance.

Here’s what I’m thinking as a crude way to calculate my closest and furthest points. First, I’d run through ALL the vertexes in my object (sometimes millions of vertexes) and record the maximum and minimum values for x, y, and z. I would only need to do this once unless my 3D object changes. This would let me create a bounding cube. I would then get the value of my model view matrix and multiply all the vertexes of my bounding cube by that matrix. I’d then find the largest and smallest z values in my transformed bounding cube.
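That crude approach can be sketched in a few lines. This is a pure-Python illustration (the function names are made up, the matrix is assumed row-major, and I'm assuming the OpenGL eye-space convention where the camera looks down the negative z axis):

```python
def aabb(vertices):
    # One pass over all vertices to get the axis-aligned bounding box.
    mn = [min(v[i] for v in vertices) for i in range(3)]
    mx = [max(v[i] for v in vertices) for i in range(3)]
    return mn, mx

def corners(mn, mx):
    # The 8 corners of the bounding box.
    return [(x, y, z) for x in (mn[0], mx[0])
                      for y in (mn[1], mx[1])
                      for z in (mn[2], mx[2])]

def transform_z(m, p):
    # z component of a 4x4 row-major matrix applied to (x, y, z, 1).
    return m[2][0] * p[0] + m[2][1] * p[1] + m[2][2] * p[2] + m[2][3]

def near_far(vertices, modelview):
    # Transform only the 8 box corners, not the millions of vertices.
    mn, mx = aabb(vertices)
    zs = [transform_z(modelview, c) for c in corners(mn, mx)]
    # Eye space looks down -z, so the nearest distance is -max(z)
    # and the farthest is -min(z).
    return -max(zs), -min(zs)
```

For a unit cube translated 5 units down the negative z axis this gives a near distance of 4 and a far distance of 5; the `aabb` pass only has to be redone when the object itself changes.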

Would that be too crude to be useful? My 3D object is a rectangular height map of a fractal plot. For some plots the height of every vertex is near zero; for others the plot can be as tall as it is wide, and the height can vary chaotically from vertex to vertex.

Is there another method that would give me a more accurate maximum and minimum distance value fairly efficiently? I’d like to avoid doing the full model view transformation on every point in my plot.

I seem to remember reading about a way of finding a front clipping plane and rear clipping plane for a transformed object, but can’t find it.

Can anyone help?

If you’re going “crude” you might as well use a bounding sphere, since that doesn’t change with the viewing angle.
Or you could reduce the detail by calculating the minimum and maximum for each grid block.

To use the GPU for determining front and back clipping planes, I suppose you could render the entire object into the backbuffer at the size of a single pixel with depth testing and depth writes, and retrieve the depth value for that pixel.
I’m not sure if you could additionally use the stencil buffer, but if not, you would have to render it twice: once for the minimum depth, once for the maximum depth.
Or you could, depending on GPU capabilities, render the depth to a cubemap (one of the purposes of the Geometry shader stage).

T101,

Thanks for the reply. I’d like my front and rear distances to be fairly close, as these values will determine the placement of my 3D object in the stereo display.

I’m looking for a good trade-off between accuracy and efficiency. Rendering the entire object 2 additional times seems like a major slow-down.

Currently my 3D object is not broken into grid blocks. It’s a series of triangle strips that run all the way from min y to max y for my rectangular height map. It would be quite a bit of work to reorganize my drawing into smaller blocks.

Surely you are rendering triangle strips, but presumably your source data is not a triangle strip but a two-dimensional array?

The grid block suggestion is to use a coarse version of the original map (sort of like a mipmap in construction) to keep track of the min/max.
Let’s say, for example that you have a grid of 1000 x 1000 height values. Suppose you split it up (conceptually) into blocks of 100x100.
Before the first time you render the object, you’ll have to calculate the min/max of all of these blocks, so you have to walk through all 1 million grid points:
outer loop: walk through all grid blocks
inner loop: for the grid block, calculate the minimum and maximum height value
To get the total min/max, you need to get the maximum of the maxima per gridblock, and the minimum of the minima per gridblock.
So far there is no advantage.

However, if a height in a grid block changes, you only need to recalculate that one grid block, and subsequently do the same max of max and min of min calculations. In this specific example that saves you almost 90% of the calculations.
You could even add more layers into it, to reduce the necessary recalculations.
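A sketch of that two-level scheme (the block size and function names are arbitrary; in the 1000 x 1000 example above BLOCK would be 100):

```python
BLOCK = 4  # block edge length in grid cells

def block_minmax(heights, bi, bj):
    # Min/max over one BLOCK x BLOCK tile of the height map.
    vals = [heights[i][j]
            for i in range(bi * BLOCK, (bi + 1) * BLOCK)
            for j in range(bj * BLOCK, (bj + 1) * BLOCK)]
    return min(vals), max(vals)

def build_coarse(heights):
    # Done once up front: the expensive pass over every grid point.
    nb = len(heights) // BLOCK
    return [[block_minmax(heights, bi, bj) for bj in range(nb)]
            for bi in range(nb)]

def set_height(heights, coarse, i, j, h):
    # A single edit only recomputes the one block it falls in.
    heights[i][j] = h
    coarse[i // BLOCK][j // BLOCK] = block_minmax(heights, i // BLOCK, j // BLOCK)

def total_minmax(coarse):
    # Min of the minima and max of the maxima over all blocks.
    return (min(mn for row in coarse for mn, _ in row),
            max(mx for row in coarse for _, mx in row))
```

After `build_coarse`, each height change costs one `block_minmax` over BLOCK² cells plus a cheap pass over the (much smaller) coarse grid, instead of a pass over the whole map.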

You might treat the block, or the entire heightfield, as a box of the original width and length, and a height equal to the difference between the maximum and minimum height value.

T101,

I was thinking along similar lines. I could divide my height map into smaller rectangles and create a bounding box for each rectangle. When I rotate or zoom my plot, I could then apply that transform to the vertexes of my bounding boxes. That would give me a much better estimate of my closest and furthest points.

Thinking through this, I would want to apply my model view matrix to my bounding boxes, but not my projection matrix.

Is there a way to get OpenGL to apply a transformation matrix to a set of points using the GPU? Trying to do matrix multiplication on thousands of points with the CPU is apt to be quite slow, and may actually be slower than rendering my entire shape, at least in some cases.

What I really want OpenGL to do is apply my model view matrix to my array of bounding box vertexes, and only retain the maximum and minimum height value of the resulting points.

I guess I could apply the model view matrix and the projection matrix, draw the grid of bounding boxes to the back buffer, extract the maximum and minimum values from the depth buffer, and apply the inverse of the projection matrix to those values to get them back into eye space (the space after the model view transform but before projection).
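That last step has a closed form: for the standard OpenGL perspective matrix (glFrustum/gluPerspective style, with the default glDepthRange of [0, 1]) a window-space depth value can be mapped straight back to eye-space z. A sketch, under those assumptions:

```python
def eye_z_from_depth(d, near, far):
    # Invert the perspective depth mapping: window depth d in [0, 1]
    # back to eye-space z (negative in front of the camera).
    z_ndc = 2.0 * d - 1.0  # window depth -> NDC z in [-1, 1]
    return 2.0 * far * near / (z_ndc * (far - near) - (far + near))
```

A depth of 0 maps back to -near and a depth of 1 to -far, so the minimum and maximum depths read back from the rendered bounding boxes give the eye-space z range directly, without inverting the full matrix.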

Well, presumably you’re using double buffering.
You can do anything you like to the backbuffer, as long as you overwrite it before you swap buffers.

To ‘ignore’ the projection matrix, just use an orthographic projection.