Hi,
I’m trying to load a dataset to visualize. Each small cube is a voxel with a color, and by stacking many of them I end up with one big cube. A 64 x 64 x 64 volume is already a lot of cubes, and my goal is at least 256 x 256 x 256.
My question is where to look or what to read to find the best way of doing this.
Right now I’m using some of the OpenGL SuperBible (5th edition) code (e.g. the camera) to move around, and building that many cubes.
With N = 64 the response already gets slow. My current method is simply to
pre-create a cube display list (glGenLists…),
then at render time draw every cube I have and let OpenGL deal with what is and isn’t visible.
However, I’m sure there are better ways to accomplish what I’m doing.
For example, is there a way to offload this to the video card itself? Is frustum culling the solution? What techniques are out there? If you can give me their names, I can read about them and figure out what to do.
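For reference, frustum culling here would mean testing each cube’s axis-aligned bounding box against the six view-frustum planes before drawing it. A minimal sketch of that test (the struct names and the plane convention are my own, not from any particular library):

```c
/* A plane in Hessian normal form: dot(n, p) + d = 0.
   Points with dot(n, p) + d >= 0 count as "inside". */
typedef struct { float nx, ny, nz, d; } Plane;

/* Axis-aligned box given by its min and max corners. */
typedef struct { float min[3], max[3]; } AABB;

/* Classic AABB-vs-plane rejection: pick the box corner farthest
   along the plane normal (the "positive vertex"); if even that
   corner is behind the plane, the whole box is behind it. */
static int aabb_outside_plane(const AABB *b, const Plane *p)
{
    float px = (p->nx >= 0.0f) ? b->max[0] : b->min[0];
    float py = (p->ny >= 0.0f) ? b->max[1] : b->min[1];
    float pz = (p->nz >= 0.0f) ? b->max[2] : b->min[2];
    return p->nx * px + p->ny * py + p->nz * pz + p->d < 0.0f;
}

/* A box is culled if it lies fully outside any frustum plane.
   Conservative: it may keep a box that is actually outside near
   a frustum edge, but it never culls a visible one. */
int aabb_visible(const AABB *b, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i)
        if (aabb_outside_plane(b, &planes[i]))
            return 0;
    return 1;
}
```

For a 256^3 volume you would not test each of the 16 million cubes individually; you test the bounding boxes of larger blocks (octree nodes, as suggested below) and only descend into blocks that pass.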
Are your cubes axis-aligned? What about their sizes? Are they all the same size?
The easiest way to manage this is with an octree.
If all your cubes are axis-aligned and all the same size, think about it a bit: you should be able to find a very good solution that ends up calling very few rendering functions (don’t forget that you can never see all six faces of a cube at the same time!).
PS: avoid using display lists here. Try VBOs instead (better), or at least one display list per cube face, so that you can decide which faces of the cube to render.
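The six-faces remark can be made concrete: for an axis-aligned cube, a face can only be seen when the eye lies on that face’s outward side, so at most three faces ever need to be submitted. A sketch of that test (function and enum names are mine):

```c
/* Face indices for an axis-aligned cube. */
enum { FACE_NEG_X, FACE_POS_X,
       FACE_NEG_Y, FACE_POS_Y,
       FACE_NEG_Z, FACE_POS_Z };

/* For a cube spanning [min, max] on each axis, the -axis face is
   visible only when the eye is below min on that axis, and the
   +axis face only when the eye is above max. Writes 0/1 flags
   into visible[6] and returns how many faces are potentially
   visible: 0 when the eye is inside, at most 3 otherwise. */
int visible_cube_faces(const float min[3], const float max[3],
                       const float eye[3], int visible[6])
{
    int count = 0;
    for (int axis = 0; axis < 3; ++axis) {
        visible[2 * axis]     = eye[axis] < min[axis]; /* -axis face */
        visible[2 * axis + 1] = eye[axis] > max[axis]; /* +axis face */
        count += visible[2 * axis] + visible[2 * axis + 1];
    }
    return count;
}
```

In a voxel grid you can go further: a face shared by two solid voxels can never be seen at all, so interior faces can be dropped entirely before any per-frame test runs.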
To continue this thread: my slices are 2D images, each 256x256 (I guess they could be bigger), so I need to find out how to create the 3D texture, since each slice seems to be just a 2D texture.
That would mean a 256x256x256 volume.
Thanks so much for the help. I’ll wait for more feedback and directions.
For example, loading a 3D texture… I’m a bit confused about how to take many images that are 2D and put them into one 3D texture.
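One way to picture the slices-to-volume step: glTexImage3D takes a single contiguous block of texels laid out Z-slice after Z-slice, each slice stored row by row exactly like an ordinary 2D texture, so "many 2D images" become "one 3D texture" just by concatenating them. A hedged sketch (the packing helper and the RGBA8 format are my assumptions, not from the thread):

```c
#include <stdlib.h>
#include <string.h>

/* Pack `depth` two-dimensional RGBA8 slices (each width*height
   texels) into one contiguous width*height*depth block, slice 0
   first. This is precisely the layout glTexImage3D expects.
   Caller frees the returned buffer. */
unsigned char *pack_slices(unsigned char **slices,
                           int width, int height, int depth)
{
    size_t slice_bytes = (size_t)width * height * 4; /* 4 bytes per RGBA8 texel */
    unsigned char *volume = malloc(slice_bytes * (size_t)depth);
    if (!volume)
        return NULL;
    for (int z = 0; z < depth; ++z)
        memcpy(volume + (size_t)z * slice_bytes, slices[z], slice_bytes);
    return volume;
}

/* With the packed block, the upload is a single call (sketch):
   glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8,
                width, height, depth, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, volume);
   Alternatively, allocate the texture once and skip the packing by
   uploading slice by slice with
   glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z,
                   width, height, 1,
                   GL_RGBA, GL_UNSIGNED_BYTE, slices[z]); */
```

Either route gives the same result; packing first costs one extra copy in memory but keeps the upload to a single driver call.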