Iblues76

05-19-2011, 12:20 PM

Hi,

I'm trying to load a dataset to visualize. Each small cube is a voxel with a color; by stacking many of them, I end up with one big cube. For example, 64 x 64 x 64 is already a lot of cubes, and my goal is at least 256 x 256 x 256.

My question is: where should I look or what should I read to find the best way of doing this?

Right now, I'm using some of the OpenGL SuperBible 5th edition code (e.g. the camera) to move around, and I'm building that many cubes.

Even at N = 64, the response gets slow. My current method is simply to:

pre-create a single cube (with glGenLists / a display list),

then, when rendering, draw all the cubes I have and let OpenGL deal with what is visible and what is not.

However, I'm sure there are better ways to accomplish what I'm doing.

For example, is there a way to offload this work to the video card itself? Is frustum culling the solution? What techniques are there? If you can give me their names, I can read up on them and figure out what to do.

Thank you.

Francisco
