Loading Volume Data to 3D texture

Hey all,

I’m trying to load volume data from a .raw file into a 3D texture. I have no experience with this, and I’m unable to find much information on how it is done (nearly all of the stuff I have seen has been about creating a 3D texture, rather than loading volume data into it).

The data being loaded is in the form of:

I suppose my main problem is that I don’t really understand what the data represents. I assume each line is a slice of the volume, and that the right-hand column is another representation of the data in the middle column (both are highlighted when I select either one in Visual Studio).

I am trying to load the volume using DevIL at the moment, but I am running into linker errors with it.

So what I’m really asking is whether anybody has any advice, or can point me to books, articles, or tutorials on the subject. I’m really stuck, and any help would be much appreciated. Thanks!

Hi,
The data you are viewing is in binary format, so you can’t inspect it with a normal text editor (as you are doing here), nor can you load it directly with an image-loading library like DevIL. Typically the data is 8-bit or 16-bit CT/MRI data; the source that gave you the dataset should also tell you the data type and the volume dimensions. Once you know these, you read the file into an array (I show how next) and then pass that array to the glTexImage3D function (shown later). There are two chores here: 1) loading the volume data and 2) rendering the volume data.
Loading raw binary data from disk into an OpenGL texture
You can use the C file I/O routines to load the data like this:


//assuming that the data at hand is 256x256x256 unsigned bytes
int XDIM = 256, YDIM = 256, ZDIM = 256;
const int size = XDIM*YDIM*ZDIM;
GLuint textureID;

bool LoadVolumeFromFile(const char* fileName) {
   FILE *pFile = fopen(fileName, "rb");
   if(NULL == pFile) {
      return false;
   }
   GLubyte* pVolume = new GLubyte[size];
   if(fread(pVolume, sizeof(GLubyte), size, pFile) != (size_t)size) {
      //the file is smaller than the expected volume
      fclose(pFile);
      delete [] pVolume;
      return false;
   }
   fclose(pFile);

   //load the data into a 3D texture
   glGenTextures(1, &textureID);
   glBindTexture(GL_TEXTURE_3D, textureID);

   //set the texture parameters
   //(GL_CLAMP_TO_EDGE avoids sampling the border color at the volume faces)
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

   //GL_INTENSITY/GL_LUMINANCE are legacy formats; on a core profile
   //you would use GL_R8/GL_RED instead
   glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY, XDIM, YDIM, ZDIM, 0,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, pVolume);
   delete [] pVolume;
   return true;
}

Volume rendering:
For this, there are numerous algorithms, including 2D/3D texture slicing, splatting, shear-warp, cell projection, iso-surface extraction, and raycasting. Currently, GPU raycasting is generally the best choice, given the computational resources available on modern GPUs.
For more info on these, see this link
http://www.daimi.au.dk/~trier/?page_id=98
and this book, whose accompanying source code is also valuable.
http://www.real-time-volume-graphics.org/

Hope this helps you.

That worked a treat! Thanks so much! I was beginning to mess around with V^3 to convert the raw data to pvm and then try and work with it, but you saved me a lot of work, thanks!

From my research on volume rendering, I have found a lot on the various types of rendering, such as GPU ray casting (including those links you provided), but little to no information on the actual datasets and how to load them. A number of sites host the datasets but never really explain how to use them, which is frustrating and a bit strange too, considering most (if not all) of the volume data I have found is stored as .raw files.

I have been reading that Real-Time Volume Graphics book, which has been very useful. I now need to get the volume stored as a 3D texture working with a shader performing ray casting (the book covers the shader part very well).

So once again, thanks mobeen for the great reply,

No problems. There are two things that I would like to highlight here based on my experience with volume rendering and datasets in particular.

  1. Always make sure that the dataset’s type matches what your code expects. Datasets may be signed/unsigned, byte/short/int, and big-endian/little-endian, and based on this you might need to restructure your pVolume data. The steps above are valid for an 8-bit dataset only; for a 16-bit dataset you would use GLushort, of course.
  2. In the above code, I did not clear the first and last slices to 0. Usually this won’t make much difference if the dataset already contains 0s in the first and last slices; however, if those slices contain data, it can give you problems when you render the volume, for instance with GPU raycasting. Clearing them is quite simple: add the following lines after reading the volume data from the file.

//clear the first slice to 0
//(one slice is XDIM*YDIM voxels; memset does not modify pVolume itself,
//so there is no need to save and restore the pointer)
memset(pVolume, 0, sizeof(GLubyte)*XDIM*YDIM);

//clear the last slice to 0
memset(pVolume + (size - XDIM*YDIM), 0, sizeof(GLubyte)*XDIM*YDIM);

//generate the 3D texture from pVolume as before
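For the first point above, here is a minimal sketch (names and dimensions are illustrative, not from the code above) of loading a 16-bit dataset that is stored big-endian, byte-swapping each voxel on a little-endian machine:

```cpp
#include <cstdio>

// Swap the two bytes of a 16-bit value (big-endian file, little-endian CPU).
inline unsigned short SwapBytes16(unsigned short v) {
    return (unsigned short)((v >> 8) | (v << 8));
}

// Load a 16-bit .raw volume of 'count' voxels, byte-swapping after reading.
// Returns 0 on failure; caller owns the returned array.
unsigned short* LoadVolume16(const char* fileName, int count) {
    FILE* pFile = fopen(fileName, "rb");
    if (!pFile) return 0;
    unsigned short* pVolume = new unsigned short[count];
    if (fread(pVolume, sizeof(unsigned short), count, pFile) != (size_t)count) {
        fclose(pFile);
        delete [] pVolume;
        return 0;
    }
    fclose(pFile);
    for (int i = 0; i < count; ++i)
        pVolume[i] = SwapBytes16(pVolume[i]);
    return pVolume;
}
```

You would then pass GL_UNSIGNED_SHORT as the type argument to glTexImage3D. Alternatively, OpenGL can do the swap for you at upload time via glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE).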

For 2), what problems could arise if the first and last slices are not cleared to 0?

When visualizing an iso-surface of the data, you will see through the dataset, because the surface is left open where the data touches the boundary slices.

Ah ok, I didn’t know that, and I’m sure that would have caused me some headaches down the line, thanks for all your help, it’s much appreciated! :slight_smile:

Hey, I’m back again :smiley:

What I’m trying to do now is subdivide the volume data with an octree. I figured that when subdividing, I just check the alpha values of all the volume data within a given cube: if they are all 0, there is no volume data there, so skip it; otherwise, subdivide.

I believe that once I get the scalar values, I will be able to get the alpha values from the transfer function, and use them to determine whether to subdivide or not.
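That idea can be sketched like this, assuming a 256-entry 1D transfer function for an 8-bit volume (the table, threshold, and function names here are made up for illustration):

```cpp
// Hypothetical 256-entry transfer function mapping an 8-bit scalar to alpha.
// Everything below the threshold is fully transparent; above it, alpha
// ramps linearly up to 1.
float transferAlpha[256];

void BuildTransferFunction(int threshold) {
    for (int i = 0; i < 256; ++i)
        transferAlpha[i] = (i < threshold)
            ? 0.0f
            : (i - threshold) / (255.0f - threshold);
}

// Returns true if every scalar in the brick maps to alpha 0,
// i.e. an octree node covering this brick can be skipped.
bool BrickIsEmpty(const unsigned char* scalars, int count) {
    for (int i = 0; i < count; ++i)
        if (transferAlpha[scalars[i]] > 0.0f)
            return false;
    return true;
}
```

Note that the emptiness test depends on the transfer function, so if the user can edit the transfer function at runtime, the octree classification has to be redone when it changes.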

Anyway, because I’m doing this on the CPU, I’m struggling to get the scalar values. Is there any way to access these while loading the raw data file or from the loaded texture itself on the CPU side?

I assumed that I would be able to access the scalar values by looping through pVolume (following Mobeen’s code) and casting each one to an int. But I’m unsure if this is the correct way of doing it.

Thanks for the help, have a good day :slight_smile:

I assumed that I would be able to access the scalar values by looping through pVolume (following Mobeen’s code) and casting each one to an int. But I’m unsure if this is the correct way of doing it.

You don’t have to cast it; you can use the value directly, based on the type of the dataset. Let’s say my volume dataset is of unsigned short type (16-bit) and pVolume is declared as a GLushort array; you could simply do this:


//where 0 <= index < (XDIM*YDIM*ZDIM)
//value is any integer constant you want to compare against
if(pVolume[index] == value) {
   printf("Value found at index: %d\n", index);
}

The only thing you need to be careful with is the calculation of index, and make sure that you are doing the interpolation (nearest/linear/whatever method you like) correctly. On the GPU, though, the interpolation is done for you, and you only need to look up the value using the tex3D function.
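To make that index calculation concrete, here is a minimal sketch (function names are illustrative) that flattens (x, y, z) voxel coordinates into a flat index, and uses it to scan an axis-aligned sub-brick of an 8-bit volume for emptiness, which is the kind of test the octree subdivision above needs:

```cpp
// Flatten (x, y, z) voxel coordinates into an index into the volume array,
// assuming the .raw file stores x fastest, then y, then z (slice by slice).
inline int VoxelIndex(int x, int y, int z, int xdim, int ydim) {
    return x + y * xdim + z * xdim * ydim;
}

// Returns true if every voxel in the box [x0,x1) x [y0,y1) x [z0,z1) is 0.
bool SubBrickIsZero(const unsigned char* volume, int xdim, int ydim,
                    int x0, int x1, int y0, int y1, int z0, int z1) {
    for (int z = z0; z < z1; ++z)
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                if (volume[VoxelIndex(x, y, z, xdim, ydim)] != 0)
                    return false;
    return true;
}
```

Whether the x- or z-axis varies fastest depends on how the particular .raw file was written; if a rendered volume looks transposed, the stride order in VoxelIndex is the first thing to check.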

Ah I see, thanks again :slight_smile:

I don’t mean to keep pestering you Mobeen, I swear :smiley:

When I get the datasets from volvis.org, it says that the dimensions are 64 x 64 x 64 with a spacing of 1:1:1. I take it that does not mean that the volume (including the empty space) is 64 units long in world space. Would that be correct?

Because I have an octree, and the bounding volume is set to be 64 x 64 x 64, and it is about 20 times bigger than the actual volume.

Edit: Ah, I think I was being stupid here. The bounding volume should go from <0, 0, 0> to <1, 1, 1>. So if you have a volume of size 64 x 64 x 64, the second element would be at position <1/64, 0/64, 0/64>, which is <0.0156, 0, 0>. I think so anyway.

When I get the datasets from volvis.org, it says that the dimensions are 64 x 64 x 64 with a spacing of 1:1:1. I take it that does not mean that the volume (including the empty space) is 64 units long in world space. Would that be correct?

It means that the whole volume contains 64x64x64 voxels. How it maps to world coordinates is up to you. Instead of normalizing the bounding volume to 0-1, why not just use the voxel coordinates (0,0,0)-(XDIM-1, YDIM-1, ZDIM-1)? It will be easier for you to index the values directly, rather than multiplying by the size every time you need a lookup (and believe me, there will be a lot of lookups). But then again, that is just my suggestion; use whatever you feel is good.
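One detail worth noting if you do keep normalized 0-1 coordinates anywhere: with GL_LINEAR filtering, texel centers sit at half-voxel offsets, so the texture coordinate for voxel x is (x + 0.5)/XDIM, not x/XDIM. A small sketch of the conversion (struct and function names are illustrative):

```cpp
// Convert integer voxel coordinates in a xdim x ydim x zdim volume
// to normalized texture coordinates at the texel center.
struct Vec3 { float x, y, z; };

Vec3 VoxelToTexCoord(int x, int y, int z, int xdim, int ydim, int zdim) {
    Vec3 t;
    t.x = (x + 0.5f) / xdim;   // texel center, not texel corner
    t.y = (y + 0.5f) / ydim;
    t.z = (z + 0.5f) / zdim;
    return t;
}
```

Sampling at texel corners (x/XDIM) instead shifts the whole volume by half a voxel and blends neighboring slices together, which shows up as a subtle blur or offset.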

Sorry about the late response. Yes that makes more sense alright, you are right, there are a lot of look-ups.

I have the octree generated, and now I’m at the stage where I have to upload the octree to the GPU and incorporate it with the raycasting shader.

Finding it to be complicated, and tricky. Just wish there were more information and examples of this out there.

Hey, I have another question.

For volumes whose spacing or scale factor is not 1:1:1, how do you incorporate the actual spacing into the rendering?

I have seen the Volume Rendering 101 tutorial on the Graphics Runner site. The way it is implemented there, multiplying the position by the scale factor in the vertex shader, is not working for me. Is there any other way of doing this?

I have looked at the Real-Time Volume Graphics book and it doesn’t mention it either.

For volumes whose spacing or scale factor is not 1:1:1, how do you incorporate the actual spacing into the rendering?

There are many ways to handle this.

  1. Resample the volume data onto a uniform grid.
  2. Rescale the proxy geometry slices/shells (this works for texture slicing/splatting/cell projection). This is the same as the method you mentioned of scaling in the vertex shader.
  3. For a raycaster, you step into the volume in increments based on the spacing; in other words, the spacing becomes your step size for sampling the volume. Usually, you would step in the raycaster like this:

//ray_dir is the direction the view ray is pointing in
vec3 volume_size   = vec3(XDIM, YDIM, ZDIM);
vec3 voxel_spacing = vec3(1.0, 1.0, 1.0);   //uniform spacing
//vec3 voxel_spacing = vec3(1.0, 0.2, 0.2); //non-uniform spacing

vec3 step_size = voxel_spacing / volume_size;
vec3 dir_step  = step_size * ray_dir;
//sample the volume from the camera position, advancing by dir_step
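To show where that step lands in a raycaster, here is a minimal GLSL fragment-shader sketch of a front-to-back raycasting loop (uniform and variable names are illustrative, and the transfer function here is trivially the scalar itself):

```glsl
uniform sampler3D volume;     //the 3D texture loaded earlier
uniform vec3 cam_pos;         //camera position in texture space ([0,1]^3)
uniform vec3 volume_size;     //vec3(XDIM, YDIM, ZDIM)
uniform vec3 voxel_spacing;   //e.g. vec3(1.0, 0.2, 0.2)

varying vec3 ray_dir;         //view-ray direction, interpolated per fragment

void main() {
    vec3 step_size = voxel_spacing / volume_size;
    vec3 dir_step  = normalize(ray_dir) * step_size;
    vec3 pos       = cam_pos;
    vec4 color     = vec4(0.0);

    for (int i = 0; i < 512; ++i) {
        //stop once the ray leaves the unit cube or the pixel is opaque
        if (any(lessThan(pos, vec3(0.0))) ||
            any(greaterThan(pos, vec3(1.0))) ||
            color.a >= 0.95)
            break;

        float s  = texture3D(volume, pos).r; //scalar at this sample
        vec4 src = vec4(s);                  //trivial transfer function

        //front-to-back alpha compositing
        color.rgb += (1.0 - color.a) * src.a * src.rgb;
        color.a   += (1.0 - color.a) * src.a;

        pos += dir_step;
    }
    gl_FragColor = color;
}
```

Because dir_step is scaled per axis by voxel_spacing, a non-uniform dataset is traversed with the correct anisotropic step without any change to the proxy geometry.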

I did the third option, but I’m getting some projection issues.

When I move closer to the volume, it becomes narrower. Here are some images that show the problem.

I must have to change my matrices as well, yeah?

Thanks for all the help by the way, you have been great!