Moiré patterns / woodgrain artifacts - Volume renderer

I’ve been working on a volume renderer off and on for a couple of months now. I’ve implemented axis-aligned (object-aligned) slicing renderers using 2D and 3D textures, a view-aligned slicing renderer (3D texture), and a single-pass ray-tracer.

Of all the implementations, the view-aligned renderer is, in my opinion, the best in terms of both performance and visuals. Performance is great and the visuals are wonderful, with one exception: it suffers horribly from woodgrain artifacts (moiré patterns) if the number of slices is too low (< 1000). The artifacts are very noticeable. See this graphic:

I tried to reduce/eliminate the artifacts using a method called stochastic jittering, which I found in a book I bought (Real-Time Volume Graphics; great book, BTW). This method is supposed to reduce the artifacts by adding random offsets to the sampling ray, which it reads from a texture of random numbers. I tried the method, but it didn’t work for me.

Here’s the shader, pretty much taken from the book:

float4 main(v2f_simple IN,
           float3 position : TEXCOORD1,
           uniform sampler3D Volume            : TEXUNIT0,
           uniform sampler1D TransferFunction  : TEXUNIT1,
           uniform sampler2D Random            : TEXUNIT3,
           float3 worldCameraPosition
           ) : COLOR
{
  // The random texture tiles every 32x32 pixels of the viewport.
  half2 tileSize = half2(32.0, 32.0);

  // Comment this statement out to remove the jitter: it offsets the
  // sample position along the ray by a per-fragment random amount.
  IN.TexCoord0 = IN.TexCoord0 + IN.rayDir
    * tex2D(Random, IN.Position.xy / tileSize.xy).x;

  half4 sample = tex3D(Volume, IN.TexCoord0);
  half4 result = tex1D(TransferFunction, sample.r);

  return result;
}
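
For reference, my understanding is that the offset is meant to shift each fragment by at most one slice spacing along the normalized ray direction; an unscaled offset can scatter samples across the whole volume. Here is a sketch of that scaled variant (SliceDistance is a uniform I would have to add, holding the spacing between adjacent slices in texture space, so this is not my current code):

  // Sketch: jitter by at most one slice spacing along the view ray.
  half3 dir    = normalize(IN.rayDir);
  half  offset = tex2D(Random, IN.Position.xy / tileSize.xy).x;  // in [0,1]
  IN.TexCoord0 = IN.TexCoord0 + dir * (SliceDistance * offset);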

Here’s how I calculate the ray direction used in the shader above:

  const CVec3d vec_right  (modelV[0],modelV[4],modelV[8]);   //vector for x axis
  const CVec3d vec_up     (modelV[1],modelV[5],modelV[9]);   //vector for y axis
  const CVec3d vec_eye    (modelV[2],modelV[6],modelV[10]);  //vector for z axis
  const CVec3d vec_origin (modelV[3],modelV[7],modelV[11]);  //vector for origin
  CVec3d camera_pos (modelV_invert[12],modelV_invert[13],modelV_invert[14]);  //vector for camera

  //_xMin, _yMin, & _zMin = -1.0
  //_xMax, _yMax, & _zMax =  1.0
  const CVec3d minDim(_xMin, _yMin, _zMin);
  const CVec3d maxDim(_xMax, _yMax, _zMax);

  CVec3d camera_volPos = (camera_pos - minDim) / (maxDim - minDim);

//...later in the code: map each slice vertex from object space into
//[0,1] texture space, then form the per-vertex ray direction.
  CVec3d texCoord((-pts[j].pt.x - _xMin) / 2.0,
                  (-pts[j].pt.y - _yMin) / 2.0,
                  (-pts[j].pt.z - _zMin) / 2.0);
  CVec3d rayDir = texCoord - camera_volPos;


Here’s the code I use to build the random texture. Again, taken from the book:

  if(_jitterTex2D)
  {
    glDeleteTextures(1, &_jitterTex2D);
    _jitterTex2D = 0;
  }

  const int size = 32;
  const int size_sq = size * size;
  unsigned char* buffer = (unsigned char*)malloc(size_sq);
  if(!buffer)
  {
    return false;
  }

  // Seed and fill the buffer with random bytes in [0, 255].
  srand((unsigned)time(NULL));
  for(int i = 0; i < size_sq; i++)
  {
    buffer[i] = (unsigned char)(255.0f * rand() / (float)RAND_MAX);
  }

  glGenTextures(1, &_jitterTex2D);

  _glActiveTextureARB(GL_TEXTURE3_ARB);
  glBindTexture(GL_TEXTURE_2D, _jitterTex2D);

  glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, size, size, 0, 
    GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

  // Tile the noise across the viewport with no filtering, so each
  // fragment reads exactly one random value.
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  
  free(buffer);
 

Does this all look correct? The visuals look wrong when I enable the jitter; I think I’m computing the ray vector incorrectly. When I use the “jitter” shader I get what looks like a particle cloud, like this:

WRONG (with jitter)

CORRECT, but without jitter

If anyone can see the problem here, I’d appreciate you pointing it out. It must be staring me in the face, but I can’t see it.

Is there any other method I can use to reduce these artifacts without dramatically increasing the number of passes?

Thanks everyone

The cause:

When you look from the right side (as the arrow indicates) and the slices happen to fall on the black lines, the result is what you see in the bar on the right.

The simplest solution is to increase the number of planes, as you described, but getting much better results that way comes with a severe performance cost.

Another solution is a shader that samples the volume at adaptive distances: depending on the difference between the last sample and the current one, it decreases or increases the step size. That way it travels quickly through empty space and through solid or translucent regions of constant density, but slows down at edges. A sketch of the idea follows.
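
Very roughly, in a single-pass raycasting fragment shader it could look like this. Everything here (rayStart, rayDir, baseStep, MAX_STEPS, EDGE_THRESHOLD, the step multipliers) is a placeholder I made up, so treat it as Cg-flavored pseudocode rather than a drop-in implementation:

  // Adaptive-step raymarching sketch (Cg). Placeholder inputs:
  //   rayStart, rayDir : ray entry point and unit direction, texture space
  //   baseStep         : nominal sampling distance
  float3 pos      = rayStart;
  float  stepSize = baseStep;
  float  lastVal  = tex3D(Volume, pos).r;
  float4 dst      = float4(0, 0, 0, 0);

  for (int i = 0; i < MAX_STEPS; i++)
  {
    pos += rayDir * stepSize;
    float val = tex3D(Volume, pos).r;

    float4 src = tex1D(TransferFunction, val);
    // Opacity correction keeps varying step sizes visually consistent.
    src.a = 1.0 - pow(1.0 - src.a, stepSize / baseStep);

    // Front-to-back compositing with early ray termination.
    dst.rgb += (1.0 - dst.a) * src.a * src.rgb;
    dst.a   += (1.0 - dst.a) * src.a;
    if (dst.a > 0.99)
      break;

    // Adapt: a large change since the last sample suggests an edge, so
    // sample finely there; near-constant density lets us stride faster.
    stepSize = (abs(val - lastVal) > EDGE_THRESHOLD)
             ? baseStep * 0.25
             : min(stepSize * 2.0, baseStep * 4.0);
    lastVal = val;
  }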