Pixel RGB Transformation

Hi,

I am trying to find the best way of doing a pixel color translation after the scene has been drawn. I am currently reading an array of pixels from the frame buffer, altering them, and then writing them back to the frame buffer.

I have been supplied with an RGB mapping for each 8-bit color channel and need to do this conversion after the scene is drawn (I can't alter the colors of the objects in the scene), because both the original RGB and the converted RGB need to be displayed concurrently on different PCs within this simulation.

The code I am trying works, but it is awfully slow. The requirement is to convert a 1024 x 1024 display, which is over a million pixels.

The following is the code I have used, with a comment noting the average time taken for each step. I am using a Pentium III (500 MHz), a GeForce II with 32 MB, and Windows NT.

GLubyte pixels[1024][1024][3];

// 53ms avg
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);

// 112ms avg
int i, j;
GLubyte r, g, b;
for (i = 0; i < 1024; i++) {
    for (j = 0; j < 1024; j++) {
        // this one just swaps r, g, and b as a test
        r = pixels[i][j][0];
        g = pixels[i][j][1];
        b = pixels[i][j][2];
        pixels[i][j][0] = b;
        pixels[i][j][1] = r;
        pixels[i][j][2] = g;
    } // for
} // for

// 71ms avg
glRasterPos2i(0, 0);
glDrawPixels(1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);

Is there any more efficient way to do this, especially steps 1 and 3 above?

Thanks in advance.

[This message has been edited by mike_p (edited 01-20-2002).]

Yes, those are some slow numbers for ReadPixels and DrawPixels. I am guessing that your #1 problem here is plain old CPU cache thrashing. You are operating on a 3 MB block; you should probably shrink that to fit in the L2 cache.

Also, try GL_BGRA rather than GL_RGB (although that will increase the data size, so it's not guaranteed to help here).
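
For example, a quick sketch of what that might look like, assuming your driver exposes the EXT_bgra extension; the four-byte-per-pixel buffer below is just for illustration:

GLubyte pixels4[1024][1024][4];

// read as BGRA, which matches the internal byte layout on many boards
glReadPixels(0, 0, 1024, 1024, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels4);

// ... modify pixels4[y][x][0..2] (that is B, G, R); [3] is alpha ...

glRasterPos2i(0, 0);
glDrawPixels(1024, 1024, GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels4);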

- Matt

Originally posted by mcraighead:
You are operating on a 3MB block; you should probably shrink that to fit in the L2 cache.

Can you please explain what you meant by that? I am unaware of how to use the “L2 cache”.

Thanks,
Michael.

The most efficient way would probably be to draw to a texture and then draw that texture to the framebuffer, using a fragment/texture shader to swap the colors. With some tricks you can do it with standard multitexturing too.
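
As a rough sketch of the copy-to-texture part only (assuming a 1024 x 1024 texture object 'tex' has already been created with glTexImage2D and its filtering set, and an orthographic projection that maps window coordinates one-to-one; the per-channel transform itself would be configured in the texture shader / combiner setup, which is not shown):

// copy the framebuffer into the existing 1024x1024 texture
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 1024);

// draw a screen-sized quad with that texture; the channel swap happens
// in the texture environment / combiner stage, not in this code
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2i(0, 0);
glTexCoord2f(1, 0); glVertex2i(1024, 0);
glTexCoord2f(1, 1); glVertex2i(1024, 1024);
glTexCoord2f(0, 1); glVertex2i(0, 1024);
glEnd();
glDisable(GL_TEXTURE_2D);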

About the L2 cache: you don’t need to explicitly “use” it. Just split your code above to process the data in many chunks, each smaller than the L2 cache of your CPU. A typical L2 cache size today is 256 KB.

Splitting the data into chunks is basically:

loop datasize / chunksize times
  1) read data chunk from gfx card
  2) modify data chunk
  3) write data chunk back to gfx card
end loop

You may even want to limit your chunk size to fit in the L1 data cache. Unfortunately it is very small on Intel CPUs (16 KB on the PII/PIII and 8 KB on the P4); AMD CPUs are blessed with a 64 KB L1 data cache.

Of course, you should not fill the entire cache with your data set, since there is other important data that needs to be there too. Experiment with different chunk sizes.
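
A minimal C sketch of that loop, assuming 64-row bands (64 x 1024 x 3 bytes = 192 KB, small enough for a 256 KB L2 cache) and the same r/g/b swap test as above; the function name transform_framebuffer is just for illustration:

#define WIDTH     1024
#define HEIGHT    1024
#define BAND_ROWS 64   /* 64 * 1024 * 3 bytes = 192 KB per band */

static GLubyte band[BAND_ROWS][WIDTH][3];

void transform_framebuffer(void)
{
    int x, y, row;
    GLubyte r, g, b;

    for (y = 0; y < HEIGHT; y += BAND_ROWS) {
        /* 1) read one band from the framebuffer */
        glReadPixels(0, y, WIDTH, BAND_ROWS, GL_RGB, GL_UNSIGNED_BYTE, band);

        /* 2) modify the band in place (the r/g/b swap test) */
        for (row = 0; row < BAND_ROWS; row++) {
            for (x = 0; x < WIDTH; x++) {
                r = band[row][x][0];
                g = band[row][x][1];
                b = band[row][x][2];
                band[row][x][0] = b;
                band[row][x][1] = r;
                band[row][x][2] = g;
            }
        }

        /* 3) write the band back */
        glRasterPos2i(0, y);
        glDrawPixels(WIDTH, BAND_ROWS, GL_RGB, GL_UNSIGNED_BYTE, band);
    }
}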

I’m not sure, but you may see better speeds with the GL_RGBA format (which is the native on-board format of the gfx card). Likewise, it should be faster with a 24/32-bit framebuffer than with a 16-bit one.

Oh, one more thing. In C, it is usually better to use incrementing pointers instead of indexed arrays, like this:

unsigned char *pixptr;
unsigned char r, g, b;
int i;

// 2) Modify data chunk ('pixels' points at the current chunk;
//    red_transform etc. stand for your supplied RGB mapping)
pixptr = pixels;
for (i = 0; i < pixelsperchunk; i++)
{
    r = pixptr[0];
    g = pixptr[1];
    b = pixptr[2];
    *pixptr++ = red_transform(r, g, b);
    *pixptr++ = green_transform(r, g, b);
    *pixptr++ = blue_transform(r, g, b);
}

…or something similar.

[This message has been edited by marcus256 (edited 01-22-2002).]