glColorTable using lots of CPU power

I am currently creating an application that maps points on a 2D plot using JOGL. I would like to use a color lookup table (LUT) to highlight the dense areas. However, when I add the color table, CPU usage shoots up and the Swing components become pretty much unusable.

The LUT contains 256 colors. I am also using a texture (point sprites) to draw the points instead of the standard dots.

Here’s the code I use to create the color table:


ByteBuffer colorTableBuf = LUT.createLUT(lutType); // builds the 256-entry RGB ByteBuffer
if (gl.isExtensionAvailable("GL_ARB_imaging")) {
	if (gl.isFunctionAvailable("glColorTable")) {
		// core ARB_imaging entry point
		gl.glColorTable(GL.GL_COLOR_TABLE, GL.GL_RGB, 256, GL.GL_RGB,
				GL.GL_UNSIGNED_BYTE, colorTableBuf);
	} else {
		// fall back to the EXT entry point
		gl.glColorTableEXT(GL.GL_COLOR_TABLE, GL.GL_RGB, 256,
				GL.GL_RGB, GL.GL_UNSIGNED_BYTE, colorTableBuf);
	}
	gl.glEnable(GL.GL_COLOR_TABLE);
}

The code to load the texture is as follows:
FloatBuffer sizes = BufferUtil.newFloatBuffer(3);
// query the supported point-size range and enable point sprites
gl.glGetFloatv(GL.GL_ALIASED_POINT_SIZE_RANGE, sizes);
gl.glEnable(GL.GL_POINT_SPRITE_ARB);
gl.glPointParameterfARB(GL.GL_POINT_SIZE_MAX_ARB, sizes.get(1));
gl.glPointParameterfARB(GL.GL_POINT_SIZE_MIN_ARB, sizes.get(0));
// generate texture coordinates across each point sprite
gl.glTexEnvi(GL.GL_POINT_SPRITE_ARB, GL.GL_COORD_REPLACE_ARB,
		GL.GL_TRUE);
gl.glEnable(GL.GL_TEXTURE_2D);
int[] textureId = new int[1];

// create the texture
gl.glGenTextures(1, textureId, 0);
gl.glBindTexture(GL.GL_TEXTURE_2D, textureId[0]);
gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);

gl.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
		GL.GL_LINEAR_MIPMAP_LINEAR);
gl.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER,
		GL.GL_LINEAR_MIPMAP_LINEAR);
// upload the particle texture and generate its mipmaps
new GLU().gluBuild2DMipmaps(GL.GL_TEXTURE_2D, GL.GL_RGB,
		particleTexture.getWidth(), particleTexture.getHeight(),
		GL.GL_RGB, GL.GL_UNSIGNED_BYTE, particleTexture.getBuffer());

// enable blending to remove black edge around particle texture
gl.glEnable(GL.GL_BLEND);
gl.glDisable(GL.GL_DEPTH_TEST);
gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_CONSTANT_COLOR);
gl.glTexEnvf(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_MODULATE);
gl.glDepthMask(false);
gl.glPointSize(7f);

The texture is loaded at init, and the color table is changed at runtime (I have different LUTs). Does anyone know how to shift this work from the CPU to the GPU, or how to speed up the application?

Thanks

Toon

Unfortunately, ARB_imaging is not hardware accelerated (or only very rarely).
Using a simple GLSL fragment shader with a texture indirection instead would let you apply the LUT much faster, on the GPU.
You can look at the fragment shader here for inspiration:
http://idlastro.gsfc.nasa.gov/idl_html_help/Applying_Lookup_Tables_Using_Shaders.html
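
For the 256-entry RGB case above, the idea would look roughly like this. This is a rough, untested sketch in the same JOGL style as your code, not a drop-in replacement: it assumes you upload the LUT as a 1D texture on unit 1, keep the point-sprite texture on unit 0, and map the texture's red channel through the LUT. The names (fragSrc, lutTextureId, ...) are mine, and compile/link error checking is left out.

String fragSrc =
	  "uniform sampler2D pointTexture;  // unit 0: the point sprite texture\n"
	+ "uniform sampler1D lut;           // unit 1: the 256-entry colour table\n"
	+ "void main() {\n"
	+ "    float index = texture2D(pointTexture, gl_TexCoord[0].st).r;\n"
	+ "    gl_FragColor = texture1D(lut, index); // texture indirection = LUT lookup\n"
	+ "}\n";

// upload the LUT as a 1D texture (repeat this when the user selects another LUT)
int[] lutTextureId = new int[1];
gl.glGenTextures(1, lutTextureId, 0);
gl.glActiveTexture(GL.GL_TEXTURE1);
gl.glBindTexture(GL.GL_TEXTURE_1D, lutTextureId[0]);
gl.glTexParameteri(GL.GL_TEXTURE_1D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
gl.glTexParameteri(GL.GL_TEXTURE_1D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST);
gl.glTexParameteri(GL.GL_TEXTURE_1D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE);
gl.glTexImage1D(GL.GL_TEXTURE_1D, 0, GL.GL_RGB, 256, 0,
		GL.GL_RGB, GL.GL_UNSIGNED_BYTE, colorTableBuf);
gl.glActiveTexture(GL.GL_TEXTURE0);

// compile and link the shader once at init
int shader = gl.glCreateShader(GL.GL_FRAGMENT_SHADER);
gl.glShaderSource(shader, 1, new String[] { fragSrc },
		new int[] { fragSrc.length() }, 0);
gl.glCompileShader(shader);
int program = gl.glCreateProgram();
gl.glAttachShader(program, shader);
gl.glLinkProgram(program);

// when drawing the points: bind the program and point it at the two texture units
gl.glUseProgram(program);
gl.glUniform1i(gl.glGetUniformLocation(program, "pointTexture"), 0);
gl.glUniform1i(gl.glGetUniformLocation(program, "lut"), 1);

Switching LUTs at runtime then only means re-uploading the 1D texture (glTexImage1D or glTexSubImage1D); the shader and program stay the same.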

What is your hardware, by the way?

I’ve got an Nvidia Quadro FX 3400 and an Intel Xeon CPU, with 2 GB of RAM.

This is not valid code:
gl.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER,
		GL.GL_LINEAR_MIPMAP_LINEAR);

The mag filter can only be GL_LINEAR or GL_NEAREST; the mipmap modes are only valid for the min filter.

http://www.opengl.org/sdk/docs/man/
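
For example, you could keep the mipmap mode on the min filter and use plain linear filtering for magnification:

gl.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
		GL.GL_LINEAR_MIPMAP_LINEAR);
gl.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER,
		GL.GL_LINEAR);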