I have a fragment shader (below) that fails on some cards.
It works well on a Quadro 3800 and a Quadro 4600 but fails on a Quadro 5500 and on GTX 400-series cards. When it fails it looks up the wrong color, as if it's sampling the wrong location, but it still runs with no errors. I suspect I am doing something wrong and it just happens to work on certain cards.
In the shader below, getSuperpixelColor() does two lookups to get the color:
- Lookup in indexTexture at the current uv
- Take that RGBA value, treat its RGB channels as a single 24-bit value, and use it to look up a color in the colorMap texture, which is 256 x colormapRows texels.
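To make the intended mapping concrete, here is a CPU-side sketch of step two in Python. It mirrors the arithmetic my getTableCoords() does (column = index mod 256, row = index / 256, with the same /255 and /(rows-1) normalization the shader uses); table_coords is just an illustrative name, not part of the shader.

```python
def table_coords(index, colormap_rows):
    """Map a 24-bit superpixel index to (u, v) in a 256-wide colormap,
    using the same normalization as the shader's getTableCoords()."""
    col = index % 256           # column within the 256-entry-wide row
    row = index // 256          # which row of the colormap
    return (col / 255.0, row / (colormap_rows - 1.0))

# Example: index 513 lands at column 1 of row 2.
u, v = table_coords(513, 4)
```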
The unpack code with the dot product is adapted from code I found online. I'm open to doing this a totally different way if my approach is just not smart. Basically we have one indexTexture of RGBA values which I want to interpret as indices into another colorMap texture, and look them up.
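For reference, the dot-product unpack is equivalent to the following Python, assuming the sampler returns each channel normalized as byte/255 (R as the low byte, B as the high byte):

```python
def unpack_vec3(r, g, b):
    """Mirror of the shader's dot(v, vec3(255, 255*256, 255*65536)):
    scale each normalized channel back to its byte value and shift it
    into place, giving byte_r + byte_g*256 + byte_b*65536."""
    return r * 255 + g * 255 * 256 + b * 255 * 65536

# Bytes (0x12, 0x34, 0x56) as the shader would see them after sampling:
index = unpack_vec3(0x12 / 255, 0x34 / 255, 0x56 / 255)
```

In double-precision Python this recovers the 24-bit value exactly (up to rounding noise); whether the GPU's texture unit and float precision preserve it is exactly what I'm unsure about.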
We also have a grayscale texture and blend this color value with the grayscale, but that is not shown.
Originally I had 16-bit values and used GL_LUMINANCE16. When we needed 24-bit values I switched to RGBA. I'm not sure whether some other texture format would make this easier or faster?
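For completeness, this is how I pack each 24-bit index into the RGB bytes of the index texture on the CPU side (pack_index is an illustrative name; the real upload code just writes these three bytes per texel, R = low byte, to match the shader's unpack order):

```python
def pack_index(index):
    """Split a 24-bit index into the (R, G, B) bytes uploaded per texel,
    little-endian: R is the low byte, B is the high byte."""
    return (index & 0xFF, (index >> 8) & 0xFF, (index >> 16) & 0xFF)

# Example: index 0x563412 becomes bytes R=0x12, G=0x34, B=0x56.
rgb = pack_index(0x563412)
```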
Thanks!
// Samplers and table size (declarations included for completeness)
uniform sampler2D indexTexture;     // RGBA superpixel indices
uniform sampler2D colormapTexture;  // 256 x colormapRows lookup table
uniform float colormapRows;

// Unpack a vec3 as a single float.
// We expect 24-bit RGB values only, no alpha.
float unpackVec3(vec3 v)
{
    // Scale each normalized channel back to its byte value and shift
    // it into place: R is the low byte, B the high byte.
    const vec3 unpackV = vec3(255.0,
                              255.0 * 256.0,
                              255.0 * 256.0 * 256.0);
    return dot(v.rgb, unpackV);
}

// Table is assumed to be 256 entries wide and colormapRows long.
vec2 getTableCoords(vec3 v)
{
    float f = unpackVec3(v);
    return vec2(mod(f, 256.0) / 255.0,
                floor(f / 256.0) / (colormapRows - 1.0));
}

vec4 getSuperpixelColor(vec2 uv)
{
    // Superpixel index, used in the lookup table
    vec4 superpixelIndex = texture2D(indexTexture, uv);

    // Coordinates where we can find the superpixel's color
    vec2 coord = getTableCoords(superpixelIndex.rgb);

    // Return the superpixel color
    return texture2D(colormapTexture, coord.st);
}