32-bit greyscale textures...

I had this idea and I’d like some feedback from you guys:

For volume rendering I need to use greyscale textures with more than 256 levels (8 bits), so I figured I could use an RGBA texture to encode 32-bit fixed-point levels with a “base 256 encoding”: each greyscale level would be encoded as

level = R*256 + G*256^2 + B*256^3 + A*256^4.

The thing is that I need to reconstruct a float value ranging from 0 to 1 from these RGBA components in the fragment shader, which involves a sum of products followed by a division (pretty expensive).
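
Before touching the shader, here is a quick CPU-side sanity check of the scheme I have in mind. It is only a sketch with made-up helper names (encode_level/decode_level), and it picks one reading of the formula above, with R as the least significant base-256 digit and A as the most significant:

#include <stdint.h>
#include <stdio.h>

/* hypothetical helpers: pack a [0,1] level into four bytes and back */
static void encode_level(double value, uint8_t rgba[4])
{
    /* spread [0,1] over the full 32-bit fixed-point range (truncating) */
    uint32_t n = (uint32_t)(value * 4294967295.0);
    rgba[0] = (uint8_t)(n & 0xFF);          /* R: least significant digit */
    rgba[1] = (uint8_t)((n >> 8) & 0xFF);   /* G */
    rgba[2] = (uint8_t)((n >> 16) & 0xFF);  /* B */
    rgba[3] = (uint8_t)((n >> 24) & 0xFF);  /* A: most significant digit */
}

static double decode_level(const uint8_t rgba[4])
{
    uint32_t n = (uint32_t)rgba[0]
               | ((uint32_t)rgba[1] << 8)
               | ((uint32_t)rgba[2] << 16)
               | ((uint32_t)rgba[3] << 24);
    return n / 4294967295.0;                /* back to [0,1] */
}

int main(void)
{
    uint8_t rgba[4];
    encode_level(0.123456789, rgba);
    printf("round trip: %.10f\n", decode_level(rgba));
    return 0;
}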

Then I would perform my computations on that greyscale level for the volume rendering proper (color and opacity assignment).

So my question to you guys is: is this reasonable, given the limitations of the output display (number of colors, number of pixels)? I can’t figure out where the precision bottleneck in the pipeline is. Even if I get more precision into the input data, I have the feeling that the results will be scaled/truncated/quantized along the way, so that textures with more than 8 bits per channel will be useless. Are the fragment computations done with full 32-bit float precision per channel? Is it possible to use an offscreen framebuffer with 128 bits per pixel, perform the alpha composition in 32-bit float, and then scale and quantize the results for display?
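
In pseudo-GL, what I’m imagining is something along these lines. It’s only a sketch: it assumes a context is already up, the extension entry points are loaded, and the driver actually exposes ARB_texture_float and EXT_framebuffer_object (none of which I’ve confirmed), with width/height standing in for the viewport size:

GLuint tex, fbo;
int width = 512, height = 512;              /* placeholder viewport size */

/* 4 x 32-bit float channels = 128 bits per pixel, offscreen */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* ... composite the volume slices into the float buffer here, then bind
   it as a texture and run a final pass that scales/quantizes down to
   the 8-bit visible framebuffer ... */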

A dot product of the four channels with a constant vector should be enough…

// decode weights; float literals avoid any integer division, and note
// that this ordering treats R as the most significant channel
float4 decoder = float4(1.0, 1.0/256.0, 1.0/65536.0, 1.0/16777216.0);

float4 encoded = tex2D(…);
float value = dot(encoded, decoder);

That should be one instruction only.

I’m not sure if I’m right, but I tried to do something similar, just with 16-bit greyscale. With NEAREST filtering it works correctly. With LINEAR, after some bit-shifting magic it was quite good, but still not correct (unusable for volume rendering).
I think the problem is that, after texture sampling, the linear interpolation (1D, 2D or 3D) with an 8-bit internal format rounds each channel back to the texture’s 8-bit fixed-point precision. So if the first bit of the A channel gets rounded, it can introduce an error of 2^27.
But I’m not sure how this works in fragment programs: whether the result of sampling a texture is rounded to the texture’s internal precision, or kept at floating-point precision.
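
A toy computation of what I mean, for the 16-bit case (a sketch only; the round-to-nearest at exactly .5 is my guess about what the filtering hardware does):

#include <stdint.h>
#include <stdio.h>
#include <math.h>

static double lerp(double a, double b, double t) { return a + t * (b - a); }

int main(void)
{
    uint16_t v0 = 0x00FF, v1 = 0x0100;  /* two adjacent 16-bit levels */
    double t = 0.5;

    /* exact interpolation of the decoded values: the decode is linear,
       so per-channel LINEAR filtering would be fine at full precision */
    double exact = lerp((double)v0, (double)v1, t);         /* 255.5 */

    /* per-channel interpolation rounded back to 8 bits, as an 8-bit
       internal format would store it */
    double hi = floor(lerp(v0 >> 8, v1 >> 8, t) + 0.5);     /* 0.5 -> 1 */
    double lo = floor(lerp(v0 & 0xFF, v1 & 0xFF, t) + 0.5); /* 127.5 -> 128 */
    double quantized = hi * 256.0 + lo;                     /* 384 */

    printf("exact %.1f, quantized %.1f, error %.1f\n",
           exact, quantized, quantized - exact);
    return 0;
}

Half an LSB of rounding in the high byte is already worth 128 levels of the 16-bit value; the same effect on the top channel of a 32-bit encoding gives the huge errors I was seeing.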
jano

I missed that interpolation problem for sure…

The dot-product trick is clever, provided dot products are actually hardwired and not computed sequentially.

I’ll have to work through the interpolation arithmetic in base 256 and check the quantization errors on it. I’ll check the 1.5 spec for info on this interpolation quantization.

And what about floating-point color channels and blending? I only have the Red Book for 1.2, and the Orange Book is not really helpful on this topic either. The nVidia abstract on offscreen rendering seems to indicate that only the standard 32-bit-per-pixel depth can be used; no 128-bit-per-pixel formats show up among the pixel formats…
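
If a float target does become available, I guess one would also have to disable the [0,1] clamping so the blending stays in float. Something like this, assuming the ARB_color_buffer_float extension (which I haven’t been able to try):

/* keep fragment colors and blending unclamped on a float target;
   whether float blending is actually accelerated depends on the GPU */
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);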

Thanks for the ideas. We should keep feeding this thread with our empirical findings; I’m sure this can be of interest to many people.