I am working on an application where I need a render target with at least 14 bits of precision on 3 channels. The logical choice would be RGBA16, but I know the GeForce 6 and 7 series at least do not support that natively; they convert it to 8 bits per channel. And that is supported by this …
Unfortunately that is quite old, lacks DX10 class hardware and I could not find anything similar for ATI cards.
Is there any hardware that supports that natively, or do I have to render to a 32-bit floating-point target? If possible I would like to avoid that, since those target textures are to be read back to system memory to be analysed by a CPU-level algorithm (at such high volume that wasting bandwidth would not be good practice).
Also any different ideas on how to achieve that precision in a render target would be appreciated.
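To put rough numbers on the readback-bandwidth concern, here is a quick sketch. The 1024×1024 resolution and 30 fps frame rate are hypothetical, purely for illustration; it just compares per-frame transfer sizes for an RGBA16 integer target versus an RGBA32F target:

```python
# Hypothetical figures: resolution and frame rate are illustrative only.
WIDTH, HEIGHT, FPS = 1024, 1024, 30

def readback_mib_per_sec(bytes_per_pixel):
    """Bytes moved from GPU to system memory per second, in MiB."""
    return WIDTH * HEIGHT * bytes_per_pixel * FPS / (1024 ** 2)

rgba16  = readback_mib_per_sec(8)   # 4 channels x 16-bit integer
rgba32f = readback_mib_per_sec(16)  # 4 channels x 32-bit float

print(f"RGBA16:  {rgba16:.0f} MiB/s")   # 240 MiB/s
print(f"RGBA32F: {rgba32f:.0f} MiB/s")  # 480 MiB/s
```

Whatever the actual resolution, the ratio is fixed: a full-float target doubles the readback traffic compared to a 16-bit integer target.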
Looking at the hardware documentation of the latest Intel graphics chips, it seems they support such a render target (and an RGB16 render target).
However, it is not exposed in the Linux drivers (they convert to RGBA8 too), and Intel's Windows drivers tend to be worse than their Linux drivers.
The problem is that FP16 has only a 10-bit mantissa (about 11 bits of precision), while FP32 doubles the bandwidth usage. This is for a medical application, so DEMANDING specific hardware would not be a problem, as long as that hardware existed.
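A quick way to see the FP16 precision limit: a half float stores 10 mantissa bits (11 significant bits with the implicit leading one), so integers above 2048 are no longer exactly representable, which rules out 14-bit sample values. Python's standard `struct` module supports the IEEE 754 half format via the `'e'` code, so this is easy to check:

```python
import struct

def roundtrip_half(x):
    """Pack a value as IEEE 754 half precision and unpack it again."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Up to 2048, every integer survives the round trip ...
assert roundtrip_half(2048.0) == 2048.0
# ... but 2049 is not representable and rounds back to 2048,
print(roundtrip_half(2049.0))   # 2048.0
# so 14-bit sample values (0..16383) lose their low bits in FP16.
print(roundtrip_half(16383.0))  # 16384.0
```

In the 8192–16384 range the FP16 step size is 8, so a 14-bit measurement loses its bottom three bits there.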
Well if bandwidth is the problem and hardware is not, then just double the hardware bandwidth through SLI.
The problem with the Intel chips is that they are not high-performance parts. I would rather use a good quad Nvidia SLI setup with the latest hardware; that way you would probably not have to drop down to the "CPU-level algorithm" at all, as you could just use CUDA for that.