Is there any method or trick to handle high-precision normal maps on the Radeon 9500+ in a way similar to Nvidia's HILO? I'm interested in the fact that HILO can carry normal components precisely at a reasonable cost (32 bits).
I guess it would be possible to write a fragment program that computes Nz from the first two channels, taking a two-channel half-float texture as input, but this would certainly be more expensive than the native HILO handling.
Couldn’t you use an intensity/alpha format with 16-bits per component? I think R300-based cards can handle 16-bit-per-component textures, in addition to 16-bit and 32-bit floating-point textures.
Yes, and the 16-bit floating-point textures should be sufficient to hold such a normal vector, but HILO textures generate the third normal coordinate automatically… Is there a similar mechanism exposed in an ATI extension?
8 or 9 instructions actually, fewer if you don't renormalize.
I guess you're including the texture instructions that are required in any case. But do you think this extra work could outweigh the additional memory cost of three 16-bit channels?
I guess they just didn’t want to spend the extra transistors for it.
Thank you NitroGL! I hadn't seen those shader files.
Assuming the input texture is dual-channel, only the R and G components are provided, so B is 0, and in OpenGL this shader should look something like this:
RCP is unary, so this can't be right. I also think MUL may be more efficient than RCP, even if you don't see the difference on (your) current hardware. My changes in [b]b[/b]old: