I’ve seen demos that show the difference between 8-bit normal maps and higher-precision versions.
I’m considering going to higher precision but I’m not sure if the difference will be noticeable in a complex object.
What are the possible precisions for them anyway?
I guess there is HILO.
float is out of the question for now.
One interesting thing they suggest is to use a two-channel 32-bit format, with 16 bits per channel. The third channel is derived from the stored two in the fragment program. This way you get much better precision at the same memory cost.
Well, with a fragment program you could always renormalize the normal vectors, so that’s not necessarily a problem (merely a few extra instructions). I’m really not sure...
There is a demo from ATI called HighPrecisionNormalMaps that shows the difference between various formats. It looks like they normalize when needed. I don’t quite understand the code.
Perhaps the ugliness comes from texture filtering?
For GL, we have quite a few formats for textures it seems.
The ugliness referred to is dependent on the spatial frequency (i.e. how noisy versus smooth) of your normal map and the shader that uses it. In the case of the car hood, the very smooth hood is generated using the normal mapper and has very subtle variations in normals. This coupled with the detailed cube map used on the car makes the relatively low precision of the normal map apparent. If a blurrier cubic environment map had been used, the lack of precision would be less obvious. The HighPrecisionNormalMaps D3D sample uses a relatively high specular exponent and shows off the precision artifacts from the (procedurally generated) normal map. If you are using a grittier/noisier normal map and a less demanding shader (i.e. just diffuse) you won’t see these artifacts.
-Jason
[This message has been edited by JasonM (edited 01-03-2004).]
Is support for all those RGB16, RGB10, etc. formats mandatory? I just tried it on NVIDIA and it gets clamped to 8 bits per channel. Hmm, not surprising if one remembers the memory addressing issues of NVIDIA cards (which is where the lack of floating-point buffer support comes from).
I forgot to ask the obvious. Which one is the HILO format in GL? The luminance & alpha one?
There is no direct equivalent to the HILO format in OpenGL. You can use Luminance/Alpha 16 as one, though.
Are all of these formats unsigned? If I sample them in a fragment program, do I get floats in [0, 1]?
Until quite recently, the idea of “signed” colors was not even reasonable. So, yes, all the formats are unsigned. You will need to convert them to signed just as you normally would.
There is no direct equivalent to the HILO format in OpenGL. You can use Luminance/Alpha 16 as one, though.
There’s HILO in OpenGL, in fact, originally it was an OpenGL feature only. IIRC it’s in the NV_texture_shader extension. It would be nice to have that as a separate EXT or ARB extension, though.
Yes, but this is a feature that is only available on NVIDIA chips (GeForce 3 and up).
On ATI chips, you have to do the job via fragment programs. Have a look at: