compressed normals

Does anybody know if OpenGL supports something like the DirectX UDEC3/DEC3N vertex formats?

Thanks!

You can pack normals into bytes or shorts, one per component. Take a look at glVertexAttribPointerARB.
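A minimal sketch of what that might look like with signed bytes (the attribute index, vertex count, and array name here are made up for illustration):

[code]
/* Each normal component stored as a signed byte. With the normalized
   flag set to GL_TRUE, the driver maps the values back to roughly [-1,1]. */
#define NORMAL_ATTRIB 1                  /* hypothetical attribute index */
GLbyte packedNormals[3 * VERTEX_COUNT];  /* hypothetical array, filled elsewhere */

glEnableVertexAttribArrayARB(NORMAL_ATTRIB);
glVertexAttribPointerARB(NORMAL_ATTRIB,
                         3,        /* x, y, z              */
                         GL_BYTE,  /* 8 bits per component */
                         GL_TRUE,  /* normalize to [-1,1]  */
                         0,        /* tightly packed       */
                         packedNormals);
[/code]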

Not all possible formats are accelerated on all hardware though. ATI and NVidia both provide documentation with charts showing which types are recommended.

Originally posted by Pop N Fresh:
Not all possible formats are accelerated on all hardware though. ATI and NVidia both provide documentation with charts showing which types are recommended.
Thanks for pointing that out, it seems that it’s only supported on ATI hardware… :( For some reason I thought that nvidia also supported it.

I know that you can use bytes and shorts, but the mentioned formats use 10 bits per component and pack them into a dword. I think it offers a nice tradeoff between size and quality.

I know that you can use bytes and shorts, but the mentioned formats use 10 bits per component and pack them into a dword. I think it offers a nice tradeoff between size and quality.
It would be nice to get something like that: a simple 10:10:10:2 format (effectively). Unfortunately, the ARB is far too busy to deal with little issues like this, so they almost never get resolved.
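For reference, a sketch of the kind of packing being discussed: quantize a unit normal to three signed 10-bit integers and stuff them into one dword, with x in the low bits (the actual D3D bit layout may differ):

[code]
/* Pack a unit normal into a 10:10:10:2 dword; the top two bits are unused. */
unsigned int packDec3(float x, float y, float z)
{
    int ix = (int)(x * 511.0f);   /* [-1,1] -> [-511,511] */
    int iy = (int)(y * 511.0f);
    int iz = (int)(z * 511.0f);
    return ((unsigned int)(ix & 0x3FF))
         | ((unsigned int)(iy & 0x3FF) << 10)
         | ((unsigned int)(iz & 0x3FF) << 20);
}
[/code]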

but the mentioned formats use 10 bits per component and pack them into a dword.
I wasn’t familiar with the D3D formats and was just guessing what they meant. The closest you could get in OpenGL would be to store two shorts and derive the third value in a shader, perhaps.
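Something like this, assuming the sign of the third component is known (taken as positive here):

[code]
/* nx, ny are the original float normal components. CPU side: store
   only x and y as normalized shorts. */
GLshort xy[2];
xy[0] = (GLshort)(nx * 32767.0f);
xy[1] = (GLshort)(ny * 32767.0f);
/* glVertexAttribPointerARB(attrib, 2, GL_SHORT, GL_TRUE, 0, ...); */

/* Shader side (the math written as plain C): rebuild z from x and y.
   Only valid if the sign of z is known -- assumed positive here. */
float nz = (float)sqrt(fmax(0.0, 1.0 - nx * nx - ny * ny));
[/code]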

Out of curiosity, have you done any benchmarking with UDEC3/DEC3N? Is there a caps bit for it? I’m wondering if this has native support in hardware or not.

Originally posted by Pop N Fresh:
Out of curiosity, have you done any benchmarking with UDEC3/DEC3N? Is there a caps bit for it? I’m wondering if this has native support in hardware or not.
I haven’t done any tests, but I found that ATI has a demo available on its developer site. I don’t have any ATI hardware right now, but maybe somebody can give it a try.

I just took a look.

The example requires an extra operation in the vertex shader. The 10/10/10/2 format is loaded into a register as three floats in the range 0 to 1023, and an extra operation (MUL or MAD, depending) is needed to put them into the unit-vector range.
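My guess at the constants involved, written as plain C rather than shader code:

[code]
/* One MAD per component: remap an unpacked value v from [0,1023]
   back to roughly the [-1,1] unit-vector range. */
float scale = 2.0f / 1023.0f;
float bias  = -1.0f;
float n     = v * scale + bias;
[/code]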

So it seems the hardware support is actually limited to unpacking 4 bytes into 3 unsigned integer values with a little magic in the vertex shader used to renormalize.

I know of no extension which would allow you to emulate that method in OpenGL exactly. Not surprising, since it’s not orthogonal with glVertexAttribPointer in that you can’t specify automatic renormalization.

Originally posted by Pop N Fresh:
[b]So it seems the hardware support is actually limited to unpacking 4 bytes into 3 unsigned integer values with a little magic in the vertex shader used to renormalize.

I know of no extension which would allow you to emulate that method in OpenGL exactly. Not surprising, since it’s not orthogonal with glVertexAttribPointer in that you can’t specify automatic renormalization.[/b]
That’s only true for UDEC3, but not for DEC3N. Actually, the U stands for unsigned and the N for normalized. In OpenGL that would be exposed with a single enumerant, using the normalized flag to choose between unsigned and normalized.
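In other words, something along these lines, if a packed 10:10:10:2 type were ever accepted by glVertexAttribPointer. GL_UNSIGNED_INT_10_10_10_2 exists as a pixel type but is not a legal vertex attribute type, so this is purely hypothetical (NORMAL_ATTRIB and packedNormals are placeholders):

[code]
/* Hypothetical: one packed dword per normal; the normalized flag would
   select between UDEC3-style (GL_FALSE) and DEC3N-style (GL_TRUE) behaviour. */
glVertexAttribPointerARB(NORMAL_ATTRIB, 3,
                         GL_UNSIGNED_INT_10_10_10_2,  /* not actually allowed */
                         GL_TRUE,                     /* normalized -> DEC3N  */
                         0, packedNormals);
[/code]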

Ah, you’re right. I only glanced at the shader quickly. I was thrown by the fact that they’re remapping the DEC3N normal to the range [0…1] for some reason. I just saw the MAD instruction and made an assumption without bothering to look at the constant being used. Oops.