
compressed normals



castano
12-09-2004, 01:17 PM
Does somebody know if OpenGL supports something like the DirectX UDEC3/DEC3N vertex format?

Thanks!

Pop N Fresh
12-09-2004, 04:49 PM
You can pack normals using bytes or shorts per component. Take a look at glVertexAttribPointerARB (http://developer.3dlabs.com/openGL2/slapi/VertexAttribPointerARB.htm).

Not all possible formats are accelerated on all hardware though. ATI and NVidia both provide documentation with charts showing which types are recommended.
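
For example, a minimal sketch of the byte/short route, assuming the ARB entry points are already loaded (the attribute index 2 and the choice of GL_SHORT are arbitrary here); with normalized set to GL_TRUE the driver rescales the shorts back to [-1, 1] for you:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

typedef struct { GLshort nx, ny, nz; } PackedNormal;

/* Quantize a unit-length normal to three signed shorts. */
static PackedNormal pack_normal(float x, float y, float z)
{
    PackedNormal p;
    p.nx = (GLshort)(x * 32767.0f);
    p.ny = (GLshort)(y * 32767.0f);
    p.nz = (GLshort)(z * 32767.0f);
    return p;
}

/* Bind the packed array to (hypothetical) attribute index 2;
 * normalized = GL_TRUE makes the driver expand back to [-1, 1]. */
static void set_normal_pointer(const PackedNormal *normals)
{
    glEnableVertexAttribArrayARB(2);
    glVertexAttribPointerARB(2, 3, GL_SHORT, GL_TRUE, sizeof(PackedNormal), normals);
}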

castano
12-09-2004, 06:06 PM
Originally posted by Pop N Fresh:
Not all possible formats are accelerated on all hardware though. ATI and NVidia both provide documentation with charts showing which types are recommended.
Thanks for pointing that out; it seems that it's only supported on ATI hardware... :( For some reason I thought that nvidia also supported it.

I know that you can use bytes and shorts, but the mentioned formats use 10 bits per component and pack them on a dword. I think it offers a nice tradeoff between size and quality.
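
For reference, the packing those formats imply looks roughly like this; the exact bit order and the [-511, 511] signed range are my assumptions, so check the D3D docs for the authoritative layout:

#include <stdint.h>

/* Sketch of a DEC3N-style packing: each component in [-1, 1] is
 * quantized to a 10-bit two's-complement value, and the three values
 * share one 32-bit dword, leaving the top 2 bits unused. */
static uint32_t pack_dec3n(float x, float y, float z)
{
    uint32_t ix = (uint32_t)((int32_t)(x * 511.0f) & 0x3FF);
    uint32_t iy = (uint32_t)((int32_t)(y * 511.0f) & 0x3FF);
    uint32_t iz = (uint32_t)((int32_t)(z * 511.0f) & 0x3FF);
    return ix | (iy << 10) | (iz << 20);
}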

Korval
12-09-2004, 06:48 PM
I know that you can use bytes and shorts, but the mentioned formats use 10 bits per component and pack them on a dword. I think it offers a nice tradeoff between size and quality.
It would be nice to get something like that. A simple internal format: 10:10:10:2 (effectively). Unfortunately, the ARB is far too busy to deal with little issues, so they will almost never get resolved.

Pop N Fresh
12-10-2004, 05:21 PM
but the mentioned formats use 10 bits per component and pack them on a dword.
I wasn't familiar with the D3D formats and was just guessing what they meant. The closest you could get in OpenGL would be to store 2 shorts and derive the 3rd value in a shader perhaps.
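
Something like this, as a rough sketch of the reconstruction (shown as plain C for clarity; the same dot product and square root would go in the vertex program, and it assumes the z component is non-negative or that its sign is carried elsewhere):

#include <math.h>

/* Rebuild the third component from x*x + y*y + z*z = 1.
 * The sign of z is lost, so this only works for z >= 0. */
static float reconstruct_z(float x, float y)
{
    float t = 1.0f - x * x - y * y;
    return (t > 0.0f) ? (float)sqrt(t) : 0.0f;
}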

Out of curiosity, have you done any benchmarking with UDEC3/DEC3N? Is there a caps bit for it? I'm wondering if this has native support in hardware or not.

castano
12-10-2004, 10:20 PM
Originally posted by Pop N Fresh:
Out of curiosity, have you done any benchmarking with UDEC3/DEC3N? Is there a caps bit for it? I'm wondering if this has native support in hardware or not.
I haven't done any tests, but I found that ATI has a demo available on its developer site (http://www.ati.com/developer/samples/dx9/QuantizedVertexNormals.html). I don't have any ATI hardware right now, but maybe somebody can give it a try.

Pop N Fresh
12-11-2004, 11:08 AM
I just took a look.

The example requires an extra operation in the vertex shader. The 10/10/10/2 format is loaded into a register as 3 floats. They're in the range 0 to 1023 and an extra operation (MUL or MAD depending) is needed to put them into the unit vector range.

So it seems the hardware support is actually limited to unpacking 4 bytes into 3 unsigned integer values with a little magic in the vertex shader used to renormalize.

I know of no extension which would allow you to emulate that method in OpenGL exactly. Not surprising, since it's not orthogonal with glVertexAttribPointer in that you can't specify automatic renormalization.
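
In C terms, the remap that shader performs amounts to something like this (assuming the attribute really does arrive as unsigned values in 0..1023, as described above; one MAD covers all three components at once):

/* Map an unsigned 10-bit value (0..1023) back to the unit-vector
 * range [-1, 1], i.e. n * (2/1023) - 1, a single MAD per vertex. */
static float udec_to_unit(float n)
{
    return n * (2.0f / 1023.0f) - 1.0f;
}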

castano
12-11-2004, 01:56 PM
Originally posted by Pop N Fresh:
So it seems the hardware support is actually limited to unpacking 4 bytes into 3 unsigned integer values with a little magic in the vertex shader used to renormalize.

I know of no extension which would allow you to emulate that method in OpenGL exactly. Not surprising, since it's not orthogonal with glVertexAttribPointer in that you can't specify automatic renormalization.
That's only true for UDEC3, but not for DEC3N. Actually, the U stands for unsigned and the N for normalized. In OpenGL that would be exposed using a single enumerant, with the normalized flag choosing between unsigned and normalized.
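
To make the distinction concrete, here is a small sketch of what that normalized flag would mean for a single packed 10-bit component. This is just an illustration (the helper name is made up, and it glosses over the fact that DEC3N is actually signed and maps to [-1, 1] rather than [0, 1]):

/* Sketch only: 'normalized' stands in for the GLboolean that would be
 * passed to glVertexAttribPointerARB if a packed 10:10:10:2 type were
 * ever accepted there (it is not today). */
static float decode_component(unsigned int bits10, int normalized)
{
    if (normalized)
        return (float)bits10 / 1023.0f;  /* DEC3N-like: rescaled by the driver */
    else
        return (float)bits10;            /* UDEC3-like: raw 0..1023, fix up in the shader */
}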

Pop N Fresh
12-11-2004, 03:08 PM
Ah, you're right. I only glanced at the shader quickly. I was thrown by the fact that they're remapping the DEC3N normal to the range [0..1] for some reason. I just saw the MAD instruction and made an assumption without bothering to look at the constant being used. Oops.