Packed normal

Hi Everyone,

I am currently writing a mesh viewer for a commercial game and I have worked out the file format OK, but I am having trouble getting the normals out of it. I know it is using a packed format for the normals/tangents/binormals.

The data for each normal is packed into 32 bits, so I assume they are doing something like what is described here.

I have tried everything mentioned there to unpack the data but to no avail.

Here is an example of one triangle: the vertex position followed by the packed normal in hex.

X: 0.6887617 Y: 0.8091009 Z: -0.1088498
A5 FB B2 FC

X: 0.7176331 Y: 0.8863871 Z: -0.2892583
FE 2B 00 FC

X: 0.7230608 Y: 0.8863865 Z: -0.1173188
FE 2B 00 FC

Any help deciphering this would be greatly appreciated.

Thanks

Uh, what game?

And, how did you get the float values if you don’t already know how to convert it?

I know it is using a packed format for the normals/tangents/binormals.

What is the internal format provided to glTexImage2D? That’d be helpful in knowing exactly what it’s doing.

Sammie381:
I do know how to read in the data for the mesh files; in fact, I can render the mesh OK. I just can’t decipher the normals.

The data is packed like so:

struct MeshLayout
{
    float pos[3];    // position: 3 regular floats
    float normal;    // 4 bytes of packed data
    float uv[2];     // texture coordinates: 2 regular floats
    float tangent;   // 4 bytes of packed data
    float binormal;  // 4 bytes of packed data
};

It’s only the normals/tangents/binormals they are packing; the position and texture coords are saved as regular floats.

If I generate my own normals I can get lighting to work OK; I just want to use their normals, though.

So this is for vertex attributes. Why do you think those are floats? Yes, they take up 4 bytes, but why do you think that they’re the C++ type “float”?

There are several ways that they could be packed. Since these are vertex attributes, two spring to mind:


glVertexAttribPointer(X, 3, GL_BYTE, GL_TRUE, sizeof(MeshLayout), ...);

This means that each byte is a signed, normalized value. 0x81 means -1.0 (0x80 clamps to -1.0 as well). 0x7F means 1.0. And so forth. This would waste a byte of space, since there are obviously only 3 components to a normal.

or:


glVertexAttribPointer(X, 4, GL_INT_2_10_10_10_REV, GL_TRUE, sizeof(MeshLayout), ...);

This uses a packed format with 10 bits for each of RGB and the remaining two bits for alpha. Or 10-bit XYZ with 2 bits for W, if you prefer; it’s functionally equivalent. This is often used for packing normals, as it allows them to be more precise than the first version, but it takes up the same space: one 4-byte integer.

Decoding this one is a bit trickier. The specification explains the byte arrangement of the components in the integer. Just remember: if they’re loading this array directly into memory, it was probably saved in the CPU’s native endian format. So you need to access it as a C++ unsigned integer, not as an unsigned char[4].

I am not sure at this stage if they are floats. I am only reading them in this way as I know that’s how much data I need to skip to get from the position to the texture coords.

I have seen text files in the game directory that actually describe the mesh format like I mentioned above, although they make no mention of the normals being floats; they just say type_packed. The original game is actually a Direct3D 9 game.

Thanks Alfonse, I will try a few of those formats and see how I go.

A game using its own file format isn’t going to use any fancy GL packing. Normal components are usually packed per byte or per word. There are 3 components (x, y, z), so that is 3 bytes or 6 bytes. A 4-byte normal is not very likely, imho, so your format assumption must be wrong. And you still haven’t told us the name of the game.

I would rather not say which game, as it is against the EULA to be doing this; it is not any large game like BF or COD.

I am 110% certain this is the layout of the mesh format; there is just nowhere else in the file they could be storing the data. After the vertex chunk I describe above, the indices follow. It just can’t be any other way.

Like I said, there are text files in the data directory that describe the mesh format this way too; they even mention packed data for the normals/tangents/binormals.

They could also be storing just the x and y and reconstructing the z in the shader; I haven’t tried this yet, actually.

Perhaps running the game under apitrace could reveal how the vertex attribs are passed by the original code.

A game using its own file format isn’t going to use any fancy GL packing. Normal components are usually packed per byte or per word.

D3D had 2_10_10_10_REV before OpenGL.

They could also be storing just the x and y and reconstructing the z in the shader; I haven’t tried this yet, actually.

That’s not possible for an arbitrary normal: the square root has two solutions, after all. You can only do it if you know the sign of the Z. That’s why it’s commonly done for the normals stored in tangent-space normal maps, but not commonly done for model-space normal maps.

I’ll try apitrace and hopefully that will shed some light on it.