[glm] How to pack/unpack a half-type vec3 in glm version >= 0.9.5?

Hi,

In glm 0.9.4 there are hvec* types, and I have a program that uses the hvec3 type. Now I am trying to upgrade to 0.9.5, but those types have been removed. Instead, functions like packHalf2x16() and packHalf4x16() were added to pack/unpack the equivalents of hvec2 and hvec4. What about hvec3? I can’t seem to find a way to pack an hvec3 with the same 16-bit accuracy while preserving signs. Is it even possible under the new GLM versions?

Thanks!

What about hvec3?

That’s an odd case because it’s not 4-byte aligned, and C++ doesn’t have a native 6-byte type. So it would have to return some other type, like a std::array<uint16_t, 3> or something similar. I suppose those weren’t added because nobody asked for them.

Hi,

I used it in some not-so-frequent cases to store normals, so I think I do need those. Before glm 0.9.5 they had hvec3, which was perfect for my use case, but then they decided that, since half types are not natively supported by the CPU, offering them might lead users to think they are, so they removed all the half types and added those packing/unpacking functions instead.

First of all, I still don’t think the reason behind the removal is strong enough (GLSL has native support for those half types, so I believe glm should have them as well…). And secondly, it would be fine if we didn’t lose any functionality by this change, but apparently we did: I can’t use a half-type vec3 any more…

At this point, I am thinking of writing custom packing/unpacking functions on top of those provided by glm, but compared to just declaring a glm::hvec3, it is such a pain in the ass… If you have any better solutions, they would be much appreciated.
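For concreteness, this is roughly the kind of wrapper I have in mind (just a sketch, assuming the packHalf1x16()/unpackHalf1x16() functions from GLM’s GTC_packing extension; PackedHVec3, packHVec3 and unpackHVec3 are names I made up):

```cpp
#include <array>
#include <cstdint>
#include <glm/glm.hpp>
#include <glm/gtc/packing.hpp>

// Three 16-bit half-floats; note this is 6 bytes, so not 4-byte aligned.
using PackedHVec3 = std::array<std::uint16_t, 3>;

inline PackedHVec3 packHVec3(const glm::vec3& v)
{
    return { glm::packHalf1x16(v.x),
             glm::packHalf1x16(v.y),
             glm::packHalf1x16(v.z) };
}

inline glm::vec3 unpackHVec3(const PackedHVec3& p)
{
    return glm::vec3(glm::unpackHalf1x16(p[0]),
                     glm::unpackHalf1x16(p[1]),
                     glm::unpackHalf1x16(p[2]));
}
```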

Thanks!

I used it in some not-so-frequent cases to store normals, so I think I do need those.

I’ll ignore the 4-byte alignment issue for a moment to mention a more practical concern. 16-bit half-floats spend 5 bits on their exponent. The components of a normal are never greater than 1 (or less than -1). Which means that, at best, you’re wasting a bit of your exponent, since your exponent will always be either 0 or negative.

Worse still is the whole floating-point nature of a half-float. The purpose of a floating-point number is that the mantissa provides a regular amount of relative precision throughout (most of) the entire range of representable numbers. A 16-bit float can represent 0.000123 just as precisely as it can 0.123, while a normalized, signed integer cannot.

However, given the range restriction on normals, that floating-point nature… just isn’t very useful. Sure, you can store 0.000123 precisely. But is it really that important to store ultra-small components like that? Or is it more important to represent larger numbers precisely? Because a normalized, 16-bit integer stores 0.12345 more precisely than a half-float can, since the half-float has to spend 5 bits on an exponent while the normalized integer can use all its bits for the mantissa.
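If you want to see that difference concretely, here’s a quick check you can run. It is a rough sketch, assuming the round-trip functions from GLM’s GTC_packing extension (packHalf1x16/unpackHalf1x16 and packSnorm1x16/unpackSnorm1x16):

```cpp
#include <cstdio>
#include <glm/gtc/packing.hpp>

int main()
{
    const float v = 0.12345f;

    // Round-trip through a 16-bit half-float (1 sign + 5 exponent + 10 mantissa bits).
    const float asHalf  = glm::unpackHalf1x16(glm::packHalf1x16(v));

    // Round-trip through a signed, normalized 16-bit integer.
    const float asSnorm = glm::unpackSnorm1x16(glm::packSnorm1x16(v));

    std::printf("half : %.8f (error %.8f)\n", asHalf,  asHalf  - v);
    std::printf("snorm: %.8f (error %.8f)\n", asSnorm, asSnorm - v);
}
```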

You’d be better off using normalized, signed 16-bit integers. They’d take up the same room and give you better fidelity where it counts. Then again, you’d probably be better off using GL_INT_2_10_10_10_REV, which packs 3 10-bit signed integers into a single 4-byte word. These have the same effective bitdepth as the half-float’s mantissa, but without that pesky exponent taking up unnecessary space. Not only is it smaller, you also have 2 bits to play with, if you want to add some Boolean vertex property.
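For what it’s worth, GLM’s GTC_packing extension appears to have a helper for exactly that layout. A rough sketch, assuming packSnorm3x10_1x2() and assuming its bit order matches what GL_INT_2_10_10_10_REV expects (packNormal is just a name I made up):

```cpp
#include <cstdint>
#include <glm/glm.hpp>
#include <glm/gtc/packing.hpp>

// Pack a unit normal into one 32-bit word: 3 signed, normalized 10-bit
// components, plus 2 spare bits (left at 0 here).
inline std::uint32_t packNormal(const glm::vec3& n)
{
    return glm::packSnorm3x10_1x2(glm::vec4(n, 0.0f));
}

// When setting up the vertex format, the attribute would then be declared as
// 4 components of type GL_INT_2_10_10_10_REV with normalized = GL_TRUE:
//   glVertexAttribPointer(loc, 4, GL_INT_2_10_10_10_REV, GL_TRUE, stride, offset);
```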

As for the 4-byte alignment thing, I recall reading (a long time ago, so take that for what it’s worth) that it’s a bad idea performance-wise for any particular attribute to not be at least 4-byte aligned. So that might be something worth investigating.

Generally speaking, I’ve always found half-float vertex attributes to be… dubious. They always sound like a great idea. But I’m never really sure where they’d be more useful than something else. Maybe for texture coordinates that have to get big. And yet, if they get big, I’d much rather have the precision of a 32-bit float, where you can get into the thousands and still have 3 decimal digits of precision (texture coordinate precision is nothing to skimp on).

GLSL has native support for those half types

Do not confuse the parameters for glVertexAttribPointer (and its ilk) with what “GLSL” supports. The vertex format defines what the vertex pulling and decoding hardware does. That is not GLSL itself (though it could involve programmable hardware, and even be partially implemented in the VS, but that’s an implementation detail).

The only mention I see made of “hvec4” in GLSL 4.50’s spec is in a list of reserved keywords. And that particular list is for reserved keywords that are unused. The spec clearly says, “The following are the keywords reserved for future use. Using them will result in a compile-time error:”

So while you can use half-floats as vertex attributes, GLSL itself does not support them in code.
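That is, the half-floats exist only in the buffer object; the GLSL side just declares a plain vec3 input. A rough sketch of the C++ side (the attribute index and buffer name here are placeholders):

```cpp
#include <GL/glew.h> // or whichever GL loader you use

// The buffer holds 16-bit half-floats, but the shader simply declares
//     in vec3 normal;
// The vertex-pulling hardware does the half -> float conversion; GLSL never
// sees anything like an hvec3.
void setupHalfFloatNormalAttrib(GLuint normalBuffer, GLuint attribIndex)
{
    glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
    glVertexAttribPointer(attribIndex, 3, GL_HALF_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(attribIndex);
}
```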

I am not quite sure what you mean by using an integer variable to store a decimal number; can you explain a little more?

I agree that the keyword is reserved for future use, but hvec4 is not the only one; hvec2, hvec3, etc. are all there. Check this: https://www.opengl.org/registry/doc/GLSLangSpec.4.50.diff.pdf, on page 24.

I am not quite sure what you mean by using an integer variable to store a decimal number; can you explain a little more?

The most common texture image format is GL_RGBA8. This stores 4 channels of data, using 8 bits per channel. But those 4 values are 8-bit integers, on the range [0, 255].

Ever notice how a shader that accesses such a texture only sees floating-point values on the range [0, 1], even though the texture is actually storing 8-bit integers? It doesn’t have to use “isampler” types either; it uses “sampler”, exactly as if it were sampling a floating-point texture.

This is called an “unsigned, normalized integer” (see the Normalized Integer article on the OpenGL Wiki). This means that, while the data is stored as an integer, OpenGL will automatically convert it to a float when read (and automatically convert floats to the integer range when written). It does this by mapping the integer range [0, L] (where L is the largest unsigned integer for that bitdepth) to the floating-point range [0.0, 1.0]. So an 8-bit unsigned normalized integer maps [0, 255] to [0.0, 1.0]; a 16-bit one maps [0, 65535] to [0.0, 1.0].
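As a rough sketch of what that mapping amounts to in the 8-bit case (just the math; the hardware does this for you when it samples the texture):

```cpp
#include <cstdint>

// 8-bit unsigned, normalized integer: [0, 255] <-> [0.0, 1.0]
inline float unorm8ToFloat(std::uint8_t c)
{
    return c / 255.0f;
}

inline std::uint8_t floatToUnorm8(float f)
{
    // Clamp to [0, 1], then scale and round to the nearest integer.
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return static_cast<std::uint8_t>(f * 255.0f + 0.5f);
}
```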

For the components of a normal, you would need a “signed, normalized integer”. This is just the signed version of the above. Though do take note of the annotation about signed integer mapping at the bottom of that wiki article.
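And a rough sketch of the signed, 16-bit version you’d want for normal components, assuming the convention (which I believe is what that annotation describes) where the most negative integer is clamped, so both -32768 and -32767 map to -1.0:

```cpp
#include <cmath>
#include <cstdint>

// 16-bit signed, normalized integer: [-32767, 32767] <-> [-1.0, 1.0]
// (-32768 is also mapped to -1.0 under this convention).
inline float snorm16ToFloat(std::int16_t c)
{
    const float f = c / 32767.0f;
    return f < -1.0f ? -1.0f : f;
}

inline std::int16_t floatToSnorm16(float f)
{
    if (f < -1.0f) f = -1.0f;
    if (f >  1.0f) f =  1.0f;
    return static_cast<std::int16_t>(std::lround(f * 32767.0f));
}
```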