
View Full Version : Using bitfieldInsert() to pack flags into G-Buffer



Moktoe
07-10-2017, 02:13 PM
I'm trying to use bitfieldInsert() and bitfieldExtract() to pack/unpack binary flags into a color component in my G-Buffer. However, the second modification of the bitfield doesn't seem to have any effect on the data. I've tried hard-coding the input values multiple ways, but the color value only changes with respect to the first input.

Below is a simplified form of the code logic:


bool var1 = true;
bool var2 = true;

uint flags = 0;
flags = bitfieldInsert( flags, uint(var1), 0, 1 );
flags = bitfieldInsert( flags, uint(var2), 1, 1 );

color.b = flags;

Changing var2 doesn't affect the blue channel of the color output. Am I using these functions correctly?

GClements
07-10-2017, 02:56 PM
What is the format of the colour buffer? If it's unsigned normalised, then assigning a value outside of the range [0,1] will result in the value being clamped to that range.

Moktoe
07-10-2017, 03:30 PM
What is the format of the colour buffer? If it's unsigned normalised, then assigning a value outside of the range [0,1] will result in the value being clamped to that range.

The buffer format is GL_RGB10_A2. I thought about that too. I tried the uintBitsToFloat() function, in case the value needed to match the other vector components as floats, but that removed both flags entirely.

Moktoe
07-10-2017, 04:03 PM
You were right. I normalized the unsigned integer value and it's working now. Below is the solved form of the code. Thanks for your help.

bool var1 = true;
bool var2 = true;

uint flags = 0;
flags = bitfieldInsert( flags, uint(var1), 0, 1 );
flags = bitfieldInsert( flags, uint(var2), 1, 1 );

color.b = float(flags)/255.0;

Moktoe
07-11-2017, 10:04 AM
One last comment: if you are using a 10-bit color component like me, you should divide by 1023.0 instead to fully utilize all 10 bits.