RGBA floats to 16-bit value

Hi, I am totally new to OpenGL. I’m updating a visualization addon for Kodi (a media center application). With the latest update, I think I have to provide a 16-bit value to OpenGL. I have four floats: r, g, b, a. What’s an efficient way of doing this in C++?

float r,g,b,a; //values from 0..1

verts[i].col = ????  //expecting a value like 0xffffffff

Thank you!

Are you sure that is a 16 bit value? I count 32 bits in 0xffffffff.

If the integer represents a direct color value (e.g. X bits are used for red, Y bits for green, …), you simply scale each component accordingly (e.g. multiply red by (2^X) - 1 to map it from the [0,1] range to [0, (2^X)-1]), cast it to an integer, shift the result to the right bit position, and combine the results with a bitwise OR. Watch out for potential endianness problems (bit positions vs. byte positions in multi-byte integers).

An example with a 24-bit RGB color, 8 bits per channel, where the least significant bits of the word are used for blue, the next higher ones for green, and so on:


// Scale each [0,1] float to an integer in [0,255]
int R = (int)(r*255.0f), G = (int)(g*255.0f), B = (int)(b*255.0f);

// Shift each channel to its bit position and combine: 0x00RRGGBB
int color = (R<<16) | (G<<8) | B;
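
Since you have an alpha channel and expect something like 0xffffffff, here is a minimal sketch of the same idea for a 32-bit RGBA value. The byte layout (alpha in the top byte, then red, green, blue) and the helper name PackColor are assumptions for illustration only; check which order your vertex format / Kodi actually expects and reorder the shifts if needed.

#include <algorithm>
#include <cstdint>

// Hypothetical helper: pack four floats in [0,1] into one 32-bit color value.
// Assumed layout: alpha in the highest byte, then red, green, blue.
static uint32_t PackColor(float r, float g, float b, float a)
{
    auto to8 = [](float v) -> uint32_t {
        v = std::clamp(v, 0.0f, 1.0f);                     // guard against out-of-range input
        return static_cast<uint32_t>(v * 255.0f + 0.5f);   // scale and round to 0..255
    };
    return (to8(a) << 24) | (to8(r) << 16) | (to8(g) << 8) | to8(b);
}

// verts[i].col = PackColor(r, g, b, a);   // 1,1,1,1 gives 0xffffffff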

If it is a 16-bit color index, you have to find the closest match to your RGB color in the color table somehow.
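
For completeness, here is a rough sketch of such a closest-match search, assuming the color table is simply an array of 8-bit RGB triples (that layout and the name NearestIndex are made up for illustration). It uses squared Euclidean distance in RGB space, which is simple but not perceptually exact:

#include <cstddef>
#include <cstdint>
#include <limits>

struct PaletteEntry { uint8_t r, g, b; };   // assumed layout of one color-table entry

static uint16_t NearestIndex(const PaletteEntry* palette, size_t count,
                             uint8_t r, uint8_t g, uint8_t b)
{
    uint16_t best = 0;
    long bestDist = std::numeric_limits<long>::max();
    for (size_t i = 0; i < count; ++i)
    {
        // Squared distance between the requested color and this palette entry
        long dr = (long)palette[i].r - r;
        long dg = (long)palette[i].g - g;
        long db = (long)palette[i].b - b;
        long dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist)
        {
            bestDist = dist;
            best = (uint16_t)i;
        }
    }
    return best;
}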