I need to convert 24-bit color information (RGB) to 16-bit color information.
How do I achieve this?
Should I do my own calculations, or is there a standard methodology for this?
Certain embedded platforms only support the 16-bit color representation, nothing beyond that.
Thanks for the information.
But that gives the theoretical explanation…
What about the logic below?
The 24-bit representation of RGB is RRRRRRRR-GGGGGGGG-BBBBBBBB, with 8 bits assigned to each primary color.
To convert it to a 16-bit representation, it has to become RRRRR-GGGGGG-BBBBB (5-6-5).
So fetch the color components from the 24-bit representation and shift them accordingly to get the 16-bit representation.
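That shift-and-pack step can be sketched in C like this (assuming the common RGB565 layout with red in the top 5 bits; some platforms pack BGR instead, so check your display's documentation):

```c
#include <stdint.h>

/* Convert a 24-bit RGB888 pixel to 16-bit RGB565:
 * drop the low-order bits of each component (3 bits of red,
 * 2 of green, 3 of blue) and pack the results as 5-6-5. */
static inline uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) |  /* red   -> bits 15..11 */
                      ((g >> 2) << 5)  |  /* green -> bits 10..5  */
                      (b >> 3));          /* blue  -> bits 4..0   */
}
```

Green keeps 6 bits because the eye is most sensitive to green, which is why the standard split is 5-6-5 rather than 5-5-5.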
On the early 3dfx chips all the calculations were done in 24 or 32 bit, then dithered down to 16 bit before hitting the frame buffer. This meant that 3dfx's 16-bit colour was superior to the competition's. Quite clever. Considering the hardware of the time (an early Voodoo1 card with 4 MB of RAM), it was a good solution. It's just that later on, 16-bit colour sucked :eek:
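The general idea of dithering before truncation can be sketched like this. This is not 3dfx's actual hardware algorithm; the 2x2 Bayer matrix and the scaling are illustrative assumptions, just to show how a position-dependent offset smaller than one quantization step trades banding for a fine noise pattern:

```c
#include <stdint.h>

/* 2x2 ordered (Bayer) dither matrix, values 0..3. */
static const uint8_t bayer2[2][2] = { { 0, 2 },
                                      { 3, 1 } };

/* Quantize an 8-bit component to 5 bits with ordered dithering.
 * One 5-bit step spans 8 input values, so the 0..3 matrix entry
 * is scaled by 2 to give offsets 0, 2, 4, 6 -- always below one
 * step, so the dither perturbs only the rounding decision. */
static inline uint8_t dither_8_to_5(uint8_t v, int x, int y)
{
    int dithered = v + bayer2[y & 1][x & 1] * 2;
    if (dithered > 255)
        dithered = 255;          /* clamp to avoid overflow at the top */
    return (uint8_t)(dithered >> 3);
}
```

Across a flat region, neighbouring pixels then round in different directions, so the average displayed value stays close to the 24-bit original instead of snapping to the nearest 16-bit band.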
Are you sure this system was present on the first-generation 3dfx Voodoo?
My memory of the Voodoo1 I had is that subtle texture gradients showed very ugly banding, but I may not be accurate. 16-bit textures would look bad even with 24-bit computations anyway…
Nowadays, 24-bit is very easy on desktop platforms; the framebuffer costs little space relative to total VRAM.
And if your embedded device does 16-bit, it is for both memory and speed, so it does not make sense to do the computations in 24-bit.
Around the time of Quake 3, I saw a Mac doing 16-bit temporal dithering, which was quite easy on the eyes. It just breaks reproducibility.