This might be a beginner question, but I figured you guys would probably know more on the subject:
when calling a glTexImageXD routine with a load of floats, specifying GL_FLOAT as the data type, what actually happens? How does GL map the float range and use it?
I'm using new hardware (FX 5900 or R9700 and above). If I specify RGBA16 as the internal format in a glTexImageXD call, I guess I'll get 16-bit precision clamped to [0,1]. Does anybody know how these values are mapped? Or should I use an extension such as float_buffer instead to guarantee the behaviour?
There are two formats at play. The external one includes both “format” and “type”, and tells the driver how to cast and interpret the void * data argument.
Then there’s the internal format (like RGB8 or INTENSITY32F), which determines what sort of type conversion and component replication or merging needs to happen.
Note also that the internal format is strictly a hint; the driver is free to pick a different actual storage format than the one you asked for.