Sorry for asking so many questions in quick succession - I'd been storing them up before I found out about this board.
My problem is that I'm loading 24-bit textures (straight from a .raw file into glTexImage2D, although I add an alpha channel), but when they're displayed there is obvious banding in what should be smooth gradients. The overall effect looks as if the textures are being displayed at 12-bit (!) colour resolution or thereabouts.
This happens on all three graphics cards I've tested, in both 16- and 32-bit colour modes…
I load my textures in GL_RGBA format, with the buffer in unsigned bytes…
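To be concrete, the call is something along these lines (width, height and pixels are just placeholders here, not my exact variable names):

glTexImage2D(GL_TEXTURE_2D,
             0,                 /* mipmap level            */
             GL_RGBA,           /* internal format request */
             width, height,
             0,                 /* border                  */
             GL_RGBA,           /* format of the buffer    */
             GL_UNSIGNED_BYTE,  /* type of the buffer      */
             pixels);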
I have read about the third parameter in the MSVC documentation, but never thought more about it until now. Since you CAN pass 4 as that parameter and it works (more or less), OpenGL must assume it means four components (red, green, blue and alpha). Is that correct?
I guess that if you pass an incorrect number you just get some sort of default internal format that depends on your drivers. On my G400 I got 16-bit textures with a 4, but I think most nVidia cards give you 32-bit textures with a 4 (maybe to handle the common mistake encouraged by the misleading info in the MSVC docs and the SuperBible).
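If you want to be sure, you can request a sized internal format explicitly instead of the generic 4, roughly like this (width, height and pixels are placeholders again, and I haven't tested this on your exact cards):

glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGBA8,          /* explicitly ask for 8 bits per channel */
             width, height,
             0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             pixels);

That way the driver shouldn't silently fall back to a 16-bit internal format, which would explain the banding you're seeing.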