Originally posted by dvm:
There’s an excellent paper from NVidia about texture compression that describes the steps you have to take. I did this a few months ago, and I’m not sure your code is 100% correct (I only took a quick look, so I could be wrong). Even if it is, maybe your card does not support compressing textures (but can still use compressed textures). This happened to me when I tried to compress my textures on a GeForce 5200 in a laptop: while it could use precompressed textures, it couldn’t compress them. Also, you understand that you have to build an application that compresses the textures before you use them, don’t you?
Furthermore, look up S3TC, which seems to be the prevalent method for compression.
Go here to download the paper from nvidia.
Hello!
Thank you for the paper suggestion. That was precisely the document that first encouraged me to use texture compression, and I found it easier to read than the official extension specification. Coincidentally, I’m also programming on a laptop, though with an ATI card. The drivers I currently use expose the texture compression extension, but not the S3TC one. Maybe this is what makes the compression fail (no concrete compressed format available from the drivers), but if so, it seems a bit silly to implement the first without any other extension that could make use of it.
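For reference, this is roughly how I check which extensions the driver exposes. A plain strstr() over the extension string can give false positives when one extension name is a prefix of another, so the helper checks token boundaries (a sketch; with a current context you would pass in the string returned by glGetString(GL_EXTENSIONS)):

```c
#include <string.h>

/* Look for an exact extension name in a space-separated extension string.
 * strstr() alone can match substrings (e.g. "GL_ARB_texture_compression"
 * inside a longer extension name), so verify the token boundaries too. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL) {
        int starts_token = (p == extensions) || (p[-1] == ' ');
        int ends_token   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_token && ends_token)
            return 1;   /* exact match found */
        p += len;       /* keep scanning past the partial match */
    }
    return 0;
}
```

With a current context, checking for both extensions would then be something like has_extension((const char *)glGetString(GL_EXTENSIONS), "GL_EXT_texture_compression_s3tc").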
Anyway, I’d like to point out that, according to nVidia’s paper (page 2):
There are two different approaches to performing the compression of the texture bitmaps. The first method uses
OpenGL to effectively compress the textures.
My understanding here is that you can ask the GL to compress the texture for you.
Also from page 2 of the paper
The ARB_texture_compression extension allows an uncompressed texture to be
compressed on the fly through the glTexImage2D call by setting its <internalFormat>
parameter accordingly. This can be done in one of two ways: use a generic compressed
internal format from Table 1 or use an explicit internal format like one offered by the
S3TC extension listed in Table 2. Basically, the “compressed internal format” works just
like a texture with “base internal format” except that the data is compressed.
I think the meaning here is that you can tell GL that you want a texture to be compressed without actually specifying what kind of compression you want, only what kind of data is being compressed.
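That matches how I read it too. As a sketch of the upload (texId, width, height and pixels are assumed to come from my own loading code; the only change from a normal upload is the internalFormat parameter):

```c
/* Upload an uncompressed RGB image and ask the GL to compress it on the fly.
 * The generic token leaves the choice of the concrete compressed format
 * (e.g. one of the S3TC formats) to the driver. */
glBindTexture(GL_TEXTURE_2D, texId);
glTexImage2D(GL_TEXTURE_2D,
             0,                      /* mipmap level */
             GL_COMPRESSED_RGB_ARB,  /* generic compressed internal format */
             width, height,
             0,                      /* border */
             GL_RGB,                 /* format of the data being passed in */
             GL_UNSIGNED_BYTE,
             pixels);                /* plain, uncompressed RGB data */
```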
Then, if a generic compressed internal format was used, query OpenGL for the actual
<internalFormat> that has been automatically selected by OpenGL. To do so, one calls
glGetTexLevelParameteriv again with <pname> set to
GL_TEXTURE_INTERNAL_FORMAT.
From here I understand that GL can choose an appropriate compression method on its own. Maybe the lack of compressed formats on my machine is what leaves GL without any way to compress the data. I don’t know what the behaviour would be if I supplied already-compressed data, since I would still lack the decompression extension that was used to process the image in the first place.
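For what it’s worth, this is roughly how I’d query, right after the upload, whether the driver actually compressed the texture and which format it picked (a sketch, no error checking):

```c
GLint compressed   = GL_FALSE;
GLint chosenFormat = 0;

/* Did the driver actually compress level 0 of the bound texture? */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                         GL_TEXTURE_COMPRESSED_ARB, &compressed);

if (compressed == GL_TRUE) {
    /* Which concrete internal format did it choose (e.g. an S3TC one)? */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &chosenFormat);
} else {
    /* The texture was stored uncompressed after all. */
}
```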
I’ll test the same program with a driver that supports compressed textures and S3TC, to see whether generic compression from uncompressed data (that is, using GL_COMPRESSED_RGB_ARB as the image’s internal format) is really supported.
Again, thanks for the tips.