I’m trying to track down an occasional problem I’m having using non-power-of-two textures with texture compression on ATI cards. (Current card = Radeon HD 5850, driver = Catalyst 12.8.)
After looking at many examples of successful and unsuccessful texture loads I’ve picked one that I can use to reproduce the problem:
glGenTextures(1, &TextureID);
glBindTexture(GL_TEXTURE_2D, TextureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
GLsizei width = 734;
GLsizei height = 717;
DummyTexture = (GLubyte*) calloc(width * height, 3);
try {
    glTexImage2D(GL_TEXTURE_2D, 2, GL_COMPRESSED_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, DummyTexture);
    GLint format;
    // Query the level we just specified (level 2, not 0)
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 2, GL_TEXTURE_INTERNAL_FORMAT, &format);
    if (format == 0) {
        TRACE("No format at %d x %d\n", width, height);
    } else {
        TRACE("OK at %d x %d\n", width, height);
    }
} catch (structured_exception& e) {
    const EXCEPTION_RECORD& rec = e.Record();
    TRACE("Exception attempting to load a texture sized %d x %d\n", width, height);
    if (rec.ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
        const char* accessStr;
        if (rec.ExceptionInformation[0] == 0)
            accessStr = "Read";
        else if (rec.ExceptionInformation[0] == 1)
            accessStr = "Write";
        else if (rec.ExceptionInformation[0] == 8)
            accessStr = "DEP";
        else
            accessStr = "Unknown";
        size_t AccessLocation = rec.ExceptionInformation[1];
        TRACE("%s access violation to address %zu\n", accessStr, AccessLocation);
        size_t ImageStart = reinterpret_cast<size_t>(DummyTexture);
        size_t ImageEnd = ImageStart + width * height * 3;
        TRACE("Image data is from %zu to %zu\n", ImageStart, ImageEnd);
        if (AccessLocation < ImageStart)
            TRACE("Access is %zu bytes before image data\n", ImageStart - AccessLocation);
        else if (AccessLocation > ImageEnd)
            TRACE("Access is %zu bytes after image data\n", AccessLocation - ImageEnd);
        else
            TRACE("Access is within image data???\n");
    }
}
free(DummyTexture);
glDeleteTextures(1, &TextureID);
The trace output is:
Exception attempting to load a texture sized 734 x 717
Read access violation to address 141566091
Image data is from 139985008 to 141563842
Access is 2249 bytes after image data
A few things need explaining:
The sample code loads mip level 2 because that is where the crash occurred when loading a real texture. Using the same values with mip level 0 works fine, and loading mip levels 0 and 1 first with correspondingly larger values doesn't make a difference, i.e. putting the above into an appropriate loop gives:
OK at 2936 x 2868
OK at 1468 x 1434
Exception attempting to load a texture sized 734 x 717
Read access violation to address 50143371
Image data is from 48562288 to 50141122
Access is 2249 bytes after image data
Using automatic mipmap generation triggers the same crash when loading the 2936 x 2868 base level texture. Manual mipmap generation allows me to isolate the particular point when it occurs.
Changing GL_UNPACK_ROW_LENGTH to the image width doesn’t make any difference. Changing the height from 717 to 716 or 718 avoids the access violation in this particular case.
It is possible that other combinations of height and width that don't trigger the access violation still read outside the image area; it may be pure luck that, in this particular case, the location 2249 bytes after the image happens not to belong to the process.
I tried sizing the memory so that the width and height were multiples of four and then set GL_UNPACK_ALIGNMENT to 4, but it didn't help (it just moved the access violation, sometimes to tens of megabytes before or after the image data!).
Finally, structured_exception is just a simple wrapper class for Win32 structured exceptions (i.e. used with _set_se_translator()).
On a related note: for years now we've had a workaround in our code to find out what the maximum compressed texture size actually is on ATI GPUs. On the above card, if I do:
// Get the theoretical maximum texture size
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &TexSize);
then TexSize = 16384. If I then do:
// Find out the largest size that will actually fit
bool done = false;
do {
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_COMPRESSED_RGB, TexSize, TexSize, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    GLint format;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
    if (format == 0) {
        TexSize >>= 1;   // proxy rejected this size; try half
    } else {
        done = true;
        MaxTextureSize = TexSize;
    }
} while (!done);
then TexSize will still be 16384; the proxy accepts the full reported size on the first iteration.
But if I then try to use an actual texture larger than 4096, it crashes. So our code uses a loop similar to the first one to find the largest power of two that can actually be loaded without crashing.
Am I doing something wrong?
Thanks,
Jason.