Our team is developing a graphical client for an MMORPG, and we're experiencing issues with users who have Radeon video cards.
OpenGL randomly crashes in glBindTexture, and we can't get a stack trace or anything more useful than that. All we know is that the affected users have one thing in common - Radeon video cards.
Are there any known pitfalls with Radeon video cards and OpenGL that we should be aware of?
These are only three examples, but AMD doesn't have the best track record when it comes to OpenGL driver quality.
That said, it's also possible that the AMD driver is behaving correctly, and that other drivers are accepting a texture (or combination of state) that they shouldn't. You'd need to give more information on those crashes: determine which texture or textures are bound when it crashes; if it's a single texture, inspect it for anything that looks like a possible cause; if it's multiple textures, determine what they have in common; and so on.
Again, look at the parameters to your glTexImage2D calls. You'll find something common across your crash cases. It would help if you'd post a sample of one or two such calls; otherwise we'll have to guess at possible causes.
width and height were checked through logging; they have valid values, as does everything else. All the values are valid, yet the crashes are really random. Sometimes it does the binding and just works…
We’re getting a bit hopeless on this.
P.S. Updating the drivers didn't help the user. We'll probably buy ourselves a Radeon video card and try to reproduce this on our own machine, since it has only happened to our testers with Radeon video cards.
type
Specifies the data type of the pixel data.
The following symbolic values are accepted:
GL_UNSIGNED_BYTE,
GL_BYTE,
GL_BITMAP,
GL_UNSIGNED_SHORT,
GL_SHORT,
GL_UNSIGNED_INT,
GL_INT,
GL_FLOAT,
GL_UNSIGNED_BYTE_3_3_2,
GL_UNSIGNED_BYTE_2_3_3_REV,
GL_UNSIGNED_SHORT_5_6_5,
GL_UNSIGNED_SHORT_5_6_5_REV,
GL_UNSIGNED_SHORT_4_4_4_4,
GL_UNSIGNED_SHORT_4_4_4_4_REV,
GL_UNSIGNED_SHORT_5_5_5_1,
GL_UNSIGNED_SHORT_1_5_5_5_REV,
GL_UNSIGNED_INT_8_8_8_8,
GL_UNSIGNED_INT_8_8_8_8_REV,
GL_UNSIGNED_INT_10_10_10_2, and
GL_UNSIGNED_INT_2_10_10_10_REV.
It is a valid format, and it works on all NVIDIA cards, so…
If this were a major AMD Catalyst driver and OpenGL problem, it would be generally known. But it seems like we're the only ones having this issue…
And the randomness of the crashes makes it really hard to debug.
Can you post an example of the values that your width and height parameters take? Where I'm coming from here: I'm wondering whether you have non-power-of-two textures, widths or heights that are not nice multiples of 4, or similar.
As a general rule AMD drivers are actually OK (despite my previous post) until you try to do something unusual or unexpected - that's when they hit a codepath that might not have been robustly tested, and that's when they explode.
This, for example, is something I'd consider unusual. One doesn't often see the GL_UNSIGNED_INT_8_8_8_8 packed pixel type (GL_UNSIGNED_INT_8_8_8_8_REV is more common), and it's also forcing a format conversion in the driver from a 32-bit, 8-bits-per-channel source to a 16-bit, 4-bits-per-channel internal format.