Strange ATI issue? - even the basics don't work!

For possibly a year now I have been experiencing a very odd situation with my engine running on a Windows Vista PC with an ATI Radeon 4850 - across all driver versions.

My engine runs perfectly on nVidia hardware, and that is my main game development system. However, from time to time I like to check the engine on ATI hardware - and this is where the fun begins.

Using a debugger, I can see that the odd behaviour occurs when GL_TEXTURE_2D is enabled and I try to render any texture (to a quad, etc.). Even a simple glBegin (GL_QUADS) results in a floating-point divide error when the next swap-buffers command is issued. The exact same rendering commands with GL_TEXTURE_2D disabled do not result in the crash, so I conclude the issue is the ATI driver doing something odd with the texturing state. The same behaviour occurs no matter what object I attempt to draw (complex models, simple models, basic quads, etc.) and it does not matter whether I use fixed function or shaders - it always crashes. I can't post any engine code as it's meaningless - any attempt to use a texture just fails.

To aid my debugging I have checked the OpenGL context - it's 3.1.xxx, and I have also tried a GL 2.1 context. I have ripped apart my engine and made it render only a stupidly basic scene - some flat-shaded objects and then a 2D orthographic full-screen overlay with a test piece of code to draw a textured quad. It will render the flat-shaded objects but always fails on this textured quad, as I said before.
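
For what it's worth, the textured-quad test itself is nothing exotic. A minimal sketch of the sort of code involved (not my actual engine code; it assumes a current GL context, a Delphi/FPC binding such as dglOpenGL, and a texture object already uploaded) would be:

// Minimal sketch of the textured-quad overlay test (hypothetical names).
procedure DrawTexturedQuadTest(tex: GLuint);
begin
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, tex);
  glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
    glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
    glTexCoord2f(1.0, 1.0); glVertex2f( 1.0,  1.0);
    glTexCoord2f(0.0, 1.0); glVertex2f(-1.0,  1.0);
  glEnd;
  glDisable(GL_TEXTURE_2D);
end;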

Anyone ever seen anything like this before?
This has been bugging me for absolutely ages (no pun intended!)

No, everything is working perfectly on ATI here. Chances are there's something wrong with your engine - but there's not enough information to hazard a guess as to the cause.

Are you using any glGet* functions to query state? If so, double-check their return values are what you expect.

Even if I cause a floating-point divide error - by which I mean a divide by 0.0 - none of my programs have crashed. The FPU just raises an error flag; it should not bring your program down.

Are you sure the crash occurs in an ATI dll?

Right, some progress…
Massive amounts of debugging later, and I think it's either the way I'm doing automatic texture compression, or a difference in behaviour between the ATI and nVidia drivers.

My engine usually allows each texture to be compressed automatically - this happens because I change the GL internal format from GL_RGBA to GL_COMPRESSED_RGBA when calling glTexImage2D (target, level, internalFormat, …).
After uploading the texel data I read back some parameters, and this is where it gets interesting.

I check if the texture was compressed, then its internal format…

On the ATI hardware, calling:

glGetTexParameteriv (texture.target, GL_TEXTURE_COMPRESSED, @isTC) returns 0

glGetTexParameteriv (texture.target, GL_TEXTURE_INTERNAL_FORMAT, @fmt) returns 0
glGetTexLevelParameteriv (texture.target, 0, GL_TEXTURE_INTERNAL_FORMAT, @fmt) returns GL_COMPRESSED_RGBA_S3TC_DXT5_EXT

This is most odd, since the GL query first said it was not compressed, yet then actually says it is S3TC compressed.
With this format, when I attempt to use the texture, it crashes GL.

I notice that I have actually loaded an RGB texture and generated the alpha channel myself. Hence I have created an RGBA source image and therefore a GL_COMPRESSED_RGBA internal format. If I just keep the texture as GL_RGB, the compression is set to GL_COMPRESSED_RGB and

glGetTexParameteriv (texture.target, GL_TEXTURE_COMPRESSED, @isTC) returns 0
glGetTexParameteriv (texture.target, GL_TEXTURE_INTERNAL_FORMAT, @fmt) returns 0
glGetTexLevelParameteriv (texture.target, 0, GL_TEXTURE_INTERNAL_FORMAT, @fmt) returns GL_COMPRESSED_RGB_S3TC_DXT1_EXT

…but at least I can now use the same texture for something…

Questions: Am I doing something wrong when creating compressed textures?

or

Is there something wrong with the ATI drivers (or is nVidia in fact just too forgiving of badly written GL code)?

What is the fool-proof correct way to upload image data and get GL to compress it?

Is there something wrong with ATI drivers

That seems likely. Most applications don’t use the generic compressed formats the way you did; they usually select a specific compression scheme. So it’s very possible that ATI just has a bug there.

As a workaround, you could try to select a specific compression scheme yourself.
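
Something along these lines, for example - just a sketch, written Delphi-style to match your snippets, and assuming GL_EXT_texture_compression_s3tc is exposed and a binding such as dglOpenGL is loaded:

// Sketch: ask for a specific compressed internal format instead of the
// generic GL_COMPRESSED_RGBA; the driver still compresses on upload.
procedure UploadAsDXT5(tex: GLuint; width, height: GLsizei; pixels: Pointer);
begin
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
               width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
end;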

Just a guess, but a driver might favor speed over quality here, so you're probably better off doing this sort of thing as a preprocess anyway (at least you'll know what you're getting and it'll be consistent).

glGetTexParameteriv (texture.target, GL_TEXTURE_COMPRESSED, @isTC) returns 0
glGetTexParameteriv (texture.target, GL_TEXTURE_INTERNAL_FORMAT, @fmt) returns 0

Those should be throwing INVALID_ENUM, because you’re trying to query per-level parameters with a per-object API.
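
You can verify that with a glGetError check immediately after the query - a quick sketch (hypothetical procedure, Delphi-style):

// Sketch: the per-object query should raise GL_INVALID_ENUM for
// GL_TEXTURE_COMPRESSED, since that is a per-level parameter.
procedure CheckCompressedQueryError;
var
  isTC: GLint;
  err: GLenum;
begin
  isTC := 0;
  glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_COMPRESSED, @isTC);
  err := glGetError;
  if err = GL_INVALID_ENUM then
    WriteLn('per-object query rejected, as the spec requires');
end;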

setting to _S3TC_DXT5 still causes a crash…

I thought the whole point of the generic GL_COMPRESSED_* formats was that the driver chose the best compression.

What is the correct way to read back from GL whether the texture is compressed, its internal format and its size?

setting to _S3TC_DXT5 still causes a crash…

Well, ATI's drivers aren't exactly known for their quality. You should probably do what actual games do: compress the textures off-line and upload the data as pre-compressed images.
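
At load time that boils down to one glCompressedTexImage2D call per mip level. A sketch of the upload side (the record and names here are made up for illustration; assumes glCompressedTexImage2D from GL 1.3 is available):

// Sketch: upload pre-compressed (e.g. offline-generated DXT) data directly.
type
  TCompressedLevel = record
    Width, Height: GLsizei;
    DataSize: GLsizei;   // size of the compressed block data in bytes
    Data: Pointer;
  end;

procedure UploadPrecompressedLevel(tex: GLuint; level: GLint;
  internalFormat: GLenum; const lvl: TCompressedLevel);
begin
  glBindTexture(GL_TEXTURE_2D, tex);
  glCompressedTexImage2D(GL_TEXTURE_2D, level, internalFormat,
                         lvl.Width, lvl.Height, 0, lvl.DataSize, lvl.Data);
end;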

I thought the whole point of the generic GL_COMPRESSED_* formats was that the driver chose the best compression.

Well, it's not like there are a lot of choices here. The hardware only really supports one basic style of compression: S3TC-style block compression. This can be used in several ways, but usually only in one way per type of input data.

If you send GL_RED data and ask for GL_COMPRESSED_RED, you will get GL_COMPRESSED_RED_RGTC1, as that is what you use for one-channel compressed formats. For GL_RG data, it's GL_COMPRESSED_RG_RGTC2. For GL_RGB, it's GL_COMPRESSED_RGB_S3TC_DXT1_EXT. Only for GL_RGBA data is there ever a choice, and the better choice most of the time is GL_COMPRESSED_RGBA_S3TC_DXT5_EXT.

So basically, you should already know what format you’re getting based on what data you send GL.
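
In other words, the generic-to-specific mapping is essentially fixed. A sketch of it (RGTC needs GL 3.0 or ARB_texture_compression_rgtc, S3TC needs EXT_texture_compression_s3tc; written Delphi-style to match your code):

// Sketch: map the generic compressed formats to the specific schemes
// described above; anything unrecognised is passed through unchanged.
function SpecificCompressedFormat(genericFormat: GLenum): GLenum;
begin
  case genericFormat of
    GL_COMPRESSED_RED:  Result := GL_COMPRESSED_RED_RGTC1;
    GL_COMPRESSED_RG:   Result := GL_COMPRESSED_RG_RGTC2;
    GL_COMPRESSED_RGB:  Result := GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
    GL_COMPRESSED_RGBA: Result := GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
  else
    Result := genericFormat;
  end;
end;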

What is the correct way to read back from GL whether the texture is compressed, its internal format and its size?

According to the GL spec, GL_TEXTURE_COMPRESSED is a texture level parameter. Therefore you must use glGetTexLevelParameter to retrieve it. The same goes for GL_TEXTURE_INTERNAL_FORMAT. Just as arekkusu said.
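
So the read-back would look something like this (a sketch; assumes the texture is bound to the given target):

// Sketch: query compression state, internal format and compressed size
// per mip level with glGetTexLevelParameteriv.
procedure QueryCompressionInfo(target: GLenum);
var
  isCompressed, internalFmt, compressedSize: GLint;
begin
  isCompressed := 0;
  internalFmt := 0;
  compressedSize := 0;
  glGetTexLevelParameteriv(target, 0, GL_TEXTURE_COMPRESSED, @isCompressed);
  glGetTexLevelParameteriv(target, 0, GL_TEXTURE_INTERNAL_FORMAT, @internalFmt);
  if isCompressed = GL_TRUE then
    glGetTexLevelParameteriv(target, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, @compressedSize);
  WriteLn('compressed = ', isCompressed,
          ', internal format = ', internalFmt,
          ', compressed size = ', compressedSize);
end;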

Thanks guys… I am doing something wrong with the read-back.

There are a couple of other ATI bugs too.
An FBO with a depth buffer & stencil (or depth texture & stencil) does not actually function correctly. I get no FBO errors on creation, and a GL read-back confirms 8 stencil bits allocated. However, I have proved that the stencil tagging is not working (so deferred lighting is broken) and, most bizarrely, there is NO DEPTH! When rendering the scene, all objects are invisible because they have no depth - I have to render in wireframe mode before I can 'see' the objects. I have a couple of other flat-shaded objects (test/debug shapes - box, cylinder, etc.) and these are drawn really badly behind each other - where there is no depth.

Oh… forgot to reply to an earlier comment…
The reason why I chose to use the generic compression enum is that it's future-proof - the driver should choose the most appropriate compression without the application having to test for each and every one. Yes, we have the one S3TC scheme, but that's not the only one now, with the RGTC additions found in GL 3.x.

An FBO with a depth buffer & stencil (or depth texture & stencil) does not actually function correctly.

Did you use a combined depth/stencil buffer, or are you trying to use individual depth and stencil buffers?

the driver should choose the most appropriate compression without the application having to test for each and every one.

So the quality of the compressed result is irrelevant to you?

Yes, we have the one S3TC scheme, but that's not the only one now, with the RGTC additions found in GL 3.x.

These are used for different things. RGTC only works on GL_RED and GL_RG textures. DXT1 works for RGB and for RGBA with 1-bit alpha. DXT3/5 works for RGBA only.

They don’t overlap.

I'm not too concerned about quality issues - if I were, I could choose not to compress individual textures. I use a very simple case statement to choose the appropriate general-purpose compression depending on the texture format:

case Image.GLformat of
  GL_ALPHA:     fmt := GL_COMPRESSED_ALPHA;
  GL_LUMINANCE: fmt := GL_COMPRESSED_LUMINANCE;
  GL_RGB8:      fmt := GL_COMPRESSED_RGB;
  GL_RGBA8:     fmt := GL_COMPRESSED_RGBA;
  // …and so on…
end;

It's in this routine that I replaced GL_COMPRESSED_RGB with a specific scheme such as GL_COMPRESSED_RGB_S3TC_DXT1_EXT… but that still caused mayhem on the ATI 4850 with OpenGL 3.1.

As for the depth and stencil problems…
I originally used depth24_stencil8 (so a combined buffer) and this works as expected on nVidia. I had previously submitted a bug report to ATI that their combined depth-stencil FBOs were producing weird artifacts when the stencil was being used. That was at least two or three driver revisions ago, and I've been waiting to test again with something newer (but the persistent crashing prevented me from getting any closer).

Now that I've stopped the crashing, the depth_stencil is worse - it does not even work as a depth buffer!
I have also tried a depth24_stencil8 FBO with a depth texture and with a depth buffer, and also using a separate stencil buffer (but that produced an incomplete-FBO error).

None of these combinations work for me; the main GL window is fine with both stencil and depth, however.

After debugging the FBO issue, it seems that the ATI drivers are insisting that I use the DEPTH_STENCIL variants for the renderbuffer internal format, and DEPTH_STENCIL_ATTACHMENT for the glFramebufferRenderbuffer call. This is the case whether I use a combined depth_stencil (depth texture) or separate renderbuffers for depth and stencil.
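
For reference, the combination that the ATI driver accepts looks essentially like this (a sketch of the renderbuffer path only, assuming a GL 3.x context, a binding such as dglOpenGL, and a colour texture already created):

// Sketch: combined depth-stencil renderbuffer attached via
// DEPTH_STENCIL_ATTACHMENT - the setup ATI appears to insist on.
procedure CreateDepthStencilFBO(colourTex: GLuint; w, h: GLsizei;
  out fbo, depthStencilRB: GLuint);
begin
  glGenRenderbuffers(1, @depthStencilRB);
  glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRB);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);

  glGenFramebuffers(1, @fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, colourTex, 0);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                            GL_RENDERBUFFER, depthStencilRB);

  if glCheckFramebufferStatus(GL_FRAMEBUFFER) <> GL_FRAMEBUFFER_COMPLETE then
    WriteLn('FBO incomplete');

  glBindFramebuffer(GL_FRAMEBUFFER, 0);
end;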

Anyway, FBOs do now contain depth and stencil renderbuffers, but FBOs using depth textures do not work (as stated before - objects render without any depth).

BionicBytes,

I have created test cases to try to reproduce your issues with compressed textures. So far, I have not managed to induce a crash. Is there any way you could supply us with a test case so that we can investigate further?

Cheers,

Graham

Hi Graham,

Yes please, I’d welcome that.
Actually I'd like to give you two - one to test the compression and the other to test an FBO using a depth texture & stencil (instead of a depth buffer).

How can I send it to you?