ATI why do you torture me so

I am trying to implement some weighted average order independent transparency …

Anyhow, I need an FBO with 2 colour attachments. I need floating-point texture formats for it to work, because I need to sum all the values and then divide by the alpha values. The problem is that float texture formats cause all kinds of bizarre problems.
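
Roughly what I'm doing is below (a simplified sketch, not my exact code; the function names are just for illustration, and I'm assuming GLEW plus EXT_framebuffer_object and GL_RGBA16F as the float format):


#include <GL/glew.h>

GLuint fbo, accumTex, weightTex;

static GLuint makeFloatTexture(GLenum internalFormat, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    return tex;
}

void setupAccumulationFBO(int w, int h)
{
    /* One attachment accumulates the weighted colour sums,
       the other accumulates the alpha values for the divide. */
    accumTex  = makeFloatTexture(GL_RGBA16F, w, h);
    weightTex = makeFloatTexture(GL_RGBA16F, w, h);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, accumTex, 0);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                              GL_TEXTURE_2D, weightTex, 0);

    /* Write to both attachments at once (MRT). */
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffers(2, bufs);

    /* Additive blending so both attachments accumulate sums. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    {
        /* this format/attachment combination isn't renderable on this card */
    }
}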

Pics

As you can see, both floating-point texture targets give strange errors, and the 32-bit one fails to do any kind of blending at all. Is this kind of pain normal?

My gfx card is … ATI x1950 pro. Oldish, yes, but I need to support these crappy cards (cries).

My gfx card is … ATI x1950 pro.

They don’t support 32-bit floating-point blending.

Well that explains that problem. Is there some way of asking the card if it supports that?

Any idea what I can do about the poly cracking? It only seems to happen with GL_RGB16F.

Well that explains that problem. Is there some way of asking the card if it supports that?

No.

Could you use a texture proxy to test what texture formats are available?
You'd try to create a texture with a specific format:


glTexImage2D( GL_PROXY_TEXTURE_2D,
              level,
              internalFormat,
              width,
              height,
              border,
              format,
              type,
              NULL);

and then query to see what format was actually loaded:


glGetTexLevelParameteriv( GL_PROXY_TEXTURE_2D,
                          0,
                          GL_TEXTURE_INTERNAL_FORMAT,
                          &format);
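
Something like this wraps the two calls above into one check (a rough, untested sketch; the function name is made up). Note it only tells you whether the driver will accept the format at all, not whether the card can blend into it:


#include <GL/gl.h>

int internalFormatSupported(GLint internalFormat, GLsizei width, GLsizei height)
{
    GLint chosen = 0;

    /* Ask the proxy target to "allocate" the texture; nothing is actually created. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internalFormat,
                 width, height, 0, GL_RGBA, GL_FLOAT, NULL);

    /* If the format/size combination isn't supported, the proxy state is zeroed. */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &chosen);

    return chosen != 0;
}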

Could you use a texture proxy to test what texture formats are available?

You can create and render to 32-bit floating-point formats on those platforms. You just can’t use blend modes with them.

Yeah, that’s the problem: it just randomly fails. I discovered exactly what causes the poly cracking with GL_RGB16F: simply enabling blending. It seems this ATI card can’t really do any kind of floating-point blending in any sensible way.