GL_RG32F vs GL_RGB32F on ATI cards

Hi All,

I am implementing Order-Independent Transparency with dual depth peeling, based on http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf.

On NVIDIA cards everything works as expected, but for some reason when I run the program on an ATI FirePro 4900 nothing is visible (everything is transparent).
After a couple of hours I discovered that if I use GL_RGB32F instead of GL_RG32F, it works on ATI too.

Does anybody have any idea what is happening?

This is how I am creating the texture.


        glGenTextures(1, &m_ID);
        glBindTexture(GL_TEXTURE_3D, m_ID);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RG32F, m_Width, m_Height, m_Depth, m_Border, GL_RGB, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); 
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP);

Thanks in advance,

Carlos.

Your code snippet has GL_RG32F matched with GL_RGB, which is likely the issue. Use GL_RG instead.
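Something along these lines should do it (untested sketch, keeping the rest of your parameters as they are):

        // pixel transfer format now matches the two-channel internal format
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RG32F, m_Width, m_Height, m_Depth, m_Border, GL_RG, GL_FLOAT, NULL);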

Also, why a 3D texture? I haven’t implemented dual-depth peeling before, though I have done a regular depth peeling algorithm, and I’m not sure why you’d need a 3D texture over a 2D texture.

Hi Malexander,

Thanks for the reply.

I tried with GL_RG too but it doesn’t change the result (all transparent). Dual depth peeling, or at least my implementation of it, requires two textures for the depth information: one to read from and one to write to. I could use two regular textures or a texture array, but for simplicity (and maybe a bit of laziness too) I used a 3D texture. By the way, for the MSAA version I am using a GL_TEXTURE_2D_MULTISAMPLE_ARRAY and GL_RG32F works fine there (more or less), so maybe ATI doesn’t like 3D textures.

Confirmed: I have replaced my 3D texture with a GL_TEXTURE_2D_ARRAY and it works. I don’t know if this is an ATI driver problem or whether I don’t understand the difference between GL_TEXTURE_2D_ARRAY and GL_TEXTURE_3D.
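For reference, the replacement allocation is roughly this (two layers, one to read from and one to write to; treat it as a sketch rather than my exact code):

        glGenTextures(1, &m_ID);
        glBindTexture(GL_TEXTURE_2D_ARRAY, m_ID);
        // for a 2D array texture the "depth" parameter is the number of layers
        glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RG32F, m_Width, m_Height, 2, 0, GL_RG, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);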

Thanks again Malexander,

Carlos.

It’s likely allowing you to render to one layer of a 2D texture array and read from another layer, but the same doesn’t hold true for a 3D texture. This is probably due to implementation-specific details of how those textures are laid out.
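If it helps, attaching a single layer of the array texture for writing while the shader samples a different layer looks something like this (readLayer/writeLayer are just placeholder names):

        // attach one layer of the array texture as the colour target for this pass
        glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_ID, 0, writeLayer);
        // the shader samples the same array texture, passing readLayer as the layer coordinate,
        // so only the shader logic keeps the read and write layers apart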

The main difference is that a 2D array texture doesn’t perform interpolation or mipmapping in the third dimension, and the third texture coordinate (r) is always clamped regardless of the wrap mode.

According to §9.3 (Feedback Loops Between Textures and the Framebuffer) of the 4.3 core profile specification, the behaviour is undefined if it’s possible for the bound program to sample from a part of a texture which is attached to the bound framebuffer.

If you’re attaching one layer of a 3D texture or 2D array texture to the bound framebuffer, and the shader is reading from the same texture, and the only thing stopping it from reading the “write” layer is that the shader logic (and/or the values of uniforms and attributes) ensures it doesn’t pass the wrong value as the third texture coordinate, then the behaviour is undefined. Maybe it will work, maybe it won’t, and maybe it will vary depending upon any number of factors.
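One way to stay on well-defined ground is to ping-pong between two completely separate 2D textures rather than two layers of the same texture, along these lines (a rough sketch; depthTex, numPasses and the draw call are placeholders):

        GLuint depthTex[2];   // two GL_RG32F GL_TEXTURE_2D objects, created elsewhere
        int readIdx = 0, writeIdx = 1;
        for (int pass = 0; pass < numPasses; ++pass)
        {
            // write to one texture...
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, depthTex[writeIdx], 0);
            // ...while the shader samples the other, so no feedback loop is possible
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, depthTex[readIdx]);
            // ... draw the geometry for this peeling pass ...
            int tmp = readIdx; readIdx = writeIdx; writeIdx = tmp;
        }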
