Using 32-bit bitmaps with an alpha channel as a texture - Transparency problem

Hello, World!
I’m developing a small game with a friend and we have a small but important problem.
We are trying to get transparency working with 32-bit bitmap textures, but without success. We think the problem is in the depth test or in the blending, but we are beginners in OpenGL, so we can’t solve it ourselves. Hope you can :wink:
Here is the code of our draw function:

    void SC_OBJ::sDrawOBJ()
    {
        glTranslatef(0, 0, -1.0f);
        glEnable(GL_TEXTURE_2D);
        //glDisable(GL_COLOR_MATERIAL);
        glColor3f(1, 1, 1);
        for(int drawed = 0; drawed < facenum; drawed++)
        {
            if(triangle[drawed].busemtl == true)
            {
                if(material[triangle[drawed].materialindex].bmap_d == true)
                {
                    glDisable(GL_DEPTH_TEST); // Maybe here is our problem?
                    glEnable(GL_BLEND);
                    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
                }
                usetex = material[triangle[drawed].materialindex].map_Kd_index;
                stexture[usetex].sUseTexture();
                glEnable(GL_DEPTH_TEST);
            }

            glBegin(GL_TRIANGLES);
            if(normalnum > 0)   glNormal3f(triangle[drawed].nx[0], triangle[drawed].ny[0], triangle[drawed].nz[0]);
            if(texcoordnum > 0) glTexCoord2f(triangle[drawed].s[0], triangle[drawed].t[0]);
            glVertex3f(triangle[drawed].vx[0], triangle[drawed].vy[0], triangle[drawed].vz[0]);
            if(normalnum > 0)   glNormal3f(triangle[drawed].nx[1], triangle[drawed].ny[1], triangle[drawed].nz[1]);
            if(texcoordnum > 0) glTexCoord2f(triangle[drawed].s[1], triangle[drawed].t[1]);
            glVertex3f(triangle[drawed].vx[1], triangle[drawed].vy[1], triangle[drawed].vz[1]);
            if(normalnum > 0)   glNormal3f(triangle[drawed].nx[2], triangle[drawed].ny[2], triangle[drawed].nz[2]);
            if(texcoordnum > 0) glTexCoord2f(triangle[drawed].s[2], triangle[drawed].t[2]);
            glVertex3f(triangle[drawed].vx[2], triangle[drawed].vy[2], triangle[drawed].vz[2]);
            glEnd();
        }
        glDisable(GL_BLEND);
    }

And here is a part from our texture loader:

    glGenTextures(1, &uTEXTURE[0]);
    glBindTexture(GL_TEXTURE_2D, uTEXTURE[0]);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, texturedata.width, texturedata.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texturedata.imageData);

Here are a couple of screenshots to show what exactly happens…

proxict.rajce.idnes.cz/OpenGL_screenshots#Screen0.jpg - This is not how it should look… // in our scene

proxict.rajce.idnes.cz/OpenGL_screenshots#screen2.jpg - This is not how it should look… // in our scene

proxict.rajce.idnes.cz/OpenGL_screenshots#Sniper0001.jpg - Rendered picture in Cinema 4D

proxict.rajce.idnes.cz/OpenGL_screenshots#Sniper2_0001.jpg - Another rendered picture in Cinema 4D

Thanks in advance…

You disable the depth test when using blending, but then re-enable it almost immediately:


            if(material[triangle[drawed].materialindex].bmap_d == true)
            {
                glDisable(GL_DEPTH_TEST); // Maybe here is our problem?
                glEnable(GL_BLEND);
                glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
            }
            usetex = material[triangle[drawed].materialindex].map_Kd_index;
            stexture[usetex].sUseTexture();
            glEnable(GL_DEPTH_TEST);

The problem with depth testing is that the test returns a yes or a no: either the fragment is obscured or it isn’t. If it’s obscured, it won’t be drawn; if it isn’t, it will be. There’s no “draw it, but partially obscured by some other fragment with an intermediate alpha value”.
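
A common baseline fix (see also the last suggestion below) is to draw everything opaque first with depth writes on, then draw the blended triangles with the depth test still enabled but depth writes turned off via glDepthMask(GL_FALSE), so translucent surfaces are occluded by opaque ones without corrupting the depth buffer. A minimal sketch, where DrawOpaque() and DrawBlended() are hypothetical helpers standing in for the two halves of the triangle loop above:

    #include <GL/gl.h>

    void DrawOpaque();  // hypothetical: draws the non-blended triangles
    void DrawBlended(); // hypothetical: draws the triangles with bmap_d set

    void DrawScene()
    {
        // Pass 1: opaque geometry, normal depth test and depth writes.
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        DrawOpaque();

        // Pass 2: translucent geometry. Keep the depth test so opaque
        // geometry still hides it, but stop writing depth so translucent
        // fragments can't obscure each other in the depth buffer.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);
        DrawBlended();

        glDepthMask(GL_TRUE); // restore state
        glDisable(GL_BLEND);
    }

This alone doesn’t order the translucent triangles among themselves; getting that right is what the options below address.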

If you want to render translucent surfaces correctly, you basically have three options:

[LIST=*]
[li] Render from back to front, with glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA).
[/li][li] Render from front to back with glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE), using pre-multiplied alpha and a framebuffer with an alpha channel (the destination alpha accumulates the coverage of the surfaces already drawn).
[/li][li] Use “depth peeling”, a technique involving rendering in multiple passes using two depth buffers. On each pass, the nearest fragment which hasn’t already been drawn is rendered. This is similar to the second option, but depth-sorts individual fragments rather than polygons.
[/li][/LIST]
The advantage of depth peeling is that you don’t have to depth-sort the polygons. The disadvantage is that it requires shaders, framebuffer objects and depth textures (i.e. OpenGL 3.x), as well as multiple passes.

The practicality of the first two options depends upon the geometry. If the data is in a BSP tree or similar structure, sorting the polygons is straightforward. Otherwise, you may get away with sorting polygons by their nearest/farthest/average Z coordinate, but doing it correctly requires a topological sort, which in turn may require splitting polygons in order to break cycles in the dependency graph.
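
As a rough illustration of the average-Z approach, here is a sketch only; TriangleRef, its eyeZ field and DrawTriangle() are hypothetical, with eyeZ being the eye-space Z of each triangle’s centroid after the modelview transform:

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    struct TriangleRef {
        int   index; // index into the mesh's triangle array
        float eyeZ;  // centroid Z in eye space (negative in front of the camera)
    };

    void DrawTriangle(int index); // hypothetical: emits one textured triangle

    // More negative eye-space Z is farther from the camera, so an
    // ascending sort yields back-to-front order.
    static bool FartherFirst(const TriangleRef& a, const TriangleRef& b)
    {
        return a.eyeZ < b.eyeZ;
    }

    void DrawSortedBackToFront(std::vector<TriangleRef>& tris)
    {
        std::sort(tris.begin(), tris.end(), FartherFirst);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE); // keep the depth test, stop depth writes

        for (size_t i = 0; i < tris.size(); ++i)
            DrawTriangle(tris[i].index);

        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }

As noted, this breaks down when triangles overlap cyclically or interpenetrate; only a topological sort (with splitting) handles those cases correctly.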

If you have to render arbitrary meshes and can’t afford the expense or complexity of depth-sorting the polygons, you can disable depth testing and use a blending mode which doesn’t depend upon order (e.g. GL_ONE,GL_ONE for addition). Or if you only need the front-most polygons, you can render into a separate buffer, with depth-testing and without blending, then composite the result with blending.
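
For the order-independent case, the state setup is tiny; a minimal sketch:

    // Additive blending: dst = src + dst is commutative, so the order in
    // which the translucent triangles are drawn no longer matters. Best
    // suited to "glowing" effects, since colors accumulate toward white.
    glDisable(GL_DEPTH_TEST); // or glDepthMask(GL_FALSE) to keep occlusion by opaque geometry
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    // ... draw the translucent triangles in any order ...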

Could you send us some code for one of those options? As I said, we are fairly new to OpenGL, so we don’t understand it much… We have tried a lot to get it to work, but not really successfully. The first time we wanted to use masking, but that didn’t work well either. What would you prefer to use, masking or a 32-bit bitmap? I exported the model you can see on the screenshots from Cinema 4D, where the transparency works using a black-and-white mask… It may sound like we are lazy and don’t want to do it ourselves, but we really have tried a lot and couldn’t get it to work… Please, could you send us an example of how to use it?
Thank you very much for your reply.

None of the three options above is simple, and I don’t have code which could realistically be used as an example (either it’s part of a much larger program from which it can’t reasonably be extracted, or it’s code which I don’t have the right to distribute, or both).

If masking (i.e. alpha-test) is sufficient (i.e. you don’t have large areas which are supposed to be translucent), I’d use that. It’s a great deal simpler than any of the approaches which are required to make general-case translucency work.
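
In the fixed-function pipeline that’s just a couple of calls; a minimal sketch (the 0.5 threshold is an arbitrary cut-off you’d tune for your masks):

    // Alpha test: fragments whose texture alpha is at or below 0.5 are
    // discarded outright. There is nothing to blend, so no sorting is
    // needed and the depth test can stay enabled throughout.
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);
    glEnable(GL_DEPTH_TEST);
    // ... draw the masked geometry as usual ...
    glDisable(GL_ALPHA_TEST);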

If you need general-case translucency and you can assume OpenGL 3 support, search the web for “depth peeling”; the first result should be nVidia’s original paper. If you don’t understand it, you just need to spend more time learning OpenGL. Contrary to what some books might promise, you can’t actually learn 3D graphics programming (to any reasonable level) in 28 days.

Failing that, simply disabling depth tests/writes will produce results which aren’t as blatantly wrong as you’ll get with them enabled.

Ok, thank you for your fast reply :wink: