ATI + FBO + min filter + GenerateMipmapEXT -> artifacts, access violation, crashes

Problem 1:

The following code causes an access violation in atioglxx.dll when it reaches the line with glGenerateMipmapEXT:
	GLuint tex;
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	//glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);     
	glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA16F_ARB,512,512,0,GL_RGBA,GL_FLOAT,0);
	//glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAX_LEVEL, 0); 
	
	GLuint fb;
	glGenFramebuffersEXT(1, &fb);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
	// glBindTexture(GL_TEXTURE_2D, tex);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 
		GL_TEXTURE_2D, tex, 0);	
		
	// optionally render something here
	
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glBindTexture(GL_TEXTURE_2D, tex);
	glGenerateMipmapEXT(GL_TEXTURE_2D);
	
	// render here with tex
	...
	

If you uncomment the line where GL_GENERATE_MIPMAP is set to GL_TRUE, or the line which sets GL_TEXTURE_MAX_LEVEL to 0, glGenerateMipmapEXT does not cause an access violation any more.

But:

glGenerateMipmapEXT on this fp16 texture destroys the texture data!! I rendered a fullscreen quad using the fp16 texture 'tex' after glGenerateMipmapEXT and everything was black. Skipping the call to glGenerateMipmapEXT solved the problem.

But:

If you uncomment the line with glBindTexture(GL_TEXTURE_2D, tex), glGenerateMipmapEXT does not kill the texture data any more!?!? Why is this? tex is already bound to GL_TEXTURE_2D!!!

Problem 2:

	GLuint tex;
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); 
	glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,512,512,0,GL_RED,GL_INT,0);
	
	GLuint fb;
	glGenFramebuffersEXT(1, &fb);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
	
	//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 
		GL_TEXTURE_2D, tex, 0);	
	//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	
	// render something
	...
	
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glGenerateMipmapEXT(GL_TEXTURE_2D);
	
	// render something using the texture 'tex'
	...
	

The texture is created and the min and mag filters are intentionally set to nearest. GENERATE_MIPMAP is enabled so that a full set of mipmaps will be generated automatically.

After glGenerateMipmapEXT the texture 'tex' contains artifacts in every mipmap level. Not calling this function leaves the texture intact and without artifacts.

Uncommenting the glTexParameteri line AFTER glFramebufferTexture2DEXT causes the texture to be black regardless of glGenerateMipmapEXT.

Setting the same texture filter before calling glFramebufferTexture2DEXT makes it work. 

Even
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 
		GL_TEXTURE_2D, tex, 0);	
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	

makes it work! This is completely confusing!!

What does the FBO extension have to do with GL_TEXTURE_MIN_FILTER??? Why is the value of the min filter apparently copied into the color attachment state when glFramebufferTexture2DEXT is called???

Conclusion:
The exhibited behavior is rather confusing, and it even depends on the texture internal format (try fp16 vs. RGBA8).
I think at least some of the exhibited behavior is caused by a bug in the drivers. With a GeForce 6600 I didn’t experience these problems.

What do you all think? Do you experience the same problems with this code?

Forgot to post my machine’s specs:

AMD Athlon 3500+ WinXP SP2
ATI Radeon X800 XT Catalyst 6.3

Another sweet snippet (full source):

#include "SDL.h"
#include <GL/glew.h>

int main(int argc, char* argv[])
{
	SDL_Init(SDL_INIT_VIDEO);
	SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);
	glewInit();
	GLuint tex;
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
	glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,512,512,0,GL_RED,GL_INT,0);

	GLuint fb;
	glGenFramebuffersEXT(1, &fb);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 
		GL_TEXTURE_2D, tex, 1);	
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
	glBindTexture(GL_TEXTURE_2D, tex);
	glGenerateMipmapEXT(GL_TEXTURE_2D);

	SDL_GL_SwapBuffers();
	
	SDL_Event e;
	while (SDL_WaitEvent(&e) && e.type != SDL_QUIT);
	
	SDL_Quit();
	return 0;
}

Guess what happens or try it out yourself!

EDIT: FYI this one crashes my computer at the call to glClear(). The mouse and keyboard do not react any more and the monitor goes black.

We’ve had a bug with FBOs and glGenerateMipmapEXT() but I believe it was recently fixed. This sounds very much like that bug. I’ll check it out tomorrow.

At least in Catalyst 6.3 it still crashes.

I don’t use such cases as posted above (yet), but when uploading textures I wanted to create the mipmaps with glGenerateMipmapEXT, but it always crashed. At the moment I still use automatic mipmap generation instead.

This is on simple RGB8 or RGBA8 textures.

Jan.

Sorry for the delay, but I’ve looked into this problem a bit more. We just recently fixed a couple of glGenerateMipmapEXT() bugs, which should appear in a future driver. The crash bug remains, though, but it only occurs on R420-series cards, not on any R300- or R520-based ones. We should of course never crash, but with that said, it certainly helps if the code in question is sane to begin with. There are a number of bad things there.

  1. You set the filter to no mipmaps when you clearly are asking for mipmaps. The driver may decide to defer mipmap creation (it’s possible the filter never changes, in which case it would be unnecessary to store any mipmaps). It’s recommended that you always set the proper filter as the first thing you do, to help the driver. This will be faster, less prone to bugs, and potentially save some memory.

  2. Why do you pass GL_RED and GL_INT? Do you normally upload red 32-bit integers? Just because you don’t pass any data doesn’t mean you can put just about anything in these parameters. The driver may use them to decide in what format to store the texture to speed up future uploads to this texture.

  3. You’re asking to clear the depth buffer, but you don’t have a depth buffer.

I’ll file a bug report on this issue though.

I have to post here about the glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) call. I have Nvidia, sorry Humus :wink:, but why does he need that line? I don’t use it and everything works fine for my mipmapping with FBOs. I am not using the SGIS mipmapping extension, FYI.

Well, since he’s rendering into mipmap level 1 he needs the mipmaps to exist at that point, which won’t happen without that line since he calls glGenerateMipmapEXT() only after rendering. Of course, I wouldn’t use both GL_GENERATE_MIPMAP and glGenerateMipmapEXT(), and calling glGenerateMipmapEXT() will overwrite anything he just rendered into mipmap level 1 too.

The best way if you want mipmaps is to explicitly create all mipmaps with a bunch of glTexImage2D() calls. That’s easy enough to do in a loop.

[b]1. You set the filter to no mipmaps when you clearly are asking for mipmaps. The driver may decide to defer mipmap creation (it’s possible the filter never changes, in which case it would be unnecessary to store any mipmaps). It’s recommended that you always set the proper filter as the first thing you do, to help the driver. This will be faster, less prone to bugs, and potentially save some memory.[/b]

Ok, this is a good point.


2) Why do you pass GL_RED and GL_INT? Do you normally upload red 32bit integers? Just because you don’t pass any data doesn’t mean you can put just about anything in these parameters. The driver may use them to decide in what format to store the texture to speed up future uploads to this texture.

glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,512,512,0,GL_RED,GL_INT, NULL);

In this call the texture internal format is specified as GL_RGBA8. So the driver should know how to store the texture internally! GL_RED and GL_INT specify the format of the texture data which is about to be uploaded. But the data pointer is NULL so these two parameters should not play a role. The data could be uploaded later by a glTexSubImage call.
But if it makes the driver happier I can still set them to GL_RGBA or GL_BGRA.


3) You’re asking to clear the depth buffer, but you don’t have a depth buffer.

Requesting to clear a buffer which doesn’t exist should be a no-op and not cause a crash.

Originally posted by Humus:
[b] Well, since he’s rendering into mipmap level 1 he needs the mipmaps to exist at that point, which won’t happen without that line since he calls glGenerateMipmapEXT() only after rendering. Of course, I wouldn’t use both GL_GENERATE_MIPMAP and glGenerateMipmapEXT(), and calling glGenerateMipmapEXT() will overwrite anything he just rendered into mipmap level 1 too.

The best way if you want mipmaps is to explicitly create all mipmaps with a bunch of glTexImage2D() calls. That’s easy enough to do in a loop. [/b]
Maybe I am wrong, but shouldn’t you call glGenerateMipmapEXT() during the setup of your FBO and then call it every time you shut down the FBO? If I remember right, glGenerateMipmapEXT() creates all levels of mipmaps? Thanks, and if I am wrong please correct me.

Originally posted by Trenki:
In this call the texture internal format is specified as GL_RGBA8. So the driver should know how to store the texture internally! GL_RED and GL_INT specify the format of the texture data which is about to be uploaded. But the data pointer is NULL so these two parameters should not play a role. The data could be uploaded later by a glTexSubImage call.
But if it makes the driver more happy i can still set them to GL_RGBA or GL_BGRA.

The internal format doesn’t say anything about actual memory layout. That’s totally up to the driver. So when you say RGBA8, the driver may very well store that as BGRA8, ARGB8 internally, depending on what the hardware supports. That’s totally transparent to the application. Some hardware may be able to store a texture with different byte orders. So if you pass GL_BGRA and GL_UNSIGNED_BYTE when you create the texture, the driver will probably assume that this is the source format you’d use when you upload data to it, and thus decide to use BGRA byte ordering to speed up texture upload.

Requesting to clear a buffer which doesn’t exist should be a no-op and not cause a crash.
Sure, absolutely. Still not good practice to do so.

Originally posted by Mars_9999:
Maybe I am wrong, but shouldn’t you call glGenerateMipmapEXT() during the setup of your FBO and then call it every time you shut down the FBO? If I remember right, glGenerateMipmapEXT() creates all levels of mipmaps? Thanks, and if I am wrong please correct me.
glGenerateMipmapEXT() creates all mipmaps; however, it’s better to create them manually with glTexImage2D() since that should be faster.