ForceWare 56.72 big precision BUG [EDIT: no bug...]

Hi,
I found a very big precision bug in NVIDIA’s ForceWare 56.72 drivers. Here is the code:

/* Set up the glow texture: trilinear filtering, clamped wrapping, and
   automatic mipmap generation via SGIS_generate_mipmap. */
glBindTexture(GL_TEXTURE_2D, glowtex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, glow_tex_size, glow_tex_size, 0,
	GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Grab the framebuffer into the texture (re-triggers mipmap generation). */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, glow_tex_size, glow_tex_size);

/* clear and so on */

/* Accumulate the biased passes additively. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

glEnable(GL_TEXTURE_2D);

// ----

glBindTexture(GL_TEXTURE_2D, glowtex);

glColor3f(1, 1, 1);

/* Draw the same quad 12 times, each pass sampling a smaller
   (blurrier) mipmap level by raising the LOD bias. */
for (int bias = 0; bias < 12; bias++)
{
	glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT,
		GL_TEXTURE_LOD_BIAS_EXT, 0.5f + (float)bias);

	glBegin(GL_QUADS);

	glTexCoord2f(0, 1);
	glVertex2f(0, 0);

	glTexCoord2f(1, 1);
	glVertex2f(1, 0);

	glTexCoord2f(1, 0);
	glVertex2f(1, 1);

	glTexCoord2f(0, 0);
	glVertex2f(0, 1);

	glEnd();
}

I read the pixels back from the screen into the texture, then blur by drawing it repeatedly with an increasing LOD bias on the SGIS-generated mipmaps.
Here is a comparison of the same scene on different drivers:
http://www.czacki.edu.pl/~tweety/precision_bug.jpg
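(To spell out the blur: with GL_LINEAR_MIPMAP_LINEAR, a LOD bias of b shifts sampling to roughly mipmap level b, and level b of an N×N texture holds only (N/2^b)×(N/2^b) texels, so each sample averages about a 2^b×2^b block of the original image. Adding the 12 biased passes together with glBlendFunc(GL_ONE, GL_ONE) stacks progressively wider blurs into the glow.)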

I don’t like 56.72 anymore :) It makes my program look awful :(
You can see that on 56.72 with bias > 8 it puts lighter pixels in the center and black in the corners, while on 53.03 with bias > 8 it puts grey pixels all over the texture. I think it’s possible that the driver is doing something very, very bad with mipmap generation.
What do you think about this?
I’m running it on a GeForce4 4200.

Try GL_CLAMP_TO_EDGE.
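That is, change only the wrap mode in your texture setup (a minimal sketch; everything else stays as you have it):

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);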

No, it didn’t help. It’s definitely the new driver’s fault, because the same code works perfectly on the older drivers (53.03).
I hope they will do something about it.

Have you tried adjusting the quality setting in the Display Properties panel for OpenGL?

Quality is just fine, but thanks to your advice I found a new option: conformant texture clamp. When it is ON, the quality goes down, so I have to turn it off, and then everything is just fine. I have no idea why it is enabled by default.

Thanks for the help :) I knew I could count on the OpenGL forum members :)

It’s on by default because it’s The Right Thing. It says “conformant” because it is just that: conformant to the OpenGL spec.

This option is in there because broken apps were shipped that rely on the broken behaviour of old NVIDIA OpenGL drivers. Congratulations, you’ve just created another broken app!

Do yourself and everyone else you intend to give your application to a favor. Turn that option back on and fix your application (as I’ve said, use GL_CLAMP_TO_EDGE instead of GL_CLAMP).

Sheesh :rolleyes:

If you use that check-box, then your program won’t do the right thing on ATIs, Intels, Wildcats, or most other kinds of hardware.

The right thing is to do what zeckensack said in the very first reply: set your clamping mode to CLAMP_TO_EDGE. You said you tried it, but I don’t believe it, because that does exactly what turning off the conformant texture clamping checkbox does, except in a way that all drivers understand.
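For what it’s worth, the difference you’re seeing is what conformant GL_CLAMP is supposed to do: with linear filtering, samples at the texture’s edge are blended with the border color, which defaults to black, and that is what darkens your corners at high LOD bias. A quick experiment (assuming your glowtex setup from the first post) is to give the border an obvious color and watch it bleed in:

	const GLfloat border[4] = { 1.0f, 0.0f, 1.0f, 1.0f }; /* magenta, easy to spot */
	glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);

GL_CLAMP_TO_EDGE never touches the border color, which is why it fixes the artifact on every conformant driver.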

Hey, thanks everyone!
Changing to GL_CLAMP_TO_EDGE worked! I don’t know why it didn’t work earlier; I think I missed some GL_CLAMPs.
Now I can make my animation. I hope to show it to you soon. :)