GL_GENERATE_MIPMAP_SGIS

I want to make a copy of my frame buffer and use the mipmaps in the texture to do some funky effects.

Now I found GL_GENERATE_MIPMAP_SGIS in an NVIDIA doc, which says the hardware will generate the mipmap levels for you. Now I want to render a particular mipmap level back to the screen; how do I set which mipmap level will be used when rendering?

try Texture LOD bias

Or you could just set both TEXTURE_BASE_LEVEL_SGIS and TEXTURE_MAX_LEVEL_SGIS to the desired mipmap level.
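For instance, something like this (assuming the SGIS tokens are defined, e.g. via glext.h; level 2 is just an example):

glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, 2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL_SGIS, 2);
// only level 2 can now be sampled when drawing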

Nico

in my render target set up (when i create my texture):
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);

each time i bind the texture:

glTexParameteri(GL_TEXTURE_2D, TEXTURE_BASE_LEVEL_SGIS, 1);
glTexParameteri(GL_TEXTURE_2D, TEXTURE_MAX_LEVEL_SGIS, 1);

Hrmm, seems the mipmaps aren't automatically generated :-/ I just get white when specifying other mipmap levels.

Do you enable generate-mipmap BEFORE you upload the texture data? Mipmaps are generated at the time of data upload to surface level 0, and only if the flag has already been enabled at that point.
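For example, a minimal creation-time sketch (texture name, size and format are just placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Enable automatic generation BEFORE the first upload...
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
// ...so this allocation of level 0, and every later glTexSubImage2D /
// glCopyTexSubImage2D into it, (re)generates levels 1..n
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);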

Also, what does glGetError() say at the various points in your program? You really should sprinkle assert(!glGetError()) everywhere!
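Something like this, for instance (a sketch, assuming C++ and <cassert>):

#include <cassert>

// Assert-style GL error check; sprinkle it after suspect calls.
#define GL_CHECK() assert(glGetError() == GL_NO_ERROR)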

Originally posted by execom_rt:
try Texture LOD bias
I’m sorry to tell you it won’t do the trick.

supagu : are you calling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL_SGIS, 1);
before or after calling glBindTexture ?

Also, there is a typo in the source code you posted: you omitted the GL_ prefix before TEXTURE_BASE_LEVEL_SGIS and TEXTURE_MAX_LEVEL_SGIS, but I guess this is not important to your problem.

I tried TEXTURE_BASE_LEVEL_SGIS and TEXTURE_MAX_LEVEL_SGIS both before and after binding the texture; both result in white. In both cases using the value of 0 uses the topmost level, so I don't think order is important there.

I put in a few error checks and nothing came up there either.

As for the defines, yeah, I put the defines in my code, taken from the extension guide (but I've added the GL_ prefixes now).

References I've been using:
http://developer.nvidia.com/attach/6495

If I set GL_GENERATE_MIPMAP_SGIS to true when I make my texture, and after I bind it (before I copy to it), I do manage to get some output in the mipmap levels, but I don't think it is what it should be:

level 0 (looks good):
pic

level 1 (wtf?):
pic

Without knowing how you render the image, what graphics card and driver you use, or anything else, it’s impossible to say what that is. Could be a bug in your program for sure.

I have to disagree with -NiCo-: setting the base and max level should, as a matter of fact, only generate the desired mip level (see below), but to display it you'll definitely need to tweak the LOD, as pointed out by execom_rt.

And then, I'll disagree with jwatte (but maybe I didn't completely understand what he meant), because you can perfectly well specify glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE); after the texture data has been uploaded in the first place. The mip levels will be regenerated each time something changes in your base level.

I've been looking for a way to generate only the mip level I was interested in for some time now, and there's no way to do that (actually, I didn't find any, but maybe there is one). So you have to specify the base level at 0, and the max level at your desired level.

Basically, the code looks like this:

// Bind target texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, ScreenRenderTexture);

// Size of grab
const float TexSize = 1024.0f;
const float ScreenGrabXf = static_cast<float>(RenderViewWidth);
const float ScreenGrabYf = static_cast<float>(RenderViewHeight);
const int   ScreenGrabXi = static_cast<int>(ScreenGrabXf);
const int   ScreenGrabYi = static_cast<int>(ScreenGrabYf);

// Grab screen into level 0; with GL_GENERATE_MIPMAP_SGIS enabled,
// this also regenerates the levels above it
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, ScreenGrabXi, ScreenGrabYi);

// Texture LOD bias selects the blurrier levels at draw time
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, BlurRate * MaxLODBias);

// Draw fullscreen quad (texture is TexSize x TexSize, grab may be smaller)
glBegin(GL_QUADS);
	glTexCoord2f(ScreenGrabXf / TexSize, ScreenGrabYf / TexSize);	glVertex2f( 1.0f,  1.0f);
	glTexCoord2f(0.0f, ScreenGrabYf / TexSize);			glVertex2f(-1.0f,  1.0f);
	glTexCoord2f(0.0f, 0.0f);					glVertex2f(-1.0f, -1.0f);
	glTexCoord2f(ScreenGrabXf / TexSize, 0.0f);			glVertex2f( 1.0f, -1.0f);
glEnd();
 

And now, I might be wrong … but this worked for me.

SeskaPeel.

Originally posted by supagu:
I tried TEXTURE_BASE_LEVEL_SGIS and TEXTURE_MAX_LEVEL_SGIS both before and after binding the texture; both result in white. In both cases using the value of 0 uses the topmost level, so I don't think order is important there.
That’s strange. It sounds like the driver goes into “incomplete mipmap” mode, but it shouldn’t. If your mipmap is complete without touching these, it stays complete when you adjust TEXTURE_{BASE|MAX}_LEVEL_SGIS.

Driver bug?

Originally posted by supagu:
I put in a few error checks and nothing came up there either.

As for the defines, yeah, I put the defines in my code, taken from the extension guide (but I've added the GL_ prefixes now).

References I've been using:
http://developer.nvidia.com/attach/6495

If I set GL_GENERATE_MIPMAP_SGIS to true when I make my texture, and after I bind it (before I copy to it), I do manage to get some output in the mipmap levels, but I don't think it is what it should be:

level 0 (looks good):
pic

level 1 (wtf?):
pic
What’s your graphics card and driver version? What’s the texture format?

I’ve seen garbage coming out of mipmap generation on a GeForce FX / ForceWare 61.77 for paletted (GL_COLOR_INDEX8_EXT) textures. Direct color formats work fine though.

Originally posted by SeskaPeel:
I have to disagree with -NiCo-: setting the base and max level should, as a matter of fact, only generate the desired mip level (see below), but to display it you'll definitely need to tweak the LOD, as pointed out by execom_rt.

The specification states:

If the value of texture parameter GENERATE_MIPMAP is TRUE, making any change to the interior or border texels of the level_base array of a mipmap will also compute a complete set of mipmap arrays (as defined in section 3.8.10) derived from the modified level_base array. Array levels level_base + 1 through p are replaced with the derived arrays, regardless of their previous contents. All other mipmap arrays, including the level_base array, are left unchanged by this computation.

with p = max{n, m, l} + level_base

level_max is only used for display purposes and does not constrain the mipmap generation in any way.

So as long as you set level_base to the level you wish to update, then perform a copytex or teximage call to level_base, all mipmap levels from level_base + 1 and up are generated.

So he has to set level_base to 0 each time he specifies the texture at level zero (whether by teximage, texsubimage, copytex…, etc.) and set level_base back to 1 when he wants to display it. level_max can remain 1 at all times.
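In code, the flow would look roughly like this (a sketch reusing supagu's names screenTexture, width and height):

// Update pass: level_base must be 0 so the copy into level 0
// triggers regeneration of levels 1..n
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, 0);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

// Display pass: clamp sampling to level 1 only
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL_SGIS, 1);
// ... draw the quad ...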

Nico

Originally posted by SeskaPeel:
I've been looking for a way to generate only the mip level I was interested in for some time now, and there's no way to do that (actually, I didn't find any, but maybe there is one).
You mean it's impossible to generate only one "automatically", right? Because you still have the glTexImage2D function for specifying one desired level and only that one.

After reading the GL_SGIS_texture_lod spec again, it seems that GL_TEXTURE_MIN_LOD_SGIS and GL_TEXTURE_MAX_LOD_SGIS are the right ones to use for this application, not GL_TEXTURE_BASE_LEVEL_SGIS and GL_TEXTURE_MAX_LEVEL_SGIS.
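For instance (a sketch; note these are floating-point parameters, and level 1 is just an example):

// Clamp lambda so that only (approximately) level 1 is sampled
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD_SGIS, 1.0f);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD_SGIS, 1.0f);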

Vincoof, they should both have the same effect. I should point out that, in contrast to what the naming might make you believe, min LOD and max LOD actually count as an offset from the base level.

By setting min_lod and max_lod to the desired level ML, lambda is clamped to ML, and the accessed level d at draw time is computed as

d = ceil(level_base + lambda + 1/2) - 1
  = ceil(level_base + ML + 1/2) - 1
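For example, with level_base = 0 and ML = 2: d = ceil(0 + 2 + 1/2) - 1 = ceil(2.5) - 1 = 2, so level 2 is the one that gets sampled.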

Because this is still dependent on level_base, I decided to go with the level_base and level_max approach to keep it simple and general.

If the (copy)tex calls only use level 0 as the level parameter, using min_lod and max_lod would indeed be the easiest way to do it.

Nico

OK, thanks for the clarification.
Makes more sense now.

I have a Radeon 9600 XT.
Driver version: 4.9

So everything I have tried doesn't work :-/

Here's my actual code:

	glActiveTextureARB(GL_TEXTURE0_ARB);
	glEnable(GL_TEXTURE_2D);

	// copy the frame buffer to my screen sized texture
	Engine::GetInstance()->GetDevice()->CheckErrors();

	glBindTexture(GL_TEXTURE_2D, screenTexture);
	glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);

	glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
	
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL_SGIS, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL_SGIS, 0);

	Engine::GetInstance()->GetDevice()->CheckErrors();

	// then render it back to 20% size
	glMatrixMode(GL_PROJECTION);
	glPushMatrix();
	glLoadIdentity();
	Engine::GetInstance()->GetDevice()->SetOrtho();
	glLoadIdentity();

	glBegin(GL_QUADS);
	{
		glTexCoord2f(1.0f, 1.0f);
		glVertex3f(1.0f, 1.0f, 0.5f);

		glTexCoord2f(1.0f, 0.0f);
		glVertex3f(1.0f, -1.0f, 0.5f);

		glTexCoord2f(0.0f,0.0f);
		glVertex3f(-1.0f, -1.0f, 0.5f);

		glTexCoord2f(0.0f, 1.0f);
		glVertex3f(-1.0f, 1.0f, 0.5f);
	}
	glEnd();

	glMatrixMode(GL_PROJECTION);
	glPopMatrix();

Supagu,

I have seen buggy ATI implementations of GL_GENERATE_MIPMAP_SGIS and texture base level before. Seems my devrel inquiry didn't go through to the driver team. :)

Basically, I tried the same thing you are doing back in 2002, for full-screen glow effects (by blending in some mipmap levels).

The solution for ATI drivers is to set the texture parameter GL_GENERATE_MIPMAP_SGIS before the first glTexImage() call.

GL_TEXTURE_BASE_LEVEL_SGIS doesn't work on ATI; use the environment parameter GL_TEXTURE_LOD_BIAS_EXT (yes, you need to reset it every time you bind another texture).
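Something along these lines (a sketch; desiredLevel is a hypothetical variable):

// Select a mip level via LOD bias; reset per texture bind
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, (float)desiredLevel);
// ... draw the quad ...
// Restore the default bias before drawing other textures
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 0.0f);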

OK, I tried:

glTexEnvi(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 2);

which changed the render of my scene (i.e. it looks like OpenGL used the smaller mipmaps on all rendered objects), so I changed it back after setting it on my post effect. But it doesn't actually seem to affect the texture I render to the quad :-/

Did you forget to specify GL_LINEAR_MIPMAP_xxx?
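For reference, a minimal sketch of the filter state mipmap sampling needs (with a non-mipmap min filter, levels above the base are never sampled, so LOD bias and level clamping have no visible effect):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);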

Take Seska's code as an example; it looks right.

Here is, from my sources, a function that returns an "accumulation texture" back to the framebuffer at various mipmap levels. I use GL_TEXTURE_LOD_BIAS_EXT, and it worked on both NVIDIA and ATI.

void GL_GRAPHOUT_INSTANCE::AccuReturn( unsigned bias )
{
  // Copies the accumulation texture back to the framebuffer.
  // F = A

  float s0 = 0;
  float t0 = 0;
  float s1 = Current.AccuImgWidth[ Current.Accu ] / mAccuWidth[ Current.Accu ];
  float t1 = Current.AccuImgHeight[ Current.Accu ] / mAccuHeight[ Current.Accu ];

  glTexEnvf( GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias );

  glBegin( GL_QUADS );
    glTexCoord2f( s0, t0 ); glVertex2f( -1, -1 );
    glTexCoord2f( s0, t1 ); glVertex2f( -1, +1 );
    glTexCoord2f( s1, t1 ); glVertex2f( +1, +1 );
    glTexCoord2f( s1, t0 ); glVertex2f( +1, -1 );
  glEnd();
}
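A call like AccuReturn(2) would then draw the accumulation texture biased two levels down (assuming, as above, that the bound texture has a mipmap min filter).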

Yes, that's it!

I didn't have mipmap filtering set up.

Thanks heaps, guys :)