gluBuild2DMipmaps - when can "internal format" be used?

The OpenGL documentation for GLU says that the second parameter of gluBuild2DMipmaps is the number of components. However, reading an NVIDIA example, and checking in this forum, I saw that this parameter can be used as the texture’s internal format (which is very helpful). Someone mentioned that this has been the case since GLU 1.3.

How can I know if the parameter can be used in this way? Is this supported in older Windows implementations (Windows 95 and NT4)? Is there a runtime check I can make to be sure that this form of the call is supported? I don’t want to use this feature if I don’t know that it’s guaranteed to work.

BTW, where is the GLU spec online?

Thanks,

Eyal

Hi.


Is there a runtime check I can make to be sure that this form of the call is supported?

gluGetString(GLU_VERSION);


BTW, where is the GLU spec online?

http://www.opengl.org/developers/documentation/specs.html (GLU 1.3)

[This message has been edited by Michail Bespalov (edited 04-23-2001).]

Thanks. I guess I should have checked the documentation page first and asked questions later.

For NVIDIA’s release 10 drivers, you can always use SGIS_generate_mipmap:

glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);

This is much faster than gluBuild2DMipmaps(), it works for all formats, it works with glCopyTexSubImage2D(), and much more.

Thanks -
Cass

[This message has been edited by cass (edited 04-23-2001).]

Thanks, cass. Yes, I thought about using it - especially for textures that are generated dynamically. But I’m not optimising for speed yet, and I’d like the program to work on as many cards as possible.

And after thinking about the above a little, querying the GLU version at runtime doesn’t seem to do much good. Yes, gluBuild2DMipmaps may be a little faster than doing the scaling with several GL calls, but OTOH I’d have to implement both methods - resulting in more complicated code.

The main benefit of gluBuild2DMipmaps, as I see it, is in its simplicity. If I can’t be sure that I have the right version when I write the code, then I don’t gain this benefit.

What does the version of GLU depend on? Is it the Windows glu32.dll, or a result of the ICD used? If it’s Windows based, what Windows versions support this functionality?

[This message has been edited by ET3D (edited 04-23-2001).]

I don’t believe there’s an ICD mechanism to override GLU. The GLU you have on Windows systems is probably whatever comes from Microsoft.

Cass

You can always specify an internal format. GLU doesn’t care what internal format you specify – it’s internal to the driver. GLU only cares about the type and format.

  • Matt

Originally posted by mcraighead:
[b]You can always specify an internal format. GLU doesn’t care what internal format you specify – it’s internal to the driver. GLU only cares about the type and format.

  • Matt[/b]

But the parameter is not “internal format”, it is “number of components”. What I think you’re saying is that this parameter has always been passed directly to glTexImage2D’s “internal format” parameter. That makes some sense, since the constants 1, 2, 3, 4 do translate to formats with 1, 2, 3, 4 components when used with glTexImage2D. It would still be nice to know whether this is official behaviour, and that there’s no chance of bumping into some GLU check that will foil the attempt to use an internal format.
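For illustration, the two forms of the call being discussed differ only in the second argument (a fragment, not a complete program; it assumes a current GL context and `pixels` pointing at 256x256 RGBA bytes):

```c
/* Documented form: second argument is a component count (1..4). */
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, 256, 256,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Form under discussion: an explicit sized internal format instead. */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA8, 256, 256,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```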

The best thing IMO is to drop the gluBuild2DMipmaps function and do your own instead. It’s not much effort and is way faster.

It should be completely reliable to pass in any internal format.

The documentation that tells you that the parameter is “components” is bad, old documentation.

  • Matt

That documentation is the OpenGL 1.1 reference manual (Addison-Wesley). But I can see how it could be wrong (I’ve caught errors in it before). I’ll use it as an internal format, then.