How to interrogate the 3D card driver via OpenGL?

I’m looking for a way to interrogate my 3D card driver about its hardware capabilities, such as T&L or mipmapping, for example.

I don’t want to ask about the API’s capabilities but about the hardware-implemented ones.

A function like:
gluCheckExtension(GL_EXTENSION_I_WANT)
doesn’t suit me, because it asks the API whether GL_EXTENSION_I_WANT is supported at all. It could be supported in software.
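For reference, this is roughly what that API-level check looks like (a minimal sketch, assuming a GLU 1.3 implementation for gluCheckExtension and an already-created GL context; GL_EXT_texture3D is just an example token). A positive result only proves that the driver exposes the extension, not that the hardware implements it:

```cpp
// Sketch only: the conventional API-level extension check.
// A positive result means the driver exposes the extension, nothing more.
#include <GL/gl.h>
#include <GL/glu.h>   // gluCheckExtension requires GLU 1.3

bool apiExposes(const char* name)
{
    // Needs a current rendering context, otherwise glGetString returns NULL.
    const GLubyte* exts = glGetString(GL_EXTENSIONS);
    if (!exts)
        return false;
    return gluCheckExtension(reinterpret_cast<const GLubyte*>(name), exts) == GL_TRUE;
}

// Usage, once a context exists:
//   if (apiExposes("GL_EXT_texture3D")) { /* exposed by the API - may still be software */ }
```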

So, does anybody know a way to do such a thing? I know that DirectX is able to interrogate the drivers, so I hope OpenGL can too!

Regards

No, OpenGL in general does not give out that kind of information.

And is there another way?

I hope I’m not getting this the wrong way, but I don’t understand why you want to complicate it so much. If you’re going to make use of some capability like, say, mipmapping anyway (i.e. you’re going to write the code), why make your program automatically choose whether to use it or not? This would just complicate things (more code, messier code, etc.). Instead, just code it and provide an option to disable it. The user can then decide whether he wants it or not. This is simpler and better.

Originally posted by MPech:
And is there another way?

No.

Regards.

Eric

In Win32 apps, you can certainly create a D3D context, query the primary device for things like hardware T&L, then destroy it. However, you can’t guarantee that something exposed as accelerated in D3D will be accelerated in OpenGL - but 99% of the time it will be. Of course, if you do this, you’ll also have to ship the DirectX redistributable with your OpenGL game/app, but you may have to do that anyway if you’re using DirectInput, for example.
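For anyone wanting to try that route, here is a minimal sketch, assuming the DirectX 8 SDK headers (d3d8.h, linked against d3d8.lib). It queries the HAL caps on the primary adapter without creating a full device; as noted above, the result is only a hint about what the OpenGL driver accelerates:

```cpp
// Sketch only: ask the Direct3D 8 HAL about hardware T&L on the primary
// adapter, then release the interface.
#include <d3d8.h>

bool d3dReportsHardwareTnL()
{
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d)
        return false;                        // DirectX 8 runtime not installed

    D3DCAPS8 caps;
    bool hwTnL = false;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        hwTnL = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;

    d3d->Release();
    return hwTnL;                            // says nothing definitive about the GL driver
}
```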

Thanks, but even that small margin of error doesn’t suit me.

It’s surprising that such a good API doesn’t allow programmers to check whether their effect is fully hardware rendered or partly software rendered.

Isn’t it?

[This message has been edited by MPech (edited 03-19-2002).]

Well, err, no - the entire OpenGL 1.1 core should be implemented in hardware - you can count on that much, anyway. As for extended functionality (up to 1.3), you can use the extension mechanism: if a particular extension does not appear in glGetString(GL_EXTENSIONS), then it is not hardware accelerated and you cannot use it. It’s not like D3D, where a software implementation kicks in if a certain feature is not accelerated… it’s on or off in OpenGL - no middle ground (so I believe).
The only thing you might have trouble with is whether the implementation has accelerated T&L - but why should you care? Most drivers have highly optimised software transform and lighting routines if the card doesn’t accelerate it… so you’re covered for that eventuality.
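In code, the check described above comes down to scanning the extension string after a context has been created (a sketch; GL_ARB_multitexture is just an example name):

```cpp
// Sketch of the extension-string check: whole-word search so that
// "GL_EXT_texture" does not accidentally match "GL_EXT_texture3D".
#include <GL/gl.h>
#include <cstring>

bool hasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!exts)
        return false;                        // no current rendering context
    const std::size_t len = std::strlen(name);
    for (const char* p = std::strstr(exts, name); p; p = std::strstr(p + 1, name))
        if ((p == exts || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    return false;
}

// Usage: if (hasExtension("GL_ARB_multitexture")) { /* exposed, probably accelerated */ }
```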

Originally posted by knackered:
Well, err, no - the entire OpenGL 1.1 core should be implemented in hardware.

  • Selection mode is not.
  • Feedback mode is not.
  • Stippling is not.

You simply can’t assume anything about what is accelerated and what is not.

As someone (Matt?) said a while ago, you don’t even care what is accelerated and what is not. Deep down, what you really want to know is whether something is fast or not. The only way to find out is to time specific things at the beginning of your app.
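A rough sketch of that startup-timing idea, assuming hypothetical drawTestScene() and swapBuffers() helpers and an arbitrary frame-rate threshold (a wall-clock timer with better resolution than std::clock would be preferable in practice):

```cpp
// Rough sketch: render a few hundred frames that exercise the feature you
// care about and measure the throughput at application startup.
#include <GL/gl.h>
#include <ctime>

extern void drawTestScene();   // hypothetical: draws using the feature under test
extern void swapBuffers();     // hypothetical: platform-specific buffer swap

bool featureIsFastEnough()
{
    const int frames = 200;
    const std::clock_t start = std::clock();
    for (int i = 0; i < frames; ++i) {
        drawTestScene();
        swapBuffers();
    }
    glFinish();                               // wait until the GL has really finished
    const double seconds = double(std::clock() - start) / CLOCKS_PER_SEC;
    return seconds > 0.0 && (frames / seconds) > 30.0;   // arbitrary acceptance threshold
}
```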

Regards.

Eric

You make a point there, Knackered. :stuck_out_tongue_winking_eye:

But how can you be sure?
The SGI site says:
“there exists no direct way to check whether particular OpenGL functions are hardware accelerated or not”

See the URL: http://www.sgi.com/developers/library/resources/openglxp.html#hw_graphics_conf

under the topic -> 5. Obtaining System and Graphics Configuration Information

So I’m starting to look for a sharp benchmark for testing.

[This message has been edited by MPech (edited 03-19-2002).]

Originally posted by MPech:
[b]Thanks, but even that small margin of error doesn’t suit me.

It’s surprising that such a good API doesn’t allow programmers to check whether their effect is fully hardware rendered or partly software rendered.

Isn’t it?
[/b]

Why? You’re just bound to make silly assumptions based on that info.

Say you deny the maximum-polycount models and fancy effects because of missing transform hardware. What you don’t know is that you’re running on a future 5 GHz monster CPU with 2 GB/s of bandwidth to the graphics subsystem. The hardware vendor further made the perfectly valid tradeoff of incorporating full hardware fragment shaders and saving some silicon real estate by doing vertex shaders in software. Your assumption fails.

IMHO there are exactly three things you need to know about your GL implementation.

1) What buffer depths and types (pbuffer, stencil?) are supported? You can get that info when you create your rendering context.
2) What does the GL version I’m running on regard as core functionality? This will all work, regardless of whether it’s hardware or software; just trust the implementation to choose the optimum path for you (see the sketch below).
3) What extensions are supported on top of the core functionality? These will most likely be done in hardware.
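A small sketch of points 2 and 3, assuming a current GL context (the sscanf-based version parsing is just one way to do it):

```cpp
// Sketch: find out which GL version (and therefore which core functionality)
// the implementation reports, then look at what it exposes on top of that.
#include <GL/gl.h>
#include <cstdio>

void reportImplementation()
{
    int major = 1, minor = 0;
    const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* exts     = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (version)
        std::sscanf(version, "%d.%d", &major, &minor);

    std::printf("Core functionality up to GL %d.%d\n", major, minor);
    std::printf("Renderer  : %s\n", renderer ? renderer : "(none)");
    std::printf("Extensions: %s\n", exts ? exts : "(none)");
}
```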

The only Right Thing ™ to do is (as has already been mentioned) to give the user the choice when in doubt. He/she can determine far better what acceptable performance is.

Thanks, “professor”, but these are elements that I already know…
I didn’t ask you to answer if you didn’t want to.

No need to answer this one; I quit.

Originally posted by MPech:
[b]Thanks, “professor”, but these are elements that I already know…
I didn’t ask you to answer if you didn’t want to.

No need to answer this one; I quit.[/b]

Sorry, I didn’t mean to patronize. I’m not qualified anyway.
I didn’t see the previous answers because I typed too slowly.

Eric:

  • Selection mode is not.
  • Feedback mode is not.
  • Stippling is not.

Didn’t get very far with that list, did you?
I wouldn’t regard any of those things as part of OpenGL. OpenGL is there to render triangles… full stop. I stopped using all those things a long while ago, along with the matrix stack, the translate/rotate functions, and everything else that makes it easy to ‘learn’ OpenGL.
There just isn’t a need for them - in fact, using them makes it pretty difficult to produce an abstract renderer dll, because that kind of functionality should not be in a renderer. My predecessors’ renderer was sprawled out everywhere in the simulation, because the simulation relied on too much information from the renderer.
I hope you know what I mean, even though it’s a little off the topic of this thread.

knackered,

I agree with you: my list wasn’t that useful… I just wanted to show that even simple statements like “GL 1.1 is HW accelerated” can be wrong. I think what you wanted to say is that triangle rasterization is HW accelerated!

But I think it is fair to say that whether a feature is HW accelerated or not is irrelevant. What you really want to know about is speed. You don’t actually mind where the processing is done.

That being said, I’d be happy if someone could explain to me why he’d prefer something slow but HW accelerated over something fast on the CPU… (OK, OK, you might want to offload some work from the CPU).

Regards.

Eric

P.S.: got something else for my list: I think edge flags switch the SW rasterizer on (at least they slow apps down quite a lot!)…

A beautiful demonstration:
3D textures.

Nicely supported on my GF2 MX, but every demo using them runs at 2 min per frame… not that fast.
Think of someone who wants to implement distance attenuation. On GPUs that support 3D textures he’ll use a 3D texture, on the others a 2D-1D combination. Now on the GF2 MX he will use the ****ing slow 3D texture, because he will find it in the extension string…

But I thought that with WGL_ARB_pixel_format (or whatever it’s called) you can specify “fully hardware” somewhere, and then it should not be enabled… but I’m not sure about this…
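To illustrate the dilemma: a path-selection sketch driven purely by the extension string (hasExtension() is the whole-word check sketched earlier in the thread; the enum and function names are made up for illustration). On a GF2 MX this happily picks the unaccelerated 3D-texture path:

```cpp
// Sketch: choose the distance-attenuation path from the extension string
// alone. The extension being listed says nothing about it being accelerated.
#include <GL/gl.h>

extern bool hasExtension(const char* name);

enum AttenuationPath { PATH_3D_TEXTURE, PATH_2D_PLUS_1D };

AttenuationPath chooseAttenuationPath()
{
    // On a GF2 MX this still selects the painfully slow 3D-texture path.
    if (hasExtension("GL_EXT_texture3D"))
        return PATH_3D_TEXTURE;
    return PATH_2D_PLUS_1D;
}
```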

Originally posted by davepermen:
[b]A beautiful demonstration:
3D textures.

Nicely supported on my GF2 MX, but every demo using them runs at 2 min per frame… not that fast.
Think of someone who wants to implement distance attenuation. On GPUs that support 3D textures he’ll use a 3D texture, on the others a 2D-1D combination. Now on the GF2 MX he will use the ****ing slow 3D texture, because he will find it in the extension string…[/b]
But I thought that with WGL_ARB_pixel_format (or whatever it’s called) you can specify “fully hardware” somewhere, and then it should not be enabled… but I’m not sure about this…

Great example, very confusing indeed. Is it hardware and just too slow? I’ll presume it falls back to software for the following.
This is probably due to the fact that NVIDIA (all caps!) has a GL 1.3 driver for the whole product lineup. Since 3D textures are core functionality in 1.3, they have to offer them; otherwise they, well, wouldn’t have version 1.3.
When exactly were they introduced into the core spec?
Maybe it would be better in this case not to advertise version 1.3 (or whatever it is) support on these chips; the promoted extensions have to be exposed anyway for backwards compatibility reasons. Or is that asking too much?

edit
Whoops, just checked the docs. Taking 3D textures out of the picture would mean falling back to version 1.1.

[This message has been edited by zeckensack (edited 03-19-2002).]

No, it would be very silly for us to not support GL 1.3 on our chips just because not every single feature is accelerated. By that logic, we shouldn’t bother supporting GL at all, seeing as we don’t accelerate all GL 1.0 features.

We hint at the fact that 3D textures are not accelerated on the chips in question by not going out of our way to expose EXT_texture3D, but this is only a hint at best.

There is no way to query whether a feature is accelerated, and I am opposed to any extension to add such a query, for what that’s worth.

  • Matt

Personally, I don’t understand what the problem is…

It’s not like there are 36 different OpenGL systems in every house.
Come on! Most people have an NVIDIA or an ATI card. Don’t you know what is and isn’t accelerated on them? (Let alone the 3D “pro” market, where you should know by heart what every 3D card can do.)
“Minimum system requirement” is the key word.

Letting the user decide what he wants to enable is a good thing, but not every user knows what a 3D card is, let alone advanced OpenGL extensions.
User settings are there to let the user decide on the right balance between quality and performance; they are not meant to help you with programming.

PS: This makes me think of people using CPUID to check the CPU flags and then asking on forums what they should do if CPUID isn’t supported by the CPU.
Who cares what would happen with your Win32 app and CPUID if it were run on a 386 under MS-DOS; it wouldn’t even start anyway.

Letting the user decide what he wants to enable is a good thing, but not every user knows what a 3D card is, let alone advanced OpenGL extensions.
User settings are there to let the user decide on the right balance between quality and performance; they are not meant to help you with programming.

I’m not saying that you should have a config menu with lots of obscure names. You can always create a simplified frontend for users who don’t know much about 3D cards, so they can adjust a texture quality slider or choose between bilinear and trilinear filters (most gamers know these terms anyway). But then again, nothing stops you from letting the user choose the exact filter to use for minification/magnification, mipmaps, etc. That way the engine’s core is simple and robust, and the frontend config menu just helps novice users by filling in the details for them. Therefore you help both yourself and the users. I would be really ticked off if I wanted to see, just see, how a cool new game looked with mipmapping (or whatever) and just couldn’t, because the developer had hardcoded the “best” choices even though the driver could have supported it, however slowly, with less effort from the programmer. That’s all I’m saying - and besides, isn’t that what Quake does?
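As a sketch of that frontend idea (the quality scale and function name are made up; the filter enums are standard GL), a single slider can map onto the exact filter parameters that an advanced menu would expose directly:

```cpp
// Sketch of a one-knob frontend: a simple "texture quality" setting that the
// config menu maps onto exact GL filter parameters.
#include <GL/gl.h>

void applyTextureQuality(int quality)        // 0 = fastest ... 2 = nicest
{
    GLint minFilter;
    switch (quality) {
        case 0:  minFilter = GL_NEAREST_MIPMAP_NEAREST; break;  // point-sampled mipmaps
        case 1:  minFilter = GL_LINEAR_MIPMAP_NEAREST;  break;  // bilinear
        default: minFilter = GL_LINEAR_MIPMAP_LINEAR;   break;  // trilinear
    }
    // Applies to the currently bound 2D texture object.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```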