
View Full Version : How to interrogate a 3D card driver via OpenGL?



MPech
03-19-2002, 03:26 AM
I'm looking for a way to interrogate my 3D card's driver about its hardware capabilities - T&L or mipmapping, for example.

I don't want to ask about the API's capabilities, but about the ones actually implemented in hardware.

A function like:
gluCheckExtension(GL_EXTENSION_I_WANT)
doesn't suit me, because it asks the API whether GL_EXTENSION_I_WANT is supported at all. It could be supported in software only.
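For illustration, the API-level check I mean is roughly this (just a sketch; GL_EXT_texture3D is only an example name, and gluCheckExtension needs GLU 1.3):

#include <GL/gl.h>
#include <GL/glu.h>   /* gluCheckExtension is a GLU 1.3 function */

int ApiExposesTexture3D(void)
{
    /* This only says the extension is exposed by the implementation,
       not whether it runs in hardware or in software. */
    const GLubyte *extensions = glGetString(GL_EXTENSIONS);
    return gluCheckExtension((const GLubyte *)"GL_EXT_texture3D", extensions);
}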

So, does anybody know a way to do such a thing? I know that DirectX can interrogate the driver, so I hope OpenGL can too!

Regards

DFrey
03-19-2002, 03:33 AM
No, OpenGL in general does not give out that kind of information.

MPech
03-19-2002, 03:45 AM
And is there another way?

zen
03-19-2002, 03:50 AM
I hope I'm not getting this the wrong way, but I don't understand why you want to complicate it so much. If you're going to make use of some capability, say mipmapping, anyway (i.e. you're going to write the code), why make your program automatically choose whether to use it or not? That would just complicate things (more code, messier code, etc.). Instead, just code it and offer an option to disable it. The user can then decide whether he wants it or not. That's simpler and better.

Eric
03-19-2002, 03:55 AM
Originally posted by MPech:
And is there another way?

No.

Regards.

Eric

knackered
03-19-2002, 04:02 AM
In Win32 apps, you can certainly create a D3D context, query the primary device for things such as hardware T&L, then destroy it. You can't guarantee that something exposed as accelerated in D3D will also be accelerated in OpenGL - but 99% of the time it will be. Of course, if you do this, you'll also have to ship the DirectX redistributable with your OpenGL game/app, but you may have to do that anyway if you're using DirectInput, for example.
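In case it helps, a rough sketch of what I mean (Direct3D 8, C++; assumes the DX8 SDK is installed, error handling left out):

#include <d3d8.h>

bool HasHardwareTnL()
{
    // Create a D3D object just to read the HAL device caps, then release it.
    IDirect3D8 *d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d)
        return false;

    D3DCAPS8 caps;
    bool hwTnL = false;
    if (d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps) == D3D_OK)
        hwTnL = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;

    d3d->Release();
    return hwTnL;
}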

MPech
03-19-2002, 06:11 AM
Thanks, but even that small margin of error doesn't suit me.

It's surprising that such a good API doesn't let programmers check whether their effect is rendered fully in hardware or partly in software.

Isn't it?

[This message has been edited by MPech (edited 03-19-2002).]

knackered
03-19-2002, 06:23 AM
Well, err, no - the whole of OpenGL 1.1 should be implemented in hardware; you can at least query the version for that. As for extended functionality (up to 1.3), you can use the extension mechanism: if a particular extension does not appear in glGetString, then it is not hardware accelerated and you cannot use it. It's not like D3D, where a software implementation kicks in if a certain feature is not accelerated... in OpenGL it's on or off - no middle ground (or so I believe).
The only thing you might have trouble with is whether the implementation has accelerated T&L - but why should you care? Most drivers have highly optimised software transform and lighting routines if the card doesn't accelerate it... so you're covered for that eventuality.
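Roughly, the checks I mean look like this (just a sketch; GL_ARB_multitexture is only an example, and the naive substring test can false-positive on extension name prefixes):

#include <GL/gl.h>
#include <string.h>

/* Naive substring check against the extension string. */
int HasExtension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

void CheckImplementation(void)
{
    /* Needs a current rendering context. */
    const char *version = (const char *)glGetString(GL_VERSION);  /* e.g. "1.3.0" */
    if (version && HasExtension("GL_ARB_multitexture"))
    {
        /* the extension is exposed, so it can be used */
    }
}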

Eric
03-19-2002, 06:27 AM
Originally posted by knackered:
Well, err, no - the whole of OpenGL 1.1 should be implemented in hardware.

- Selection mode is not.
- Feedback mode is not.
- Stippling is not.
...
...

You simply can't assume anything about what is accelerated and what is not.

As someone (Matt?) said a while ago, you don't even really care what is accelerated and what is not. Deep down, what you actually want to know is whether something is fast or not. The only way to find that out is to time specific things at the start of your app.
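Something along these lines (a very rough sketch; DrawTestScene, the frame count and whatever threshold you compare against are made up for illustration):

#include <GL/gl.h>
#include <time.h>

/* Hypothetical: renders the effect/geometry you want to evaluate. */
extern void DrawTestScene(void);

double TimeTestScene(int frames)
{
    int i;
    clock_t start = clock();
    for (i = 0; i < frames; ++i)
    {
        DrawTestScene();
        glFinish();   /* make sure the GL work has actually completed */
    }
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

/* e.g. if TimeTestScene(50) comes back too slow, pick a simpler rendering
   path or leave the decision to the user. */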

Regards.

Eric

MPech
03-19-2002, 06:33 AM
You've got a point there, knackered. ;P

But how can I be sure? SGI's site says:
"there exists no direct way to check whether particular OpenGL functions are hardware accelerated or not"

Check the URL: http://www.sgi.com/developers/library/resources/openglxp.html#hw_graphics_conf

under the topic -> 5. Obtaining System and Graphics Configuration Information

So I'm starting to look for a sharp benchmark to test with.


[This message has been edited by MPech (edited 03-19-2002).]

zeckensack
03-19-2002, 06:37 AM
Originally posted by MPech:
Thanks, but even that small margin of error doesn't suit me.

It's surprising that such a good API doesn't let programmers check whether their effect is rendered fully in hardware or partly in software.

Isn't it?

Why? You're just bound to make silly assumptions based on that info.

Say you withhold the maximum-polycount models and the fancy effects because of missing transform hardware. What you don't know is that you're running on a future 5 GHz monster CPU with 2 GB/s of bandwidth to the graphics subsystem. The hardware vendor has also made the perfectly valid tradeoff of implementing full hardware fragment shaders and saving some silicon real estate by doing vertex shaders in software. Your assumption fails.

IMHO there are exactly three things you need to know about your GL implementation.

1) What buffer depths and types (pbuffer, stencil?) are supported? You can get that info when you create your rendering context.
2) What does the GL version I'm running on regard as core functionality? All of it will work, hardware or software; just trust the implementation to choose the optimum path for you.
3) What extensions are supported on top of the core functionality? These will most likely be done in hardware.

The only Right Thing (TM) to do is (as has already been mentioned) to give the user the choice when in doubt. He or she can determine far better what acceptable performance is.
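For point 1, a rough Win32 sketch of what I mean (hdc is assumed to be a valid device context; error handling omitted):

#include <windows.h>
#include <string.h>

void ReportPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;
    pfd.cDepthBits   = 24;
    pfd.cStencilBits = 8;

    int format = ChoosePixelFormat(hdc, &pfd);
    DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
    /* pfd now holds the color/depth/stencil depths you actually get.
       PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED means you ended
       up on Microsoft's software implementation. */
}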

MPech
03-19-2002, 06:49 AM
Thanks "professor" but these are elments that I already know...
I did not command you to answer if you didn't want to. http://www.opengl.org/discussion_boards/ubb/frown.gif

no need to answer this one I quit.

zeckensack
03-19-2002, 06:54 AM
Originally posted by MPech:
Thanks "professor" but these are elments that I already know...
I did not command you to answer if you didn't want to. http://www.opengl.org/discussion_boards/ubb/frown.gif

no need to answer this one I quit.
Sorry, I didn't mean to patronize. I'm not qualified anyway http://www.opengl.org/discussion_boards/ubb/smile.gif
I didn't see the previous answers because I typed too slowly http://www.opengl.org/discussion_boards/ubb/biggrin.gif

knackered
03-19-2002, 08:12 AM
Eric:
- Selection mode is not.
- Feedback mode is not.
- Stippling is not.


Didn't get very far with that list, did you?
I wouldn't regard any of those things as part of OpenGL ;) OpenGL is there to render triangles... full stop. I stopped using all those things a long while ago, along with the matrix stack, the translate/rotate functions, and everything else that makes it easy to 'learn' OpenGL.
There just isn't a need for them - in fact, using them makes it pretty difficult to produce an abstract renderer DLL, because that kind of functionality should not *be* in a renderer. My predecessors' renderer was sprawled all over the simulation, because the simulation relied on too much information from the renderer.
I hope you know what I mean, even though it's a little off the topic of this thread.

Eric
03-19-2002, 08:27 AM
knackered,

I agree with you: my list wasn't that useful... ;) I just wanted to show that even simple statements like "GL 1.1 is HW accelerated" can be wrong. I think what you meant to say is that triangle rasterization is HW accelerated ;) ;) ;)!

But I think it's fair to say that whether a feature is HW accelerated or not is irrelevant. What you really want to know about is speed. You don't actually mind where the processing is done.

That being said, I'd be happy if someone could explain to me why they'd prefer something slow and HW accelerated rather than something fast on the CPU... (OK, OK, you might want to offload some work from the CPU.)

Regards.

Eric

P.S.: I've got something else for my list: I think edge flags switch the SW rasterizer on (at least they slow apps down quite a lot!)...

davepermen
03-19-2002, 09:19 AM
A beautiful demonstration: 3D textures.

Nicely supported on my GF2 MX, but every demo using them runs at two minutes per frame... not that fast :)
Think about someone who wants to implement distance attenuation: on GPUs that support 3D textures he'll use a 3D one, on the others a 2D-1D combination. Now, on the GF2 MX he'll use the ****ing slow 3D texture, because he'll find it in the extension string...

But I thought that with WGL_ARB_pixel_format_string, or whatever it's called, you can say somewhere "fully hardware", and then it should not be enabled... but I'm not sure about this.

zeckensack
03-19-2002, 11:35 AM
Originally posted by davepermen:
A beautiful demonstration: 3D textures.

Nicely supported on my GF2 MX, but every demo using them runs at two minutes per frame... not that fast :)
Think about someone who wants to implement distance attenuation: on GPUs that support 3D textures he'll use a 3D one, on the others a 2D-1D combination. Now, on the GF2 MX he'll use the ****ing slow 3D texture, because he'll find it in the extension string...
But I thought that with WGL_ARB_pixel_format_string, or whatever it's called, you can say somewhere "fully hardware", and then it should not be enabled... but I'm not sure about this.
Great example :), very confusing indeed. Is it hardware and just too slow? I'll assume it falls back to software in what follows.
This is probably because NVIDIA (all caps!) has a GL 1.3 driver for the whole product lineup. Since 3D textures are core functionality in 1.3, they have to offer them; otherwise they, well, wouldn't have version 1.3.
When exactly were they introduced into the core spec? :confused:
Maybe it would be better in this case not to advertise version 1.3 (or whatever it is) support on these chips; the promoted extensions have to be exposed anyway for backwards-compatibility reasons. Or is that asking too much? :confused:

*edit*
Whoops, just checked the docs. Taking 3d textures out of the picture would mean falling back to version 1.1.

[This message has been edited by zeckensack (edited 03-19-2002).]

mcraighead
03-19-2002, 12:28 PM
No, it would be very silly for us to not support GL 1.3 on our chips just because not every single feature is accelerated. By that logic, we shouldn't bother supporting GL at all, seeing as we don't accelerate all GL 1.0 features.

We hint at the fact that 3D textures are not accelerated on the chips in question by not going out of our way to expose EXT_texture3D, but this is only a hint at best.
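Reading that hint from the application side amounts to something like this (a sketch only, not a supported query; as I said, it's a hint at best):

#include <GL/gl.h>
#include <string.h>

int Texture3DProbablyAccelerated(void)
{
    /* With a GL 1.3 driver, 3D textures always work as core functionality;
       whether GL_EXT_texture3D also shows up in the extension string is the
       hint (and only a hint) about hardware acceleration. */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_EXT_texture3D") != NULL;
}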

There is no way to query whether a feature is accelerated, and I am opposed to any extension that would add such a query, for what that's worth.

- Matt

GPSnoopy
03-19-2002, 01:54 PM
Personally, I don't understand what the problem is...

It's not as if there were 36 different OpenGL systems in every house.
Come on! Most people have an NVIDIA or an ATI card. Don't you know what's accelerated on them and what isn't? (Let alone the 3D "pro" market, where you should know by heart what every 3D card can do.)
"Minimum system requirements" is the key phrase.

Letting the user decide what he wants to enable is a good thing, but not every user knows what a 3D card is, let alone what advanced OpenGL extensions are.
User settings are there to let the user decide on the right balance between quality and performance; they are not meant to help you with programming.

PS: This reminds me of people using CPUID to check the CPU flags, and then asking on forums what they should do if CPUID isn't supported by the CPU.
Who cares what would happen to your Win32 app and CPUID if it were run on a 386 under MS-DOS; it wouldn't even start anyway.

zen
03-19-2002, 02:14 PM
Letting the user decide what he wants to enable is a good thing, but not every user knows what a 3D card is, let alone what advanced OpenGL extensions are.
User settings are there to let the user decide on the right balance between quality and performance; they are not meant to help you with programming.

I'm not saying that you should have a config menu with lots of obscure names. You can always create a simplified front end for the users who don't know much about 3D cards, so they can adjust a texture-quality slider or choose between bilinear and trilinear filtering (most gamers know these terms anyway). But then again, nothing stops you from letting the user choose the exact filter to use for minification/magnification, mipmaps, etc. That way the engine's core is simple and robust, and the front-end config menu just helps novice users by filling in the details for them. So you help both yourself and the users. I would be really ticked off if I wanted to see, just see, how some cool new game looked with mipmapping or whatever and simply couldn't, because the developer had hardcoded the "best" choices, even though the driver could have supported it, however slowly, with less effort from the programmer. That's all I'm saying; and besides... isn't that what Quake does? :)

GPSnoopy
03-19-2002, 03:56 PM
Zen, I totally agree with you. (Although I don't know many games with understandable settings, even for an OpenGL programmer. Example: IL2's "Render terrain using triangle" option, four simple words but totally meaningless. :P)

Quake's way is a good way. It adds more flexibility to the control of the engine.
Agreed, a 3D engine should provide an easy, understandable and flexible way to configure it.

But that wasn't the point of my previous post.
You shouldn't rely on user settings to solve problems related to hardware features.
Quake is able to work out how to run on nearly any system, without the user having to configure it from A to Z - *if* you meet the system requirements.

IMO the engine shouldn't stop the user from increasing the image quality, but as with the CPUID example, I'm talking about raising the minimum requirements instead: it shouldn't bother supporting "nearly S3 ViRGE speed" 3D hardware when it's meant to run on a GeForce <put very high number here>.
On this topic: if, for example, you take a GeForce 256 as the minimum system requirement, then you already know all the features it supports under OpenGL 1.3; no need to worry about whether they exist or not. (Extensions are another matter, but core features of the API are unlikely to disappear.)

User settings should allow control of "not so common" and advanced features (for example, in the latest OpenGL games you can choose between vertex shaders and normal T&L).
But giving choices like "Enable/Disable Textures", "8-bit color", etc. is a bit extreme and not needed. (Apart from being fun for the 0.01% of people wondering how ugly the game could look with those settings.)

[This message has been edited by GPSnoopy (edited 03-19-2002).]

zen
03-19-2002, 04:46 PM
I'm currently using a simple interpreter that's integrated with the console. You can export any variable or function in your source code to its symbol table (a pointer to it, actually), so you can use it to change the value of any variable and call any function (with argument passing) in the program (or in shared object files, like DLLs on Windows). It's been very helpful so far. I can control all aspects of the engine just by exporting the variables I pass to OpenGL (more or less). I haven't thought much about it because I'm not a professional, so supporting many platforms is not an important goal yet, but you could easily write a simple script for each card which would set the correct values without any need for benchmarking etc. This way, adding support for (or rather optimizing for) a specific card should be a matter of minutes.
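The core of it is tiny; roughly something like this (a C++ sketch, all names made up):

#include <map>
#include <string>

// name -> pointer to an engine variable exported to the console/interpreter
static std::map<std::string, int *> g_cvars;

void ExportVar(const std::string &name, int *var)
{
    g_cvars[name] = var;
}

bool SetVar(const std::string &name, int value)
{
    std::map<std::string, int *>::iterator it = g_cvars.find(name);
    if (it == g_cvars.end())
        return false;
    *it->second = value;   // a script line like "set r_textures=0" ends up here
    return true;
}

// e.g. ExportVar("r_textures", &r_textures); then a per-card init script
// just calls SetVar("r_textures", 0).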
I agree on all your other points.

BTW: this way, even adding support for "enabling/disabling textures" should be a matter of minutes, so that 0.01% can have their fun. In fact that's a cool idea. I'm going to implement it. Thanks. :)

[This message has been edited by zen (edited 03-19-2002).]

GPSnoopy
03-19-2002, 04:57 PM
I also do the same with some "key" variables in the engine; very helpful indeed!


Originally posted by zen:

BTW: this way, even adding support for "enabling/disabling textures" should be a matter of minutes, so that 0.01% can have their fun. In fact that's a cool idea. I'm going to implement it. Thanks. :)


LOL! You're crazy :) :D :rolleyes: ;) :P

[This message has been edited by GPSnoopy (edited 03-19-2002).]

zen
03-19-2002, 05:07 PM
Crazy? Why? Say your engine uses a lot of textures which take time to load, generate mipmaps for, light, whatever. Now you want to fix a bug in (or add functionality to), say, your ROAM optimizer, so you're only interested in the triangles and you don't need the textures. Just add:
set r_wire=1;
set r_textures=0; /* not implemented yet */
to the init script and you don't have to wait for the textures to load. That's going to make bug hunting a little easier on your nerves.

[edit]
I implemented it. It took about 5 minutes. Actually, I just disable the textures for the terrain, which take a long time to load/illuminate. No need to fiddle with glEnables/glDisables, as some textures still need to be loaded (font textures for the console, etc.). Hmmm, maybe you're right and I *am* crazy. After all, it *is* 5:16 AM!

[This message has been edited by zen (edited 03-19-2002).]

ScottManDeath
03-20-2002, 02:43 AM
Originally posted by MPech:
I'm looking for a way to interrogate my 3D card's driver about its hardware capabilities - T&L or mipmapping, for example.


Hi

I thought a while ago about directly accessing the 3D hardware, for example with low-level assembly language, in a similar way to how the driver does it. I think it would be nice to do, but for real-life applications it wouldn't be worth implementing, because there are so many 3D graphics chips and configurations. But perhaps if you restrict yourself to, e.g., NVIDIA GeForce* cards it could be done.

I searched the web for some hints on how to do it, but the results weren't very helpful.

But I think it should be possible, because RivaTuner shows quite detailed low-level chip information about NVIDIA-GPU-based cards.

Perhaps someone has some hints on where to look for further information about accessing graphics chips the way a driver would.

Bye
ScottManDeath


[This message has been edited by ScottManDeath (edited 03-20-2002).]