Detecting ATI HD series gfx. cards

Due to a GLSL bug in the Catalyst drivers (covered in a separate topic in the Shaders subgroup), I need to detect all ATI/AMD HD series graphics cards so I can switch to CPU skinning as a workaround.

This is my current ATI detection heuristic:


bool OGL_renderer::is_ati(void)
{
	// stristr() is my custom case-insensitive strstr() helper
	const char* vendor = (const char*) glGetString(GL_VENDOR);
	if( !vendor )
		return false; // no current GL context - can't tell
	if( stristr(vendor, "NVIDIA") )
		return false; // if NVIDIA then it's surely not ATI
	if( strlen(vendor) < 3 )
		return false;
	if( strncmp(vendor, "ATI", 3) == 0 )
		return true;
	if( stristr(vendor, "ATI ") )  // "NVIDIA CorporATIon" - we need the space to distinguish
		return true;
	return false;
}

I expect checking is_ati() and stristr(renderer, " HD ") will do the trick.
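In full, something like this is what I have in mind (is_ati_hd is just a working name, and stristr is the same case-insensitive strstr helper as above):

bool OGL_renderer::is_ati_hd(void)
{
	if( !is_ati() )
		return false;
	const char* renderer = (const char*) glGetString(GL_RENDERER);
	if( !renderer )
		return false;
	// e.g. "ATI Radeon HD 4850" - the surrounding spaces avoid false hits
	return stristr(renderer, " HD ") != NULL;
}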

Are there any better ways / better heuristics? Does anyone have more experience with such drastic workarounds?

Thanks!

If a Windows-only solution is OK with you, then you could try getting the vendor and device ID for the video card. The ATI vendor ID is 0x1002, and you can look up the device ID in this table: http://ati.amd.com/developer/vendorid.html
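A rough sketch of how that lookup could go with EnumDisplayDevicesA (untested here; on the systems I've seen, the DeviceID string of an adapter has the form "PCI\VEN_xxxx&DEV_yyyy&..."):

#include <windows.h>
#include <cstring>

// returns true if the primary display adapter reports ATI's PCI vendor ID (0x1002)
bool primary_adapter_is_ati(void)
{
	DISPLAY_DEVICEA dd;
	for( DWORD i = 0; ; ++i )
	{
		memset(&dd, 0, sizeof(dd));
		dd.cb = sizeof(dd);
		if( !EnumDisplayDevicesA(NULL, i, &dd, 0) )
			break;
		if( dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE )
			return strstr(dd.DeviceID, "VEN_1002") != NULL;
	}
	return false;
}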

Put up a dialog, asking the user “is it working?” If the user clicks the “No” button, then measure the frame rate. If it is low, then it’s Intel, else it’s ATI.

OK, somewhat tongue in cheek, but still would probably work :slight_smile:

I use a different approach for that. I simply try to use the given feature and see if it works.
For example - float texture filtering. At application startup I do the following:

  1. Is the proper extension / GL version supported?
  2. Can I actually create an FP16 texture and get no errors?
  3. I make a small (for example 4x4) chessboard texture
  4. I render this texture to the screen with filtering turned off. Render multiple times - measure performance
  5. I turn filtering on and render this texture to the screen again. Render multiple times - measure performance
  6. I compare performance with and without filtering
  7. I check whether the rendered image looks like it was rendered with a filtered texture
  8. The above test should be performed inside a try/catch clause

#1 is obvious
#2 is for possible driver bugs (the reported GL version may not be fully supported)
#3 use only black and white texels
#4 and #5 - instead of rendering something <n> times, you could test how many times you can render something in time <t> - this way your test won’t last forever if the feature falls back to software emulation. Either way, some time-out functionality is recommended here.
#6 is to ensure you don’t get software emulation. Actually we don’t care whether it’s hardware or software as long as performance doesn’t drop drastically with filtering enabled. Some features may work well in software mode (per-vertex operations).
#7 is to ensure the feature actually works
#8 is for driver bugs - if an exception is thrown, the feature is considered to have a bad implementation. I had such problems when testing vertex textures. The Radeon X850 doesn’t support them, and the driver threw an exception when I tried to compile a simple vertex shader with one texture sampler. And since I was implementing my application when there was no ATI GPU on the market that supported vertex textures, trying to use the feature was the only test that actually worked.

It’s not difficult to create a base C++/Java class as a framework for such tests. Then you only implement tests for particular features.
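A rough C++ skeleton of what I mean (names and defaults are just an example; it assumes a GL header is already included and a context is current):

#include <chrono>

struct FeatureTestResult
{
	bool   usable;       // feature works and produced correct output
	double best_seconds; // best time of a single iteration
};

class FeatureTest
{
public:
	virtual ~FeatureTest() {}

	// times the best of N iterations, bails out after a time limit,
	// and treats any exception as "feature broken"
	FeatureTestResult run(int iterations = 20, double time_limit = 0.1)
	{
		FeatureTestResult r = { false, 1e9 };
		try
		{
			if( !setup() )
				return r;

			using clock = std::chrono::steady_clock;
			const auto start = clock::now();
			for( int i = 0; i < iterations; ++i )
			{
				const auto t0 = clock::now();
				render_once();
				glFinish(); // make sure the GPU actually did the work
				const double dt = std::chrono::duration<double>(clock::now() - t0).count();
				if( dt < r.best_seconds )
					r.best_seconds = dt;
				// time-out: don't hang forever if we fell back to software
				if( std::chrono::duration<double>(clock::now() - start).count() > time_limit )
					break;
			}
			r.usable = output_ok() && (glGetError() == GL_NO_ERROR);
		}
		catch( ... )
		{
			// note: a hard driver crash is a structured exception on Windows
			// and may need /EHa or __try/__except to be caught at all
			r.usable = false;
		}
		return r;
	}

protected:
	virtual bool setup() = 0;        // create textures / shaders / FBOs
	virtual void render_once() = 0;  // draw the small test scene once
	virtual bool output_ok() = 0;    // read back pixels and validate them
};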

Why do I think this test is better?

  1. It does not assume to know that GPU X supports a feature and GPU Y doesn’t (whether a feature ACTUALLY works depends on the GPU+driver)
  2. It guarantees that the feature is actually working (it tests the produced output). For example - when the user has antialiasing forced in the driver, you’ll run into limitations with rendering to texture.
  3. It protects you at least from some driver bugs that may appear in upcoming drivers
  4. It protects you from falling onto a slow rendering path
  5. You don’t have to know future GPUs’ capabilities in advance, nor release any patch for your application later (no need to extend the application’s GPU knowledge base). For example - how can you tell whether the next generation of Radeons will support your skinning? With this approach, you don’t have to know.

So simply put - if you want to see whether your application should use a certain feature, simply use it and see if/how it works.

That’s a fine way to do it, assuming that you don’t have too many things to test. If you have to test each feature for 500 milliseconds, and you have 30 features to test, the start-up time of your program is extended by 15 seconds!

I would also recommend against using the try/except handler. If the driver generates a segmentation violation, then it’s highly likely that the driver has screwed up its internal state, and trying to render more using that driver instance would be very unsafe.

If you have to test each feature for 500 milliseconds

500ms is way too long. Each such test should last a fraction of a millisecond, unless you can afford more (if you have very few tests). The case I mentioned (the filtering test) should be done using a 2x2 or 4x4 texture on roughly a 16x16 render target. All you need is 10-20 iterations, and that should take far less than a millisecond. The shortest achieved time is your test result.
Why shortest and not average? Because we want to know the performance when “nothing gets in your way” (thread switches, memory swapping, etc.). This helps in recognizing software emulation.

And of course, once tests are done, you can store results and assume they do not change unless GL_RENDERER / GL_VENDOR / GL_VERSION strings change.
Additionally, add an option for the user to “recheck video capabilities”.
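For example, a minimal sketch of storing and validating such a cache (the file format here is just an illustration; it assumes the GL headers are already included):

#include <fstream>
#include <string>

static std::string gl_str(GLenum name)
{
	const char* s = (const char*) glGetString(name);
	return s ? s : "";
}

// identity of the GPU/driver combination the results were measured on
static std::string gl_identity_key()
{
	return gl_str(GL_VENDOR) + "|" + gl_str(GL_RENDERER) + "|" + gl_str(GL_VERSION);
}

// returns true and fills 'results' only if the cached key still matches
bool load_cached_results(const char* path, std::string& results)
{
	std::ifstream f(path);
	std::string key;
	if( !f || !std::getline(f, key) || key != gl_identity_key() )
		return false; // no cache, or driver/GPU changed - re-run the tests
	std::getline(f, results); // one line of serialized test results (format up to you)
	return true;
}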

trying to render more using that driver instance would be very unsafe

More unsafe than letting the driver crash your application during the test?
Besides, as I mentioned, the ATI drivers actually threw an exception during vertex shader compilation with a texture sampler, and by the time I wrote that code there was no other way of detecting whether that feature was supported (on NVIDIA - NV_vertex_program3, on other GPUs - unknown).
And life has taught me, brutally, that supported features don’t always actually work. So instead of asking whether it’s supported, I check whether it works. And if some drivers crash during the test, then I have no other choice than to put a try/catch there.

Note that you can destroy the rendering context and create a new one after such an exception, just in case. This should be enough, but thanks for pointing that out.
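On Windows/WGL that boils down to something like this (hdc/hglrc being the window DC and the current GL context; just a sketch):

#include <windows.h>

void recreate_gl_context(HDC hdc, HGLRC& hglrc)
{
	wglMakeCurrent(NULL, NULL);      // release the possibly-poisoned context
	wglDeleteContext(hglrc);
	hglrc = wglCreateContext(hdc);   // fresh context on the same pixel format
	wglMakeCurrent(hdc, hglrc);
	// any GL objects created before this point are gone and must be recreated
}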

The testing method sounds quite good! :slight_smile:

The drawback of the described method is that it covers only functionality bugs. I have to agree with jwatte that running on a driver which has thrown exceptions should be avoided. Hmm… maybe the coverage could be widened to user-mode crash bugs by launching a sub-process for each test?
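Something like this, perhaps (the flag name is made up, and the exit-code handling is simplified): the parent asks a copy of itself to run one test and judges by the exit code, so a driver crash only takes down the child.

#include <cstdlib>
#include <string>

// returns true if the feature test passed in the child process
bool run_test_in_subprocess(const std::string& exe, const std::string& test_name)
{
	const std::string cmd = exe + " --run-feature-test=" + test_name;
	const int code = std::system(cmd.c_str());
	return code == 0;   // non-zero (or a crash) means "don't use this feature"
}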

Bugs producing a system lockup, a reboot, or making the system unusable/unstable definitely require blacklisting.

Note that, on the other hand, those tests introduce another source of potential bugs. The programmer writing them would need some time and a big test base available to become experienced enough to make them robust.

What I like about the method is that the same framework can be used for performance profiling the renderer.

More unsafe than letting the driver crash your application during the test?

No, I think it’s fine to catch the exception, assuming it’s actually a crash that causes a “structured exception.” There’s no safe way to pass a C++ exception from one DLL to another when they use different MSVC runtimes.

Anyway, if you get an exception from the driver, explain to the user that his graphics driver crashed, and ask him to re-start the program with "--mode=lame" to run the application safely. That mode would then turn off every feature that you could turn off, without testing for it.
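For what it’s worth, catching that kind of crash on MSVC means structured exception handling rather than C++ try/catch; a minimal wrapper (sketch only) could look like this:

#include <windows.h>

// the wrapped function must not need C++ unwinding in this frame, hence the thin wrapper
bool call_test_guarded(bool (*test)(void))
{
	__try
	{
		return test();
	}
	__except( EXCEPTION_EXECUTE_HANDLER )
	{
		return false; // driver crashed - report failure and suggest the safe mode
	}
}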

Just out of interest, what was wrong with checking the max vertex texture image units variable? It returns ‘0’ for my X1900XT; was this not the case before?
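(For reference, the query I mean is simply:)

GLint vtf_units = 0;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vtf_units);
// 0 is what my X1900XT reports, i.e. no vertex texture fetch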

The programmer writing them would need some time and a big test base available

Not really :slight_smile:
You only test features you intend to use, so you already know what you need to know.

The problem is that simply checking the extension string and GL version is not enough. It turns out that some features aren’t implemented properly. So you learn that: “Feature X is not working on GPU A with driver version #1”, “Feature Y is not working on GPU B with driver version #2” - that’s a knowledge base you build during testing. Now you can put that knowledge base into your application, but how big would it have to be? And what about upcoming GPUs and new driver versions?
Well, I think it’s better to use a simpler knowledge base: “Feature X doesn’t work sometimes”, “Feature Y doesn’t work sometimes”. Now all the application needs to do is check these features. No need to know anything about GPUs and drivers.

So, you only need to know which features cause problems, which you’ll learn during tests anyway. That’s not much of a database: “I use feature X => I know feature X can cause problems => I test feature X”.

what was wrong with checking the max vertex texture image units

You can get a value greater than 0 on GPUs that don’t have this feature and emulate it in software.

Besides, the author of this topic has written skinning software that runs fine on all GPUs except the Radeon HD. I believe the Radeon HD claims to support all required features, but for some reason these features do not work as expected. So, asking OpenGL for supported features failed here.
Now he can modify his software to “use an alternate solution for Radeon HD” or “use an alternate solution for GPUs on which the current solution doesn’t work”.

So, at this point he already knows that this feature doesn’t work sometimes. He can start building a database of GPU+driver combinations that fail to support it and maintain that database by performing tests and releasing patches to his application, or he can add a test to his application and forget the whole thing.

oh, which GPUs do that then? I would have thought the Intel ones at most, as their vertex shaders run in software anyway (although to be fair, your example was specifically about catching exceptions to deal with ATI cards not supporting VTF).

My question wasn’t related to the OP’s post; I was just wondering why you went to such extreme lengths to validate information which appears to be present in the GLSL information returned by the driver anyway.

I meant a broad range of gfx. cards for validating if the tests work consistently across all driver / card combos you are targeting… in other words, they can be a source of bugs, too. (“Drivers don’t like tricks”) :slight_smile:

oh, which GPUs do that then? I would have thought the Intel ones at most as there vertex shaders run in software anyway

According to delphi3d.net, the GeForce FX returns a non-zero value for MAX_VERTEX_TEXTURE_UNITS.
Here’s another reason to test features yourself:

columns:
16fpf - 16-bit floating point texture filtering
32fpf - 32-bit floating point texture filtering
16fpb - 16-bit floating point blending
32fpb - 32-bit floating point blending
shp - best shader precision (bits)
VTF - vertex texture fetch
gl_FF - gl_FrontFacing (in GLSL)
values:
-  not supported
+  runs in hardware
S  runs in software

GPU       16fpf  32fpf  16fpb  32fpb   shp   VTF  gl_FF
GeForceFX   S      S      S      S     16     S     S
GeForce6    +      S      +      S     32     +     +
RadeonX     -      -      -      -     32     -     S
RadeonX1    -      -      +      -     32     -     S

Now which extension or variable would tell you that?
When I implemented my HDR support I had to add two code paths - one for the GeForce 6 and one for the Radeon X1 (no fp16 filtering!). When I wanted to release my application, I knew the next Radeon would very likely support fp16 filtering and could run the NVIDIA code path, which was faster. How do you test for a GPU that doesn’t even exist yet, but you know it will?

Another example - 16-bit vs 32-bit GLSL fragment shaders. On the GeForce FX, 32-bit shaders are slow, so you are better off using 16-bit shaders. On the GeForce 6 / Radeon 9800, 32-bit shaders are fast, so you don’t want to waste precision.
How would you ask the GPU which shader precision you should use?

Another one - NPOT textures on the Radeon X - they work (with some limitations), but are not reported by the driver.

We could go on. In some cases, asking the driver about a certain feature is not enough. That’s why we have posts here like “How to detect GeForce FX”, “How to detect Radeon HD”, “How to detect if a feature is running in software”, which means someone has already run into a problem with some GPU.
The common answer is “Check the GL_RENDERER string, and if it’s a GeForce FX, don’t use that feature”, and then someone says “Don’t do that. The GL_RENDERER string may change.” My solution is just an answer to that problem.

I meant a broad range of gfx. cards for validating if the tests work consistently

Do you mean that if we don’t add such automatic tests to the application, then we don’t have to test the application on various GPUs at all?
I think you’re missing the point here:

This is the situation:

  1. you have an application that uses feature X
  2. you find out that this feature is not working on GPU A
  3. you fix your application and test it on GPU A again

Now the question is how would you fix your application?
Solution:

  1. tell the user that your application does not support GPU A :smiley:
  2. tell the application not to use feature X on GPU A and test whether your application works on GPU A. A few days later someone says: “this feature is not working on GPU B”…
  3. tell your application that feature X may fail on SOME GPUs and that it should check whether the feature is working - then test your application on GPU A

So the point is - you don’t add a bunch of tests at the start of your application just like that, to give yourself more trouble. What you do is develop your application normally and check for extensions and GPU capabilities as you normally would.
Only when you run into a situation where asking the driver about a feature is not enough do you add a test to your application. At that point you already have to test this feature on that GPU to see if you fixed it correctly.

ah, I see, thank you for explaining; I wasn’t so much questioning the rest of the tests btw, just the vertex texturing one :slight_smile:

Yup, I think I got your point. I just wanted to emphasize the fragility of drivers.

For example, in this concrete ATI HD issue a test would be the best solution performance wise. But stability wise, one broken driver code path will get used for a moment (during the test), potentially destabilizing the application / system.

The trade off is a matter of taste (and experience). :wink:

one broken driver code path will get used for a moment (during the test), potentially destabilizing the application / system

You could say that these tests introduce a risk of instability, but if there is no test, then the application doesn’t know that the functionality is broken and will use it anyway. So IMHO, the tests themselves do not introduce the risk of using a broken code path in the driver, since that risk already exists.
As I mentioned before:

  1. You only test what you actually use in your application.
  2. You only test features that have already caused trouble and for which there’s no simpler way to detect the problem. That’s exactly the situation described in the first post of this thread - something’s not working for no apparent reason.

Note that it’s up to the programmer what to do when there’s an exception. The application could simply shut down and launch another instance of itself with the proper command-line parameters.
If you combine this with a solution that performs the tests only once (until GL_VERSION / GL_RENDERER / GL_VENDOR changes) and stores the test results in a file, then even if a test leads to a horrible system crash, the next time your application will run just fine - it will not use nor test the same functionality again.

Well, I’m using this solution anyway because I consider it the best one I know. Perhaps some of you can use it and benefit from it as well.