Detecting SM2.0 / SM3.0 support

My application must detect if GLSL is supported and if features like filtering FLOAT16 textures, blending on FLOAT16 render targets and vertex texture fetch are hardware accelerated.

As for detecting GLSL support - testing whether GL_ARB_shader_objects and GL_ARB_shading_language_100 are supported is not enough. My friend’s GeForce4MX reports both, even though he didn’t use NVemulate - his drivers emulate GLSL by default.
Checking for GL_NV_vertex_program2 solved this problem. Also, checking for GL_NV_vertex_program3 is useful to detect vertex texture fetch.
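
For reference, my check looks roughly like this (just a sketch - the helper and function names are mine, and it assumes a GL context is already current and the GL entry points are available, e.g. via GLEW):

```cpp
#include <GL/glew.h>   // or any other way of getting the GL headers/entry points
#include <cstring>

// Returns true if `name` appears as a complete token in the extension string.
static bool HasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext)
        return false;
    const std::size_t len = std::strlen(name);
    for (const char* p = std::strstr(ext, name); p; p = std::strstr(p + 1, name))
    {
        // Accept only whole names, so e.g. "GL_NV_vertex_program" does not
        // match "GL_NV_vertex_program2".
        if ((p == ext || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    }
    return false;
}

void DetectShaderSupport(bool& glslReported, bool& nvSM20, bool& nvVTF)
{
    glslReported = HasExtension("GL_ARB_shader_objects") &&
                   HasExtension("GL_ARB_shading_language_100");
    nvSM20 = HasExtension("GL_NV_vertex_program2");   // real fragment hardware on NVIDIA
    nvVTF  = HasExtension("GL_NV_vertex_program3");   // vertex texture fetch on NVIDIA
}
```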

Unfortunately, requiring the GL_NV_vertex_program extension raises another problem: ATI cards do not report this extension.

So the final question is:

How do I detect whether Shader Model 2.0 or 3.0 is supported on ATI cards?

Well, you can always try to compile a shader that uses some advanced features, like dynamic branching, and then parse the info log. I think you should get a message saying that the shader will run in software if the feature is not supported (or the shader will simply fail to compile).
As for filtering… I don’t really know. FP filtering should be exposed as an extension in the extension string, I might add - otherwise it is really difficult to determine. I mean, we do have things like ARB_texture_non_power_of_two for exactly this kind of capability.
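
A rough sketch of what I mean (names are mine, it assumes an extension loader such as GLEW has been initialized, and the “software” keyword in the log is driver specific, so treat it as a heuristic):

```cpp
#include <GL/glew.h>
#include <cstring>

// A fragment shader with a dynamic branch on a uniform - something SM1.x
// hardware cannot execute.
static const char* kTestFS =
    "uniform float threshold;\n"
    "void main() {\n"
    "    vec4 c = vec4(0.0);\n"
    "    if (gl_FragCoord.x > threshold) c = vec4(1.0);\n"
    "    gl_FragColor = c;\n"
    "}\n";

// Returns true if the shader compiles and the driver does not warn about
// a software fallback in the info log.
bool FragmentShaderLooksHardwareAccelerated(const char* source)
{
    GLhandleARB sh = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(sh, 1, &source, NULL);
    glCompileShaderARB(sh);

    GLint compiled = 0;
    glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);

    char log[4096] = "";
    glGetInfoLogARB(sh, sizeof(log), NULL, log);
    glDeleteObjectARB(sh);

    if (!compiled)
        return false;                  // feature not supported at all
    return std::strstr(log, "software") == NULL;
}
```

Some drivers only make the software/hardware decision visible at link time, so it may be worth linking a test program and checking its info log as well.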

Yes, I already had such a shader compiled at initialization to check whether VTF is supported.
But I still need to perform tests for FLOAT16 support, and that’s pretty ugly to implement: you have to both measure time (software fallback on NVIDIA cards) and analyse the rendered image (ATI cards do not fall back to software - they simply do not filter the texture).
Worse yet - as I mentioned, the GeForce4MX reports GLSL support by default, so I still need a way to ensure I’m dealing with at least a Shader Model 2.0 card.
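
For the record, the image-analysis half of my FLOAT16 filtering test looks roughly like this (a sketch only - the timing half and error checking are omitted, and it assumes GL_ATI_texture_float is available and that the current framebuffer is at least 4x4 pixels and passes the pixel-ownership test):

```cpp
#include <GL/glew.h>
#include <cmath>

// Renders a screen-space quad that samples exactly halfway between a black
// and a white texel of a 2x1 FLOAT16 texture, then reads the result back.
// Note: GL state (viewport, bound texture, ...) is not restored here.
bool Float16FilteringWorks()
{
    const float texels[] = { 0,0,0,  1,1,1 };   // one black, one white RGB texel

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB_FLOAT16_ATI, 2, 1, 0,
                 GL_RGB, GL_FLOAT, texels);

    glEnable(GL_TEXTURE_2D);
    glViewport(0, 0, 4, 4);
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);                      // identity matrices assumed
        glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    float pixel[3] = { -1.0f, -1.0f, -1.0f };
    glReadPixels(2, 2, 1, 1, GL_RGB, GL_FLOAT, pixel);

    glDisable(GL_TEXTURE_2D);
    glDeleteTextures(1, &tex);

    // A card that filters FLOAT16 returns roughly 0.5; a card that silently
    // drops to GL_NEAREST returns 0.0 or 1.0 instead.
    return std::fabs(pixel[0] - 0.5f) < 0.1f;
}
```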

In addition to GL_ARB_shading_language_100 and GL_ARB_shader_objects, you should also check for the GL_ARB_fragment_shader and GL_ARB_vertex_shader extensions. The first one (GL_ARB_fragment_shader) is not supported on GF4MX cards, while the second one is (by emulation).
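
In code, roughly (assuming an extension loader such as GLEW is initialized, or an equivalent extension-string check):

```cpp
#include <GL/glew.h>

bool FullGLSLSupported()
{
    // GF4MX-class hardware reports the vertex shader extension (emulated),
    // but not GL_ARB_fragment_shader, so requiring all four filters it out.
    return glewIsSupported("GL_ARB_shading_language_100") &&
           glewIsSupported("GL_ARB_shader_objects")       &&
           glewIsSupported("GL_ARB_vertex_shader")        &&
           glewIsSupported("GL_ARB_fragment_shader");
}
```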

Thanks, Komat.
Guess that solves the fragment shader emulation problem.
As for distinguishing SM2.0 from SM3.0, I’ll just go back to my old code that tested each required feature. Hopefully GL 3.0 will bring more legitimate ways to detect supported functionality.

I was thinking of experimenting with glGetError. Perhaps on the RadeonX850 I could get errors when enabling filtering of FLOAT16 textures or enabling blending while rendering to a FLOAT16 texture.
But as the proverb goes, “better safe than sorry”, so perhaps I’ll just stick to my current tests (time measurements and image analysis) - you never know whether a driver will report the error properly or not.

Thanks again.

Doh!
glBegin(GL_QUADS) throws an exception during my test for blending on a GL_RGB_FLOAT16_ATI render target.

This code works perfectly on GeForce6 and GeForce7, and it used to work on the RadeonX850 until today, when the driver was updated.

Guess I’ll have to add try/catch to my test and assume that blending is not supported if an exception is thrown :stuck_out_tongue:
If I have more info on this I’ll start new topic in coding-advanced.
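
Something along these lines, assuming MSVC on Windows (a plain C++ try/catch only sees the driver’s access violation when building with /EHa, hence the structured-exception handler; the FLOAT16 render-target setup and the real blending check are omitted - this is only a sketch):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Probe that survives a crash inside the driver. Returning true only means
// "no crash"; the rendered result still has to be checked afterwards.
bool BlendProbeSurvives()
{
    __try
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        glBegin(GL_QUADS);
            glVertex2f(-1.0f, -1.0f); glVertex2f(1.0f, -1.0f);
            glVertex2f( 1.0f,  1.0f); glVertex2f(-1.0f, 1.0f);
        glEnd();
        glDisable(GL_BLEND);
        return true;
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
        return false;   // the driver blew up - assume FLOAT16 blending is unsupported
    }
}
```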

You should use the GL 2.0 functions. Detect whether the GL version is >= 2.0.
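
For example (a sketch, assuming a current context and GL headers/entry points, e.g. via GLEW):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Parse the major.minor version out of the GL_VERSION string.
bool HasOpenGL20()
{
    const char* ver = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    int major = 0, minor = 0;
    if (!ver || std::sscanf(ver, "%d.%d", &major, &minor) != 2)
        return false;
    return major >= 2;
}
```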

Also, checking for GL_NV_vertex_program3 is useful to detect vertex texture fetch.
That’s what NV suggested in a pdf.
On ATI, the GLSL infolog should say “software” or “hardware”. I’m guessing the X1900 supports VTF?

like filtering FLOAT16 textures, blending on FLOAT16 render targets
Detect the GPU with glGetString(GL_RENDERER) :slight_smile:

Originally posted by V-man:
Detect the GPU with glGetString(GL_RENDERER) :slight_smile:
Bad idea. You cannot support future GPUs with this approach. It’s also a constant maintenance burden to keep up with the new renderer names.

You should use the GL 2.0 functions
No can do. My application must remain compatible with OpenGL 1.5 GPUs (and older, but in that case shaders are disabled).

On ATI, the GLSL infolog should say “software” or “hardware”
This does not solve the FLOAT16 blending/filtering detection problem. Besides, info logs are not standardized, so I’m trying to avoid analysing them.

Thanks for posting anyway. :wink:

Besides, info logs are not standardized, so I’m trying to avoid analysing them.
Yes, they aren’t officially standardized but I think ATI will continue with this method in the future.

What I said was that you can use glGetString(GL_RENDERER) for the FLOAT16 blending, not the info log.

The alternative method: if you have GL_NV_vertex_program3, then FLOAT16 blending is possible, even though one is not strictly related to the other. For ATI cards, I wouldn’t know.

To detect SM3.0 vs. SM2.0 cards on ATI you could look for the GL_ATI_shader_texture_lod extension.

Couldn’t find the GL_ATI_shader_texture_lod specs anywhere. I’ve checked delphi32.net and it seems this extension is only supported by the Radeon X1k, but unless I can read the specs I’m not relying on it.

By the way - I got things working using my tests (had to add try/catch to deal with that exception in the ATI driver). Still, the topic remains open, I guess.

I’ll keep an eye on GL_ATI_shader_texture_lod. As soon as I have proof that it will never be supported on any SM2.0 card I’ll use it.

Thanks!

Herm… are you aware of the fact that Humus works at ATI?
If he tells you that you can rely on this extension to see whether ATI hardware is SM3.0, you can trust him.

The GL_ATI_shader_texture_lod extension is documented in the “Level of Detail Selection in GLSL” paper in the ATI SDK. None of the SM2.0 hardware supports this functionality, so you can count on this only being exposed on SM3.0 hardware.

Ok, I’ve read the specs. Will do.
And yes, I knew Humus works for ATI. When he said to check for this extension, I knew it was enough to distinguish ATI’s SM3.0 GPUs from ATI’s SM2.0 GPUs.
I just needed to make sure this extension is not something that could be supported (emulated :wink: ) by the GeForce FX.
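
So the SM3.0-level heuristic ends up looking roughly like this (a sketch with a plain substring search; a current context is assumed):

```cpp
#include <GL/glew.h>
#include <cstring>

bool LooksLikeSM3Hardware()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext)
        return false;
    return std::strstr(ext, "GL_NV_vertex_program3")     != NULL   // NVIDIA GeForce6/7
        || std::strstr(ext, "GL_ATI_shader_texture_lod") != NULL;  // ATI Radeon X1k
}
```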

Thanks guys!
