Shader Model x?

Hi

How do I determine the shader model supported by the hardware and driver in OpenGL?

Thanks.

“Shader Model” is not a concept that OpenGL comprehends or supports.

Why not? I mean, isn't that a good feature to have? How can I tell which GLSL shading features are supported, the way I can in D3D HLSL when it is translated down to the intermediate assembly level?

In OpenGL you can check whether the card supports the individual features of these shader models.

But there is a simpler way, which you can find in the OpenGL wiki.

How? By querying certain extensions? Or is there a GLSL feature I can use to check for that?

Use glGetString with the GL_EXTENSIONS parameter and it will return a space-separated list of all available extensions; then all you have to do is search that string for the right one.
There is a pretty decent function for this in lesson 45 on nehe.gamedev.net.
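For example, a rough C sketch of that search, assuming a current GL context (the token-boundary checks keep a query for GL_ARB_shadow from matching inside GL_ARB_shadow_ambient):

#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if `name` appears as a complete token in the
   space-separated extension string. */
int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);
    const char *hit;

    if (!ext)
        return 0;
    while ((hit = strstr(ext, name)) != NULL) {
        if ((hit == ext || hit[-1] == ' ') &&
            (hit[len] == ' ' || hit[len] == '\0'))
            return 1;
        ext = hit + len;
    }
    return 0;
}

Then something like has_extension("GL_ARB_fragment_shader") gates the codepath that needs it.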

But there are no specific features in GLSL for this.

#ifdef should work in GLSL iirc.
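Per the GLSL spec, every extension the shader compiler supports also defines a preprocessor macro of the same name, so the shader can pick a path at compile time. A rough sketch, written as the C string you would hand to glShaderSource (GL_ARB_draw_buffers is just a sample extension):

/* Fragment shader that branches at compile time on extension support. */
static const char *frag_src =
    "#version 110\n"
    "void main()\n"
    "{\n"
    "#ifdef GL_ARB_draw_buffers\n"
    "    gl_FragData[0] = vec4(1.0); /* path for MRT-capable hardware */\n"
    "#else\n"
    "    gl_FragColor = vec4(1.0);   /* fallback path */\n"
    "#endif\n"
    "}\n";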

I mean, isn't that a good feature to have?
No, it is not.

Who decides what goes into a “shader model”? Who decides which features are in one and not in another? What does it even mean, when people can blow past limits (like program length) without even knowing it? When a driver revision can make a shader that was near the limit fail, and make another one that failed previously succeed?

The purpose of a “shader model” in D3D is to be able to say that this code follows the limitations set down in a specification. These limits are defined by the assembly language used as the foundation for D3D HLSL. Glslang does not have an assembly language; it is compiled directly into the machine-specific construct, so it is not possible to speak of most of the useful kinds of limitations that a “shader model” defines.

The construct has no value in OpenGL’s shader system.

I see. As far as I understand the shader model concept, it tells you whether the high-level shader syntax can be translated into semantically equivalent machine/video-card code, or even into an intermediate assembly (as in DX).

My question is: why can't GLSL be queried for that low-level compilability, instead of submitting the shader and having it either run inefficiently because there is no efficient assembly equivalent, or fail because there is no equivalent at all? It could also compile with the desired semantics and just work, but what about the former two cases? Is it worth handling them properly?

Thanks.

Yes, shader model means that, but a shader model also means a certain GPU supports, at minimum, X ALU instructions and X texture instructions.
GLSL doesn't define it exactly this way. It requires a minimum of 512 uniform components (I think), some number of texture image units, and some number of varying components.
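Those limits can be queried directly. A rough C sketch, assuming a current GL 2.0 context and headers that expose the GL 2.0 tokens (glext.h where needed):

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Print the GLSL-related implementation limits mentioned above. */
void print_glsl_limits(void)
{
    GLint uniform_components, tex_image_units, varying_floats;

    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &uniform_components);
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &tex_image_units);
    glGetIntegerv(GL_MAX_VARYING_FLOATS, &varying_floats);

    printf("vertex uniform components: %d\n", uniform_components);
    printf("texture image units:       %d\n", tex_image_units);
    printf("varying floats:            %d\n", varying_floats);
}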

Offline compilation will become part of the next GL version: the Lean and Mean.

IMO, the shader model approach is better. I can use the reference rasterizer in D3D, compile my huge HLSL shader for SM 2.0, and it will tell me whether it fails. I don't need to have an SM 2.0 card in my hands.
Or we need GPU profiles: ATI could release a tool that I tell to emulate some specific GPU. nVidia already has their nvemulate.

Correct me if I'm wrong. Let's assume a high-level shader loop:

for (int i = 0; i < n; i++) {
    // loop body
}

where n is varying (not known at compile time).

This can compile successfully under shader model xyz, while that is not the case under an older shader model abc, where such semantics are not supported. Under shader model abc, this could work:

for (int i = 0; i < 10; i++) {
    // loop body
}

where the loop is unrolled.

Based on this concept, we don't really need to query a shader model version number given by the API creator, since we can simply compile the shader and it either fails or succeeds.
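And the compile-and-check step is only a few calls. A rough C sketch, assuming a current GL 2.0 context (on Windows these entry points would come from wglGetProcAddress):

#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Compiles `src` as a fragment shader; returns the shader object,
   or 0 on failure after printing the driver's info log. */
GLuint try_compile(const char *src)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    GLint ok = 0;
    char log[1024];

    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "compile failed:\n%s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}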

Thanks.

I can use the reference rasterizer in D3D, compile my huge HLSL shader for SM 2.0, and it will tell me whether it fails. I don't need to have an SM 2.0 card in my hands.
The downside is that not all SM2.0 hardware is identical. Some SM2.0 hardware (various ATi chips) is really close to SM3.0's featureset, but isn't able to call itself SM3.0. But your test would force you onto a different path, even though the hardware may actually be able to do what you need.

In short, the Shader Model system is not fine-grained enough to cover all hardware.

Originally posted by glfreak:
Based on this concept, we don't really need to query a shader model version number given by the API creator, since we can simply compile the shader and it either fails or succeeds.
Isn’t that what the wiki says?
What if it compiles successfully and runs in emulation?

[b]The downside is that not all SM2.0 hardware is identical. Some SM2.0 hardware (various ATi chips) is really close to SM3.0's featureset, but isn't able to call itself SM3.0. But your test would force you onto a different path, even though the hardware may actually be able to do what you need.

In short, the Shader Model system is not fine-grained enough to cover all hardware.[/b]
Yeah, some are 2.0a, 2.0b, or 2.x, and when I asked, people said you can't query those variants because D3D just returns a major and a minor version number.
The solution is to check the instruction count.
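For what it's worth, a rough C sketch of reading those caps in D3D9, assuming d3d is an already-created IDirect3D9 interface; the version DWORD only packs major.minor, so the 2.0a/2.0b/2.x differences have to come out of the PS20Caps block:

#include <stdio.h>
#include <d3d9.h>

/* Report pixel shader version and the ps_2_x instruction slot count. */
void report_pixel_shader_caps(IDirect3D9 *d3d)
{
    D3DCAPS9 caps;

    if (FAILED(IDirect3D9_GetDeviceCaps(d3d, D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_HAL, &caps)))
        return;

    printf("ps_%lu_%lu\n",
           (unsigned long)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
           (unsigned long)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    /* ps_2_0's baseline is 96 slots; 2.0a/2.0b/2.x parts report more here. */
    printf("instruction slots: %lu\n",
           (unsigned long)caps.PS20Caps.NumInstructionSlots);
}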