Shader Model

How do I get the shader model that a GLSL implementation supports? Is it the same as the HLSL shader model specification? What’s the equivalent?

I ask because I see shader models becoming a standard way to describe a card’s capabilities, and I want to make sure capabilities aren’t measured by a “non-standard” API (Direct3D).

Is this what you want?

glGetString(GL_SHADING_LANGUAGE_VERSION);
This returns a string with the GLSL version in “major.minor” format.
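
A minimal sketch of using it, assuming a current GL 2.0+ context (with older gl.h headers, GL_SHADING_LANGUAGE_VERSION may need GL/glext.h; the function name here is made up):

#include <stdio.h>
#include <GL/gl.h>

void print_glsl_version(void)
{
    /* Returns NULL (and sets GL_INVALID_ENUM) on contexts without GLSL. */
    const char *ver = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
    if (!ver) {
        printf("No GLSL support\n");
        return;
    }
    int major = 0, minor = 0;
    sscanf(ver, "%d.%d", &major, &minor);  /* e.g. "1.20" -> 1 and 20 */
    printf("GLSL %d.%d (full string: \"%s\")\n", major, minor, ver);
}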

Oh, not that. What I mean is the shader model number (3.0, 4.0, and so on), like in HLSL.

I think (though I’m not sure):
Something that should work on all cards: try to compile, link, and use (draw one pixel with) a representative GLSL shader.
Otherwise, hope that extension strings or wglGetProcAddress are enough.
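
For instance, a sketch of the compile-and-link part of that probe, assuming a current GL 2.0 context (on Windows these entry points must themselves be fetched via wglGetProcAddress; probe_glsl is a made-up name):

#include <GL/gl.h>

/* Returns 1 if the given vertex/fragment sources compile and link. */
int probe_glsl(const char *vs_src, const char *fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    GLuint prog = glCreateProgram();
    GLint ok = GL_FALSE;

    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    if (ok) {
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);
        glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    }
    if (ok) {
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    }
    glDeleteShader(vs);
    glDeleteShader(fs);
    glDeleteProgram(prog);
    return ok == GL_TRUE;
}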

But the standard is called shader model x; I’ve never seen a card review that says a card supports GLSL 1.20 or 1.30.

Card reviews say “supports OpenGL 2.0” or “2.1” or “3.0” in the tech specs, if GL is mentioned at all. Users of the two OpenGL games have to browse forums for info on whether those two games will run on card X, if they don’t have an nVidia card. It’s a gaming-on-DX world.
It’s true that shader models are much more meaningful; they’re down to the metal. GLSL is designed in such a way that features could be emulated/simulated, e.g. by unrolling loops and recompiling shaders every frame (though in practice implementations ultimately just return errors). Over-abstraction :slight_smile: .
Query for NV_ extensions; if it’s an NV card, you then know which extensions to look for. Otherwise, try compiling GLSL shaders (and hope you have luck). You can guarantee success with GLSL on non-NV hardware if you stick to SM1.0-like functionality. Heh, might as well just go the way of ARB-asm :P. Or constantly send bug reports to ATI on shaders that don’t compile; they’re quick at fixing things.
Or limit the non-nVidia path to OpenGL 3.0, and again hope for the best.

So, a forward-looking, optimistic approach would be to just try out all your shaders (compile, link, draw a pixel, fetch the pixel, compare), and fall back to a lower-quality path if any of them fails; rinse and repeat.
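
For example, the fetch-and-compare step could look like this, assuming a pixel was drawn at (0,0) with the test program bound; the expected color and the tolerance are illustrative:

#include <stdlib.h> /* abs() */
#include <GL/gl.h>

/* Read back the pixel at (0,0) and compare it with the color the test
   shader is expected to write; allow slack for precision differences. */
int pixel_matches(GLubyte er, GLubyte eg, GLubyte eb)
{
    GLubyte px[4] = { 0, 0, 0, 0 };
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
    return abs(px[0] - er) <= 2 &&
           abs(px[1] - eg) <= 2 &&
           abs(px[2] - eb) <= 2;
}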

P.S.: I haven’t tried running complex GLSL on ATI and Intel cards; I just recently bought some so that such testing can be done later. I’m looking at GLSL pessimistically because of the problems I had with non-NV cards until several months ago, and all the similar reports on forums online; I ultimately abandoned the idea of eye-candy on non-NV hardware. An all-or-nothing situation. Also, my non-hobby GL work requires just under SM2.0 functionality, for which ARB-asm is enough. I can only extend my condolences to devs that need SM3/SM4 eye-candy in their serious projects.

Try looking at the other extensions the card supports.

All Shader Model 2 cards should support EXT_framebuffer_object. Shader Model 4 cards will support EXT_texture_array. For SM3, I’m not sure.

SM 1.0 = (NV_texture_shader && NV_register_combiners2) || (ATI_fragment_shader && EXT_vertex_shader)
SM 2.0 = (NV_fragment_program && NV_vertex_program2) || ARB_draw_buffers
SM 3.0 = NV_fragment_program2 && NV_vertex_program3
SM 4.0 = EXT_gpu_shader4
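
As a sketch, that table could be turned into a query over the classic GL_EXTENSIONS string (the helper and function names are made up, and a robust version should match whole tokens rather than substrings):

#include <string.h>
#include <GL/gl.h>

/* Crude substring test against the extension string. */
static int has_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;
}

/* Highest D3D-style shader model suggested by the table above;
   0 if none of the marker extensions are present. */
int guess_shader_model(void)
{
    if (has_ext("GL_EXT_gpu_shader4"))
        return 4;
    if (has_ext("GL_NV_fragment_program2") && has_ext("GL_NV_vertex_program3"))
        return 3;
    if ((has_ext("GL_NV_fragment_program") && has_ext("GL_NV_vertex_program2"))
        || has_ext("GL_ARB_draw_buffers"))
        return 2;
    if ((has_ext("GL_NV_texture_shader") && has_ext("GL_NV_register_combiners2"))
        || (has_ext("GL_ATI_fragment_shader") && has_ext("GL_EXT_vertex_shader")))
        return 1;
    return 0;
}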

Or this:
http://www.opengl.org/wiki/Shading_languages:_How_to_detect_shader_model%3F

Standards are good. If Microsoft, or any other interest with sufficient influence, in concert with the ISVs/IHVs, is able to assign a meaningful, marketable number to each generation of hardware (and shading language), it’s a good thing for everyone in the business trying to make money.

OpenGL has to float somewhere above a convention like this, since by design and definition the API is not tied to a specific generation of hardware. Owing to the high level of its abstraction, its mandate is loftier and less specific about the details of the implementation; that’s a trade-off.

In short, OpenGL is not the standard anymore!

If by “anymore” you mean “since at least 10 years ago”, then yes.

I don’t necessarily agree with this. OpenGL runs on real chips, and if the real chips have common feature sets or limits across vendors (whether due to coincidence or a “steering hand”), there’s no overriding reason why GL shouldn’t make that information more clearly available to the programmer.

Couldn’t agree more, Rob.

Mr. Kilgard hit on the topic of “Direct3Disms” in his Siggraph Asia presentation which addressed some of the larger issue of “standardization” in a way that’s meaningful to folks porting from D3D. Making OpenGL a relatively painless, short-term transition for the majority of desktop game makers, for example, would not only pave the way to some semblance of standardization among the APIs but may also open the door to markets hitherto untapped due to budgetary constraints within increasingly long and costly development cycles. As I see it, anything that’s good for PC gaming in general is potentially good for OpenGL in particular.

P.S. One “OpenGLism” I’d like to see in D3D is a symmetric clip space cube :slight_smile:
