Shader Model 3.0 in OpenGL

Can anyone tell me since when OpenGL has supported Shader Model 3.0 (all its features), and whether it is exposed through ARB extensions or is already in the core?

Sorry if I said something stupid… I'm really trying to find out who offered SM 3.0 support first, OpenGL or DX, and maybe some of you can help me.

thanks

Shader Model (1.0, 2.0 or 3.0) is just D3D's way of grouping certain capabilities any given card has. OpenGL doesn't work this way. You can access everything SM3.0 provides for D3D in GLSL. You write your GLSL shaders and they run (or not): no shader models involved.

If you want, you can access the SM2.0 feature set with ARB_[vertex,fragment]_program. For features beyond SM2.0 (like SM3.0) you have to work with GLSL or use NVIDIA's extensions.
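As a concrete illustration (not from the original post), an application can check at run time which of these paths the driver exposes by scanning the extension string. A minimal C sketch, assuming GLEW and that a GL context has already been created and made current (window/context setup is omitted):

```c
/* Minimal sketch: check which shader-related extensions the driver exposes.
   Assumes a GL context is already current; context creation is omitted. */
#include <GL/glew.h>
#include <stdio.h>
#include <string.h>

static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    /* crude substring test; good enough for a sketch */
    return ext != NULL && strstr(ext, name) != NULL;
}

int main(void)
{
    printf("ARB_vertex_program:   %d\n", has_extension("GL_ARB_vertex_program"));
    printf("ARB_fragment_program: %d\n", has_extension("GL_ARB_fragment_program"));
    printf("GLSL:                 %d\n", has_extension("GL_ARB_shading_language_100"));
    return 0;
}
```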

Originally posted by KRONOS:
[b]Shader Model (1.0, 2.0 or 3.0) is just D3D's way of grouping certain capabilities any given card has. OpenGL doesn't work this way. You can access everything SM3.0 provides for D3D in GLSL. You write your GLSL shaders and they run (or not): no shader models involved.

If you want, you can access the SM2.0 feature set with ARB_[vertex,fragment]_program. For features beyond SM2.0 (like SM3.0) you have to work with GLSL or use NVIDIA's extensions.[/b]
Hmm, I understand… I already knew that OpenGL doesn't use the name SM 3.0, it just has the features to do the same things.

I want to know since when GLSL has been able to do the things that are only possible with “SM 3.0”, like dynamic branching, dynamic flow control, texture lookup, the vertex frequency stream divider, geometry instancing, etc… things that Microsoft and NVIDIA consider Shader Model 3.0.

Originally posted by armored_spiderman:
I want to know since when GLSL has been able to do the things that are only possible with “SM 3.0”, like dynamic branching, dynamic flow control, texture lookup, the vertex frequency stream divider, geometry instancing, etc… things that Microsoft and NVIDIA consider Shader Model 3.0.
Dynamic flow control and texture lookup (in the vertex shader) are possible with GLSL.
The vertex frequency stream divider and geometry instancing are not available in GL because, well, they are not needed (check out the ARB meeting notes and NVIDIA's developer site).

And I would be careful, because SM3.0 doesn't mean, for example, that you are able to do a texture fetch in a vertex shader. ATI's latest card, for example, “provides SM3.0” and thus texture fetch, but ATI doesn't provide any texture format usable for vertex texture fetch, which makes it useless in practice. SM3.0 doesn't mean that you automatically have texture lookup in the vertex shader.
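To show both features in one place, here is a small, hypothetical sketch (not from the thread): a GLSL vertex shader with a dynamic branch and a vertex texture fetch, wrapped in C in the usual way. The names heightMap and scale are made up, and it assumes GLEW with glewInit() already called on a GL 2.0 context.

```c
/* Hypothetical sketch: a GLSL vertex shader using two "SM3.0-class" features,
   dynamic flow control and a vertex texture fetch.  Whether the fetch runs in
   hardware still depends on the driver (see the remarks about ATI above). */
#include <GL/glew.h>   /* assumes glewInit() was called on a GL 2.0 context */

static const char *vs_src =
    "uniform sampler2D heightMap;  /* texture sampled in the VERTEX shader */\n"
    "uniform float scale;\n"
    "void main()\n"
    "{\n"
    "    vec4 pos = gl_Vertex;\n"
    "    if (scale > 0.0) {                      /* dynamic branch */\n"
    "        float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).r;\n"
    "        pos.xyz += gl_Normal * h * scale;   /* simple displacement */\n"
    "    }\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * pos;\n"
    "}\n";

GLuint compile_displacement_vs(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLint ok = GL_FALSE;
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    return ok ? vs : 0;   /* attach to a program object on success */
}
```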

Hmm, but I want to know since when it has been possible to do dynamic flow control and texture lookup with GLSL.

And where can I find the notes that explain why geometry instancing and the vertex stream divider are considered unnecessary? I read about them on NVIDIA's site… if they are useless, why would NVIDIA put them there?

Sorry if I am bothering you with stupid questions… I'm just a newbie in the shading language trying to understand something. Thanks for the help.

GLSL is just a language. It supports flow control and the rest simply by design.

What the underlying hardware can actually do can be queried through OpenGL extensions, independently of the GLSL language itself.
For example check out this document on how to distinguish different NVIDIA architectures by looking at the OpenGL extensions:
http://developer.nvidia.com/object/nv_ogl2_support.html

Other ways to query for features include the usual glGet* mechanism; for example, vertex texture lookup capability is queried with GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, where 0 means no vertex textures are possible.
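A hedged sketch of that check in C (assuming GLEW and a GL 2.0 / ARB_vertex_shader driver):

```c
/* Sketch: ask the driver how many texture units a vertex shader may sample,
   instead of relying on a "shader model" label. */
#include <GL/glew.h>
#include <stdio.h>

void report_vertex_texture_units(void)
{
    GLint units = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &units);
    if (units == 0)
        printf("No vertex texture fetch on this driver/hardware.\n");
    else
        printf("Vertex shaders may sample from %d texture unit(s).\n", units);
}
```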

Instancing is very usable in D3D; in OGL it will not provide such a big performance difference. But IMHO it would be good to have it, because it can significantly reduce function calls: in my case, from millions to maybe 20, and that is a very different CPU load. My app is fillrate bound though, so it would not help me, but…

Originally posted by shelll:
Instancing is very usable in D3D; in OGL it will not provide such a big performance difference. But IMHO it would be good to have it, because it can significantly reduce function calls: in my case, from millions to maybe 20, and that is a very different CPU load. My app is fillrate bound though, so it would not help me, but…
Hmm, I will read the NVIDIA OpenGL 2.0 support PDF… any questions, I will bring them to you guys :wink: Thanks for the help.

Can you explain better to me how instancing can be good in D3D and not in OGL? And what about High Dynamic Range? Is it used in OpenGL? If so, how is it done? And is there any feature of D3D that OpenGL can't do at all?

HDR can be done in OGL :wink: just Google it. For the instancing thing, just search this forum and you will find the answer. But I still want instancing in OGL…
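For reference, the common OpenGL route at the time was to render the scene into a floating-point color buffer and tone-map it in a later pass. A rough sketch, assuming GLEW plus EXT_framebuffer_object and ARB_texture_float support in the driver; the function name and parameters are made up for illustration:

```c
/* Sketch: create a 16-bit float color render target for HDR rendering.
   The scene is rendered into this FBO and tone-mapped to the screen in a
   separate fragment-shader pass (not shown). */
#include <GL/glew.h>

GLuint create_hdr_target(int width, int height, GLuint *color_tex_out)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* half-float color buffer: values above 1.0 survive until tone mapping */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* the caller should also attach a depth renderbuffer and check
       glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) before rendering */
    *color_tex_out = tex;
    return fbo;
}
```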

Originally posted by armored_spiderman:
Can you explain better to me how instancing can be good in D3D and not in OGL?
Explained here:
http://download.developer.nvidia.com/dev…_instancing.pdf

Originally posted by Relic:
[quote]Originally posted by armored_spiderman:
Can you explain better to me how instancing can be good in D3D and not in OGL?
Explained here:
http://download.developer.nvidia.com/dev…_instancing.pdf
[/QUOTE]I have read that ATI claims she doesn't use vertex texture fetch because she has another way to do it that is better and faster than VTF… it is called R2VB… of course ATI may just be defending herself, that's why I want your opinion about it.

She also claims that NVIDIA hardware doesn't have enough power to use vertex texture fetch massively, like in a game… the performance would be poor… apparently NVIDIA implemented it but didn't put too many resources into it.

“She”? :wink:
Dude, if R2VB means render to vertex buffer, that's something different from texture fetches inside the vertex pipeline.
Vertex textures are normally used for displacement mapping, but can also be applied for general purpose stuff.
I prefer having a hardware feature over not having it. It means I can use it creatively today.
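OpenGL had no R2VB extension at that point, but a roughly comparable path existed: copy a rendered image into a buffer object through a pixel buffer object and then source vertex data from that same buffer. A hedged sketch, assuming GLEW with ARB_pixel_buffer_object and GL 1.5 buffer objects; the RGBA-as-position packing and sizes are illustrative only, and `buffer` is a previously generated buffer object:

```c
/* Hypothetical sketch: approximate "render to vertex buffer" in OpenGL by
   packing the framebuffer into a pixel buffer object and then binding that
   same buffer object as a vertex array.  Assumes the positions were rendered
   into a float color buffer (e.g. an FBO like the one shown earlier). */
#include <GL/glew.h>

void render_to_vertex_buffer(GLuint buffer, int width, int height)
{
    /* 1. Pack the framebuffer straight into the buffer object:
          no round trip through system memory. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, buffer);
    glBufferData(GL_PIXEL_PACK_BUFFER_ARB,
                 width * height * 4 * sizeof(GLfloat), NULL, GL_STREAM_COPY);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, NULL /* offset 0 */);
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);

    /* 2. Reinterpret the same buffer as vertex positions and draw. */
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, NULL /* offset 0 in the buffer */);
    glDrawArrays(GL_POINTS, 0, width * height);
}
```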

Originally posted by shelll:
HDR can be done in OGL
Except that ATI doesn't support alpha blending on floating-point textures in OpenGL, even on the Radeon X1800 (it does, but only in software rendering)…

execom_rt: that is poor, if it's true that even the X1800 can't do that in HW :smiley: … but you can write your own blending shader :wink:

Originally posted by armored_spiderman:
Can you explain better to me how instancing can be good in D3D and not in OGL?
It's not that it's not useful, it's just much less useful than in D3D.

This question comes up regularly in other forms… try searching for it, but the outline follows:
Some OpenGL users noticed that GPU performance improves with batch size and that there's no point in making 2000 function calls when you could do, say, just 400.

Other users note, however, that GL has marshalling. Marshalling lets the driver take a certain number of calls (say 150, but that's just a random number) and send them all at once to the video card, eating much less CPU power than submitting them individually. Because of this, the improvement from instancing is expected to be so small that it's not worth implementing.

I also would like to have instancing but I see they have good reasons for not proposing it.
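To make the reasoning concrete, here is a rough, hypothetical sketch of the usual GL-side workaround: feed the per-instance data through a couple of very cheap calls and let the driver marshal the stream of commands into larger batches. The struct, attribute index and bound buffers are assumptions, not code from the thread:

```c
/* Hypothetical sketch: draw n instances of one mesh without an instancing API.
   Each iteration is just two cheap calls; the driver queues ("marshals") the
   commands and submits them to the card in batches.  Assumes GLEW/GL 2.0 and
   that the mesh's vertex and index buffers are already bound. */
#include <GL/glew.h>

typedef struct { GLfloat x, y, z; } Instance;   /* per-instance offset only */

void draw_instances(GLuint offset_attrib,   /* generic vertex attribute index */
                    GLsizei index_count,
                    const Instance *inst, int n)
{
    int i;
    for (i = 0; i < n; ++i) {
        /* per-instance data passed as a constant vertex attribute,
           which is much cheaper than re-specifying the whole mesh */
        glVertexAttrib3f(offset_attrib, inst[i].x, inst[i].y, inst[i].z);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT,
                       NULL /* offset into the bound index buffer */);
    }
}
```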

Originally posted by Obli:
[b] [quote]Originally posted by armored_spiderman:
Can you explain better to me how instancing can be good in D3D and not in OGL?
It's not that it's not useful, it's just much less useful than in D3D.

This question comes up regularly in other forms… try searching for it, but the outline follows:
Some OpenGL users noticed that GPU performance improves with batch size and that there's no point in making 2000 function calls when you could do, say, just 400.

Other users note, however, that GL has marshalling. Marshalling lets the driver take a certain number of calls (say 150, but that's just a random number) and send them all at once to the video card, eating much less CPU power than submitting them individually. Because of this, the improvement from instancing is expected to be so small that it's not worth implementing.

I also would like to have instancing but I see they have good reasons for not proposing it.[/b][/QUOTE]Hmm, thanks… so because of marshalling, instancing becomes “useless”, because marshalling can do the same thing and do it faster? That is what I have understood.

I'm used to calling ATI “she” =P … From what I am reading here, it seems that ATI hardware is more for gaming than for developing… it looks like the hardware lacks support for some OpenGL features. Are the features that ATI hardware lacks very important for 3D development? For you guys who work with this (I'm just a gamer trying to understand things better)… do any of you use ATI for work? In this area, how does NVIDIA compare with ATI?

I don't want to start an argument… just trying to get some information =)

Again, I thank all of you who are helping.

Originally posted by Relic:
“She”? :wink:
Dude, if R2VB means render to vertex buffer, that's something different from texture fetches inside the vertex pipeline.
Vertex textures are normally used for displacement mapping, but can also be applied for general purpose stuff.
I prefer having a hardware feature over not having it. It means I can use it creatively today.

Yes, I know it's not the same thing… but ATI says the result will be the same (what can be done with vertex textures can be done with R2VB)… but I agree it is better to have it than not to have it.

I just don't understand why NVIDIA didn't implement High Dynamic Range together with FSAA… it turned out to be a great point in favor of ATI hardware, at least for gamers.

I just don't understand why NVIDIA didn't implement High Dynamic Range together with FSAA… it turned out to be a great point in favor of ATI hardware, at least for gamers.
HDR and FSAA are not related. You can play games with or without HDR and with or without FSAA. The combination of the two may or may not work on some hardware.

The bigger problem is the huge and growing difference in hardware architecture. This forces developers to write separate code paths for ATI and NVIDIA. The results can be:

  1. The game works perfectly on hw-1 but runs poorly on hw-2
  2. The game works nicely on both, but doesn't use the latest features and effects
  3. The game doesn't use the extra features of hw-1 or hw-2… i.e. useless transistors on the GPU :slight_smile:

Because of this, most games don't use 3Dc or vertex texture fetch. Now we are facing the texture filtering issue on ATI. R2VB is a nice feature, but I'm afraid it will be used only in very rare situations… maybe in some very hardware-dependent software.

HW vendors MUST agree on feature sets. For example, FP textures on ATI: if something is a texture, it MUST have all texture properties and behave like a regular texture object, no matter whether the pixel format is byte, short, half or float. Lack of filtering is a very serious limitation.

HW vendors have two options… make deals about future hardware feature sets, or pay more people to go to game development companies and offer optimizations for free, just to make sure some new AAA game runs better on their hardware.

yooyo

Originally posted by yooyo:
[b] [quote]I just don't understand why NVIDIA didn't implement High Dynamic Range together with FSAA… it turned out to be a great point in favor of ATI hardware, at least for gamers.
HDR and FSAA are not related. You can play games with or without HDR and with or without FSAA. The combination of the two may or may not work on some hardware.

The bigger problem is the huge and growing difference in hardware architecture. This forces developers to write separate code paths for ATI and NVIDIA. The results can be:

  1. The game works perfectly on hw-1 but runs poorly on hw-2
  2. The game works nicely on both, but doesn't use the latest features and effects
  3. The game doesn't use the extra features of hw-1 or hw-2… i.e. useless transistors on the GPU :slight_smile:

Because of this, most games don't use 3Dc or vertex texture fetch. Now we are facing the texture filtering issue on ATI. R2VB is a nice feature, but I'm afraid it will be used only in very rare situations… maybe in some very hardware-dependent software.

HW vendors MUST agree on feature sets. For example, FP textures on ATI: if something is a texture, it MUST have all texture properties and behave like a regular texture object, no matter whether the pixel format is byte, short, half or float. Lack of filtering is a very serious limitation.

HW vendors have two options… make deals about future hardware feature sets, or pay more people to go to game development companies and offer optimizations for free, just to make sure some new AAA game runs better on their hardware.

yooyo[/b][/QUOTE]Hmm, first of all I want to thank all of you who have spent time answering my newbie questions; it is rare to see people deeply involved in 3D development and GLSL spending time on such common questions, and I appreciate it… forgive my English, I know it's bad =P

What I have been reading is that this HDR+AA capability that ATI claims to have needs software-side work too, just like the patches that have been released to make NVIDIA run HDR + AA in some specific games… so it is getting hard to understand how this works on ATI hardware…

So as not to drift from the topic… let's go back to GLSL :slight_smile:

Can anyone explain to me the difference between static branching and dynamic branching? :smiley: