For a better understanding of shaders

I’ve read a few small pieces of text about the shader specs: vertex programs, vertex shaders and such. But I still don’t understand at all what they are, or why there are so many different kinds of them. Sorry if this isn’t a very advanced topic.

First, a vertex program is called a program, not a shader, because it operates on vertices and not on fragments. Okay. But what about a vertex shader?
And a fragment program is a shader that operates on fragments, isn’t it? I’m really confused by all this terminology.

Finally, what’s the difference between all these vertex/fragment programs/shaders and the shading_language_100 extension? And what about shader objects? Do I need to combine them in order to get an (even simple) working shader?

Thank you in advance.

PS: Could a book like the OpenGL Shading Language book (http://www.amazon.fr/exec/obidos/ASIN/0321197895/qid%3D1120054415/402-7038615-9127342) help me more than the specs on this website?

That’s simple:
Vertex and fragment programs are the OpenGL extensions that define an assembly-level language for programming the vertex and fragment pipelines of a graphics chip. There are multiple versions because vendors invented them, then they were standardized into an ARB (OpenGL Architecture Review Board) extension, then newer versions came out as vendor-specific extensions again, and so on. They never made it into the OpenGL core because the committee preferred a high-level language.
Vertex and fragment shaders are the extensions which say that the same is possible with the higher-level GL Shading Language (GLSL).
Shader objects is the extension that defines the OpenGL functions needed to manage vertex and fragment shaders.
The silly shading_language_100 extension is defunct.
OpenGL 2.0 has the vertex shader, fragment shader and shader objects extensions in the core, plus a glGetString query for the shading language version.
The shading language itself is not defined in the OpenGL 2.0 spec but in a separate PDF.
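For a feel of what GLSL looks like, a trivial (hypothetical) pair of shaders, one vertex and one fragment, of the kind the shader objects functions compile and link together, could look roughly like this:

    // Vertex shader: transform the vertex and pass the per-vertex color along.
    varying vec4 color;
    void main()
    {
        color = gl_Color;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader (a separate source string): write the interpolated color.
    varying vec4 color;
    void main()
    {
        gl_FragColor = color;
    }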

The orange book is about the OpenGL Shading Language. I didn’t need it to learn GLSL, but used the spec.
You can find many GLSL examples on the internet to learn from.

Thank you a lot, this enlightened me!

Only two programmable pipelines on our graphics cards, contrary to RenderMan which had more (about 5 if I remember correctly), is that right?

About the shader/fragment/program terminology: “vertex shader” seems to be a loose name for a vertex program written in a high-level language. Isn’t that something that can confuse people? But this might have something to do with RenderMan, which used the name shader for all its programmable parts.

About programs/shaders: I think most programmers won’t use vertex/fragment programs anymore, in favour of their shader versions. I knew there was an assembly version and a more high-level language (a bit like C) but, in fact, I thought the assembly version had simply been abandoned.
If they all remain, it must be because there is still some benefit to using programs instead of shaders. But they all need to be compiled before being used, am I right? If so, is there any reason to have kept programs?

Okay for the shader objects.
Okay for the rest.
I think I won’t buy the book then.

Some of the confusion comes from the fact that OpenGL and DirectX have chosen somewhat different names for the same functionality:

OpenGL vertex program ~= DirectX vertex shader
OpenGL fragment program ~= DirectX pixel shader

These shaders/programs are written in an assembly-like language. Whether people use the word shader or the word program usually depends on their background; I usually use the word shader myself, but I try to use the word program when posting in these forums.

OpenGL GLSL ~= DirectX HLSL

These are programs written in a C-like language and include operations on both fragments/pixels and vertices. These C-like programs are compiled into equivalent assembly code.

I think more and more people are moving over from the assembly shaders/programs to GLSL/HLSL.

Originally posted by jide:
Only two programmable pipelines on our graphics cards, contrary to RenderMan which had more (about 5 if I remember correctly), is that right?
Yes, one vertex pipeline and one fragment pipeline, although all modern graphics cards have several parallel pipes for both vertices and fragments.

I’m curious about what you said about RenderMan; what are the 5 pipelines you mentioned?

/A.B.

Thanks for the clarifications. So do you mean a pipeline is a virtual machine that runs assembly code directly? Is that true? Isn’t it microcode instead (just like on any CPU)? Maybe I’m thinking about this the wrong way.
So, if you’re right, this means that writing vertex/fragment programs would be better and even faster! Since shaders are slower than the default path, the difference might not be insignificant.

About RenderMan, here is the full list:

. Light source shaders
. Volume shaders
. Transformation shaders
. Surface shaders
. Displacement shaders
. Imager shaders

So there were 6 in fact. Just tell me if you want more detail about the aim of each of them; I’ve got some quick descriptions.

RenderMan is what inspired the idea of a programmable pipeline on the hardware side. With RenderMan they used the name shader regardless of whether it operated on vertices or fragments.

The word shader just confuses me; that’s why I prefer to talk about programmable pipelines, so that everything is almost clear (apart from the assembly vs. high-level language distinction).

More questions to come… and there are some :slight_smile:

A shader is a program that is applied to vertices or fragments (pixels in DX). A lighting shader means the program computes lighting, a fog shader is responsible for fog calculation. If you want, you can write a program that does both things and more, and that will be a new shader (program).

What is meant by pipelines is that multiple fragments/vertices are computed at a time, so if a GF6 has 6 vertex pipes, 6 vertices are transformed and the shader is evaluated for each of them. The result for each vertex may differ depending on the input parameters.

I’d still recommend that you read a shading-specific book. Maybe even the GLSL spec available from the front page will answer most of the questions that might slip by in this conversation.

Okay, I’ve read about the first ten pages of the document. I understand a bit better what shaders are and why they exist, but that doesn’t explain everything for me. I hope you won’t mind if I continue this thread.

First, a shader replaces what the matching fixed-function stage would have done. Why? Wouldn’t it have been better if the shader were applied after what the fixed pipeline has done? Let me explain. A vertex shader must perform all the vertex work (vertex coordinates, texture coordinates, lighting…). So if I write a shader that needs to do lighting, I must also take the transformation process into account. That looks like a problem to me, but maybe that’s because hardware isn’t that simple. What do you think?

Moreover, wouldn’t it be better to let a shader be placed pre/in/post, meaning it could run before the fixed pipeline, replace it, or run after it has finished? That would allow more flexibility and granularity.
Also, I’ve noticed in some vertex shader demos that the result without shaders is slightly different from the result with shaders (I noticed some vertex displacement). This was in some Cg demos.

I don’t want to change what shaders are (I’m not so pretentious).

Finally, what can shaders do that the fixed pipeline cannot? I really don’t see it, especially for the vertex stage.

PS: Madman, you’re not entirely right. It’s not because cards have pipelines that they can do parallelism, it’s simply because they have several pipelines (but I might have misunderstood what you were trying to say).

Originally posted by jide:
First, a shader replaces what the matching fixed-function stage would have done. Why? Wouldn’t it have been better if the shader were applied after what the fixed pipeline has done? Let me explain. A vertex shader must perform all the vertex work (vertex coordinates, texture coordinates, lighting…). So if I write a shader that needs to do lighting, I must also take the transformation process into account. That looks like a problem to me, but maybe that’s because hardware isn’t that simple. What do you think?
What if I don’t want that? For example, in a vertex shader I can treat light positions in world space, and then I don’t need transformed light positions. Only one thing is tied to the fixed-function pipeline, and that is position invariance. So these two lines do the same job, but slightly differently:
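    gl_Position = ftransform();                                // guaranteed to match the fixed-function transform exactly
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;    // the same transform done by hand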

Originally posted by jide:
Finally, what can shaders do that the fixed pipeline cannot? I really don’t see it, especially for the vertex stage.

Wow… a lot more things…

  • 4-bone vertex weighting (skinning) for 3D characters,
  • transforming the view vector into texture (tangent) space for parallax or relief bump-mapping,
  • avoiding the modelview matrix by using 4 vertex attributes instead (good for pseudo geometry-instancing),
  • offsetting the vertex position along its normal by some coefficient, if you want to “blow up” a 3D model (see the sketch after this list)
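A minimal sketch of that last idea, assuming the application supplies an “inflation” coefficient as a uniform (the name blow is made up for the example):

    uniform float blow;   // hypothetical coefficient set by the application

    void main()
    {
        // Push the vertex outwards along its own normal, then transform as usual.
        vec4 displaced = gl_Vertex + vec4(gl_Normal * blow, 0.0);
        gl_Position = gl_ModelViewProjectionMatrix * displaced;
    }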

And finally, on new GPUs the fixed-function pipeline is removed from the die and replaced with shaders that emulate the fixed function.

yooyo

There is a tool over at 3Dlabs that generates ffp-equivalent shaders; performance seemed to be worse than the ffp approach though, IIRC…

Try generating a few shaders and examine the output; within a few minutes you’ll see that you might need something more, or that you can optimize some part of the code for your needs.

Concerning ‘Finally, what can shaders do that the fixed pipeline cannot? I really don’t see it, especially for the vertex stage’:
Vertex programs can be emulated on the CPU, as was done some time ago, but in that case you lose a lot of speed. The GPU can process vertices much faster than the CPU; moreover, it’s kind of a free vertex processor that you already have, so why not use it? Also, most of the time the shader approach looks more elegant than hacking the CPU to do that job.

Honestly, I haven’t used ffp lighting at all for a year or two, as I always have a cut-down, tweaked shader of a few lines that does the job.

Thanks guys.

Yooyo, I don’t understand everything you said, but I finally agree that it might be best for shaders to replace their corresponding fixed units. However, could you expand on your example about position invariance? To me your two lines look different as long as I don’t know what the first one is supposed to do. What is ‘ftransform()’? And what is that modelviewprojection matrix? I hope it’s normal to find that strange for a ‘normal’ GL programmer who is used to playing with the modelview and projection matrices, but not both at the same time.

About vertex attributes: I’ve already heard of them. Do you mean it’s possible to use ‘extra’ vertex data for purposes other than what it was intended for? Using the texture coordinates of unused texture units for something other than texturing looks like a good idea, but this, in some ways, pervades them.

Now, about what shaders can do but fixed pipelines cannot: vertex weighting can be done on the CPU side, as can, I guess, transforming the view vector into texture space (coordinates, I suppose). The next ones are more specialized and I think I need a better understanding of shaders.

Finally, good point about new graphics cards, I just didn’t know.

Now, Madman: are you saying that even if our graphics card renders a scene at a fairly low rate (say 10 fps), the card is still able to do other things like shaders without slowing the rendering down further?
Okay for the lighting part.

ftransform() is basically the way to tell the shader that you want the vertex transformed by the fixed-function pipeline. It is the only fixed functionality that is still available when using shaders. When you use it, you are guaranteed to get exactly the same result as with the ffp; when you calculate it manually (like the second line does) you may get slightly different values because of rounding errors.

gl_ModelViewProjectionMatrix is the product of the modelview and the projection matrices. It is just a shortcut that is provided because this product is used very often.

You can use texture coordinates for anything you want in a shader… Shaders are just programs; they are only required to output a position (vertex) or a color (fragment). You can pass any values you like to the vertex shader and from the vertex to the fragment shader. I don’t know what you mean by “pervades”, but perhaps that’s just me not understanding English :wink:

About what shaders can do that the fixed pipeline cannot: what can 3D cards do that you can’t do on the CPU? Of course you can do skinning on the CPU, but then you’d have to send the skinned mesh to the GPU for every instance. With vertex programming you just store the unskinned vertices in a buffer on the graphics card and you can draw many instances from them, just sending the bone positions to the card and letting the vertex shader do the skinning…
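A rough sketch of such a skinning vertex shader, assuming the application supplies bone indices and weights as generic attributes and the bone matrices as a uniform array (all names here are invented for the example, and a real shader would also skin the normal):

    uniform mat4 bones[32];        // bone matrices, uploaded once per frame
    attribute vec4 boneIndices;    // up to 4 bones influencing this vertex
    attribute vec4 boneWeights;    // matching blend weights (should sum to 1)

    void main()
    {
        vec4 skinned = vec4(0.0);
        for (int i = 0; i < 4; i++)
            skinned += boneWeights[i] * (bones[int(boneIndices[i])] * gl_Vertex);
        gl_Position = gl_ModelViewProjectionMatrix * skinned;
    }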

Or just imagine the simple case of per-pixel lighting. The fragment shader needs the normal vector, tangent and binormal, but the fixed-function pipe doesn’t produce these at a per-fragment level, so you need at least a vertex shader that forwards these values…
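Such a forwarding vertex shader is tiny; a sketch, assuming the application passes the per-vertex tangent as a generic attribute named tangent:

    attribute vec3 tangent;        // per-vertex tangent supplied by the application

    varying vec3 vNormal;          // interpolated per fragment
    varying vec3 vTangent;
    varying vec3 vBinormal;

    void main()
    {
        vNormal   = gl_NormalMatrix * gl_Normal;
        vTangent  = gl_NormalMatrix * tangent;
        vBinormal = cross(vNormal, vTangent);
        gl_Position = ftransform();
    }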

If you’re asking “what can I do in a shader that I can’t also do on the CPU?”, the simple answer is “nothing at all”. However, the GPU will typically outperform the CPU by a very comfortable margin – in the case of fragment shaders, it will even outperform the CPU by several orders of magnitude.

Hence, the question you should be asking is “what can I do in a shader that I can’t also do with the fixed-function pipeline?”. The answer here is “too much to mention”. Custom lighting models are just one of the more common examples that come to mind, but it doesn’t end there – not even close.

A complex shader executes around 250 instructions on each fragment (pixel in DX), and each operation is mostly vectorial (3-4 components). Now multiply that by your resolution, e.g. 1280x1024, and take overdraw into account, x1.5-x3.0. The final arithmetic should give you a pretty scary picture. Yet such a program will run in real time on high-end GPUs.
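To put rough numbers on it: 1280x1024 is about 1.3 million fragments, times an overdraw factor of, say, 2, times 250 (mostly vector) instructions gives on the order of 650 million shader instructions per frame, i.e. roughly 20 billion per second at 30 fps, for the fragment stage alone.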

I’m okay with everything you said.

Sorry for my bad wording, I wanted to say pervert, not pervades, meaning that it’s strange to use something intended for one purpose for other things.

Now, you speak of doing lighting with shaders. But what’s the main point of that, since OpenGL already provides lighting? Are you saying that one day or another we’ll only write shaders and simply forget almost all the other functionality of GL?

Another surely simple question: can we create geometry directly inside shaders, or do we absolutely need to use existing geometry (like vertex arrays, for example)?

If we can create geometry through shaders, does it mean all ‘normal’ GL functions (vertex arrays and such) will some day no longer be used, thus becoming deprecated?

Hmm, you must understand that there are different types of lighting, speaking in computer terms…

Computer graphics is a way to simulate real-world lighting, which is caused by atom/photon interactions. As it is virtually impossible to compute all photons quickly, we must make some approximations. Google for the Phong lighting model, one of the simplest ones (there’s a good article by Tom at delphi3d.net). But of course it is not enough. You can see lighting as a custom function F(…) that returns the color of the fragment and takes some parameters as input. These parameters may differ from one lighting model to another. Maybe you want to render cartoon-like scenes? Then you go with cel shading. For very realistic results you may use more advanced models. OpenGL implements only a basic per-vertex Phong model (also called the Blinn model), and it is simply not enough if you want different objects to look different. Try to implement a translucent/reflective/whatever surface using only OpenGL’s basic model! If you play computer games, you must have seen that new ones look much better than older ones; that’s because they use complex lighting models.
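To give an idea, a per-fragment version of that basic diffuse + specular (Blinn-Phong style) model could be sketched in GLSL roughly like this, assuming a vertex shader forwards the eye-space normal and position in the varyings vNormal and vPosition:

    varying vec3 vNormal;      // eye-space normal from the vertex shader
    varying vec3 vPosition;    // eye-space position from the vertex shader

    void main()
    {
        vec3 N = normalize(vNormal);
        vec3 L = normalize(gl_LightSource[0].position.xyz - vPosition);
        vec3 V = normalize(-vPosition);
        vec3 H = normalize(L + V);

        float diffuse  = max(dot(N, L), 0.0);
        float specular = pow(max(dot(N, H), 0.0), gl_FrontMaterial.shininess);

        gl_FragColor = gl_FrontMaterial.diffuse  * gl_LightSource[0].diffuse  * diffuse
                     + gl_FrontMaterial.specular * gl_LightSource[0].specular * specular;
    }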

Hope I could help you a bit. And really, that’s the sort of stuff one can find out by oneself.

Back to the shader/program stuff: it’s just a naming problem, just like fragment vs. pixel. I prefer the name program, but bad taste is bad taste :slight_smile:

I know quantum mechanics, at least a bit. If you excite an atom, its electrons will change state and emit quanta of energy…

I wasn’t talking about the different kinds of lighting models in computer graphics, but I wasn’t very clear either.

I think I must admit that shaders are better than the default provided functionality. They are a generalisation of it.

Thank you all for taking the time.

I think I understand them a bit better, but I still have some remaining questions that need answering.

First of all, I still haven’t written any shaders yet, so be indulgent with me…

As everyone knows, shaders run on the GPU side. Now, say I write a vertex shader for some good reason, let’s say to rotate and place some geometry. And say some other part of my program that must run on the CPU side needs that newly transformed geometry in order to do something with it (say collision detection).
How far can one go with that? Is it allowed, is it fine, or is it a big no-no?
If it’s not good, does that imply I’d do all the transform maths on the CPU side and then not need shaders at all? Or does it imply I’d have to do both (for example for better performance): transforms on both CPU and GPU?

You can’t do it easily; you’d have to find some way to get the information back, e.g. render the resulting vertices into a color buffer (e.g. a framebuffer or texture) and read those values back.
Normally for collisions etc. the meshes used are different from the actually rendered meshes, e.g. a couple of bounding spheres vs. a 5000-polygon mesh.
Personally I’m doing all the math myself on the CPU and passing the vertices in world space to the shader. The benefits:
* It’s simpler; fewer shaders are required, since each shader operates in world space.
* You only do the calculations once and then you can send the result to the GPU multiple times (think shadows, where you may be drawing the same mesh 20 times; if you do the calculations in the vertex shader then each vertex has to be recalculated again and again for each pass).
* CPUs are much more flexible and unlimited; GPUs can only handle a limited amount of data, e.g. GPUs often balk at skinning.

So that’s where it hurts. I’ll take that into consideration. Thanks for enlightening me.