T&L and vertex shaders...

I don’t speak English very well and I hope that you understand what I mean.
By default in OpenGL, are transform & lighting calculations done by the CPU or by the GPU?
Are vertex shaders a way to do T&L on the GPU?
How can I use T&L and vertex shaders?

Please help me!

By default in OpenGL, are transform & lighting calculations done by the CPU or by the GPU?
Are vertex shaders a way to do T&L on the GPU?

No. You don’t need vertex shaders to do hardware T&L.

Transformation and lighting calculations can be done by your own software, or you can use the modelview matrix and the built-in light and fog functions.
It is advisable, and often simpler, to use these instead of doing it yourself.

It’s possible that the driver does T&L in software; it’s also possible (if the hardware supports it) that it’s done in hardware.
By using the modelview matrix you leave it up to the driver.
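For example, a minimal fixed-function sketch (the matrix and light values here are just placeholders) could look like this:

[code]
#include <GL/gl.h>

/* Minimal fixed-function T&L sketch; matrix and light values are
   placeholders, not anything specific from this thread. */
void draw_scene(void)
{
    /* Transformation: set up the modelview matrix and let the
       driver transform the vertices (in hardware, if available). */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);

    /* Lighting: use the built-in light functions instead of
       computing vertex colors yourself. */
    GLfloat light_pos[] = { 1.0f, 1.0f, 1.0f, 0.0f }; /* directional */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

    /* Send untransformed (object-space) vertices, with normals so
       the lighting has something to work with. */
    glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}
[/code]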

Vertex shaders are an alternative to the fixed-function T&L, so it’s largely a separate question from the hardware/software T&L one.

If you want more advanced transformations (e.g. bones on the GPU, or calculating tangents and binormals for bump mapping), you can use vertex shaders.

Of course, if the hardware doesn’t support vertex shaders, you may get software fallback anyway.
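To give an idea of the “bones on the GPU” case, here is a hedged sketch of a GLSL 1.10 vertex shader as a C string; the two-bone setup and the weight attribute are invented for illustration:

[code]
/* Sketch only: a GLSL vertex shader blending between two bone
   matrices. The uniform/attribute names are invented for this
   example, not taken from any real engine. */
static const char *skinning_vs =
    "uniform mat4 bone0;\n"
    "uniform mat4 bone1;\n"
    "attribute float weight; /* weight for bone0; bone1 gets 1.0 - weight */\n"
    "void main(void)\n"
    "{\n"
    "    vec4 skinned = weight         * (bone0 * gl_Vertex)\n"
    "                 + (1.0 - weight) * (bone1 * gl_Vertex);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * skinned;\n"
    "}\n";
[/code]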

Originally posted by Sabonis:
How can I use T&L and vertex shaders?
By using GL you’re almost surely getting HW T&L by now. I’ve heard there are some cards, mainly on laptops, that don’t have HW T&L yet, although this is probably out of date.

For vertex shaders, it’s a bit more complicated. You need something like NV_vertex_program (really old), ARB_vertex_program (old), or GLslang (a bunch of extensions, finally supported, really good).

I suggest taking a look at ARB_vertex_program (you may like how it works), then switching immediately to GLSL.

Cards supporting these functionalities usually have them hardware-accelerated.

GLSL is exposed as a bunch of extensions, which should be:
ARB_vertex_shader & ARB_fragment_shader
ARB_shader_objects
… and possibly others. :wink:
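As a rough sketch of how those entry points fit together (assuming the extension function pointers have already been loaded from the driver; error checking omitted):

[code]
#include <GL/gl.h>
#include <GL/glext.h>

/* Create and bind a GLSL vertex shader through the
   ARB_shader_objects / ARB_vertex_shader entry points. */
GLhandleARB make_program(const char *vs_source)
{
    GLhandleARB shader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    glShaderSourceARB(shader, 1, &vs_source, NULL);
    glCompileShaderARB(shader);

    GLhandleARB program = glCreateProgramObjectARB();
    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);

    glUseProgramObjectARB(program); /* vertices now go through your shader */
    return program;
}
[/code]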

Thank you.
Can you recommend a book about pixel shaders?
Can you point me to any tutorials about pixel shaders?

For books, check out GPU Gems 1 & 2. Head over to Amazon and look at the selection, read the reviews…

For sample code, check out the NVIDIA and ATI developer websites. Look for the SDK samples.

I need something for an ATI GPU…
GPU Gems is specific to NVIDIA!

GPU Gems is specific to NVIDIA!
No, it is not :rolleyes:

Originally posted by Sabonis:
I need something for an ATI GPU…
GPU Gems is specific to NVIDIA!

Really?

There’s the ‘orange book’, called The OpenGL Shading Language. You can find it easily on amazon.com.

Hardware T&L was a revolution when it was introduced, because it allowed graphics cards to process all the T&L work on the GPU side, so the CPU was freed from doing it. The only way to use that was through glTranslate, glRotate and so on.
Vertex shaders can do the same, but you need to write your own shader for that purpose.

Doing it on the software side, you would simply use your own functions for translations, rotations and so on.
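For instance, the smallest vertex shader that just reproduces the fixed-function transform (no lighting) would be something like:

[code]
/* Minimal vertex shader reproducing the fixed-function transform.
   gl_ModelViewProjectionMatrix is the same matrix you set up with
   glTranslate/glRotate etc. */
static const char *minimal_vs =
    "void main(void)\n"
    "{\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    /* or: gl_Position = ftransform(); for a result that is */\n"
    "    /* guaranteed identical to the fixed function pipeline  */\n"
    "}\n";
[/code]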

Hope that helps.

Originally posted by jide:
[b]Hardware T&L was a revolution when it was introduced, because it allowed graphics cards to process all the T&L work on the GPU side, so the CPU was freed from doing it. The only way to use that was through glTranslate, glRotate and so on.
Vertex shaders can do the same, but you need to write your own shader for that purpose.

Doing it on the software side, you would simply use your own functions for translations, rotations and so on.[/b]
I don’t understand what you mean there.
As far as I can remember, the main point was in gaining more parallelism, allowing the CPU to process other data. Not to mention it was simply necessary to put the data near the GPU, and the fact that CPUs are slow, old, ugly von Neumann machines while GPUs exploit the stream processing paradigm.

That’s what I meant: if the GPU can take over some jobs that the CPU is then freed from, this automatically implies parallelism between the CPU and the GPU.

If I’m not wrong, in the early days of GL, transformations (with glTranslate, glRotate…) were all done on the CPU side. Hardware T&L brought the ability for those operations to be done on the graphics card.

Originally posted by jide:
… in the early days of GL …
:smiley:

It was not THAT long ago. Of course, workstation graphics had hardware T&L earlier, but for consumer products the GeForce and Radeon cards were the first ones that did T&L in hardware.

No, you are right. That is what was happening.
By the way, I now understand: you’re saying you need to transform the vertex yourself in the shader (or use the position-invariant functionality) with the transformation matrix.

Right, Overmind. It wasn’t really in the early days of GL. It was indeed the GeForce series (and probably the Radeon ones) that first provided hardware T&L.

You’re welcome Obli.

glRotate/glTranslate/glScale wasn’t, and I think still isn’t, done by the hardware, and you don’t need to use them in order to get HW T&L. Those calculations are pretty few compared to what the HW T&L takes care of, and that is the vertex * matrix multiplication that happens for each vertex you send to OpenGL.

How the card gets the matrix, via glRotate and glTranslate or via glLoadMatrix, doesn’t matter.

As long as you don’t send premultiplied (world-space) vertex coordinates, you will get hardware transform.
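In other words, these two sketches end up with the same matrix on the card, and the per-vertex multiply happens in the same place either way (the values are arbitrary):

[code]
#include <GL/gl.h>

/* Two equivalent ways to get the same modelview matrix to the card;
   the driver/hardware does the vertex * matrix multiply either way. */
void set_transform_with_commands(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
}

void set_transform_with_matrix(const GLfloat m[16])
{
    /* m holds the same translate*rotate product, computed by your own
       math code; column-major order, as OpenGL expects. */
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
[/code]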