GLSL vs. CG: linking shaders on the fly?



ebak
05-27-2004, 06:41 PM
So, I've been working extensively with NVidia's Cg, and I'm happy to see that GLSL has come as far as it has in terms of driver support. Given ATI cards' lackluster performance with Cg, I'm adding GLSL support for non-NVidia cards.

My question is this: With Cg, it is not necessary to link vertex shaders and fragment shaders - you just bind both when you need them. This is really useful for applications that need to mix and match a large number of vertex shaders with a large number of fragment shaders. In my case, I have about ten vertex shaders and five fragment shaders. With Cg, I never have to link (it happens implicitly when binding the shaders).

With GLSL, it seems I have two options.

1. During initialization, link and store a program for every combination of vertex/fragment shader - 50 programs total.

2. Link vertex and fragment shaders on the fly, right before I use them.

Which option would be best, or am I missing a third? It seems that both might be bad for performance, but I haven't done any testing. The second option is unappealing, as I'd have to re-query all my attribute and uniform locations after each link.

I really think I'm missing something here. Any ideas?

Ffelagund
05-27-2004, 10:19 PM
Hello. Without any doubt I would choose the first option. Even if you have 50 program objects covering all your shader combinations, you still have only one copy of each shader (the program objects share the shader objects), so in terms of storage the overhead is minimal, and linking everything at startup saves you from doing it on the fly (linking shaders into a program object isn't free).

ebak
05-31-2004, 08:26 PM
Thanks for the reply - you're right; after testing, I've found that linking is definitely not free: compiling and linking 50 programs takes around 45 seconds! That's crazy - the same shader set under Cg takes under 10 seconds.

I guess the GL driver is the limiting factor for GLSL compiling and linking. I'm using NVidia's beta GLSL driver on a GeForce FX 5900; perhaps that's the problem. Now if only I could get my hands on a Radeon 9800 Pro :)

Anyone else found performance problems with the NVidia drivers with GLSL?

sqrt[-1]
05-31-2004, 09:17 PM
I think (could be wrong) that early NVidia drivers don't actually "compile" their shaders until the link phase. So in your case both the fragment and vertex shaders would be recompiled at every link (as opposed to Cg, where you compile them just once).

I would try the very latest beta drivers (61.x, I think) from one of the leaked-driver sites on the web and see if the problem goes away.

John Kessenich
06-01-2004, 11:38 AM
Some compilation may be deferred until link time, and this is likely, as global optimizations can be done then. OGL2 only requires parsing and reporting language-level errors at compile time.

However, the deferred compilation stages only matter within the set of vertex shaders, or within the set of fragment shaders. It should be possible to do lightweight relinking between already-linked shaders if all the shader objects of a given type (vertex or fragment) are unchanged.

Doing so may take implementations a while to get right, though, as it might not fall out naturally from implementing the API.

JohnK

Korval
06-01-2004, 04:45 PM
I always found the linkage between multiple shader types to be somewhat forced in glslang. I love the idea of linking multiple vertex or fragment shaders together, but I don't like not being able to mix-and-match different vertex and fragment shaders together.

The idea behind it was to make sure the user had set up their varying variables correctly. But that could be done via a GL error or a query function after setting the separate vertex and fragment program states.

One more thing not to like about glslang. :rolleyes: