From what I understood from the GLSL spec, you can attach several fragment shader objects to a single program object.
Exactly one of the fragment shader objects attached this way may define a main() function.
Quoting from memory here, so I’m not sure how the separate fragment shaders communicate; perhaps the linking stage takes care of it, or perhaps you have to use shared variables…
If you haven’t already got the GLSL spec, go here and get it…
It’s similar to linking ordinary CPU programs. You attach more than one shader object to a program object, but only one of them contains a main() function. That shader can then call functions defined in the other shaders, and global variables with the same name and type are shared between them.
Basically, it seems you need to prototype any functions from shader B that you’re going to call in shader A, and then the linker resolves them at link time…
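A minimal sketch of how that looks, assuming two fragment shader objects compiled separately and attached to the same program (the function name `shade` and the uniform `lightColor` are made up for illustration):

```glsl
// lighting.frag -- helper module, deliberately has no main()
uniform vec3 lightColor;     // global shared with the other shader object

vec3 shade(vec3 baseColor)
{
    return baseColor * lightColor;
}
```

```glsl
// main.frag -- entry module
uniform vec3 lightColor;     // same name and type => one shared variable
vec3 shade(vec3 baseColor);  // prototype for the function defined in lighting.frag

void main()
{
    gl_FragColor = vec4(shade(vec3(1.0)), 1.0);
}
```

Both objects are attached to the program with glAttachShader, and the call to shade() is resolved when glLinkProgram runs.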
I don’t have the book, but one thing that allows programs to be assembled from small source-code modules is glShaderSource itself: you can pass an array of string pointers, which are concatenated in order to form the source of a single shader.
The other thing is that you can attach an arbitrary number of shader objects to build one program object. Of course, exactly one main() entry function must be present for the vertex stage and one for the fragment stage.
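To illustrate the string-array mechanism: after a call like glShaderSource(shader, 3, strings, NULL), the compiler simply sees the three strings joined in order, as if they were one file (the split points below are arbitrary, chosen only to show the idea):

```glsl
// -- strings[0]: shared declarations --
uniform vec4 tint;

// -- strings[1]: a utility function --
vec4 applyTint(vec4 c)
{
    return c * tint;
}

// -- strings[2]: the entry point --
void main()
{
    gl_FragColor = applyTint(vec4(1.0));
}
```

This lets you keep reusable snippets in separate strings on the application side without creating extra shader objects.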
If these are 2D textures, one way to increase the number of accessible 2D textures is to use cubemaps: you can pack six texture images into one texture unit, and addressing is done via the lookup direction vector.
Another way would be to pack the textures into the slices of a 3D texture and look up individual slices via the third texture coordinate.
The third way is multipass rendering.
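The first two options can be sketched in a fragment shader like this (old-style GLSL built-ins; the sampler names and the slice uniform are assumptions for illustration):

```glsl
uniform samplerCube faces;   // six 2D images packed into one texture unit
uniform sampler3D   stack;   // N 2D images stored as slices of a volume
uniform float       slice;   // which slice to read, in [0, 1]

varying vec2 uv;

void main()
{
    // Cubemap: the direction vector selects one of the six faces.
    vec4 a = textureCube(faces, vec3(uv * 2.0 - 1.0, 1.0));

    // 3D texture: the third coordinate selects the slice.
    // (Use NEAREST filtering along r to avoid blending adjacent slices.)
    vec4 b = texture3D(stack, vec3(uv, slice));

    gl_FragColor = mix(a, b, 0.5);
}
```

Multipassing needs no shader changes: you simply bind a different set of textures each pass and combine the results with blending or a final composite pass.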