GLSL and Shading

Personally, I prefer GLSL to, e.g., HLSL (regrettably, I cannot say the same about OpenGL versus DirectX :frowning: ). I use GLSL for GPGPU programming and write really complex shaders. Some of the features I would like to propose:

  1. #include directive. Currently supported by NVIDIA's GLSL compiler, where it works the same as #include in, e.g., C++. As shaders become more complex and share common code, why not provide #include functionality?

  2. Member functions in structures (similar to what is currently supported in Cg and in languages such as C++). IMO, this improves code readability and helps structure the code. User-defined constructors (apart from the default one) also fall into this category. As hardware makes further progress, virtual functions could be added, too. (A sketch of the current workaround follows this list.)

  3. A “noinline” (or “inline”) keyword on a function definition, hinting to the compiler whether or not the function should be inlined. This becomes relevant, IMO, as real function calls become available in hardware.

  4. Using textures as general-purpose read-only memory arrays (with explicit integer coordinates, etc.). This may require some additional hardware support, but it would be well suited to GPGPU computations.

  5. and() and or() functions for bvecN types (currently, I have found no way to do anything like that except either using vecN instead or performing && on each of the N components explicitly; a sketch also follows this list).

  6. Geometry Shaders (to maintain a kind of parity with DirectX 10).
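To make item 2 concrete, here is roughly what one has to write today: a free function over a struct. This is a minimal sketch; the Ray/rayPointAt names are made up for the example.

```glsl
// What GLSL offers today: a struct plus free functions that operate on it.
struct Ray {
    vec3 origin;
    vec3 dir;
};

vec3 rayPointAt(const in Ray r, const in float t)
{
    return r.origin + t * r.dir;
}

// The proposal would allow r.pointAt(t) and user-defined constructors
// instead (hypothetical syntax, not valid GLSL today).
```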
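And to make item 5 concrete, the explicit workarounds I mean (the helper names are mine; note that the built-in any() and all() only reduce a single bvecN to a bool, which is a different operation):

```glsl
// Componentwise logical AND of two bvec3s, written out by hand:
bvec3 andb(const in bvec3 a, const in bvec3 b)
{
    return bvec3(a.x && b.x, a.y && b.y, a.z && b.z);
}

// A shorter trick through the float constructors (true -> 1.0, false -> 0.0):
bvec3 andb2(const in bvec3 a, const in bvec3 b)
{
    return bvec3(vec3(a) * vec3(b));
}
```

A built-in and(a, b) / or(a, b) would make helpers like these unnecessary.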

1: No need. You can “include” by simply adding the shader files during the compilation step, and adding several shaders (of the same type) to the linking step. There’s no need to dirty the language with an explicit include when there are better ways to accomplish the same goal.
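For instance, glslang already lets one compilation unit declare a function that another unit defines; the linker resolves the call when both shader objects are attached to the same program. A minimal sketch, with made-up file names and functions:

```glsl
// ---- lighting_lib.frag: compiled as its own fragment shader object (no main)
float diffuse(const in vec3 n, const in vec3 l)
{
    return max(dot(n, l), 0.0);
}

// ---- main.frag: a separate compilation unit attached to the same program
float diffuse(const in vec3 n, const in vec3 l);  // resolved at link time

varying vec3 normal;
varying vec3 lightDir;

void main()
{
    float d = diffuse(normalize(normal), normalize(lightDir));
    gl_FragColor = vec4(vec3(d), 1.0);
}
```

Each file goes through its own glShaderSource/glCompileShader call, and both objects are attached before glLinkProgram.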

2: No OOP. Shaders aren’t nearly complex enough to need it. When we start writing 50,000 line shaders, maybe then (and only maybe) we can talk.

3: Meh. If the compiler does a good job, then it does a good job. It probably knows far better than you do what should be inlined and what shouldn’t.

4: That’s not going to happen.

5: There is no 5.

6: Might be useful.

7: Is there actually going to be some kind of hardware with this functionality?

  1. Agree with you, in general. But what if I write my shaders in RenderMonkey (and check them for compile errors there), then extract them from the .rfx file and use them for my own purposes? Should RenderMonkey (and other shader editors) offer a feature for concatenating several files (which, I think, is rather simple to implement)? If so, that would surely suit me…

  2. That’s quite OK. Procedural programming will do. The longest shader I’ve written is no more than 2000 lines :)

  3. Maybe. And I want to believe that compilers will soon be able to emit separate (non-inlined) functions. I’ve encountered compilers that were unable to do so even though the underlying GPU supported function calls.

  4. Let’s leave it for some “upper level” to emulate. Currently, I use 2D addresses when storing data in 2D textures (so as not to spend time on conversions). A sketch of this addressing arithmetic appears at the end of this post.

  5. …

  6. Well, IMO, that’s the only piece of functionality bvecN lacks.

  7. I think yes. Just visit

http://www.ati.com/developer
http://developer.nvidia.com

and view the latest presentations (from GDC 2006) - they’re talking about it a lot. I’ve also heard (from an acquaintance working at NVIDIA) that they’re planning to release DX10-ready chips by the Windows Vista release (which is taking place in 2006-2007; I don’t remember exactly when).
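Regarding 4: the addressing arithmetic I currently emulate looks like the following minimal sketch (the uniform names are made up; it assumes a square texSize x texSize data texture and a linear element index):

```glsl
// Emulating a read-only 1D array stored in a 2D texture.
uniform sampler2D data;   // the "memory"
uniform float texSize;    // the texture is texSize x texSize texels

vec4 fetchElement(const in float index)
{
    // Linear index -> integer 2D address...
    float row = floor(index / texSize);
    float col = index - row * texSize;
    // ...-> normalized coordinates, sampling at texel centers.
    vec2 uv = (vec2(col, row) + 0.5) / texSize;
    return texture2D(data, uv);
}
```

Storing the data with 2D addresses from the start (as I said above) skips the index-to-row/column conversion entirely.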

Should RenderMonkey (and other shader editors) offer a feature for concatenating several files (which, I think, is rather simple to implement)?
Well, yes. The multi-file compilation/linking features of glslang are not optional. Tools should support them.

I think yes. Just visit
I kinda look at geometry shader functionality like this. If both ATi and nVidia are working on said hardware, they have a vested interest in the following:

1: OpenGL supporting it.

2: OpenGL supporting it cross-platform (i.e., none of the DX8-era competing-extensions crap that they pulled).

Since they’re both prominent members of the ARB, and strong supporters of OpenGL, either they’re already working on an appropriate glslang extension, or they’ve dropped the ball. If they’re working on it, they don’t need us urging them to do it (and with the end of ARB meeting notes, we have no way of knowing whether they’re working on it at all). And if they’ve dropped the ball, no amount of urging from us will change matters.

So I tend towards a “have faith” approach. And if that faith is ultimately misplaced, it certainly won’t be the first time the ARB has screwed up/not kept up with the times.