Suggestions for a declarative shading language

I would like to open a discussion about a declarative shading language for OpenGL that I am currently working on. Basically, I would like to hear some suggestions (and criticism) that may help me in developing the language.
First of all, why am I doing this? I have always wanted to write a compiler of my own - this is a nice opportunity. Second, I am looking for easier and more intuitive ways of programming the shading hardware.
Basically, the SL is a functional language, where variables and functions are defined only once (no reassignment is possible). Also, there is no distinction between vertex and fragment shaders. The compiler will determine which expression has to be computed in which stage and set up varying variables accordingly. There is no dynamic branching or looping, but we can use list operations. The type system and standard library are similar to those of GLSL.
Each type of the language is a list, so we can use list operations (sum, mul, anything else?) on everything. List components are accessed using an integer index, where the list identifier is used just like a function. When calling functions with just one argument, the parentheses may be omitted. To access the x component of a vector v we can write v.x, v 0 or v(0).
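Just to make the access rules concrete, here is a rough Python analogy (the Vec class is purely illustrative, not part of the SL): v.x, v[0] and v(0) all reach the same component. The juxtaposition form `v 0` has no Python equivalent, so it is left out.

```python
# Illustrative sketch only: a vector whose components can be read
# by name (v.x), by index (v[0]), or by call (v(0)).
class Vec:
    _names = {"x": 0, "y": 1, "z": 2, "w": 3}

    def __init__(self, *components):
        self._c = list(components)

    def __getattr__(self, name):   # v.x
        return self._c[Vec._names[name]]

    def __getitem__(self, i):      # v[0]
        return self._c[i]

    def __call__(self, i):         # v(0)
        return self._c[i]

v = Vec(1.0, 2.0, 3.0, 4.0)
assert v.x == v[0] == v(0) == 1.0
```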
Here is some sample code that illustrates the basic points:

# vertex position
attribute vec4 position
# texture coordinate
attribute vec2 tex
# bone weight
attribute vec4 weight

# array of bone matrices
uniform mat4[4] bonemat
uniform tex2D base

# do we need to apply the skin?
const boolean applySkin = true

# transformed vertex position
# the compiler will determine the
# variable type from the expression (here vec4)
# also illustrates blocks
out vpos:
    # skinned
    skinned_vpos = sum([weight i * position * bonemat i for i in range(0, 3)])
    # not skinned 
    normal_vpos = position * bonemat 0
    # evaluate vpos
    return if applySkin skinned_vpos else normal_vpos

# texture lookup component (just to illustrate functions)
texLookup(coord as vec2) = tex2D(base, coord)

# fragment color
out color = texLookup tex


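For readers who want to trace what the `out vpos` block computes, here is a rough Python model of the skinned blend, using 2x2 matrices as stand-ins for mat4 and ignoring row- vs. column-vector conventions (all helper names here are made up for illustration):

```python
# Toy model of: skinned_vpos = sum of weight[i] * (bonemat[i] * position)
def mat_vec(m, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def scale(s, v):
    return [s * x for x in v]

position = [1.0, 2.0]
weight = [0.25, 0.75]                # one blend weight per bone
bonemat = [[[1, 0], [0, 1]],         # bone 0: identity
           [[2, 0], [0, 2]]]         # bone 1: uniform scale by 2

skinned = [0.0, 0.0]
for i in range(len(bonemat)):
    skinned = vec_add(skinned, scale(weight[i], mat_vec(bonemat[i], position)))

assert skinned == [1.75, 3.5]
```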
Also, there is an idea of encapsulating computation into different “objects” and binding them together in the last phase, but I haven’t given it much thought yet. Basically, it would be something like:

 object model1

  var1 = ...
  var2 = ... 
 end

 object model2
  requires model1
  
  var3 = model1.var1 + ...
 end

 object model3 extends model2
  var3 = ...
 end


 out 
  vpos = if modelblabla model1.var1 else model3.var3
  color = model2.var2
 end

but I don’t know if it can be useful at all :-/

No one interested?

:frowning:

How would you map your object model to the pixel and vertex shaders? What do the objects actually represent in terms of material or environmental properties? Do they make any sense at all - their scope is only local to the shader - I would expect objects to store some kind of persistent (non-local) state information.

As I already said, I don’t really know what to do with objects :slight_smile: It was just a design idea that came to my mind. Actually, I thought of objects as individual shader elements that compute some values - so to speak, objects as “solution methods”. One could define a set of objects (a library) and combine them into a unified model. For example, you could have objects that represent a lighting model, a texturing model, etc. Then you can invoke selected objects together by linking.
What is more important to me is the functional shading language I described above, as I have already started writing the compiler.
In any case, there is no division between vertex and fragment shaders.

In general I like the idea. Partly because I generally like functional languages :wink:

I’m not entirely sure how you intend to find out where the border between the fragment shader and the vertex shader will be.

About the functional syntax itself, I prefer the “map” construct to the “foreach” construct.

It works along the lines:
Given a function f(x), and an array A:
map(f, A) := { f(a) | a in A }

Combine this with a lambda construct, and the ability to use functions as objects themselves (like in scheme), and you have a very powerful language. Actually, the ability to use functions as objects that can be given as parameters is already enough to implement the “map”-function, but I’m sure the compiler can do a better job at this :wink:
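In Python terms, the construct Overmind describes might look like this (my_map is just an illustrative one-liner, not a proposed API for the SL):

```python
# map(f, A) = { f(a) | a in A }, combined with a lambda
A = [1.0, 2.0, 3.0]
doubled = list(map(lambda x: 2.0 * x, A))

# because functions are first-class values, map itself is trivial to write:
def my_map(f, xs):
    return [f(x) for x in xs]

assert doubled == my_map(lambda x: 2.0 * x, A) == [2.0, 4.0, 6.0]
```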

The functional stuff sounds cool.

What functionality does this add that we don’t have already? In other words, what would justify the expense of introducing a new language and programming model? By expense I mean the redesign of current compilers and the introduction of a new programming model to the shading masses. And perhaps more importantly, would this take any existing functionality away?

As you may recall, I suggested a high level meta language some time ago, but the idea was met with jeers and catcalls, then pelted with rotten tomatoes and cabbage (except for your kind response, for which I’m most grateful). I of course fully understand the criticism and the reluctance to embrace outspoken boat-rockers, but I like new ideas, even if they seem a bit crazy at first :wink:

Not to detract from your idea, but wouldn’t it be cool to have a GPU-/API-agnostic meta-language that could be compiled into a form that’s readily linkable/executable and yet easily persisted to disk? You could then layer a functional language or any language you like on top of it, much in the same way VB, Managed C++ and C# can be compiled to .NETs IL. Dunno, maybe it’s just me. I realize there’s not much incentive at this point for the IHVs, as the last thing they want is to turn their hardware into little black boxes, not much to market but price and performance, and features just get in the way. Heck, maybe that would stifle innovation. There’s always more than what meets the eye.

you have a very powerful language
But what does this power buy you? That’s the part I don’t get. What necessary (as opposed to “I like functional programming and think everything should use it”) features does this language provide that aren’t available in other shader languages?

I don’t understand functional programming. I do understand XSLT, and people tell me that it’s a functional language, so I supposed I do understand it to a degree. But I’ve never been able to comprehend, for example, make. So it might be best to take some time to justify what this idea provides that isn’t provided by more linear programming models.

Nice, it looks like we finally have a discussion here :slight_smile:

First of all, the question “why do we need it?” is really important. The answer is actually: we don’t. A functional SL won’t allow us to do anything that an imperative SL would prohibit - in the end, they all run on the same hardware.

However, there is a difference between imperative and functional programming that applies very well to graphics. Imperative programs are sets of statements that describe how to do something, while functional programs describe what has to be done. We have one declaration that is logically and syntactically bound to a single expression. Moreover, the syntax is much plainer and simpler than in imperative languages, and therefore more intuitive. I started using functional programming with Boo (a Python-like language for .NET), and it was amazing: I wrote a fairly complex algorithm for the first time using the new concepts, and it compiled without errors and even ran without errors! This is because functional code is less messy (provided the syntax is rich enough). So a functional SL will most probably reduce your development effort and make your shaders easier to understand.
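The “how” versus “what” distinction can be made concrete with a tiny Python comparison (the numbers are arbitrary):

```python
weights = [0.25, 0.25, 0.5]
values = [4.0, 8.0, 10.0]

# imperative: a sequence of statements describing HOW to accumulate
total = 0.0
for w, x in zip(weights, values):
    total += w * x

# functional/declarative: one expression stating WHAT the result is
total2 = sum(w * x for w, x in zip(weights, values))

assert total == total2 == 8.0
```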

Most importantly, a functional SL directly matches the underlying hardware model! It makes compiling and optimising much easier, because we already have a well-defined stream of vectors in our shader. I am not sure whether this will get us better performance (as compilers are fairly good now), but the compiler (driver) will surely be simpler, i.e. faster and better.

Finally, I am not fond of C (being a Pascal programmer myself :-), so I was looking for alternative approaches. Functional programming is great - really great - for performing computations, while imperative programming is great for dialogs and GUIs. In shaders, we have only several streams of computation, making them a perfect candidate for a functional SL.

Well, I am not trying to convert the whole market to a functional SL (it would be nice, however), I only want to explore the possibilities. I think that a functional SL will make shader development easier. Hopefully, I can finish the compiler in the following month (I still have my exam to consider :frowning: ) so you can try it out :wink:

Actually, I don’t really understand functional programming myself :stuck_out_tongue: I have never really used a functional SL, except for some elements in Boo, but I liked what I saw there very much!

To sum everything I said up:

  • I don’t believe that a functional SL will introduce any new functionality or take any existing functionality away.
  • I do believe that a functional SL will make development easier, the shaders nicer and the developers happier :slight_smile:

@Overmind: thanks for the map() function! It will save me from parsing the annoying ‘for’ statement. I will include it.

As for the division into vertex and fragment shaders, this is fairly simple. It is not my idea; I found it on the net. We assign frequencies to each expression, such as “vertex”, “fragment” etc. We know that color is computed for each fragment, so it has frequency “fragment”. This frequency is also assigned to each subexpression it contains. On the other side, we can determine the computational frequency of an expression from its subexpressions. So for each expression we have two frequencies: the frequency it can be computed at and the frequency it is required at. For example, in my shader above “tex” is provided at “vertex” frequency (it is an attribute), but required at “fragment”. This means that tex has to become a varying variable. The nice thing is that optimisations like moving code from the fragment to the vertex shader apply automatically.
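A toy sketch of this frequency inference in Python (the expression encoding, attribute table and function names are all invented for illustration; a real compiler would work on its own IR):

```python
# Expressions are nested tuples: ('attr', name) leaves or (op, *args) nodes.
VERTEX, FRAGMENT = 0, 1   # fragment is the "finer" frequency

attr_freq = {"position": VERTEX, "tex": VERTEX, "frag_coord": FRAGMENT}

def computed_at(expr):
    """Lowest frequency at which an expression CAN be computed."""
    if expr[0] == "attr":
        return attr_freq[expr[1]]
    return max(computed_at(a) for a in expr[1:])

def varyings(expr, required_at, out):
    """Collect subexpressions computable per vertex but needed per fragment."""
    if computed_at(expr) == VERTEX and required_at == FRAGMENT:
        out.append(expr)          # lift this into a varying variable
        return                    # its subtree runs in the vertex stage
    if expr[0] != "attr":
        for a in expr[1:]:
            varyings(a, required_at, out)

# color is required per fragment; tex can be computed per vertex,
# so it becomes a varying:
color = ("tex2D", ("attr", "frag_coord"), ("attr", "tex"))
found = []
varyings(color, FRAGMENT, found)
assert found == [("attr", "tex")]
```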

Zengar, are you the person who wrote the compiler that converts a “Pascal shading language” to ARB_vp and ARB_fp? It was more than just a compiler; it was an IDE like RenderMonkey or Shader Designer.

For shaders, I think compiler sophistication should be kept at a minimum.
If the ARB wants, and it was hinted in that GL 3.0 discussion, they could have a binary target. Then anyone who wants can write a HLSL.
This is possible with D3D. I’m sure you all know D3D has an offline compiler (vsa.exe and psa.exe), and the binary format is probably documented somewhere.

No, I am not the one who wrote it :slight_smile: I am just learning :wink: However, I know that SL and I liked it. It was basically GLSL, but with Pascal syntax. An intermediate format for shaders would be a very nice thing. However, it must be something simple and something that maps directly to hardware in terms of execution. For the last year, I have been trying to find an intermediate representation that would be flexible enough (my interest in SLs comes partly from there). Currently, there is an idea of an SSA-form language that is very high-level but should still be very cheap to recompile (to native binary). This is all only research though (I even tried to write a paper on it some time ago; I don’t know if I still have it somewhere). Unfortunately, this is something I have to do in my free time as I study linguistics (and am just too tired most of the time), so it does not develop very quickly…
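For readers unfamiliar with SSA (static single assignment), the idea is simply that every value is named exactly once, which makes dataflow explicit and cheap to analyse. A contrived Python illustration:

```python
a, b, c = 1.0, 2.0, 3.0

# ordinary imperative style: x is reassigned, so its meaning
# depends on where in the program you look
x = a + b
x = x * c

# SSA-like style: one definition per name, dataflow is explicit,
# so a recompiler can pattern-match values without tracking time
x1 = a + b
x2 = x1 * c

assert x == x2 == 9.0
```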

You’ve certainly put some thought into this. It’s very interesting. A few years back I created an UnrealScript compiler, partly for the challenge and partly because I just like the Unreal engine design. It was the most difficult thing I’ve ever done, that’s for sure. One thing that became abundantly clear was the limitations of C++ within the context of a game scripting engine. Having things in the script (runtime type info and the single inheritance) hardwired to the native side makes things ugly and awkward at times. Nonetheless, I thought Tim Sweeney’s addition of state objects was very clever and incredibly useful. If you’re not familiar with it, a state is just an encapsulation of virtual functions that group into what amounts to an array of virtual function tables for the class. A class can be in any one of its user-selectable states, and the particular version of a replicated virtual function in that state is the one used when that member function is invoked, either from within the script or from the engine. This is totally cool if you ask me. I would welcome something like this in any language :slight_smile:
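A rough sketch of the state-object idea in Python (class and method names are made up; UnrealScript’s actual syntax and dispatch machinery differ):

```python
# An object is in exactly one named state; a "virtual" call on the
# object dispatches to that state's version of the function.
class Pawn:
    class Idle:
        def tick(self, pawn):
            return "standing around"

    class Attacking:
        def tick(self, pawn):
            return "swinging at target"

    def __init__(self):
        self.state = Pawn.Idle()

    def go_to_state(self, state_cls):
        self.state = state_cls()

    def tick(self):                 # state decides which version runs
        return self.state.tick(self)

p = Pawn()
assert p.tick() == "standing around"
p.go_to_state(Pawn.Attacking)
assert p.tick() == "swinging at target"
```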

If the ARB wants, and it was hinted in that GL 3.0 discussion, they could have a binary target. Then anyone who wants can write a HLSL.
Yes, an intermediate language would be really cool. While I like GLSL, I don’t like the idea that we are restricted to a single language.

What necessary features does this language provide that aren’t available in other shader languages?
What necessary features does GLSL provide, that aren’t available in the assembler targets?

But what does this power buy you?
Simpler development :wink:

Of course, I don’t want another GL extension to expose a functional shading language. A third party compiler is the appropriate form of introducing such a language.

For lack of an intermediate language, this compiler will have to write GLSL code. That’s unfortunate, but not that bad either…

A bit OT:

One thing that became abundantly clear was the limitations of C++ within the context of a game scripting engine.
Actually, I’m currently working on an extension to C++ to add exactly these “missing features” you’re talking about (reflection, scripting and some advanced object oriented concepts), while maintaining high performance and a clean syntax on the native side :wink:

Take D (www.digitalmars.com) :wink:

What necessary features does GLSL provide, that aren’t available in the assembler targets?
2 things.

1: Glslang provides vertex texturing, looping, and various other things. Only nVidia’s assembly extensions expose them.

2: I’m not a fan of high-level shading languages, particularly one built into drivers. I am opposed to glslang as it is defined, and I think it should be much lower level.

Perhaps we should refine what is meant by “high-level”. To me, “high level” necessarily means a syntactic construct sufficient to allow for ease of coding while at the same time providing the compiler with enough context to generate optimized code. And therein lies the trade-off: you can only get so “high” before you lose control at a finer granularity and thus alienate the API and the hardware.

Obviously compiler complexity is an issue here, since IHVs don’t have infinite time and resources to devote to compiler construction; and since capabilities and shading models are changing so quickly, a good idea today may be a bad one tomorrow. That’s why I believe that the closer we can stay to the hardware, the better; it would seem that the current shading model and syntax is pretty well suited to that task, and it’s one that we’re already very familiar with.

P.S. A functional language (something like LISP) for e.g. procedural textures would be neat, as a compact syntax for composition.

P.P.S. What I meant by a high-level meta language was that it supplies “enough” context to the compiler to generate “good” code; that’s all.

I’m not a fan of high-level shading languages, particularly one built into drivers.
That’s exactly what I’m talking about. No one in this thread is suggesting building a new shading language into the drivers, just making an alternative language :wink:

About vertex texturing: That’s only because the ARB assembly targets won’t be updated anymore, while GLSL is. Compare introducing GLSL to extending the ARB assembly targets to provide vertex texturing in terms of pure features and driver stability, and you’ve got a clear winner :wink:

My point was that I don’t consider “what necessary feature does … provide that … doesn’t have” a valid argument, because it can be used against any high-level language, while it should be clear that we need high-level languages nonetheless to manage the increasing complexity of shaders.

As far as vertex texturing goes, all D3D10 class hardware will have to support it, for better or worse. One thing I really love about this is that the feature playing field is going to be leveled to a much greater extent (at least initially). I think the only area there’s some leeway in is multi-sampled render targets, or something like that.