SlangFX anyone interested?

I recently had to use Direct3D 9 for a project (don't close this thread yet…) and I really liked the effects framework DX9 offers to shader developers.
Now I’m back using OpenGL, but I really miss the effects…
So I'm considering writing an "SlangFX" library built on top of the GL Shading Language that offers similar capabilities (state setup, techniques, passes, …). I was wondering if anyone would be interested in using such a library (I would not make it as "generic" if it's only used for this project), and also whether anyone would be interested in contributing, as it looks like a lot of work. (I plan to release the source under the GPL or LGPL.)
Finally, I would like to take the effects a bit further so you can script at all granularity levels, including per-primitive and per-frame granularity. (These parts would obviously be evaluated on the CPU, but that way the system is transparent to use at all levels.)
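To make the idea a bit more concrete, this is roughly how I imagine an application would drive such a library from C++; every class and function name below is made up for illustration, nothing of this exists yet:

#include <cstdio>
#include <string>

namespace SlangFX {
// Hypothetical effect object; a real implementation would parse the effect
// file and hand the embedded glSlang code to the driver's compiler.
class Effect {
public:
    explicit Effect(const std::string& path) : m_path(path) {}
    bool selectTechnique(const std::string& name) { m_technique = name; return true; }
    int  numPasses() const { return 1; }
    void beginPass(int i) { std::printf("apply state of pass %d of %s\n", i, m_technique.c_str()); }
    void endPass() { /* restore render state */ }
private:
    std::string m_path, m_technique;
};
}

int main()
{
    SlangFX::Effect fx("test.slangfx");
    fx.selectTechnique("TestTechnique");
    for (int i = 0; i < fx.numPasses(); ++i) {
        fx.beginPass(i);
        // draw the geometry for this pass here
        fx.endPass();
    }
}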

I am aware of CgFX, but it's not really an OpenGL solution (it just seems to be a DirectX rip-off built on the Cg compiler) and is somewhat NVIDIA-specific. (And as far as I know it only works on Windows.)

Charles

I’d be interested in that.

I'm interested too, but I think there should be a possibility for low-level server-side memory/buffer support, for example for render-to-* techniques.

I'm interested in such a library too. I think this is exactly one of the things missing in the OpenGL community.
Currently I'm working on a similar system, so maybe I'm willing to contribute :slight_smile:

Such a render management system could also be somewhat API/platform-independent: on one side the data-driven, mostly API-independent state management, and on the other side the platform-dependent state-executing part.

It's not clear to me what you mean by "take the effects a bit further". What do you mean by per-primitive and per-frame granularity?
I think the additional low-level support (per primitive) is rather uninteresting. Really interesting and difficult are the higher levels, like resolving effect dependencies (or shader bouncing, as it is called in the gamedev thread), especially if you want to keep it generic.

some interesting links concerning this issue:

several shader library talks from ATI
http://www.gamedev.net/community/forums/topic.asp?topic_id=169710
http://www.gamedev.net/community/forums/topic.asp?topic_id=217522
http://xengine.sourceforge.net/

This is something I've wanted ever since we were finally able to use GLSL. Of course I'm writing my own material files for my engine, which are very similar to the FX files. They aren't yet at the point where I can fully use them the way I want, but it won't be like that for long. :smiley:

-SirKnight

Yes!

It is the main thing missing from OpenGL compared to DX. Needless to say, I've considered rolling my own too :slight_smile: .

Well, the problem of rendering to textures is something I would use the per-frame granularity for. In the per-frame script you can do stuff on the CPU (since we added that part ourselves), so there you can make calls that, for example, assign to textures:

myCube = RenderCubeMap(worldSpaceOrigin)

The problem with adding this is that these sorts of things would be callbacks into the application using the system, which would require every user to implement lots of callbacks to support them all. (Well, I guess you could cut down on callbacks by having only something like "RenderSceneForFrustrum" and just calling it 6 times for a cubemap, etc… Still, it would require some extra "meta" state: say you want to render to a sniper scope with some "heat" effect; that would require all shaders to be overridden by the heat effect, and it's not really clear to me how this should be done… You could add techniques named blahblah_Heat to every effect or something similar, but that's a bit hacky.)
In my current effects I do some stuff like this by using annotations; at load time I scan all annotations and support stuff like:

texture cubeEnv <string function="cubeenv"; string type="cube";>;

But this is obviously very limited and only supports a few hardcoded generation types. For shadows I use

technique TestMaterialGF4 <string ShadowTechnique="DefaultGF4Shadows";>

This is somewhat more generic, but still requires the engine to recognize “ShadowTechnique”.
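To make the annotation scanning a bit more concrete, this is roughly what the load-time dispatch looks like on the C++ side (the Annotation struct and the generator registry below are simplified and invented for the example, not the actual engine code):

#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Simplified annotation: just a name/value string pair.
struct Annotation { std::string name, value; };

// Registry mapping the value of a "function" annotation to a CPU-side generator.
static std::map<std::string, std::function<void(const std::string&)>> s_generators;

void scanTextureAnnotations(const std::string& texName,
                            const std::vector<Annotation>& annotations)
{
    for (const Annotation& a : annotations) {
        if (a.name != "function")
            continue;
        auto it = s_generators.find(a.value);
        if (it != s_generators.end())
            it->second(texName);                      // e.g. render a cube env map into texName
        else
            std::printf("unknown generator '%s'\n", a.value.c_str());
    }
}

int main()
{
    s_generators["cubeenv"] = [](const std::string& t) {
        std::printf("generating cube env map for texture %s\n", t.c_str());
    };
    // corresponds to: texture cubeEnv <string function="cubeenv"; string type="cube";>;
    scanTextureAnnotations("cubeEnv", { { "function", "cubeenv" }, { "type", "cube" } });
}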

Charles

edit:

Hmm, thinking about the assign-to-textures thing, it could probably be handy at other levels too, allowing generic expressions on textures. If they are in the global state they could just be evaluated at load time, allowing you to mix two textures, generate normals from bumps, … all with only a load-time overhead.
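For example, the "generate normals from bumps" case is just a CPU filter over the height map at load time, something along these lines (plain C++ sketch, not tied to any particular library):

#include <cmath>
#include <cstdint>
#include <vector>

// Build an RGB normal map from a single-channel height map at load time.
// heights holds w*h values in [0,1]; the result is w*h*3 bytes with the
// normal packed into 0..255 as usual.
std::vector<std::uint8_t> normalsFromHeights(const std::vector<float>& heights,
                                             int w, int h, float strength = 2.0f)
{
    std::vector<std::uint8_t> out(w * h * 3);
    auto H = [&](int x, int y) {
        x = (x + w) % w; y = (y + h) % h;            // wrap at the borders
        return heights[y * w + x];
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float dx = (H(x + 1, y) - H(x - 1, y)) * strength;
            float dy = (H(x, y + 1) - H(x, y - 1)) * strength;
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            float n[3] = { -dx / len, -dy / len, 1.0f / len };
            for (int c = 0; c < 3; ++c)
                out[(y * w + x) * 3 + c] = std::uint8_t((n[c] * 0.5f + 0.5f) * 255.0f);
        }
    return out;
}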

Originally posted by Pentagram:
[b]

myCube = RenderCubeMap(worldSpaceOrigin)

The problem with adding this is that these sorts of things would be callbacks into the application using the system, which would require every user to implement lots of callbacks to support them all. (Well, I guess you could cut down on callbacks by having only something like "RenderSceneForFrustrum" and just calling it 6 times for a cubemap, etc… Still, it would require some extra "meta" state: say you want to render to a sniper scope with some "heat" effect; that would require all shaders to be overridden by the heat effect, and it's not really clear to me how this should be done… You could add techniques named blahblah_Heat to every effect or something similar, but that's a bit hacky.)
[/b]
Yeah, I think these are the non-trivial things to get right and make generally usable.
My (not yet finished) approach is to have meta states / equivalence classes for vertex attributes, uniforms and textures. The nice thing is that you also get a looser coupling between the effect description, the material and the render object description.
Each meta state consists only of a name and data type info. Meta states can be registered/created at runtime.

Then, at the bottom, I have the render technique description, with general render state settings and meta states for the required parameters of the technique.
On top of that I have a material defining some of the meta states (like textures and some uniforms).

And on top of that I have a MaterialBinding for mapping vertex attribute meta states to concrete render state objects to bind the vertex data of the mesh.

Render technique, material and vertex data/mesh are data-driven; the MaterialBinding is generated.

For all meta states that are required but still undefined after material binding, you can register callbacks; otherwise the object cannot be rendered using the material.
Well, the callbacks and undefined-meta-state handling are still in the implementation phase :slight_smile: but I think some nice things are possible, like callbacks or providing conversions between meta states and so on.

I think you really have to provide some type of callback or other feedback coupling to the rest of the render system. But with template policies and a collection of predefined helper methods that should hopefully be possible in a generic way.
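A rough C++ sketch of what I mean by meta states plus callbacks (the names are invented, this is not the actual code):

#include <functional>
#include <map>
#include <string>

// A meta state is only a name plus type info; the concrete value is bound later.
enum class MetaType { Float, Float4, Float4x4, Texture2D, TextureCube };
struct MetaState { std::string name; MetaType type; };

class MetaStateRegistry {
public:
    // Meta states can be registered/created at runtime.
    void registerState(const MetaState& s) { m_states[s.name] = s.type; }

    // Callback that can produce/bind the data for a still-undefined meta state.
    using Resolver = std::function<bool(const std::string& name)>;
    void registerResolver(const std::string& name, Resolver r) { m_resolvers[name] = r; }

    // Returns false if a required meta state can neither be found nor resolved;
    // in that case the object cannot be rendered with this material.
    bool resolve(const std::string& name) {
        auto r = m_resolvers.find(name);
        return r != m_resolvers.end() && r->second(name);
    }
private:
    std::map<std::string, MetaType> m_states;
    std::map<std::string, Resolver> m_resolvers;
};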

And regarding the heat shader example: I think you could handle it by imposing some restrictions on the structure of the shaders.


This is somewhat more generic, but still requires the engine to recognize “ShadowTechnique”.

Yes, but you need a feedback binding to the render system anyway. I just think it should be minimal and easy to plug in.
And with something like renderScene(Material*, Frustum*), as you already mentioned, the most important cases like shadow maps or reflections can be handled.
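As a sketch, the whole feedback coupling could be as small as one interface the engine implements (again invented names):

#include <cstdio>

struct Frustum { /* planes, origin, fov, ... */ };
struct Material;   // material/effect to force on every object, or none

// The single hook the effect system needs from the engine.
class SceneRenderer {
public:
    virtual ~SceneRenderer() {}
    virtual void renderScene(const Material* overrideMaterial, const Frustum& f) = 0;
};

// Example use inside the effect system: fill the six faces of a cube map.
void renderCubeMap(SceneRenderer& engine, const float origin[3])
{
    for (int face = 0; face < 6; ++face) {
        Frustum f;   // set up a 90 degree frustum for this face, centered at origin
        // bind the cube map face as the render target here
        engine.renderScene(0, f);   // 0 = let every object use its own material
        std::printf("rendered cube face %d at (%f %f %f)\n",
                    face, origin[0], origin[1], origin[2]);
    }
}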


Hmm, thinking about the assign-to-textures thing, it could probably be handy at other levels too, allowing generic expressions on textures. If they are in the global state they could just be evaluated at load time, allowing you to mix two textures, generate normals from bumps, … all with only a load-time overhead.

What do you mean by this? Assigning meta states to textures?

Maybe you could do this using a scripting language such as Lua (www.lua.org) or Python. I would be very happy to define my shaders in Lua :slight_smile:
OpenSceneGraph has a library called osgFX which handles different rendering techniques for the same effect, but it is not automatic; you have to program all possible solutions to fit each combination of extensions. But it is nice :stuck_out_tongue:

I'm interested because maybe I can use it in my projectA.
Perhaps I’ll participate.

At the moment I'm implementing an effect file grammar and compiler similar to DirectX fx files or CgFX (using http://spirit.sourceforge.net ).

While looking into the DirectX docs and CgFX for inspiration, several questions emerged.

First, it seems that CgFX is also available for Linux; at least there are libCgFX.so and libCgFXGL.so libs in the Linux Cg package. But I don't have Linux installed and haven't tested it.

So what reasons are there to build another effect file system, apart from minor syntax changes?
CgFX doesn't support glsl at the moment; does anybody know if and when it will be supported?
The CgFX support is labeled as beta. Has anybody used it? What exactly does beta mean here?

Then, does anybody have links to effect file docs/tutorials/descriptions? The DirectX and CgFX docs weren't so helpful. Especially some parts of the API and the semantics are not clear to me. Are there only the built-in semantics, or can new ones be defined by the application?

@romanoGL:
At first I also thought it would be nice to specify effects in Lua. But if you look at the CgFX syntax and compare it to a possible Lua solution, I think the special FX-file syntax is cleaner and more readable. Nevertheless, I guess it wouldn't be too complicated to change the binding from the special syntax to Lua if you have the render states abstracted into C++ objects…
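By "render states abstracted into C++ objects" I mean roughly the following (invented names); whichever frontend you use, FX-style grammar or Lua, it only has to fill in these structures:

#include <string>
#include <vector>

// Frontend-independent description of a technique: whichever parser is used
// (FX-style grammar, Lua, XML, ...) only has to produce these objects, and the
// runtime side never sees the syntax it came from.
struct RenderStateAssignment { std::string state, value; };   // e.g. "BlendEnable" = "true"

struct PassDesc {
    std::string name;
    std::string vertexShaderSource, fragmentShaderSource;
    std::vector<RenderStateAssignment> states;
};

struct TechniqueDesc {
    std::string name;
    std::vector<PassDesc> passes;
};

// A Lua binding would fill the same structures from a table, e.g.
//   technique{ name = "A", passes = { { BlendEnable = "true", vs = [[...]] } } }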

Hello! I'm interested, too, and also planning to implement something of my own.

I'm still very new to the FX component of DirectX, though; I would like to learn more about it, but could not find any (useful) documentation.

Can anybody recommend some literature, tutorials or links about FX?
(As this will teach me what is required for OpenGL!)

Another question: how do you combine FX with stencil shadows (and arbitrarily many light sources)? This seems difficult because stencil shadows are inherently multi-pass: first the ambient part is rendered (laying down Z), then for each light source the stencil buffer is prepared and that light source's contribution is rendered.
(Note that multi-pass above refers to the entire scene(!), not to the passes in an FX file, which are per-primitive or per-object.)

Thus, do I get it right that you are not able to put the entire effect (ambient + bump/spec for one or more lights) into a single effect (FX file), but rather have to define one FX effect for the ambient part of the surface and another for the per-light-source contribution?

Many thanks in advance! :slight_smile:

You can put the entire effect in one file. Just put a bunch of techniques into the same file, and name them after your materials, or light types, or however you sort passes in your engine.
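On the application side that could look roughly like this (the Effect interface here is made up just to show the control flow, it is not the D3DX API):

#include <string>
#include <vector>

// Minimal made-up effect interface, only to show the control flow.
struct Effect {
    bool selectTechnique(const std::string& name) { current = name; return true; }
    void renderPasses() { /* apply the state of each pass and draw */ }
    std::string current;
};
struct Light { std::string type; /* position, color, ... */ };

void renderWithStencilShadows(Effect& fx, const std::vector<Light>& lights)
{
    // 1. ambient pass, also lays down Z
    fx.selectTechnique("Ambient");
    fx.renderPasses();

    for (const Light& l : lights) {
        // 2. build the shadow volumes for this light into the stencil buffer (engine code)
        // 3. additive lighting pass, masked by the stencil buffer
        fx.selectTechnique(l.type);   // "PointLight", "DirectionalLight", ...
        fx.renderPasses();
    }
}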

The DirectX 9 SDK has a reasonably good description of effect files (the .fx file format), assuming that you know DirectX conventions and render states. You can download it from msdn.microsoft.com, although it's a really big download!

Well, CgFX is basically Cg (HLSL) and not glSlang; I think for glSlang to really become useful, some sort of similar system is needed.
In theory this should be faster in the end too, since the driver compiles the actual code rather than intermediate assembly. (The old let-the-driver-compile-it debate…)
There is also "free" glSlang compiler code on the 3DLabs website; this code seems similar to the public Cg compiler NVIDIA offers, so it could be a big help :slight_smile:

I'm currently working on some sort of syntax specification; I hope to have something showable soon.

I'm still unsure about how to do some things. For example, the glSlang code will be embedded in FX-like syntax files, but how should I extract the code "the right way"?
One option is a simple recursive descent parser that, when it encounters a function/variable definition, just saves it in a string for the compiler, but I don't know if that will be robust…
The other way I can think of is making sure the syntax is still valid glSlang by automatically adding some creepy defines, so the (driver's) compiler wouldn't know there are things going on behind its back. This seems even hackier, and if erroneous code is passed to the driver it may give even weirder errors, as the defines confuse the compiler or cause the wrong errors to be reported.
The last way is writing a translator that parses the whole thing into an abstract syntax tree, does some processing on it, and writes the processed code back out as glSlang. The problem here is that the code isn't really related to the original code anymore, which may again cause messed-up error messages (a lot of #line's could help maybe :smiley: ). I guess writing Cg from the abstract syntax tree would work too :stuck_out_tongue: . I'm leaning more towards the last option, certainly with the 3DLabs parser already producing an abstract syntax tree.
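The #line part at least is cheap to do when writing the processed code back out; a minimal sketch (the CodeChunk struct is just for illustration):

#include <sstream>
#include <string>
#include <vector>

// One extracted piece of glSlang code together with the line in the original
// effect file where it started (structure invented just for this sketch).
struct CodeChunk { std::string source; int firstLineInFxFile; };

// Re-emit the processed shader with #line directives so that the driver's
// error messages point back at lines of the original effect file.
std::string emitWithLineDirectives(const std::vector<CodeChunk>& chunks)
{
    std::ostringstream out;
    for (const CodeChunk& c : chunks) {
        out << "#line " << c.firstLineInFxFile << "\n";
        out << c.source << "\n";
    }
    return out.str();
}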

Charles


The last way is writing a translator that parses the whole thing into an abstract syntax tree, does some processing on it, and writes the processed code back out as glSlang. The problem here is that the code isn't really related to the original code anymore, which may again cause messed-up error messages (a lot of #line's could help maybe :smiley: ). I guess writing Cg from the abstract syntax tree would work too :stuck_out_tongue: . I'm leaning more towards the last option, certainly with the 3DLabs parser already producing an abstract syntax tree.

I think this approach is too complicated and inflexible. As fx files are meant for data description and configuration, I think the way to go is more like your first suggestion.
My current approach is:

//...
VertexShader somevertexshader : GLSL
<
//annotations for shader variable / system variable mapping; compiler config also possible, includes and so on
   string include[] = {"othervertexshader","basevshaderlib"};
   float4x4 SomeMatrix : worldview;
   float4 MeshVertexPos : vertex_position;
>
{
//direct compilable shader source code
   void someFunction (){
      gl_Position = SomeMatrix*MeshVertexPos;
   }
}
//...

Imo the fx file system shouldn't know anything about the shading language. Then you can easily plug several languages in and out, only providing the compiler binding.
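I.e. something like this on the C++ side (sketch with invented names):

#include <map>
#include <memory>
#include <string>

// The fx file only carries the language tag (": GLSL" above), the annotation
// block and the source block; compiling is delegated to a pluggable binding.
class ShaderCompiler {
public:
    virtual ~ShaderCompiler() {}
    virtual bool compileVertexShader(const std::string& source, std::string& errorLog) = 0;
};

class CompilerRegistry {
public:
    void add(const std::string& language, std::unique_ptr<ShaderCompiler> c) {
        m_compilers[language] = std::move(c);
    }
    ShaderCompiler* find(const std::string& language) {
        auto it = m_compilers.find(language);
        return it == m_compilers.end() ? nullptr : it->second.get();
    }
private:
    std::map<std::string, std::unique_ptr<ShaderCompiler>> m_compilers;
};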

What are your concerns with this approach?

@jwatte:
I have looked into the SDK docs and the effect file reference, but some questions remain open to me, like the semantic stuff. Are only the built-in semantics possible, or can I add new ones with callbacks into my program? OK, you can always use annotations, but by adding your own semantics you would have a more event-based system.

Can anybody elaborate on the current CgFX status? (When) will there be glsl support? How usable, or how beta, is the current version?

@valoh
Well, I wanted a more generic system (see my first post) where you can write glSlang code that runs on the CPU too. Things like which passes to enable/disable, conditional render to texture, … would be driven by scripts written in the glSlang syntax. I would certainly share namespaces between the CPU/GPU scripts so you can have utility routines accessible from both sides.
Basically I want it to be as transparent as possible, so you don't really have to know multiple languages with different syntaxes (like < > to create block constructs etc.). Otherwise you just have glSlang embedded in a different language; I am more trying to do an extended Slang (which is clearly what the fx system is trying to do).

If you want to learn the FX system you should also look at the samples that come with the SDK; especially the "effect edit" sample offers some easy-to-understand examples.

Charles

Originally posted by Pentagram:
[b]@valoh
Well, I wanted a more generic system (see my first post) where you can write glSlang code that runs on the CPU too. Things like which passes to enable/disable, conditional render to texture, … would be driven by scripts written in the glSlang syntax. I would certainly share namespaces between the CPU/GPU scripts so you can have utility routines accessible from both sides.
Basically I want it to be as transparent as possible, so you don't really have to know multiple languages with different syntaxes (like < > to create block constructs etc.). Otherwise you just have glSlang embedded in a different language; I am more trying to do an extended Slang (which is clearly what the fx system is trying to do).

[/b]
Are you sure it makes sense to put all of this into a pseudo-glSlang dialect? What is generic about it if you limit yourself to the existing glSlang syntax style? With a layered approach you can plug in other bindings (DirectX, console shader descriptions?).
glSlang has no concept of passes or scene rendering; it only describes vertex and pixel transformations.
So what you would create would have very little to do with glSlang. Furthermore, glSlang is compiled in the driver, so from your dialect you have to generate clean code to feed the driver with. What would be the advantages of this?
The goal is an easy and flexible configuration method for the render process with all its render states and data dependencies. glSlang is just one kind of render state, so why not handle it as one?

I'm sceptical that such a mixing approach has advantages over a layered solution. You are limiting your system by some glSlang syntax similarities where I don't see a benefit.

Are only the built-in semantics possible, or can I add new ones with callbacks into my program?

As of DirectX 9, the semantics are just strings. Btw, you can configure the entire FFP using FX files, if you want, by setting up world, view and projection matrices based on semantic-identified input parameters, as well as all the other render states.

Note that there are no “callbacks”; it’s all done by the application inspecting the effect, and finding out what parameters there are to set.
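In other words, the application side is just an inspection loop, roughly like this (sketched against an invented effect interface, not the actual D3DX one, but the idea is the same):

#include <string>
#include <vector>

// Invented mirror of what an effect exposes after loading: a flat list of
// parameters, each with a name and an arbitrary semantic string.
struct EffectParameter { std::string name, semantic; };

struct LoadedEffect {
    std::vector<EffectParameter> parameters;
    void setMatrix(const std::string& name, const float m[16]) { /* upload the uniform */ }
};

// The application walks the parameters once, decides what to feed them with,
// and from then on simply sets them every frame; no callbacks involved.
void bindSemantics(LoadedEffect& fx, const float world[16], const float viewProj[16])
{
    for (const EffectParameter& p : fx.parameters) {
        if (p.semantic == "WORLD")
            fx.setMatrix(p.name, world);
        else if (p.semantic == "WORLDVIEWPROJECTION")
            fx.setMatrix(p.name, viewProj);
        // unknown semantics are simply ignored (or reported)
    }
}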

This sounds like an interesting discussion. I've been loosely considering doing something similar. I'm kind of in the camp of staying closer to FX files for the architecture (reinterpreting the shading language seems like it would just have too many issues). However, GLSL was not made to be all included in the same file like HLSL was. Would you guys just interpret function parameters and such and build a string description of the attributes (the first thing that comes to mind)?

Anyone interested in starting an open source project? I don’t know exactly how much time I’d be able to devote to it, but considering some of the benefits, this could be a great time investment (especially for those who need cross platform stuff).

OK, this is how the code currently looks:

//
// Application visible variables
//
interface TestShader {
	vec3 allVar;
	technique A;
	technique B;
	technique C;
	abstract vec4 difuseColor();
}

//
// Implementation
//
class TestShader {
	
	vec3 privateVars;
	vec3 privateVars2;
		
	void test() {
		
	}
		
	void test2() {
		
	}
	
	technique A {
		VertexShader = 
	}
	

}

It's very much like the FX files. I wanted to do other stuff, but this seemed like the only thing that is possible without parsing the whole language tree.
I added the interface part though, as I think it's important to decouple the application from the implementation (multiple effects would use the same interface, and the application can only get at the interface vars); this also simplifies the parsing of the parameters.
The abstract part is stuff that is not specified in the shader but by the application. At run time, shaders are compiled based on the different values of the abstract part. It's not totally worked out yet, but the idea is that your application can generate some code, for example from a visual shader editor, or for different light styles (point light, directional, …), with the effect only describing how light and surface parameters get combined into the final color.
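To show what I mean by compiling the shaders at run time from the abstract parts, the stitching itself is basically string concatenation (a sketch; only the glShaderSource/glCompileShader entry points mentioned in the comment are real, everything else is made up):

#include <string>

// The final glSlang source is stitched together at load/run time: the effect
// provides the fixed part, the application (e.g. a visual shader editor, or a
// per-light-style code generator) provides the bodies of the abstract functions.
std::string buildFragmentSource(const std::string& abstractImpls,  // e.g. difuseColor() for a point light
                                const std::string& effectBody)     // the rest of the class block's code
{
    std::string src;
    src += "// generated\n";
    src += abstractImpls;   // vec4 difuseColor() { ... } supplied by the application
    src += effectBody;      // the effect's own code, which calls difuseColor()
    return src;             // hand this string to glShaderSource/glCompileShader
}

// Example of an application-supplied light style:
static const char* kPointLightDiffuse =
    "vec4 difuseColor() {\n"
    "    return vec4(max(dot(normal, lightDir), 0.0) * lightColor, 1.0);\n"
    "}\n";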
I've started writing some simple code; the preprocessor and lexer were taken from the 3DLabs Slang compiler, which makes the work progress much faster.

Charles