
#include in glsl



Godlike
11-19-2009, 07:50 AM
It would be great to have the #include preprocessor directive in GLSL. I know that this requires a few changes in the API, but it would be great to have it as an option. I believe that NVIDIA's Cg has it; I don't know about HLSL.

What do you think?

ruysch
11-19-2009, 09:40 AM
HLSL has it. For some reason this has never been added to GLSL. I personally had hoped for at least some sugar in GL 3.2, but I don't think you will get #includes in GL any time soon.

I have never really heard a good argument for not providing at least some of the syntactic sugar that both NVIDIA Cg and Microsoft HLSL provide. Perhaps some of the ARB members/contributors could shed some light?

Dark Photon
11-19-2009, 11:38 AM
Remote rendering? Too advanced for DX. GL has had it forever. There, the machine the shaders are compiled/used on isn't the machine the shaders are loaded from.

Whether the GLX protocol was ever fleshed out to make this happen, though, I'm not sure.

You'd think these could be handled on the way in, before they hit the server, but consider conditional #includes that depend on state known only to the server's GLSL compiler. Ugly.

That's just my off-the-cuff thought. No doubt there are other reasons for/against.

yooyo
11-19-2009, 01:09 PM
Includes are an easy job to handle in the application. So: parse your shader code (before you pass the string to the compiler) and unroll the includes as you wish. The driver shouldn't have to deal with that.
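That suggestion can be sketched in a few lines of C++. This is a hypothetical, minimal version (the names ExpandIncludes and SourceMap are mine, not from any real codebase): it recursively replaces #include lines from an in-memory map of named sources before the string is ever handed to glShaderSource.

```cpp
#include <map>
#include <regex>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical in-memory "file system": include name -> source text.
using SourceMap = std::map<std::string, std::string>;

// Recursively replace lines of the form   #include "name"   (or <name>)
// with the named source, so the driver only ever sees plain GLSL.
std::string ExpandIncludes(const std::string& source,
                           const SourceMap& files,
                           int depth = 0)
{
    if (depth > 32)
        throw std::runtime_error("include depth limit reached (cyclic include?)");

    static const std::regex re("^\\s*#\\s*include\\s+[\"<]([^\">]+)[\">]");
    std::istringstream input(source);
    std::ostringstream output;
    std::string line;
    std::smatch m;
    while (std::getline(input, line)) {
        if (std::regex_search(line, m, re)) {
            const auto it = files.find(m[1]);
            if (it == files.end())
                throw std::runtime_error("cannot open include file: " + std::string(m[1]));
            output << ExpandIncludes(it->second, files, depth + 1) << '\n';
        } else {
            output << line << '\n';
        }
    }
    return output.str();
}
```

A real loader would read the included files from disk (or a virtual file system) instead of a pre-filled map; the map just keeps the sketch self-contained.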

Brolingstanz
11-19-2009, 02:05 PM
#pragma escapes macro expansion, and the compiler will ignore unrecognized tokens, so it might make for a semi-standard metadata scaffolding for artist-tool communication, rather than burying messages in comments and searching for them there.
...
#pragma include("whatsits.h")
uniform sampler2DArray Material;
#pragma attribute(Material, "TMU=0; Filter=LinearMipmapLinear")
...

ScottManDeath
11-25-2009, 12:15 PM
Yeah, this is trivial with Boost. The added benefit is that you can easily add support for a virtual file system...


std::string Shader::PreprocessIncludes( const std::string& source, const boost::filesystem::path& filename, int level /*= 0 */ )
{
    PrintIndent();
    if (level > 32)
        LogAndThrow(ShaderException, "header inclusion depth limit reached, might be caused by cyclic header inclusion");
    using namespace std;

    static const boost::regex re("^[ ]*#[ ]*include[ ]+[\"<](.*)[\">].*");
    stringstream input;
    stringstream output;
    input << source;

    size_t line_number = 1;
    boost::smatch matches;

    string line;
    while (std::getline(input, line))
    {
        if (boost::regex_search(line, matches, re))
        {
            std::string include_file = matches[1];
            std::string include_string;

            try
            {
                include_string = Core::FileIO::LoadTextFile(include_file);
            }
            catch (Core::FileIO::FileNotFoundException& e)
            {
                stringstream str;
                str << filename.file_string() << "(" << line_number << ") : fatal error: cannot open include file " << e.File();
                LogAndThrow(ShaderException, str.str());
            }
            output << PreprocessIncludes(include_string, include_file, level + 1) << endl;
        }
        else
        {
            output << "#line " << line_number << " \"" << filename << "\"" << endl;
            output << line << endl;
        }
        ++line_number;
    }
    PrintUnindent();
    return output.str();
}

Godlike
12-21-2009, 06:52 AM
Because of the lack of #include in GLSL, as well as of the ability to keep all the shaders' code in the same file, I've created a parser that supports these features and a few more that are unimportant at the moment.

The parser works with preprocessor pragmas, and the syntax is similar to OpenMP in C/C++. There are three useful pragmas:
#pragma anki include "path/filename.glsl"
#pragma anki vert_shader_begins
#pragma anki frag_shader_begins


For example you can write the following (dummy) shader program test.glsl:


// common code
#pragma anki vert_shader_begins
#pragma anki include "test1.glsl"
// vert shader code
#pragma anki frag_shader_begins
// frag shader code

You can feed this to the parser:


shader_parser_t parser;
parser.ParseFile( "test.glsl" );

...and then extract the two shaders' code using the vert_shader_source and frag_shader_source members:


cout << parser.vert_shader_source << endl;

The above will write:


// common code
#line 2 0 // #pragma anki vert_shader_begins
#line 0 1 // #pragma anki include "test1.glsl"
// Im test1.glsl
// LALAAAAAAA
#line 3 0 // end of #pragma anki include "test1.glsl"
// vert shader code

As you can see, it keeps track of the lines so that the driver's compiler can print the correct line numbers in its error messages and warnings.
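A minimal sketch of this line bookkeeping (the helper name and the integer file ids are hypothetical, and the exact off-by-one conventions of #line vary between GLSL versions): before each pasted source line, emit a #line directive pointing back at the original file and line, so compiler messages stay meaningful after concatenation.

```cpp
#include <sstream>
#include <string>

// Prefix every line of `source` with a   #line <n> <fileId>   directive,
// where fileId is an arbitrary integer the application maps back to a
// filename. Treat the exact #line numbering as a sketch: some GLSL
// versions number the *next* line, so a real parser may need an offset.
std::string TagLines(const std::string& source, int fileId)
{
    std::istringstream input(source);
    std::ostringstream output;
    std::string line;
    int lineNumber = 1;
    while (std::getline(input, line)) {
        output << "#line " << lineNumber << ' ' << fileId << '\n'
               << line << '\n';
        ++lineNumber;
    }
    return output.str();
}
```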

Here is the source ancient-ritual.com/programming/shader_parser.tar.gz (http://ancient-ritual.com/programming/shader_parser.tar.gz). If someone is interested I will clean the code, make it completely abstract and properly release it.


PS: I have to mention that the code is not clean, because it carries unused code that I use in other libraries of my engine.

PS2: It has only been tested with GCC.

knackered
12-27-2009, 05:36 PM
I find it difficult to believe that anyone inclined to write OpenGL shader code would be incapable of writing similar functionality in less than 10 minutes. Only use third party code if it's going to save you significant development time - because if it isn't, then writing it yourself will.

kRogue
01-16-2010, 02:03 PM
Just a few thoughts on #include in GLSL. The easy part is adding a simple preprocessor to handle #include; then feature creep comes: support #ifdef/#endif guards, then that creeps further into having to parse #if/#ifdef/#elif/#ifndef lines... then you start scratching your head, because people can build really complicated macro systems too... then you realize that to do it the right way, you end up writing a full-blown preprocessor... ick...

In truth it would be a good place for GLU to handle it, along with the idea that you pass GLU some function pointers for opening files, so that you can have your own virtual file system.

Too bad GLU is most likely dead and we will never see it improved (matrix ops, a proper GLSL preprocessor, an update to GL3 core profiles for the GLU tessellators and NURBS). Sighs.

Alfonse Reinheart
01-16-2010, 03:25 PM
then feature creep comes

But, since you are implementing this yourself, you can choose not to succumb to feature creep.


Too bad GLU is most likely dead and we will never see it improved

I suppose. But I don't really understand how it could have been updated. It is essentially just a library that makes OpenGL calls. Who would be responsible for distributing it?

Godlike
01-18-2010, 05:05 AM
Just a few thoughts on #include in GLSL. The easy part is adding a simple preprocessor to handle #include; then feature creep comes: support #ifdef/#endif guards, then that creeps further into having to parse #if/#ifdef/#elif/#ifndef lines... then you start scratching your head, because people can build really complicated macro systems too... then you realize that to do it the right way, you end up writing a full-blown preprocessor... ick...

In truth it would be a good place for GLU to handle it, along with the idea that you pass GLU some function pointers for opening files, so that you can have your own virtual file system.

Too bad GLU is most likely dead and we will never see it improved (matrix ops, a proper GLSL preprocessor, an update to GL3 core profiles for the GLU tessellators and NURBS). Sighs.

Actually this is not an issue. You can implement a custom preprocessor that parses only the #include directives and then feed the output to glShaderSource. You don't have to parse #if/#ifdef/#elif/#ifndef etc.; these will be handled by the driver's compiler.

PS: If I understood what you mean.

kRogue
01-18-2010, 07:36 AM
Actually this is not an issue. You can implement a custom preprocessor that parses only the #include directives and then feed the output to glShaderSource. You don't have to parse #if/#ifdef/#elif/#ifndef etc.; these will be handled by the driver's compiler.


Well, no, because #ifdef/#endif guards are used to make sure a file is not included twice, and, for that matter, to stop poorly done #includes from letting a file indirectly include itself.

Worse, you may want some extra sugar on whatever thingy you use to handle GLSL source, for example reusing the same source file but adding #defines before the source, which in turn may or may not affect which files are included. And the story gets worse as included files might add their own defines that in turn affect other files.

So, all in all, a simple #include used the way you would use it in C suddenly blows up in one's face.

Godlike
01-18-2010, 09:48 AM
As far as #ifdef/#define/#endif in included files (like C/C++ .h files) and circular inclusion are concerned:

Imagine this lala.glsl file:


#ifndef _LALA_GLSL_
#define _LALA_GLSL_

void Lala()
{
}

#endif

And now the main.glsl file:


#include "lala.glsl"

void Foo()
{
}

#include "lala.glsl"

The main.glsl will unroll into this:


#ifndef _LALA_GLSL_
#define _LALA_GLSL_

void Lala()
{
}

#endif

void Foo()
{
}

#ifndef _LALA_GLSL_
#define _LALA_GLSL_

void Lala()
{
}

#endif

In the above example the function Lala won't be defined twice. I'm not suggesting that I'm right and you are wrong; there may be some scenarios I'm not aware of. Please post a more detailed example.

Ilian Dinev
01-18-2010, 11:05 AM
The problem is with two headers that include each other.
The other problem is rare: headers where the preprocessor #ifXXX/#endif directives are not balanced inside the file.

Alfonse Reinheart
01-18-2010, 11:28 AM
Well no, because #ifdef/#endif guards are used to make sure a file is not included twice

You're implementing #include yourself. I'm pretty sure that gives you license to make it work as you see fit. And if that means you automatically ignore any includes you've seen before, then they are ignored.
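That idea can be sketched directly (the helper name ExpandOnce and the in-memory file map are hypothetical): every include name is recorded in a `seen` set, and any include encountered a second time is simply skipped, so include guards and cycles never become a problem.

```cpp
#include <map>
#include <regex>
#include <set>
#include <sstream>
#include <stdexcept>
#include <string>

// Since we own the #include implementation, skip any file we have already
// pasted in -- effectively a built-in #pragma once, which also makes
// self-inclusion and mutual inclusion harmless.
std::string ExpandOnce(const std::string& source,
                       const std::map<std::string, std::string>& files,
                       std::set<std::string>& seen)
{
    static const std::regex re("^\\s*#\\s*include\\s+\"([^\"]+)\"");
    std::istringstream input(source);
    std::ostringstream output;
    std::string line;
    std::smatch m;
    while (std::getline(input, line)) {
        if (std::regex_search(line, m, re)) {
            const std::string name = m[1];
            if (seen.count(name))
                continue;               // already included: ignore it
            seen.insert(name);
            const auto it = files.find(name);
            if (it == files.end())
                throw std::runtime_error("missing include: " + name);
            output << ExpandOnce(it->second, files, seen) << '\n';
        } else {
            output << line << '\n';
        }
    }
    return output.str();
}
```

Note that a file which includes itself, or two headers that include each other, terminate naturally here: the second occurrence is in `seen` and is dropped.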


So all in all, the simple #include and then trying to use it like you would in C suddenly blows up in front of one's face.

Again, only if you choose for it to.

These are shaders. They're not that big. These are not programs that you need #include trickery to make work.

Dark Photon
01-19-2010, 07:54 AM
These are shaders. They're not that big. These are not programs that you need #include trickery to make work.
Well, we gather that in your world they are "not that big", and maybe there are only five of them, and they were all hand-coded for the specific object they appear on.

But in larger, more general applications they can get fairly large, and you can end up with a fairly large number of them, whose permutations are defined by your users, not you. Hand-code them all? I don't think so. The time to market is too great, and it's a waste of a perfectly good developer.

We are also dealing with this #include issue. But rather than implement our own preprocessor to compete with the GLSL preprocessor, we selectively concatenate various shader sections together based on shader environment settings, and we tightly constrain the concatenation order and the number of permutations to a very small number. However, this only works because we don't export this behavior to our users.

GLSL suggestion: Most of the reason for this concatenation/#include shader business in our world is the lack of a way to set shader constant identifier values via the GLSL API. In Cg, these are called literal parameters (CG_LITERAL). These constant identifiers define the "shader permutation" you want (feature A = yes, feature B = no, feature C = permutation 12, etc.). Yes, this is the ubershader approach. In your shader you just use these constant identifiers in "if" expressions, and then, when the shader is compiled, the constant values of these parameters are folded in and the compiler can rip out ifs, function calls, and large chunks of "dead code" automatically, building the shader permutation you want from general shader logic.

Without constant variables settable via the GLSL API, you end up having to sprintf your const variables into a chunk of GLSL shader code and concatenate (or #include) that with the rest of the shader code, which is a needless hassle.
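The workaround being described can be illustrated in a few lines; the function name and parameter set here are made up for the example, not from any real engine. Building one ubershader permutation amounts to prepending a block of const declarations to the shared shader body, so the driver's compiler can fold the constants and strip the dead branches:

```cpp
#include <sstream>
#include <string>

// Emulate Cg-style literal parameters by concatenating const declarations
// in front of a shared "ubershader" body. The GLSL compiler sees ordinary
// compile-time constants and can eliminate the unused code paths.
std::string BuildPermutation(bool featureA, int permutationC,
                             const std::string& ubershaderBody)
{
    std::ostringstream src;
    src << "#version 120\n"
        << "const bool FEATURE_A = " << (featureA ? "true" : "false") << ";\n"
        << "const int FEATURE_C = " << permutationC << ";\n"
        << ubershaderBody;
    return src.str();
}
```

The body would then contain plain `if (FEATURE_A) { ... }` branches rather than #ifdef blocks.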

skynet
01-19-2010, 08:20 AM
These are shaders. They're not that big. These are not programs that you need #include trickery to make work.
Let me contradict you. When you've got lots of them and want to share code between them, it's hard and clumsy to do that at present.
Also, I'm against "then write your own preprocessor". First, I do not want to write a full-blown GLSL-capable preprocessor. Second, even if I did, it would still have to interact with the built-in GLSL preprocessor (I'd have to choose a different syntax like $include, $if, $define, etc., so it wouldn't mess up GLSL-internal #ifdefs). Third, it would still lack the ability to "inject" symbols into the GLSL preprocessor from code (I'm doing that with string concatenation now, as Dark Photon described).

Alfonse Reinheart
01-19-2010, 11:40 AM
But in larger, more general applications, they can get to be fairly large and you can end up with a fairly large number of them, whose permutations are defined by your users, not you. Hand-code them all? Don't think so. Time to market too great, and that's a waste of a perfectly good developer.

When I was talking about "#include trickery", I meant having #includes that need to interact with preprocessor defines and such. That is, things like:



#ifdef SOME_DEFINE
#include "oneFile.h"
#else
#include "anotherFile.h"
#endif


This is the kind of thing you need a full preprocessor for. If all you have is a list of #includes at the top of your shader, you don't need anything more than a simple #include mechanism.

Remember: #include doesn't mean "compile this file into a translation unit with mine." It means "include everything in this file at this location." The former is what you actually need; the latter is not.

skynet
01-19-2010, 12:25 PM
Just imagine you want to do something like:



#if __VERSION__ >= 130
# include "FancyUBOHeaderStuff.glsl"
# define TEXTUREGRAD_AVAILABLE 1
#else
# include "GLSL120HeaderStuff.glsl"
# ifdef GL_ARB_shader_texture_lod
# extension GL_ARB_shader_texture_lod : enable
# define textureGrad texture2DGradARB
# define TEXTUREGRAD_AVAILABLE 1
# elif defined(GL_EXT_gpu_shader4)
# extension GL_EXT_gpu_shader4 : enable
# define textureGrad texture2DGrad
# define TEXTUREGRAD_AVAILABLE 1
# else
# define TEXTUREGRAD_AVAILABLE 0
# endif
#endif

#if TEXTUREGRAD_AVAILABLE
# include "WithGradients.glsl"
#else
# include "WithoutGradients.glsl"
#endif

Ilian Dinev
01-19-2010, 02:05 PM
If the "#extension" directives are governed by #ifXXX/#endif blocks, then there's no problem in just replacing the "#include" lines. You just have to avoid those two simple cases I mentioned.


But anyway, making a full-featured preprocessor is not really hard.
Here's mine: http://dl.dropbox.com/u/1969613/openglForum/CPP_Parse01.7z
There's also a possibly easier-to-plug-and-play one in the source code of the 3Dlabs shader validator (google it), among all the other existing open-source ones. (It's just that mine is public domain, provided as-is.)

Alfonse Reinheart
01-19-2010, 02:10 PM
Just imagine you want to do something like:

I see what you're trying to do. But have you considered that, while #include can solve these problems, it may not be the best way to do so? You're trying to solve a lot of different problems with these macros. Maybe there's a better way.

Personally, I've found that defining my own shading language is a more reasonable solution. I can control the syntax (having auto and template-like functionality is cool) and I have the tools to abstract away extensions and so forth. It was a lot of initial work, but it seems to have worked out well so far.

In any case, there's one problem left to deal with in terms of implementing #include: how do you do it? Do you make it a callback, or do you supply a number of shaders that you give string names to? How does that deal with the possibility of threading shader creation? How does it handle GLX stuff where the server is running on a different machine?

skynet
01-19-2010, 04:00 PM
Personally, I've found that defining my own shading language is a more reasonable solution.
I'm a graphics guy, not a compiler guy. To me this solution sounds like overkill and might introduce more problems than it solves. Plus, how would I implement dependencies on #defines that only the real GLSL preprocessor knows about? And it still leaves me with the question of how _I_ solve the #include problem then ;-)

Both Cg and HLSL support includes.
Cg seemingly supports callbacks as well as "pre-uploaded" files/source strings. D3D uses callbacks.

Yes, from OpenGL's perspective callbacks are scary (we have none to date). The advantage of callbacks is that you only need to load/upload files that are actually referenced.
The mechanism Cg calls a "virtual file system" requires that you upload, in advance, each and every file that _might_ be needed (you don't know which unless you start parsing the GLSL, which I really want to avoid). But in the end... there are not that many anyway, say fewer than 100 at a few KB each. This is done in a split second.

The question of threaded compilation is a different topic. I'm against introducing "truly" asynchronous compilation, i.e. start compilation, set a sync object, render some things, come back a few seconds later, check the sync object for completion and see if it worked. This is not how graphics programs work. They either need a shader or they don't.
I guess that by asynchronous compilation you want to fight the true problem: slow compilation. So why not solve that problem in a better way, by introducing precompiled binary blobs?
And if you still wanted asynchronous compilation, you can do that already by doing the compilation in a second thread using an auxiliary (list-sharing) context. It's just a matter of whether the driver writers are clever enough for it to really gain you performance.

Alfonse Reinheart
01-19-2010, 04:20 PM
Plus, how would I implement dependencies on #defines that only the real GLSL preprocessor knows?

Why would you need to? All of the GLSL #defines are based on things you can query from the OpenGL API (extensions, version number, etc.). So the system is well aware of which features are available and which are not.


And it still leaves me with the question, how do _I_ solve the #include problem then

However you want. The point of making a language is to be able to do what you want. So if you want to define a functional shader language, you're free to do so. If you want to define importing the way C# or Java does, again, you can do so.


The question for threaded compilation is a different topic.

No, it isn't a different topic. Right now, the implementation has the freedom to compile shaders in another thread. This is a good thing. Once you have callbacks, you need to make sure that the callback happens in the thread that called it, and that it needs to happen synchronously with the calling of CompileShader or LinkProgram, as needed.

I do not want to see that freedom taken away from implementations.

skynet
01-19-2010, 04:53 PM
I do not insist on callbacks at all costs. I can live with preloaded source strings as well.


Right now, the implementation has the freedom to compile shaders in another thread
Right now glCompileShader() and glLinkProgram() are not asynchronous anyway... After some time they return and tell me whether it worked or not. So how could they benefit from multithreading? Using multiple threads to compile a single shader? Is that really done in today's compilers?

Alfonse Reinheart
01-19-2010, 05:41 PM
Right now glCompileShader() and glLinkProgram() are not asynchronous anyway... After some time they return and tell me if it worked or not.

No, they don't. They only have to tell you when you ask them. That is, when you call glGetShaderiv/glGetProgramiv. Until then, the actual work can be done on another thread.

Admittedly, few are the programs that actually wait very long after starting the compile/link to check to see if it completed. But the freedom for the implementation is there.

Gedolo
02-19-2010, 09:06 AM
@Godlike

Please do clean and release your stuff. It's very useful, could help many people.

Godlike
02-23-2010, 09:31 AM
@Godlike

Please do clean and release your stuff. It's very useful, could help many people.
Great. I'm working on it.

kRogue
03-06-2010, 04:49 AM
I suppose. But I don't really understand how it could have been updated. It is essentially just a library that makes OpenGL calls. Who would be responsible for distributing it?


Just as Khronos maintains the GL standard and the EGL standard, and there is a GLX standard (it's not clear whether that one is maintained by Khronos or not), it is feasible to create a GLU standard for GL3 or higher. As for who makes it and who distributes it, these are the beans as of now:

For most Linux distributions, AFAIK, GLU comes from Mesa. [Quite likely this is the case for almost all open-source OSes as well: BSD, etc.]

For MS Windows, AFAIK, GLU is some aging implementation from Microsoft.

I'm not too sure about the Mac; most likely Apple makes and maintains GLU there, but I am not certain.

At any rate, GLU is worth standardizing and updating. What remains is an implementation, which I would bet Mesa at the very least would be on top of quite quickly. [Witness WebGL, another Khronos standard, being supported in WebKit.]

Eosie
03-06-2010, 10:17 AM
For MS-Windows, AFAIK, GLU is some aging GLU from Microsoft.
You can use the GLU from Mesa even on Windows. AFAIK it's distributed under the MIT license, which is very closed-source friendly.