PDA

View Full Version : GLSL source code via function calls.



kRogue
02-21-2011, 02:03 PM
Currently, AFAIK, the only way to feed shader source code to GL is as raw C-strings. There is a GL extension to do "include" within GLSL, GL_ARB_shading_language_include (http://www.opengl.org/registry/specs/ARB/shading_language_include.txt), but it introduces, at least to me, more pain than I care for.

Consider the following (simple) API:




typedef void* GLSLSourceStream;

//returning NULL indicates failure
typedef GLSLSourceStream (*GLOpenSourceStream)(const char*);

//close a stream
typedef void (*GLCloseStream)(GLSLSourceStream stream);

//read characters from a stream; returns the number of characters read.
//if the stream ends before the requested count, each byte past the
//end of the stream is written as 0.
typedef int (*GLReadSourceStream)(void *ptr, size_t count, GLSLSourceStream stream);



/*!
\param shader name of GL shader, i.e. from glCreateShader
\param stream_name "file" name to pass to open_stream to open
a GLSLSourceStream for reading
\param open_stream function pointer that specifies how streams are
opened for the call
\param close_stream function pointer that specifies how streams are
closed for the call
\param read_stream function pointer that specifies how streams are
read for the call
*/
void glShaderSourceFromStream(GLuint shader, const char *stream_name,
GLOpenSourceStream open_stream,
GLCloseStream close_stream,
GLReadSourceStream read_stream);



The idea is that when the GLSL "preprocessor" encounters



#include "some_file"


it would use the function pointer open_stream to get the contents of the "file".
If open_stream fails (say, by returning NULL), then the preprocessor can say "no dice". The above is quite rough, but functional enough to get the job done (since the number of characters read is returned).
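As a concrete illustration, a client could map these callbacks straight onto C stdio. The sketch below is an assumption of this post, not part of any GL API: the function names are invented, and it takes the stream handle by value (as in the revised API later in the thread); only the zero-fill-past-end behavior comes from the proposal above.

```c
#include <stdio.h>
#include <string.h>

typedef void* GLSLSourceStream;

/* returning NULL indicates failure */
static GLSLSourceStream my_open(const char *name)
{
    return (GLSLSourceStream)fopen(name, "rb");
}

static void my_close(GLSLSourceStream stream)
{
    fclose((FILE *)stream);
}

/* reads up to count characters; every byte past the end of the
   stream is written as 0, per the proposal */
static int my_read(void *ptr, size_t count, GLSLSourceStream stream)
{
    size_t got = fread(ptr, 1, count, (FILE *)stream);
    if (got < count)
        memset((char *)ptr + got, 0, count - got);
    return (int)got;
}
```

glShaderSourceFromStream would then be handed my_open, my_close, and my_read along with the root stream name.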

Ideally, the GL implementation would run the preprocessor and fetch the data at glShaderSourceFromStream, but leave compiling for later (i.e. glShaderSourceFromStream runs the preprocessor but does not compile). Questions then naturally crop up about what GL returns when the shader source is queried (essentially, the text before or after preprocessing).

Alfonse Reinheart
02-21-2011, 04:01 PM
So, how do you handle directories with chains of includes? For example:



//main shader:
#include "dir1/fileB.vert"

...

// /dir1/fileB.vert:
#include "fileC.vert"

...

// /dir1/fileC.vert:
...


The "fileC.vert" is in the same directory as "fileB.vert", so B's include doesn't need to have a path. With your method, there's no keeping track of the path of the current file.


Consider the following (simple) API:

This may just be me, but providing callbacks and implementing a virtual file system does not strike me as "simple". Especially when one might really want compiling to happen on a separate, private GPU thread.

kRogue
02-22-2011, 01:10 AM
The "fileC.vert" is in the same directory as "fileB.vert", so B's include doesn't need to have a path. With your method, there's no keeping track of the path of the current file.


Dead on. I realized this after I went to bed. Below is an API suggestion that takes into account relative path names, etc.



typedef void* GLSLSourceStreamUserData;
typedef void* GLSLSourceStream;


/*!
Open a source stream for reading.

\param name "name" of file to open for reading.
\param str GLSLSourceStream that path given by name
is relative to, if NULL indicates a non-relative
path (explained below)
\param data opaque client data.

The usage is as follows:

if, within a GLSL source, the preprocessor encounters

#include "file.glsl"

that means the file is in the same path as the current file;
as such, str will be passed as the stream being read from so that
the client-supplied functions can "fetch" the path.

if, within a GLSL source, the preprocessor encounters

#include <file.glsl>

that means the file is not a relative path and file.glsl
refers to the "include" path. In this case str will be passed
as NULL and the client-supplied functions will likely use the
parameter "data" to open a stream.

*/
typedef GLSLSourceStream (*GLOpenSourceStream)(const char *name,
GLSLSourceStream str,
GLSLSourceStreamUserData data);


/*!
Close a source stream.
\param str GLSLSourceStream to close
\param data opaque client data.
*/
typedef void (*GLCloseStream)(GLSLSourceStream str,
GLSLSourceStreamUserData data);


/*!
Read from a source stream count characters placing the results into ptr
and returning the number of characters read.

\param ptr memory location to read values to.
\param count number of characters to read from stream
\param stream GLSLSourceStream to read from
\param data opaque client data.
*/
typedef int (*GLReadSourceStream)(GLchar *ptr, GLsizei count,
GLSLSourceStream stream,
GLSLSourceStreamUserData data);

/*!
\param shader name of GL shader, i.e. from glCreateShader
\param stream stream from which to read GLSL source code
\param data opaque client data passed to client-supplied functions
\param open_stream function pointer that specifies how streams are
opened for the call
\param close_stream function pointer that specifies how streams are
closed for the call
\param read_stream function pointer that specifies how streams are
read for the call.
*/
void glShaderSourceFromStream(GLuint shader, GLSLSourceStream stream,
GLSLSourceStreamUserData user_data,
GLOpenSourceStream open_stream,
GLCloseStream close_stream,
GLReadSourceStream read_stream);







This handles chains, relative paths, etc., as the GL application sees fit. The current extension for includes leaves undefined how different path strings are merged, for example whether or not the path strings are case sensitive.



This may just be me, but providing callbacks and implementing a virtual file system does not strike me as "simple".


Typically, the function calls will map to the local OS's calls or to whatever file-system abstraction an application is using (for example physfs). The current extension, GL_ARB_shading_language_include (http://www.opengl.org/registry/specs/ARB/shading_language_include.txt), makes an application register every "file" in application code, building a large tree of path strings. Additionally, the GL implementation then needs to implement the rules for merging different path names, when 99% of the time that functionality should be decided by the application (i.e. use OS rules, etc.).
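For contrast, that registration step under GL_ARB_shading_language_include looks roughly like the sketch below. The glNamedStringARB call and GL_SHADER_INCLUDE_ARB enum come from the extension itself; the typedefs and stub definition exist only so the fragment compiles without a GL context, and the helper register_include and the paths are inventions of this sketch.

```c
#include <string.h>

/* stand-ins so the sketch is self-contained; in a real program
 * these come from the GL headers and the driver */
typedef unsigned int GLenum;
typedef int GLint;
typedef char GLchar;
#define GL_SHADER_INCLUDE_ARB 0x8DAE

static const GLchar *last_name, *last_string;

/* a real driver copies the name and contents into its internal
 * tree of path strings; this stub just records the last call */
static void glNamedStringARB(GLenum type, GLint namelen, const GLchar *name,
                             GLint stringlen, const GLchar *string)
{
    (void)type; (void)namelen; (void)stringlen;
    last_name = name;
    last_string = string;
}

/* hypothetical helper: every include "file" must be handed to GL
 * up front, before any shader that includes it is compiled */
static void register_include(const char *path, const char *source)
{
    glNamedStringARB(GL_SHADER_INCLUDE_ARB,
                     (GLint)strlen(path), path,
                     (GLint)strlen(source), source);
}
```

A shader could then say #include "/shaders/util.glsl" (a made-up path) and compile through the ordinary glCompileShader route.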



Especially when one might really want compiling to happen on a separate, private GPU thread.


The preprocessor is not a CPU-intensive beast. The semantics I imagine are that a GL implementation will create the preprocessed source text before glShaderSourceFromStream returns, but NOT compile it. Additionally, the rule is that once glShaderSourceFromStream returns, the GL implementation will not attempt to use those function pointers, the stream, or the user_data.

Alfonse Reinheart
02-22-2011, 02:01 AM
\param str GLSLSourceStream that path given by name
is relative to, if NULL indicates a non-relative
path (explained below)


So, the OpenGL implementation, and therefore your source code, must be able to ensure that the stream remains open until the source string is not only read from a file, but run through the preprocessor. Even if that did not mean running through the compiler, that could still result in quite a few things being open at any one time, as you get long chains of shader includes. Every shader on the path to the root shader will have to have its stream open even after all the file access is done.

That feels weird to me. You're generally not supposed to leave file handles open any longer than strictly necessary to access your data.

It also requires that the user's stream implementation record the full pathname of the open file.


The current extension, GL_ARB_shading_language_include, makes an application register every "file" in the application code building a large tree of path-strings.

True. But it only happens once. And it's pretty simple code to write, if you want to mimic the layout of the filesystem.

And on simplicity, shading_language_include doesn't require you to use a completely different interface to compile things. It even works in tandem with separate_shader_objects or any new functions that compile shaders. I'd bet that this was another reason for them doing things the way they did.

So all you have to do is this initialization setup work, and all your current code will work just fine.


The preprocessor is not a CPU intensive beast. The semantics I imagine is that a GL implementation will create the preprocessed source text before glShaderSourceFromStream returns, but NOT compile it.

You're assuming that the preprocessor is a distinct step that is divorced from the act of compilation. While this was true for most older compilers, and remains true even for some more recent ones, it's entirely likely that they built the preprocessor into their compiler's tokenizer. It makes the compilation more efficient overall.

kRogue
02-22-2011, 04:46 AM
So, the OpenGL implementation, and therefore your source code, must be able to ensure that the stream remains open until the source string is not only read from a file, but run through the preprocessor. Even if that did not mean running through the compiler, that could still result in quite a few things being open at any one time, as you get long shader includes. Every shader on the path to the root shader will have to have its stream open even after all the file access is done.


There is a "close" command too, the expectation is that the GL implementation would use that once it has read the contents of a stream.



And on simplicity, shading_language_include doesn't require you to use a completely different interface to compile things. It even works in tandem with separate_shader_objects or any new functions that compile shaders. I'd bet that this was another reason for them doing things the way they did.


This works in tandem with separate shader objects, it is just setting source code. Not compiling, not linking, just preprocessing.




You're assuming that the preprocessor is a distinct step that is divorced from the act of compilation. While this was true for most older compilers, and remains true even for some more recent ones, it's entirely likely that they built the preprocessor into their compiler's tokenizer. It makes the compilation more efficient overall.


Proof? I have not seen a C/C++ compiler without a distinct preprocessor in a long, long time. Name one. On the side of GL implementations, again, proof requested. As for merging the preprocessor and compiler into one step, that just smells bad and odd, since it would likely completely mess up any complicated macro expansion.

Alfonse Reinheart
02-22-2011, 01:29 PM
There is a "close" command too, the expectation is that the GL implementation would use that once it has read the contents of a stream.

But the open command takes a stream that refers to the current file, so that the path can be relative to the current file. This needs to be an open stream. Therefore, during the preprocessor step, the stream for the string being processed needs to be open. Which means that each previous string in the chain of includes that lead to this file needs to have its stream be open.

Unless closed streams are still valid objects. In which case, you need to have a distinction between closed streams and deleted streams.


This works in tandem with separate shader objects, it is just setting source code. Not compiling, not linking, just preprocessing.

But it doesn't work with glCreateShaderProgramv, which goes directly from strings to linked programs. This is a thing people will want to use, since it combines compiling and linking into one step.

kRogue
02-23-2011, 12:22 AM
But the open command takes a stream that refers to the current file, so that the path can be relative to the current file. This needs to be an open stream. Therefore, during the preprocessor step, the stream for the string being processed needs to be open.

Just like a real pre-processor. The number of open streams required is the highest depth of includes, not the number of includes, so not an issue. The expectation is that any stream GL opens it also closes, so at the end of the GL call, the only stream not closed is the one passed (as I have now written the descriptions). Chances are it would be best if GL also closed the passed stream too.
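The depth-not-count claim can be made concrete with a toy recursive expander over an in-memory "filesystem". Everything below (the file names, contents, counters, and helper names) is invented for illustration; the point is only that each recursion level holds exactly one stream open.

```c
#include <string.h>

/* toy in-memory filesystem */
struct vfile { const char *name; const char *body; };
static const struct vfile fs[] = {
    { "main.vert", "#include a.glsl\nvoid main(){}\n" },
    { "a.glsl",    "#include b.glsl\nfloat a;\n" },
    { "b.glsl",    "float b;\n" },
};

static int open_streams, max_open_streams;

static const char *vopen(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(fs) / sizeof(fs[0]); ++i) {
        if (strcmp(fs[i].name, name) == 0) {
            if (++open_streams > max_open_streams)
                max_open_streams = open_streams;
            return fs[i].body;
        }
    }
    return NULL;
}

static void vclose(void) { --open_streams; }

/* expand `name` into `out`, recursing on "#include " lines; the
 * peak number of open streams equals the include depth */
static void expand(const char *name, char *out)
{
    const char *line = vopen(name);
    if (!line)
        return;
    while (*line) {
        const char *nl = strchr(line, '\n');
        size_t len = nl ? (size_t)(nl - line) + 1 : strlen(line);
        if (strncmp(line, "#include ", 9) == 0) {
            char inc[64];
            size_t n = len - 9 - (nl ? 1 : 0);
            memcpy(inc, line + 9, n);
            inc[n] = '\0';
            expand(inc, out);
        } else {
            strncat(out, line, len);
        }
        if (!nl)
            break;
        line = nl + 1;
    }
    vclose();
}
```

With three files nested two levels deep, the peak is 3 open streams; it would stay 3 even if the leaf file were included many more times at the same depth.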




But it doesn't work with glCreateShaderProgramv, which goes directly from strings to linked programs. This is a thing people will want to use, since it combines compiling and linking into one step.


GL_ARB_shading_language_include (http://www.opengl.org/registry/specs/ARB/shading_language_include.txt) also has that issue, but it is not exactly rocket science to fix:



GLuint glCreateShaderProgramFromStream(GLenum type, GLSLSourceStream stream,
GLSLSourceStreamUserData user_data,
GLOpenSourceStream open_stream,
GLCloseStream close_stream,
GLReadSourceStream read_stream);

Alfonse Reinheart
02-23-2011, 01:01 AM
Just like a real pre-processor.

I'm no expert on compiler design, but are you claiming that compilers routinely open files and keep them open during the actual compilation? Wouldn't it make more sense for them to open the file, load it into memory, close it, and then process the file?

I can understand wanting to avoid the memory overhead for large files, but it makes more sense to do it this way. Especially if the same file is included multiple times over the course of an executable's compilation. That way, you don't have to load it multiple times.

Which actually goes to another reason why shading_language_include's method is better; you're doing a lot less file accessing. You load the files exactly once and never more than that.


GL_ARB_shading_language_include also has that issue

No it does not. The extension does provide the glCompileShaderIncludeARB function, which allows you to give it a number of named strings that will effectively appear before the shader as if it has #include'd them. But the specification specifically states that any shader compilation functions will still process internal #include's just fine. And since glCreateShaderProgramv is defined in terms of glCompileShader, which shading_language_include states is equivalent to calling glCompileShaderIncludeARB with certain arguments, then there is no problem.


but not exactly rocket science to fix:

My point is that you have to create an entire new entrypoint, rather than allowing the old entrypoints to work. That means, at the very least, you must go through your code and change it.

Let's say I wanted to use #includes with Ogre3D. Under shading_language_include, I wouldn't have to touch any of Ogre3D's code. All I would need to do is set up the strings before I gave Ogre3D my shaders, and then everything works perfectly. Under your system, I now need to modify Ogre3D's shader loading. And I have to somehow feed Ogre3D my functions for loading shaders, or give it those functions.


Proof? I have not seen a C/C++ compiler without a distinct preprocessor in a long, long time. Name one. On the side of GL implementations, again, proof requested.

In Clang, the preprocessor is effectively part of the lexer. The two work hand-in-hand. Its preprocessor outputs tokens, not strings.

What you're effectively saying is that the shader needs to run a preprocessing step that generates, not a sequence of tokens the way most preprocessors do, but actual strings. Strings that will, in turn, be fed back into the compiler later.

kRogue
02-23-2011, 04:32 AM
No it does not. The extension does provide the glCompileShaderIncludeARB function, which allows you to give it a number of named strings that will effectively appear before the shader as if it has #include'd them. But the specification specifically states that any shader compilation functions will still process internal #include's just fine. And since glCreateShaderProgramv is defined in terms of glCompileShader, which shading_language_include states is equivalent to calling glCompileShaderIncludeARB with certain arguments, then there is no problem.


I stand corrected; that is true. But this is pointless, as glCreateShaderProgramv as-is will not allow one to specify an include path, i.e. the include path is empty, so as far as I can see all includes in that source must be of the form #include "/absolute/path". Shudders.



I'm no expert on compiler design, but are you claiming that compilers routinely open files and keep them open during the actual compilation? Wouldn't it make more sense for them to open the file, load it into memory, close it, and then process the file?

I can understand wanting to avoid the memory overhead for large files, but it makes more sense to do it this way. Especially if the same file is included multiple times over the course of an executable's compilation. That way, you don't have to load it multiple times.


Sighs. I cannot tell if you are deliberately dense or not. Naturally a compiler or the suggested API point would open the stream, copy the contents to a string, close the stream and then go to town. With that, my proposal then has at most one stream open at a time.




What you're effectively saying is that the shader needs to run a preprocessing step that generates, not a sequence of tokens the way most preprocessors do, but actual strings. Strings that will, in turn, be fed back into the compiler later.


Um, no. All I am saying is that the operations associated with a preprocessor can be done quickly. Taking that further, the "preprocessor and tokenizer" can be merged and done at the call that sets the source string. Again, not rocket science. However, there is one point worth noting: the current ARB extension allows a GL implementation to do pre-compiled headers, but only under a lot of conditions. That looks like a waste of time to bother with. How big are shader sources typically? Not big at all, unless you get into uber-includes. Additionally, thanks to the #ifdef jazz, precompiled includes are not going to be all rosy, as the GL implementation would then need to track the #ifdef dependencies of the includes, so it is not likely to happen.



Let's say I wanted to use #includes with Ogre3D. Under shading_language_include, I wouldn't have to touch any of Ogre3D's code. All I would need to do is set up the strings before I gave Ogre3D my shaders, and then everything works perfectly. Under your system, I now need to modify Ogre3D's shader loading. And I have to somehow feed Ogre3D my functions for loading shaders, or give it those functions.


Or maybe the logical thing: as Ogre3D is an open source project, patch it and submit the changes. In all brutal honesty this example sounds far too artificial anyways.

Additionally, one part of the current extension that makes me nervous is the global nature of the path strings, across the entire context share group. This means that one cannot easily isolate, from a shader-source-setting call, what is getting consumed, as it is dependent on *more* global GL state. The method I propose does not add additional global GL state (which in my eyes is always a good thing).

Alfonse Reinheart
02-23-2011, 11:08 AM
Sighs. I cannot tell if you are deliberately dense or not. Naturally a compiler or the suggested API point would open the stream, copy the contents to a string, close the stream and then go to town. With that, my proposal then has at most one stream open at a time.

OK, let's consider that. What you have proposed is the following:



OpenFile(main) [cb]
ReadFileToString(main) [cb]
CloseFile(main) [cb]
ProcessFile


The [cb] references your callbacks, and (main) represents the stream of the file being read. If the file has includes, then it becomes this:



OpenFile(main) [cb]
ReadFileToString(main) [cb]
CloseFile(main) [cb]
ProcessFile
{
    OpenFile(include) [cb]
    ReadFileToString(include) [cb]
    CloseFile(include) [cb]
    ProcessFile
    {
        ...
    }
}


However, if (include) uses a relative path, then the only way for the "OpenFile" callback to know what it is relative to is to be given the stream of (main). Of course, we closed that stream before calling "ProcessFile". Therefore, it is not a valid stream and cannot be passed to "OpenFile."

Therefore, in order to make what you want actually work, you must do this:



OpenFile(main) [cb]
ReadFileToString(main) [cb]
ProcessFile
{
    OpenFile(include, main) [cb]
    ReadFileToString(include) [cb]
    ProcessFile
    {
        ...
    }
    CloseFile(include) [cb]
}
CloseFile(main) [cb]


So no; there is not "at most one stream open at a time."


All I am saying is that the operations associated to a preprocessor can be done quickly. Taking that further, the "preprocessor and tokenizer" can be merged and done at the call to set the source string. Again not rocket science.

My point is that what was once a single, contiguous process (compilation, and with separate_shader_objects, compilation+linking) now must be separated into two distinct processes. Is this doable? Certainly. Is it harder than implementing a path recognition scheme (which is the most burdensome thing that shader_include asks the implementation to do)? Depending on how the compiler is implemented, it seems likely. Writing a simple path recognition system could be done in an afternoon.


but this is pointless as glCreateShaderProgramv as is will not allow one to specify an include path, i.e. the include path is empty, so as far as I can see, then all includes in that source must be of the form #include "/absolute/path". Shudders.

And this is an onerous burden? How many directories of includes do you expect to have? In general, I'd consider it a good thing to have more absolute paths; at least you know where things are actually coming from just from looking at a shader source file.

More importantly, I am given a choice. If I want to support the standard APIs, I still get to have includes; I just have to make the paths absolute in the main files. However, under your mechanism, I don't get to support the standard APIs at all. I either use the new API and get #include support, or I use the current APIs and get nothing.

I think that choice is worth putting "/" in front of a few strings.


Or maybe the logical thing: as Ogre3D is an open source project, patch it and submit the changes. In all brutal honesty this example sounds far too artificial anyways.

What's so artificial about it? Having to change pre-existing code is not a good thing if it can be avoided. And there's nothing "artificial" about it; I could just as easily have cited a non-open-source project.


This means that one cannot easily isolate from a shader source setting call what is getting consumed as it is dependent on *more* global GL state. The method I propose does not add additional global GL state (which in my eyes is always a good thing).

Being able to isolate what gets consumed in your suggestion depends on how you implement the streams. If you use the simple file IO scheme as you initially proposed, then you're dependent not only on global state, but on hard-disk state. Which can be changed by other programs. Global state is global state, whether owned by OpenGL or by the OS (at least OpenGL state is something that can only be changed by your application).

And if you want some kind of isolation for different streams, then you have to have many different stream implementations. Which increases the complexity of the user's code.

Which is fine, if that's what you need. However, it's hard to argue that this complexity is congruent with the idea that this is a "(simple) API."

kRogue
02-24-2011, 12:25 AM
So no; there is not "at most one stream open at a time."


You are right, though you are not being helpful. Rather than just pointing out what is wrong, why not give suggestions to fix the issue? At any rate, the fix is pretty easy and honest: introduce a new type "GLSLStreamPath" and a function to get it:



typedef void* GLSLSourceStreamUserData;
typedef void* GLSLSourceStream;
typedef void* GLSLStreamPath;

/*!
Open a source stream for reading.

\param name "name" of file to open for reading.
\param path path that the stream to open resides in, NULL indicates name is an absolute path.
\param data opaque client data.
*/
typedef GLSLSourceStream (*GLOpenSourceStream)(const char *name,
GLSLStreamPath path,
GLSLSourceStreamUserData data);


/*!
Close a source stream.
\param str GLSLSourceStream to close
\param data opaque client data.
*/
typedef void (*GLCloseStream)(GLSLSourceStream str,
GLSLSourceStreamUserData data);

/*!
Get the "path" of a GLSLSourceStream.
The returned path object should be deleted
with GLClosePath.
*/
typedef GLSLStreamPath (*GLFetchPath)(GLSLSourceStream stream);


/*!
Delete a stream path.
*/
typedef void (*GLClosePath)(GLSLStreamPath path);

/*!
Read from a source stream count characters placing the results into ptr
and returning the number of characters read.

\param ptr memory location to read values to.
\param count number of characters to read from stream
\param stream GLSLSourceStream to read from
\param data opaque client data.
*/
typedef int (*GLReadSourceStream)(GLchar *ptr, GLsize_t count,
GLSLSourceStream stream,
GLSLSourceStreamUserData data);

/*!
\param shader name of GL shader, i.e. from glCreateShader
\param stream stream from which to read GLSL source code
\param data opaque client data passed to client-supplied functions
\param open_stream function pointer that specifies how streams are
opened for the call
\param close_stream function pointer that specifies how streams are
closed for the call
\param fetch_path function pointer that specifies how to fetch paths of streams
\param close_path function pointer that specifies how to free a path object
\param read_stream function pointer that specifies how streams are
read for the call.
*/
void glShaderSourceFromStream(GLuint shader, GLSLSourceStream stream,
GLSLSourceStreamUserData user_data,
GLOpenSourceStream open_stream,
GLCloseStream close_stream,
GLFetchPath fetch_path,
GLClosePath close_path,
GLReadSourceStream read_stream);





Simple and easy to do. It is also honest, saying up front what is going on.
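To make the two new callbacks concrete, here is one hypothetical client-side implementation in which a "path" is simply the directory portion of the stream's file name, recorded when the stream was opened. The my_stream struct, the function names, and the resolve helper are all inventions of this sketch.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void *GLSLSourceStream;
typedef void *GLSLStreamPath;

/* hypothetical client-side stream object */
struct my_stream {
    /* FILE *fp;  elided: only the path handling is shown */
    const char *dir; /* directory the file lives in */
};

/* GLFetchPath: hand GL its own copy of the directory, which GL
 * releases later through the GLClosePath callback */
static GLSLStreamPath my_fetch_path(GLSLSourceStream s)
{
    const char *dir = ((struct my_stream *)s)->dir;
    char *copy = (char *)malloc(strlen(dir) + 1);
    strcpy(copy, dir);
    return (GLSLStreamPath)copy;
}

/* GLClosePath */
static void my_close_path(GLSLStreamPath p)
{
    free(p);
}

/* inside GLOpenSourceStream, a relative include name is resolved
 * against the fetched path; a NULL path means an absolute or
 * include-path name */
static char *resolve(const char *name, GLSLStreamPath path)
{
    const char *dir = path ? (const char *)path : ".";
    char *full = (char *)malloc(strlen(dir) + strlen(name) + 2);
    sprintf(full, "%s/%s", dir, name);
    return full;
}
```

Since the path is a separate object, GL can fetch it, close the including file's stream, and still resolve relative includes afterward.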



Being able to isolate what gets consumed in your suggestion depends on how you implement the streams. If you use the simple file IO scheme as you initially proposed, then you're dependent not only on global state, but on hard-disk state. Which can be changed by other programs. Global state is global state, whether owned by OpenGL or by the OS (at least OpenGL state is something that can only be changed by your application).


In all honesty, using local OS file IO is only for quick-and-dirty situations. When an application is distributed, its data usually carries a virtual file system for resources, scripts, etc. That is where I really see myself using such a thing.

The parts of the current GL ARB extension for shader includes that bother me are:

- Merging of different strings to common paths/filenames is undefined beyond the handling of ".." and ".". This is most troubling since different environments have different rules.

- A file path must be registered, and the registration is global. That means that layered and modularized writing in GL just got messier again. A function at a lower layer has no idea what paths have and have not been registered, and this is troubling. I can definitely see some frameworks creating their own "library" of headers they wish to use. That requires that they make sure their paths are unique; essentially what I am talking about is "path" namespace pollution. Moreover, since the contents associated with a path can be overwritten, or the path itself deleted, a layered framework will have a hard time guaranteeing correctness when it interacts with other layered libraries. One could say that each such library must document and declare a reserved path namespace, but as more such libraries interact, the issue will get significantly worse with time. Compounding the pain, in contrast to compile time, this problem is not visible until the application runs! Combine this with updated .dll's (or .so's) and you can have a real mess.

The insistence on adding additional functionality through existing API points by adding additional global state is a terrible idea. It means that the results of a function call depend on global state, and an application that uses a layered library must then know what version of GL the library is written against so that it can set the global state to what that library expects. The examples are abundant with buffer objects (i.e. the meanings of glGetTexImage, glTexImage, glVertexAttribPointer, and glDrawElements totally change depending on what is bound). That is insane and a recipe for disaster.

Now there is another way besides from what I have suggested to handle the global state issue: introduce an object "virtual GL filesystem":



void
GenFileSystems(GLsizei count, GLuint *names);

void
DeleteFileSystems(GLsizei count, const GLuint *names);


/*!
For each 0<=i<count, for each path P of the
filesystem names[i], insert P into the file
system target.
*/
void
AbsorbFileSystems(GLuint target, GLsizei count, const GLuint *names);



and then the new functions of the current ARB include extension would take an additional argument naming the filesystem to use. This avoids the global-state madness, but does not address the handling of merging path names. It allows one to easily mix different implementations of a set of GLSL utilities without needing to play the horrific #ifdef game. Note that my suggestion above already has this functionality in it.
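The Absorb semantics can be modeled in a few lines. Everything below (the storage, limits, duplicate handling, and the plain-C signatures) is a toy invention to make the comment above concrete; it is not any real GL API, and a real proposal would also store the contents associated with each path.

```c
#include <string.h>

#define MAX_FS    4
#define MAX_PATHS 8

/* each "filesystem" object is modeled as a set of path strings */
static const char *fs_paths[MAX_FS][MAX_PATHS];
static int fs_count[MAX_FS];

static void InsertPath(unsigned int fs, const char *path)
{
    int i;
    for (i = 0; i < fs_count[fs]; ++i)
        if (strcmp(fs_paths[fs][i], path) == 0)
            return; /* already present; a real API might flag an error */
    fs_paths[fs][fs_count[fs]++] = path;
}

/* For each 0 <= i < count, insert every path of filesystem names[i]
 * into the filesystem target. */
static void AbsorbFileSystems(unsigned int target, int count,
                              const unsigned int *names)
{
    int i, j;
    for (i = 0; i < count; ++i)
        for (j = 0; j < fs_count[names[i]]; ++j)
            InsertPath(target, fs_paths[names[i]][j]);
}
```

A compile call naming filesystem 0 would then see the union of the absorbed libraries' paths without touching any context-global state.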

Adding new entry points is NOT a big deal; GL_EXT_direct_state_access adds a ton of them, and it did not matter. The .dll/.so made was not much bigger.

Alfonse Reinheart
02-24-2011, 04:34 PM
You are right and you are not being helpful though. Rather than just point out what is wrong, why not give suggestions to fix an issue.

I am being helpful; I'm pointing out the flaws in your proposal. It's your responsibility to find fixes for them because it's your idea.

Remember: I'm perfectly happy with shading_language_include (though I wouldn't be averse to an object wrapper for filesystems). I think that your proposal is over-designed and needlessly complex for the needs of the vast majority of users, for the reasons I have pointed out. I could have been actually unhelpful by simply not commenting on this proposal at all. In which case you would not have known about the flaws in it and never have designed fixes for them.

Stop making this personal. I am not attacking you; I'm attacking your idea. Respond to what is written and not the person doing the writing.


Simple and easy to do. Also honest saying up front what is going on.

Except that it is neither simple nor easy for the user. It may say "up front" where included strings come from for a particular compile, but the user must now implement five callbacks. If new functionality comes down the pipe for shaders that uses a new compilation routine (see separate_shader_objects), that new functionality will have to provide new entry points in order to explicitly provide inclusion. And so on.

Every time something is corrected in this proposal, it becomes more and more complex for the user to implement. It went from three callbacks to five, and from one user-defined type to three. Inclusion really should not require this much effort on the part of the user.


merging of different strings to common paths/filenames is undefined beyond handling ".." and ".". This is most troubling since different environments have different rules.

I don't know what more definition you want. If you're talking about case sensitivity, that's effectively implicit in the spec: string compares are case sensitive. It would be good to have this stated explicitly, of course. But otherwise, there's nothing more to specify.

This will not result in problems when trying to mimic a filesystem. If you're mimicking a Windows filesystem, which is case insensitive, then nothing changes. Files that had different names under case insensitivity will continue to have different names under case sensitivity. And if you're mimicking a case sensitive filesystem, again, everything is fine because shading_language_include is case sensitive.

And if you're trying to mimic a filesystem that uses different path separators or hardlinks or some-such... well, tough. That's not something the vast majority of people need to do.

This extension was designed to provide the greatest utility while having the least implementation and user burden. And it strikes a very good balance. No, it doesn't do everything you could possibly want to do with a filesystem; but it does do enough.


I can definitely see where some frameworks create their own "library" of headers that they wish to use. That requires that they make sure their paths are unique; essentially what I am talking about is "path" namespace pollution.

Really, how much inclusion are you expecting to exist? I don't have these kinds of include issues in my C++ projects, let alone for the relatively small numbers of shaders that people will be using. Reasonable conventions ensure that this won't happen for any real-world applications. The scope of shaders simply doesn't merit it.

Not only that, you can always test whether a path already exists in the filesystem. Yes, that means a library will throw an error when it attempts to be used with a conflicting one. But this occurrence will be so rare anyway that doing anything more than just failing would not be worthwhile.
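In the actual extension, that probe would be glIsNamedStringARB against names previously registered with glNamedStringARB. As a self-contained sketch of the pattern, here is a hypothetical in-memory registry (the names register_include and include_exists are invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical in-memory stand-in for the extension's named-string
   store: register_include() plays the role of glNamedStringARB and
   include_exists() the role of glIsNamedStringARB, so a library can
   probe for path conflicts before registering its headers. */
#define MAX_INCLUDES 128
static const char *g_include_names[MAX_INCLUDES];
static int g_include_count = 0;

static int include_exists(const char *name) {
    for (int i = 0; i < g_include_count; i++)
        if (strcmp(g_include_names[i], name) == 0)
            return 1;
    return 0;
}

/* Returns 0 on success, -1 on a conflict or a full table. */
static int register_include(const char *name) {
    if (include_exists(name) || g_include_count == MAX_INCLUDES)
        return -1;
    g_include_names[g_include_count++] = name; /* caller keeps the string alive */
    return 0;
}
```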

Is it possible for conflicts and so forth to happen? Absolutely. But is this eventuality likely enough to be worth the effort that a user would have to spend to write a virtual file system? I highly doubt it.

Also, let's consider how your proposal handles this situation.

Let's say you have two layered libraries that provide their own includes. And you have some shaders that need to use includes from either library as well as some private includes of your own. How do you gain access to their library of includes? They would have to provide access to these includes. But since there is no agreed-upon protocol for this, each library can implement this in their own way. One library might just have some loose files, while another may have put all their include shaders in an XML file format.

So your filesystem code now needs to parse whatever their mechanism for storing these things is. A "simple" stdio implementation of a filesystem goes right out the window; you must use some form of virtual file system. That means writing a lot of code. In the case of name conflicts, you have to decide which library gets primacy.

Isn't that a rather large quantity of work for the user? It certainly isn't "simple", which is how you initially described this proposal. If global state and a few reasonable conventions can solve the problem without excess effort on any party, why not use that?


Now there is another way besides from what I have suggested to handle the global state issue: introduce an object "virtual GL filesystem":

That still requires direct interference with the creation and life-span of a shader object. So it can't work with separate_shader_objects. Nor can it work with any extension that creates programs internally based on shader source that you provide unless it also exposes this interface.


Note that my suggestion above already has this functionality in it.

No, it does not. It simply forces the user to play their own variant of "the horrific #ifdef game" in his filesystem implementation instead. It doesn't solve the problem; it just hands it to someone else.


Adding new entry points is NOT a big deal; GL_EXT_direct_state_access adds a ton of them, and it did not matter. The .dll/.so produced was not that much bigger.

It's not a question of the number of entrypoints (though DSA defined quite a few functions of questionable merit. Seriously, was there a need for functions like glMultiTexParameteriEXT, which still depends on bound state even though getting rid of that was the point of DSA?). It's a question of ease of use and integration with existing code.

You spoke of layered libraries. The global state of shading_language_include makes it easier to use layered libraries with includes precisely because it doesn't add new entrypoints for compiling. Yes, it also means that layered libraries could in fact interfere with your filesystem. But the benefits of ease-of-use outweigh the risks.

Extensions need to work well with existing functionality and code. We've seen this time and again; people will not radically rewrite their rendering code on a whim. By making shading_language_include work with existing functions, it means that users don't have to rewrite their code. They just add some simple initialization code and everything works perfectly. Because of this, they are more likely to actually use the feature.

Yes, this mode of thought led directly to the weirdness of how array buffers are attached to attributes, the bind+gl*Pointer paradigm that confuses new users constantly. Even so, the ability to switch back and forth between buffer objects and client arrays with minimal code changes was instrumental to the widespread adoption of buffer objects. Yes, by now they should have corrected the API oversight, but at the time, it was a perfectly defensible and reasonable action.

And, in this case, there is no API strain caused by shading_language_include. The API is not made more obscure or difficult to use. It is plainly evident exactly what is happening: the filesystem that the user specified to OpenGL is being used for shader string inclusion, for all shader compilation. While the specific effect of shader compilation is now bound to certain global state, that's essentially what you want when you're talking about an include mechanism. Ultimately, any form of inclusion means that the result of compilation will be affected by state external to the shader object itself, whether that is global GL state or some user-defined code.

Global state is not always a bad thing. Particularly when you're dealing with a concept that is fundamentally global.

kRogue
02-25-2011, 05:40 AM
I am being helpful; I'm pointing out the flaws in your proposal. It's your responsibility to find fixes for them because it's your idea.


It is fine to point out flaws, but again, if the flaws are easy to fix and you see the fixes, don't you think it would be at least professional to suggest them? Moreover, in general, though not in this case, feedback that also comments on the portions of a suggestion that are good is critical, as it prevents useful suggestions from being thrown out.



Stop making this personal. I am not attacking you; I'm attacking your idea. Respond to what is written and not the person doing the writing.


I am not taking it personally, but again, the attitude of "attacking an idea" is all messed up. A more constructive and useful approach is to look for ways to make an idea better.

Back to the proposal:


Except that it is neither simple or easy for the user....

I strongly disagree with this opinion. Indeed, many complicated applications have their own virtual file system, and as such, exactly these functions are already available in such a system.
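To illustrate, the three stream callbacks from the proposal at the top of the thread can be implemented over plain stdio in a handful of lines; a sketch, assuming the typedefs from the original post:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* stdio-backed implementations of the proposal's callbacks. An
   application with its own virtual filesystem would supply
   equivalents in roughly the same amount of code. */
typedef void *GLSLSourceStream;

/* open: returning NULL signals failure to the GLSL preprocessor */
static GLSLSourceStream stdio_open(const char *name) {
    return (GLSLSourceStream)fopen(name, "rb");
}

static void stdio_close(GLSLSourceStream *stream) {
    if (stream != NULL && *stream != NULL) {
        fclose((FILE *)*stream);
        *stream = NULL;
    }
}

/* read: returns the number of characters read; bytes past the end of
   the stream are written as 0, matching the proposal's contract */
static int stdio_read(void *ptr, size_t count, GLSLSourceStream *stream) {
    size_t got = fread(ptr, 1, count, (FILE *)*stream);
    if (got < count)
        memset((char *)ptr + got, 0, count - got);
    return (int)got;
}
```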



Every time something is corrected in this proposal, it becomes more and more complex for the user to implement. It went from 3 callbacks to four. From 1 user-defined object to 3. Inclusion really should not require this much effort on the part of the user.


This is how things evolve. How many API points are in the current include specification? More than this proposal has, yet you claim that the current extension is simpler even though it has more overhead per file you wish to make includable. The suggestion I am proposing puts it at roughly a one-time cost per application virtual filesystem: the implementation of the callbacks. The current extension puts the price at each "file" to be made available for inclusion.



I don't know what more definition you want. If you're talking about case sensitivity, that's effectively implicit in the spec: string compares are case sensitive. It would be good to have this stated explicitly, of course. But otherwise, there's nothing more to specify.


Different situations call for different rules: Are the files from the OS? Are the files virtual, from a .zip or .tar.gz? What are the expectations of the developer for file name matching? Different archive formats have different rules for saying two files are the same. Character case is just the beginning; what of more exotic file names such as URIs? [Here, think of WebGL.] Right now the only thing supported is Unix-style paths, nothing more... which immediately limits the extension so that adding additional functionality later becomes even harder and messier.
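As a small illustration of how matching rules differ: an application mimicking a Windows-style archive might fold case and separators into a lookup key before comparing, while a Unix-style one compares bytes exactly. The function below is a hypothetical example of such a normalization (with callbacks, each application picks its own rule):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical normalization for a Windows-style virtual filesystem:
   fold letter case and path separators into a canonical lookup key.
   A Unix-style filesystem would instead compare names byte-for-byte. */
static void windows_style_key(const char *name, char *key, size_t keysz) {
    size_t i = 0;
    for (; name[i] != '\0' && i + 1 < keysz; i++) {
        char c = name[i];
        key[i] = (c == '\\') ? '/' : (char)tolower((unsigned char)c);
    }
    key[i] = '\0';
}
```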




Really, how much inclusion are you expecting to exist? I don't have these kinds of include issues in my C++ projects, let alone for the relatively small numbers of shaders that people will be using. Reasonable conventions ensure that this won't happen for any real-world applications. The scope of shaders simply doesn't merit it.


Absolutely wrong. Before namespaces in C++, implementors of frameworks needed to prefix each of their classes with letters: MFC's CMyClass, Qt's QMyClass, TurboVision's TVMyClass... the list goes on. Now, to keep includes from colliding, a framework, let's call it Ygg, will need to do something along the lines of #include <Ygg/util>. You can bet that common names such as "util", "quaternion", etc. will be used often; naturally this means that a framework will need to document its include paths to make sure they do not get overwritten. Not a good thing, and flaky behavior.




Not only that, you can always test whether a path already exists in the filesystem. Yes, that means a library will throw an error when it attempts to be used with a conflicting one. But this occurrence will be so rare anyway that doing anything more than just failing would not be worthwhile.

Is it possible for conflicts and so forth to happen? Absolutely. But is this eventuality likely enough to be worth the effort that a user would have to spend to write a virtual file system? I highly doubt it.



Following this line, an application and/or framework needs to implement *something*, and *something messy*, to handle that error condition. Compounding the issue, the error will not be hit often, so that error-handling code will not get enough real-life usage to be trusted outside of the developer's house. Compounding the messiness, both the framework and the application will need to check each file they wish to add to the include system and have some behavior to deal with filename collisions. The increase in code complexity to handle that is a heck of a lot larger than the callback API, and it includes generating the "new" names and modifying shader source code to use them. All of this because of an insistence on using global state rather than localized state.




That still requires direct interference with the creation and life-span of a shader object. So it can't work with separate_shader_objects.


Huh? We are talking about setting shader source code. That is it. Yes, it requires a new entry point to pass along the file system, but so what?



Extensions need to work well with existing functionality and code. We've seen this time and again; people will not radically rewrite their rendering code on a whim. By making shading_language_include work with existing functions, it means that users don't have to rewrite their code. They just add some simple initialization code and everything works perfectly. Because of this, they are more likely to actually use the feature.


This is not a realistic argument. Here is an obvious thought: rather than changing the meaning of an existing API point based on global state, an extension can add a new API point whose usage closely mirrors those of existing APIs. That makes it safe and easy to use.




Nor can it work with any extension that creates programs internally based on shader source that you provide unless it also exposes this interface.

That is a good thing: it makes the behavior well defined and clear. If one wants to support other bits, then a new function should be given. Let's take this argument further. Suppose you have a GL wrapper that has an entry point corresponding to glVertexAttribPointer, which takes a pointer as an argument. Here is a question: what is the expected behavior of that wrapper if something is bound to GL_ARRAY_BUFFER? Take it a step further: that function of the wrapper is used by a template-happy function; then what? Making the interpretation of an existing API point change depending on global state that comes later is downright dodgy, and it makes a mess.
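The ambiguity can be shown with a toy model (all names here are invented for illustration): the wrapper cannot tell from its arguments alone whether the pointer is a client-memory address or a buffer offset; it must consult global state:

```c
#include <assert.h>

/* Toy model: the meaning of the pointer argument flips on global
   state, mirroring how glVertexAttribPointer treats its pointer as a
   buffer offset when a buffer is bound to GL_ARRAY_BUFFER and as a
   client-memory address otherwise. */
static unsigned bound_array_buffer = 0; /* 0 == no buffer bound */

typedef enum { SOURCE_CLIENT_MEMORY, SOURCE_BUFFER_OFFSET } AttribSource;

/* A wrapper cannot decide from its arguments what ptr means;
   it must consult the global binding at call time. */
static AttribSource classify_pointer(const void *ptr) {
    (void)ptr; /* the pointer value itself does not disambiguate */
    return bound_array_buffer != 0 ? SOURCE_BUFFER_OFFSET
                                   : SOURCE_CLIENT_MEMORY;
}
```

Whether such a wrapper dereferences its argument or reinterprets it as an offset depends entirely on what happened to be bound earlier, which is exactly the kind of mess described above.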