PDA

View Full Version : Modern GLSL style



Janika
07-04-2012, 12:18 PM
With the new C++11 standard, we would like to see GLSL follow suit and introduce classes to the shading language. I can think of many useful applications. One is representing a fragment by a class. For example:

class MyFragment
{
public:
    void SetColor(...);
    void SetPosition(...);

    Color color;
};

This way we can direct the fragment to any location or even generate other fragments, as I suggested before.

std::vector<MyFragment> frags = frag.Clone(4);

frags[0].SetPosition(...);
frags[1].SetPosition(...);
frags[2].SetPosition(...);
frags[3].SetPosition(...);

I think OO is the way to go in shaders and it will solve many problems. For now make it an extension:

GL_ARB_cpp11withSTL_glsl

menzel
07-04-2012, 02:01 PM
(I'll get the popcorn...)

thokra
07-04-2012, 02:39 PM
I wonder if this is gonna be in 3D...

Alfonse Reinheart
07-04-2012, 02:53 PM
OK, you seem to be operating under the mistaken belief that if you have C++11, that would teleport you into a magical world where fragment shaders can write to different fragments and other such nonsense.

Having access to std::vector, std::list, classes, templates, template metaprogramming, or anything else that has to do with C++ means absolutely nothing with regard to how a fragment shader works. It doesn't matter what language it uses; what matters is what the actual shader stage does. And that's not going to magically change just because you add C++ features to the shading language.

Janika
07-04-2012, 03:03 PM
I don't get it. The CPU is the same for all languages, but every language has its own powerful set of features. C++ is more powerful than COBOL, though they execute on the same CPU. The same applies to shading languages. Maybe a subset of C++ will do, but it's about how the language works. Look at HLSL: it has register binding, a feature that runs on the same hardware yet that GLSL is incapable of expressing. I view this syntactically rather than hardware-wise.

Now I'm thinking of a more powerful feature. Instead of having two separate paths, GPU shader code and C++ CPU code, we could make both run in the same context, with each part dispatched to the right processor (or both)... This way we could use the Visual Studio debugger to debug shaders at run time like we do with CPU code. That's why I suggested using C++ instead of a C-like syntax that cannot mix with CPU code.

aqnuep
07-04-2012, 03:49 PM
............ :doh:

kRogue
07-07-2012, 12:03 PM
Please, read this: http://bps11.idav.ucdavis.edu/talks/04-realTimeRenderingArchitecture-BPS2011-houston.pdf (pdf warning). It will open your eyes to what is and is not reasonable to expect from a fragment shader.

Then, take a look into CUDA (or OpenCL) for using a GPU in a more generic (and flexible) fashion, at the cost of some added development pain. Your last point goes down the path of GPGPU.

mhagain
07-08-2012, 10:02 AM
This suggestion is broken in so many different ways.

First of all, it assumes that the high-level shader code you write is going to be an exact line-for-line representation of what your compiler will generate and what will actually run on your GPU. Guess what - it's not. Your shader compiler is free to reorder instructions, change things around and otherwise have its own merry way with your GLSL code; so long as the output is correct, that is all that matters.

Secondly, it ascribes special powers and capabilities to OOP. Wrong again. There is nothing that OOP does that cannot be done with the current, more procedural language; the difference is in how you write the code, not what the code does. Given equivalent and competently written C and C++ code, any half-way decent compiler is going to produce the exact same machine code, and the same applies to a shader compiler. An OOP version of GLSL won't suddenly give you capabilities that you never had before; what it will do is let you express things differently, but you're still restricted to the same hardware capabilities.

Thirdly, you're blurring the lines between the CPU and the GPU. The reality is that these are two completely different processors, with two completely different instruction sets, two completely different specializations. Each excels at a particular task but sucks at the other, and more generalized code that is capable of running on either is going to occupy a weird mid-level of half-OK and half-suck.

Overall, and taken with your other suggestions, I can quite confidently say that OpenGL is not the API for you. You want something that operates at a much higher level, where you don't have to worry about the details of how things work or even of what works and what doesn't. OpenGL used to be like that - back in 1998 or so. Hardware moved on, suddenly the messy details started becoming important, OpenGL originally didn't move with it, and when it did start moving the end result was too deeply infused with the old philosophy and was crap. It's only in more recent years that things have started getting good again.

OpenGL is a relatively thin layer on top of your graphics hardware (much, much thinner than CPU-side code is over your CPU), and that doesn't seem to be what you want. You've just made a bad decision, and it's not OpenGL that needs to change, it's you. You need a scene graph API, where you can just position things, set some properties and let everything else happen automatically. OpenGL never set out to be that API.

tripcore
07-09-2012, 12:32 PM
Janika, what you describe sounds very much like C++ AMP (http://msdn.microsoft.com/en-us/library/hh265137(v=vs.110).aspx). It's for GPGPU rather than rendering, however, for reasons other people have explained here, and it is "restricted" (albeit nicely, via the restrict keyword) compared to normal C++11, but it can be mixed with CPU C++11 code.

V-man
07-10-2012, 03:03 PM
I think OO is the way to go in shaders and it will solve many problems.

What problems does OO get rid of?



std::vector<MyFragment> frags = frag.Clone(4);

frags[0].SetPosition(...);
frags[1].SetPosition(...);
frags[2].SetPosition(...);
frags[3].SetPosition(...);


Wait a minute. You want each fragment to turn into 4 fragments?
What about the depth value for each fragment?
What about the stencil value for each fragment?
What if you are writing to a multisampled buffer?

Groovounet
07-11-2012, 06:46 PM
lol, +1 on the popcorns.

mhagain
07-12-2012, 04:59 PM
Just for the OP's info, here are the worthwhile features that should be added to GLSL:



Explicit uniform locations.
Yes, UBOs go a long way here, but sometimes all you want to do is just set a single vec4. GLSL has had explicit attribute locations for ages, so why not for uniforms too? This isn't a hardware limitation (HLSL could always let you specify ": register(c0)", for example), it's a design flaw. Add "layout (location=" qualifiers to uniform declarations.
Default values allowed for sampler uniforms.
This is daft - why can't you declare "uniform sampler2D myTexture = 3;" and then just bind the texture object to GL_TEXTURE3 without having to query the uniform location and set its value after compilation? Sure, it's a one-time-only op, but it just needlessly adds to the boilerplate supporting infrastructure you must write before you can actually start doing interesting and fun things.
User-specified entry-points.
Again, it's daft that you must always use main () as your entry-point for every shader. If you have many shaders that all share the same inputs, outputs and uniforms, you might want to combine them all into a single source file. Same with shared subroutines. YES Alfonse, I know about the const GLchar ** param to glShaderSource, but that means having to split your shader sources across multiple files, load and verify each file individually, and build up the array each time. What's needed is a new version of glShaderSource that accepts a const GLchar *entrypoint param.


Something of a personal wish-list I admit, but get this stuff fixed and then maybe we can start talking about bolting on OOP pretties.
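To illustrate items 1 and 2, the declarations might look something like this (hypothetical syntax, modelled on the existing layout qualifiers; tint and myTexture are just illustrative names):

```glsl
// Hypothetical syntax, item 1: explicit uniform locations,
// mirroring the existing layout(location = N) on attributes.
layout(location = 0) uniform vec4 tint;

// Hypothetical syntax, item 2: a default binding for a sampler,
// so no glGetUniformLocation/glUniform1i boilerplate is needed;
// the app just binds its texture object to unit 3.
layout(binding = 3) uniform sampler2D myTexture;

in vec2 texcoord;
out vec4 fragColor;

void main()
{
    fragColor = texture(myTexture, texcoord) * tint;
}
```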

Janika
07-12-2012, 06:33 PM
Understood. Then what I was asking for can somehow be accomplished, with a performance penalty, using something like OpenCL or CUDA...

ScottManDeath
07-12-2012, 07:08 PM
Just for the OP's info, here are the worthwhile features that should be added to GLSL:



Explicit uniform locations.
Yes, UBOs go a long way here, but sometimes all you want to do is just set a single vec4. GLSL has had explicit attribute locations for ages, so why not for uniforms too? This isn't a hardware limitation (HLSL could always let you specify ": register(c0)", for example), it's a design flaw. Add "layout (location=" qualifiers to uniform declarations.
Default values allowed for sampler uniforms.
This is daft - why can't you declare "uniform sampler2D myTexture = 3;" and then just bind the texture object to GL_TEXTURE3 without having to query the uniform location and set its value after compilation? Sure, it's a one-time-only op, but it just needlessly adds to the boilerplate supporting infrastructure you must write before you can actually start doing interesting and fun things.
User-specified entry-points.
Again, it's daft that you must always use main () as your entry-point for every shader. If you have many shaders that all share the same inputs, outputs and uniforms, you might want to combine them all into a single source file. Same with shared subroutines. YES Alfonse, I know about the const GLchar ** param to glShaderSource, but that means having to split your shader sources across multiple files, load and verify each file individually, and build up the array each time. What's needed is a new version of glShaderSource that accepts a const GLchar *entrypoint param.


Something of a personal wish-list I admit, but get this stuff fixed and then maybe we can start talking about bolting on OOP pretties.

For point 3, just prefix your shader code with "#define phong_fragment_shader main" as a workaround.
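A minimal sketch of that workaround (phong_fragment_shader is just an illustrative name):

```glsl
#version 150
// The preprocessor renames the chosen entry point back to main
// before compilation proper, so no API change is needed.
#define phong_fragment_shader main

out vec4 fragColor;

void phong_fragment_shader()
{
    fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
```

Note this only selects one entry point per compile; any other candidate functions in the same source still get compiled in.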

Alfonse Reinheart
07-12-2012, 07:21 PM
Default values allowed for sampler uniforms.
This is daft - why can't you declare "uniform sampler2D myTexture = 3;" and then just bind the texture object to GL_TEXTURE3 without having to query the uniform location and set its value after compilation? Sure, it's a one-time-only op, but it just needlessly adds to the boilerplate supporting infrastructure you must write before you can actually start doing interesting and fun things.

We already have that. (http://www.opengl.org/registry/specs/ARB/shading_language_420pack.txt) Welcome to 2011; glad you could make it.

Chris Lux
07-13-2012, 12:49 AM
Just for the OP's info, here are the worthwhile features that should be added to GLSL:



...
Default values allowed for sampler uniforms.
This is daft - why can't you declare "uniform sampler2D myTexture = 3;" and then just bind the texture object to GL_TEXTURE3 without having to query the uniform location and set its value after compilation? Sure, it's a one-time-only op, but it just needlessly adds to the boilerplate supporting infrastructure you must write before you can actually start doing interesting and fun things.
...





layout(binding = 3) uniform sampler2D myTexture;

is already valid since OpenGL 4.2.

edit: Alfonse's post just appeared here. Sorry for the double post, then...

mhagain
07-13-2012, 06:52 AM
Granted that; not up to speed with the full 4.2 spec.

l_belev
07-16-2012, 06:47 AM
Just for the OP's info, here are the worthwhile features that should be added to GLSL:



Explicit uniform locations.
Yes, UBOs go a long way here, but sometimes all you want to do is just set a single vec4. GLSL has had explicit attribute locations for ages, so why not for uniforms too? This isn't a hardware limitation (HLSL could always let you specify ": register(c0)", for example), it's a design flaw. Add "layout (location=" qualifiers to uniform declarations.
Default values allowed for sampler uniforms.
This is daft - why can't you declare "uniform sampler2D myTexture = 3;" and then just bind the texture object to GL_TEXTURE3 without having to query the uniform location and set its value after compilation? Sure, it's a one-time-only op, but it just needlessly adds to the boilerplate supporting infrastructure you must write before you can actually start doing interesting and fun things.
User-specified entry-points.
Again, it's daft that you must always use main () as your entry-point for every shader. If you have many shaders that all share the same inputs, outputs and uniforms, you might want to combine them all into a single source file. Same with shared subroutines. YES Alfonse, I know about the const GLchar ** param to glShaderSource, but that means having to split your shader sources across multiple files, load and verify each file individually, and build up the array each time. What's needed is a new version of glShaderSource that accepts a const GLchar *entrypoint param.


Something of a personal wish-list I admit, but get this stuff fixed and then maybe we can start talking about bolting on OOP pretties.

I would like to see support for #include too. This means you must be able to specify a callback function, to be called on each #include, which will feed the included file's text to the GLSL compiler.
Currently, if you want to use includes, you have to run your own preprocessor beforehand.

Dark Photon
07-16-2012, 09:04 AM
I would like to see support for #include too. This means you must be able to specify a callback function, to be called on each #include, which will feed the included file's text to the GLSL compiler.
Currently, if you want to use includes, you have to run your own preprocessor beforehand.

A slightly better solution for this is what Cg (http://developer.nvidia.com) supports. It provides APIs for you to give it the text content of include files beforehand (cgSetCompilerIncludeString), as well as a callback (as you describe) to be called if a #include is seen for a file you haven't already given it (cgSetCompilerIncludeCallback).

As a side note, it also provides cgSetCompilerIncludeFile, where you can provide a disk pathname for the content of a specific include file (instead of the already-loaded content string directly), but GL wouldn't want to support that (since GL doesn't access the filesystem; the server may be running on a completely different host than the client).

In fact, if we had this (#include support) as well as what I describe here (http://www.opengl.org/discussion_boards/showthread.php/178176-detecting-if-a-gl_LightSource-is-disabled-in-compatibility-profile?p=1239897&viewfull=1#post1239897) (i.e. using const values and normal GLSL conditional expressions to define "shader permutations", and having the compiler automatically cut-away unreachable code), I think that'd totally eliminate my needs for "sprintf"ing of GLSL shaders together (which is necessary now).
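A minimal sketch of that permutation style, assuming the application patches the const per permutation (USE_FOG and applyFog are illustrative names) and the compiler cuts away the statically-dead branch:

```glsl
#version 330

// Patched per permutation by the application (e.g. by prepending
// a one-line override) instead of sprintf-ing whole shader bodies.
const bool USE_FOG = false;

in vec4 litColor;
out vec4 fragColor;

vec4 applyFog(vec4 c)
{
    return mix(c, vec4(0.5, 0.5, 0.5, 1.0), 0.25);
}

void main()
{
    vec4 c = litColor;
    if (USE_FOG)        // statically false: unreachable, removable
        c = applyFog(c);
    fragColor = c;
}
```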

malexander
07-16-2012, 09:29 AM
I would like to see support for #include too. This means you must be able to specify a callback function, to be called on each #include, which will feed the included file's text to the GLSL compiler.

There is http://www.opengl.org/registry/specs/ARB/shading_language_include.txt, though the developer is responsible for loading the include files themselves and registering their contents with GL. I haven't used it myself though, as we have a preprocessor class in our project.
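For reference, the shader side of that extension looks roughly like this (assuming the application has registered "/common/lighting.glsl" as a named string via glNamedStringARB and compiles with glCompileShaderIncludeARB; shadeLambert is an illustrative name from the hypothetical include):

```glsl
#version 330
#extension GL_ARB_shading_language_include : require

// Resolved against named strings the app registered beforehand,
// not against the filesystem.
#include "/common/lighting.glsl"

in vec3 normal;
out vec4 fragColor;

void main()
{
    fragColor = shadeLambert(normal);
}
```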

l_belev
07-16-2012, 10:21 AM
@ Dark Photon: that would do, yes. I don't mind that way or the other, just want to be able to use includes :)

@ malexander: ARB_shading_language_include would be good enough too, if only it were actually supported..
That reminds me to update my drivers..

malexander
07-16-2012, 06:46 PM
You're right, ARB_shading_language_include is supported in the Nvidia 295 driver I'm using but not AMD Catalyst 12.6 on my other system, making it difficult to recommend as a general solution.

Dark Photon
07-17-2012, 08:26 PM
There is http://www.opengl.org/registry/specs/ARB/shading_language_include.txt, though the developer is responsible for loading the include files themselves and registering their contents with GL. I haven't used it myself though, as we have a preprocessor class in our project.
Ah! Right! Forgot about that for some reason. Thanks for the correction. Not universally supported, though. Same here on the preprocessor - it handles that and the sprintf-ing.