Thread: Modern GLSL style

  1. #1
    Junior Member Regular Contributor
    Join Date
    May 2012
    Posts
    100

    Modern GLSL style

    With the new C++11 standard, we would like to see GLSL adopt C++ and introduce classes to the shading language. I can think of many useful applications. One is representing a fragment as a class. For example:

    class MyFragment
    {
    public:
        void SetColor(...);
        void SetPosition(...);

    private:
        Color color;
    };

    This way we can direct the fragment to any location or even generate other fragments, as I suggested before.

    std::vector<MyFragment> frags = frag.Clone(4); // std::vector rather than std::list, so operator[] below is valid

    frags[0].SetPosition(...);
    frags[1].SetPosition(...);
    frags[2].SetPosition(...);
    frags[3].SetPosition(...);

    I think OO is the way to go in shaders and it will solve many problems. For now make it an extension:

    GL_ARB_cpp11withSTL_glsl

  2. #2
    Member Regular Contributor
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    325
    (I'll get the popcorn...)

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    I wonder if this is gonna be in 3D...

  4. #4
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    OK, you seem to be operating under the mistaken belief that if you have C++11, that would teleport you into a magical world where fragment shaders can write to different fragments and other such nonsense.

    Having access to std::vector, std::list, classes, templates, template metaprogramming, or anything else that has to do with C++ means absolutely nothing with regard to how a fragment shader works. It doesn't matter what language it uses; what matters is what the actual shader stage does. And that's not going to magically change just because you add C++ features to the shading language.
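
    To make that concrete, here is roughly all a fragment shader invocation can do. This is just a sketch in plain GLSL 3.30; the uv/tex names are illustrative, not from anyone's real code:

    #version 330 core

    in vec2 uv;              // interpolated input from the vertex stage
    uniform sampler2D tex;

    out vec4 fragColor;      // the only user-declared output this invocation
                             // writes, and only for its own fragment

    void main()
    {
        // gl_FragCoord is a read-only input: the fragment's position was
        // fixed by the rasterizer before this shader ever ran. There is
        // no mechanism here to move the fragment or spawn new ones.
        fragColor = texture(tex, uv);
    }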

  5. #5
    Junior Member Regular Contributor
    Join Date
    May 2012
    Posts
    100
    I don't get it. The CPU is the same for all languages, but every language has its own set of features. C++ is more powerful than COBOL, though they execute on the same CPU. The same applies to shading languages. Maybe a subset of C++ would do, but my point is about how the language works. Look at HLSL: it has a register binding feature, on the same hardware, that GLSL is incapable of. I view this syntactically rather than hardware-wise.

    Now I'm thinking of a more powerful feature. Instead of having two different paths, shader GPU code and C++ CPU code, we could make both run in the same context, with each part dispatched to the right processor, or to both... This way we could use the Visual Studio debugger to debug shaders at run time like we do with CPU code. That's why I suggested using C++ instead of a C-like syntax that cannot mix with CPU code.

  6. #6
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    ............
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  7. #7
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    600
    Please, read this: http://bps11.idav.ucdavis.edu/talks/...11-houston.pdf (PDF warning). It will open your eyes about what is reasonable to expect, and not to expect, from a fragment shader.

    Then, take a look at CUDA (or OpenCL) for using a GPU in a more generic (and flexible) fashion, at the cost of some added development pain. Your last point goes down the path of GPGPU.
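
    CUDA and OpenCL have their own syntax, but for familiarity here is a minimal sketch of that more generic model as a GLSL compute shader (GL 4.3); the Data block name and binding 0 are made up for the example:

    #version 430

    layout(local_size_x = 64) in;            // 64 invocations per work group

    layout(std430, binding = 0) buffer Data  // generic read/write storage
    {
        float values[];
    };

    void main()
    {
        // Unlike a fragment shader, each invocation indexes the buffer
        // itself and may read or write anywhere in it (scatter as well
        // as gather) -- this is the GPGPU model mentioned above.
        uint i = gl_GlobalInvocationID.x;
        values[i] = values[i] * values[i];
    }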

  8. #8
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,212
    This suggestion is broken in so many different ways.

    First of all, it assumes that the high-level shader code you write is going to be an exact line-for-line representation of what your compiler will generate and what will actually run on your GPU. Guess what - it's not. Your shader compiler is free to reorder instructions, change things around and otherwise have its own merry way with your GLSL code; so long as the output is correct, that is all that matters.

    Secondly, it ascribes special powers and capabilities to OOP. Wrong again. There is nothing that OOP does that cannot be done with the current, more procedural language; the difference is in how you write the code, not what the code does. Given equivalent and competently written C and C++ code, any half-way decent compiler is going to produce the exact same machine code, and the same applies to a shader compiler. An OOP version of GLSL won't suddenly give you capabilities that you never had before; what it will do is let you express things differently, but you're still restricted to the same hardware capabilities.
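
    For example, plain procedural GLSL already lets you group data and the operations on it; the Material struct and shade function below are made up purely for illustration:

    // Plain, procedural GLSL: a struct plus a function that operates on it.
    struct Material
    {
        vec3  diffuse;
        float shininess;   // unused here; just part of the illustration
    };

    vec3 shade(in Material m, in vec3 n, in vec3 l)
    {
        // A hypothetical "m.shade(n, l)" method would compile to exactly
        // these instructions; the spelling changes, the capabilities don't.
        return m.diffuse * max(dot(n, l), 0.0);
    }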

    Thirdly, you're blurring the lines between the CPU and the GPU. The reality is that these are two completely different processors, with two completely different instruction sets, two completely different specializations. Each excels at a particular task but sucks at the other, and more generalized code that is capable of running on either is going to occupy a weird mid-level of half-OK and half-suck.

    Overall, and taken with your other suggestions, I can quite confidently say that OpenGL is not the API for you. You want something that operates at a much higher level, where you don't have to worry about the details of how things work or even of what works and what doesn't. OpenGL used to be like that - back in 1998 or so. Hardware moved on, suddenly the messy details started becoming important, OpenGL originally didn't move with it, and when it did start moving the end result was too deeply infused with the old philosophy and was crap. It's only in more recent years that things have started getting good again.

    OpenGL is a relatively thin layer on top of your graphics hardware (much, much thinner than CPU-side code is over your CPU), and that doesn't seem to be what you want. You've just made a bad decision, and it's not OpenGL that needs to change, it's you. You need a scene graph API, where you can just position things, set some properties and let everything else happen automatically. OpenGL never set out to be that API.

  9. #9
    Junior Member Newbie
    Join Date
    Jul 2012
    Posts
    1
    Janika, what you describe sounds very much like C++ AMP (http://msdn.microsoft.com/en-us/libr...=vs.110).aspx). It's for GPGPU, however, not rendering, for the reasons other people have explained here. It is "restricted" (albeit nicely) compared to normal C++11, via the restrict keyword, but it can be mixed with CPU C++11 code.

  10. #10
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264
    Quote Originally Posted by Janika View Post
    I think OO is the way to go in shaders and it will solve many problems.
    What problems does OO get rid of?

    Code :
    std::vector<MyFragment> frags = frag.Clone(4);
     
    frags[0].SetPosition(...);
    frags[1].SetPosition(...);
    frags[2].SetPosition(...);
    frags[3].SetPosition(...);

    Wait a minute. You want each fragment to turn into 4 fragments?
    What about the depth value for each fragment?
    What about the stencil value for each fragment?
    What if you are writing to a multisampled buffer?
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
