
Thread: Official feedback on OpenGL 3.2 thread

  1. #131
    Advanced Member Frequent Contributor
    Join Date
    Apr 2003
    Posts
    666

    Re: Official feedback on OpenGL 3.2 thread

    I suggest creating your own GL3 context and letting it render into Qt's DC. Would that work for you?
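
    On Windows that would boil down to roughly the sketch below (untested; it assumes the HDC already has a pixel format set up by Qt and that WGL_ARB_create_context is available):

    // Minimal sketch: create a 3.2 context on an HDC that already has a pixel
    // format (e.g. the one Qt set up). Error handling omitted.
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>   // WGL_CONTEXT_* tokens and PFNWGLCREATECONTEXTATTRIBSARBPROC

    HGLRC createGL32Context(HDC hdc, HGLRC shareWith)
    {
        // The extension entry point can only be queried with a current context,
        // so a temporary legacy context is created first.
        HGLRC tmp = wglCreateContext(hdc);
        wglMakeCurrent(hdc, tmp);

        PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
            (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 2,
            0
        };
        HGLRC ctx = wglCreateContextAttribsARB(hdc, shareWith, attribs);

        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tmp);
        return ctx;
    }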

  2. #132
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Alfonse Reinheart
    Looking forward would mean releasing an OpenGL 4.0 specification now, similar to how Microsoft has already released DX11 API documentation. Notice that Microsoft did not release DX10.2; it is DX11.

    When there are major hardware changes, you bump the major version number. OpenGL 3.x should not promote to core any features that do not exist on 3.x-class hardware.

    OpenGL core should support those things exactly and only when they make version 4.0. As pointed out previously, 3.x is for a particular level of hardware; 4.0 is for a higher level of hardware.

    Breaking this model screws everything up. Take 3.2 for example. It includes GLSL 1.5, ARB_sync, and ARB_geometry_shader4 as part of the core, as well as the compatibility/core profile system. If you add on top of this a DX11 feature that can only be made available on DX11 hardware, then anyone using these features on standard 3.x hardware has to use them as extensions of 3.1, not core features of 3.2.

    No. 4.0 should be where DX11 features are supported, not 3.2.
    I never said I'm talking about OpenGL 3.2. In my initial post I just wrote my wish-list for future releases of the API. I never mentioned that it should be 3.x or 4.x. What about tesselation? It is also a DX11 feature and nobody complained about that I've listed it. I think you either misunderstand me or just want to find some mistake in what I've said. Please, I'm just talk about what I would like to see in future versions because I would really like to use them.

    Quote Originally Posted by Alfonse Reinheart
    Think about using texture arrays for materials. How can you assign a particular material to each and every triangle/mesh?
    Why would you? Outside of some form of gimmick, I can't think of a reason why you would need this.
    Eric Lengyel pointed out one example, but here's mine: I would like to batch the drawing of multiple objects, which (without texture arrays) can only be done with texture atlases (which suck, IMO) or with 3D textures, but then I cannot use mipmapping. Nowadays rendering is more CPU bound than GPU bound, so batching is one of the main opportunities for optimization. That's why I would use texture arrays this way.
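    For illustration, a fragment shader for such a batch could look roughly like this (just a sketch; the names u_materials and v_layer are made up):

    // Sketch: with a texture array, all batched objects can share one texture
    // binding and one draw call; each object selects its material by layer.
    static const char* batchFragmentShader =
        "#version 150\n"
        "uniform sampler2DArray u_materials;\n"
        "in vec2 v_texcoord;\n"
        "flat in float v_layer;          // per-object layer index\n"
        "out vec4 fragColor;\n"
        "void main()\n"
        "{\n"
        "    // the third coordinate picks the layer; each layer has its own mip chain,\n"
        "    // which a 3D texture or an atlas cannot give you cleanly\n"
        "    fragColor = texture(u_materials, vec3(v_texcoord, v_layer));\n"
        "}\n";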
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  3. #133
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    I would agree that the ATI drivers are much better than they have been in recent times, but I wouldn't go as far as calling them "quite good" yet. The problem with compressed array textures I mention above is only one of several open bug reports that we have on file with ATI right now.
    OK, you're right; unfortunately they don't put much effort into their OpenGL drivers in favour of the DX ones. I also hate this, because I'm a "weird animal": an OpenGL user and an ATI supporter.
    That's why I also appreciate Khronos' release schedule, because at least it forces ATI/AMD to adopt new OpenGL features. They will do it, because they want to put the "OpenGL 3.x compliant" sticker on their cards.

    Quote Originally Posted by Eric Lengyel
    Another particularly annoying bug is that changing the vertex program without changing the fragment program results in the driver not properly configuring the hardware to fetch the appropriate attribute arrays for the vertex program. This forces us to always re-bind the fragment program on ATI hardware.
    As I see it, you're using assembly shaders. I liked them much more a few years ago as well, because they leave much more room for optimization, but unfortunately ATI is very weak there; at least they don't put much effort into developing assembly shaders any further, they are just concentrating on GLSL. Again, I think that's because GLSL is the advertised way of writing shaders.

    Quote Originally Posted by Eric Lengyel
    ATI still has a way to go before they catch up to Nvidia's stability.
    Yes, I totally agree. Still, I believe in them anyway, because ATI's history started a long time ago, when they caught up with the big players by purchasing small hardware companies and using their interesting ideas. That's why I like them, and I think they still come up with some innovative ideas. OK, I'm not objective, but who is?

    Anyway, all of this was just my personal opinion. Maybe I'm not the most competent contributor to the topic; I'm just a so-called garage OpenGL developer, because, even though I'm a professional software developer, I'm currently working in the telecommunications industry. There isn't much demand for OpenGL developers in Hungary.

    So sorry for sticking to my opinion, and thanks for the many replies. That's exactly what I wanted to achieve: to get some objective feedback on my vision.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  4. #134
    Junior Member Regular Contributor
    Join Date
    Jul 2000
    Location
    Roseville, CA
    Posts
    159

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by aqnuep
    As I see it, you're using assembly shaders. I liked them much more a few years ago as well, because they leave much more room for optimization, but unfortunately ATI is very weak there; at least they don't put much effort into developing assembly shaders any further, they are just concentrating on GLSL. Again, I think that's because GLSL is the advertised way of writing shaders.
    I still use assembly shaders whenever possible because the compile times are so much faster, and the C4 Engine generates shaders on the fly. I love Nvidia for the awesome effort they put into maintaining assembly support for all the new hardware features. The C4 Engine can also generate GLSL fragment shaders, and those are used for any particular shader requiring a feature that is only exposed through GLSL on ATI hardware (or only works correctly in GLSL on ATI hardware, like texture fetch with bias).

    Another big advantage to using assembly shaders is the ability to have global parameters. I still can't believe those were left out of GLSL. According to issue #13 of GL_ARB_shader_objects, this functionality was considered useful, but was deferred for some idiotic reason. So now, if I need to render 50 objects with 50 different shaders, and they all need to access the same light color, I'm forced to specify that light color 50 times as a parameter for all 50 shaders. Brilliant.
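
    Roughly, the assembly-side global looks like this (sketch only; env slot 0 and the names are arbitrary):

    // Sketch: ARB assembly "program.env" parameters are shared by every program
    // bound to the same target, so setting one once updates all of them.
    static const char* fragmentProgram =
        "!!ARBfp1.0\n"
        "PARAM lightColor = program.env[0];\n"   // global, not per-program
        "TEMP diffuse;\n"
        "TEX diffuse, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL result.color, diffuse, lightColor;\n"
        "END\n";

    // Host side: one call, and every ARB fragment program sees the new value.
    glProgramEnvParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0, r, g, b, 1.0f);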

  5. #135
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    I still use assembly shaders whenever possible because the compile times are so much faster, and the C4 Engine generates shaders on the fly. I love Nvidia for the awesome effort they put into maintaining assembly support for all the new hardware features. The C4 Engine can also generate GLSL fragment shaders, and those are used for any particular shader requiring a feature that is only exposed through GLSL on ATI hardware (or only works correctly in GLSL on ATI hardware, like texture fetch with bias).
    Yes, this makes sense, of course. I read about your engine in the past. I abandoned assembly shaders precisely because they are only kept up to date in NVIDIA's drivers. BTW, it's also the ARB's fault that they aren't maintained, because there hasn't been a real vendor-independent extension for them in a long time.

    Quote Originally Posted by Eric Lengyel
    Another big advantage to using assembly shaders is the ability to have global parameters. I still can't believe those were left out of GLSL. According to issue #13 of GL_ARB_shader_objects, this functionality was considered useful, but was deferred for some idiotic reason. So now, if I need to render 50 objects with 50 different shaders, and they all need to access the same light color, I'm forced to specify that light color 50 times as a parameter for all 50 shaders. Brilliant.
    Using uniform buffers should solve that problem from now on. But I'm not familiar with your exact use case, so maybe I'm wrong.
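    Something along these lines (a GL 3.1+ sketch; the block and variable names are made up):

    // Sketch: one uniform buffer holds the shared light color; every program's
    // "Lighting" block is pointed at binding point 0, so updating the buffer
    // once is enough for all 50 shaders.
    //
    // In every shader:
    //     layout(std140) uniform Lighting { vec4 lightColor; };
    //
    // At init, per program:
    GLuint idx = glGetUniformBlockIndex(prog, "Lighting");
    glUniformBlockBinding(prog, idx, 0);

    // Whenever the light changes (once, not 50 times):
    glBindBuffer(GL_UNIFORM_BUFFER, lightingUBO);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, 4 * sizeof(GLfloat), lightColor);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightingUBO);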
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  6. #136
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    kRouge, skynet, thanks for the info.

    "I suggest creating your own GL3-Context and let it render into Qt's DC. Would that work for you?"

    I haven't thought this through, but as I see it, that could work. I am working on Windows exclusively, so I assume I would create the 3.x context myself after Qt has initialized the widget, and then somehow bind it to the same window?

    Maybe I'll try it in a few days. But I think it would be more appropriate to move this discussion into another thread then.

    Actually, I was hoping that Trolltech would add flags for 3.x context creation to Qt soon, but I can't find any information on whether that is planned.

    Jan.
    GLIM - Immediate Mode Emulation for GL3

  7. #137
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    600

    Re: Official feedback on OpenGL 3.2 thread

    I, err, don't think you can make the GL context yourself; let me elaborate:

    If you want to use QGLWidget, you must let Qt make the context for you; in doing so, the painter back end will then use GL to draw.

    If you make the context yourself, you will need to use a plain QWidget and duplicate a lot of the QGLWidget code yourself (roughly the skeleton sketched below): swapping buffers, etc. Worse, you will need to get into some more hackery if you want to use QPainter on your widget, because chances are it will do ungood but interesting things..

    Or just hack Qt yourself, and *ick* rebuild it on Windows... ewwww...
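
    Just to illustrate the "plain QWidget" route (an untested sketch, Qt 4.x on Windows; the class name is made up and the GL calls are left as comments):

    #include <QWidget>
    #include <QPaintEvent>

    // The widget owns its pixels and keeps Qt's own painting out of the way;
    // everything QGLWidget normally does (context, makeCurrent, SwapBuffers)
    // has to be reimplemented by hand.
    class RawGLWidget : public QWidget
    {
    public:
        RawGLWidget(QWidget* parent = 0) : QWidget(parent)
        {
            setAttribute(Qt::WA_PaintOnScreen);      // we draw directly to the screen
            setAttribute(Qt::WA_NoSystemBackground); // don't let Qt clear the window
        }

        QPaintEngine* paintEngine() const { return 0; }  // disable QPainter entirely

    protected:
        void paintEvent(QPaintEvent*)
        {
            // make our own 3.x context current on the HDC of (HWND)winId(),
            // issue GL calls, then SwapBuffers() -- all by hand
        }
    };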

  8. #138
    Junior Member Newbie
    Join Date
    Nov 2007
    Posts
    22

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Eric Lengyel
    Quote Originally Posted by Scribe
    As you say, any potential performance losses range from minimal in the worst case to performance gains in the best case.
    I should mention that there is at least one case in which alpha test hardware is very important: when rendering a shadow map with alpha-tested geometry. Since the shader used for this is almost always very short (one cycle on a lot of hardware), adding a KIL instruction to the shader will cut performance by as much as 50%. The equivalent alpha test is free.

    Also, since the alpha test must be supported by the driver for the foreseeable future, I don't think IHVs are going to drop hardware support for it any time soon. It's not difficult to implement in hardware, and it would be silly to burden the driver with recompiling a shader just because the alpha test was enabled/disabled or the alpha function was changed.
    I would be interested to see just how much room alpha-testing hardware requires on a GPU. If removing it let you fit in an extra shader core, that would at least explain why you'd want to remove it. I mean, a 50% slowdown when creating shadow maps versus a performance increase on all other pixel ops may balance out. It would be nice to know the answer to this one.

    Also, in future generations such as the rumored *cough* 6 times!? faster DX11 GT300 *cough*, ray tracing etc. may look more attractive, and devs may start wanting shader core power over other fixed-function features?

    On a side note I'm seriously struggling to believe x6!
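
    For reference, the two variants being compared look roughly like this (a sketch; the fixed-function calls are real GL, the texture name and 0.5 threshold are made up):

    // Sketch of the two approaches for an alpha-tested shadow pass.
    // (1) Fixed-function alpha test: the depth-only shader stays trivial.
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);

    // (2) Without alpha-test state, the rejection moves into the fragment shader,
    //     e.g. in GLSL:
    //         vec4 c = texture(u_foliage, v_uv);
    //         if (c.a < 0.5) discard;
    //     which is the extra KIL/discard instruction Eric mentions.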

  9. #139
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Official feedback on OpenGL 3.2 thread

    "On a side note I'm seriously struggling to believe x6!" - me too.

    kRouge: Yes, I thought Qt would create the "main" context (2.1), and then via the extension I'd create the 3.x context and use that for further rendering, no?
    GLIM - Immediate Mode Emulation for GL3

  10. #140
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    600

    Re: Official feedback on OpenGL 3.2 thread

    It would work, except that you need to change the GL context handle that Qt is using to the one you get back from wglCreateContextAttribsARB / glXCreateContextAttribsARB, and that handle's value (and type!) is hidden away in the _source_ files of Qt. When you look into a Qt header you will often see this pattern:

    class QClassPrivate;               // private implementation, defined only in the .cpp

    class QClass                       // plus some icky Qt quasi-macros for moc
    {
    public:
        // yada-yada-yada
    protected:
        // yada-yada-yada

        // some more yada-yada-yada for slots and signals

    private:
        QClassPrivate *d;              // the d-pointer: the real, platform-specific state
    };

    and the class definition for QClassPrivate will be different depending on the platform Qt is compiled for.

    Qt's abstraction for a GL context is QGLContext, and as expected it does not expose the actual context handle from the windowing system, much less let you change it, so sighs.

    In Qt's defense though, it is the same API for all of the following (well, mostly the same Qt API, with the caveat that the platform has to support that functionality):

    X11 with desktop GL, using GLX to create the context
    X11 with GLES1 or GLES2, using EGL to create the context
    Windows with desktop GL, using WGL to create the context
    Windows CE with GLES1 or GLES2, using (I think) EGL to create the context

    An example caveat: QGLShader and QGLShaderProgram are not supported in GLES1 (duh).


    Um, err, you are right, we should make a Qt GL thread for this, sighs... Well, if you want to write more on this (and for me to write as well), start a new thread and I will post there. My apologies to those wanting to read GL 3.2 feedback and not Qt.
