Part of the Khronos Group
OpenGL.org


Thread: OGL render quality

  1. #1
    Junior Member Newbie
    Join Date
    Jun 2013
    Posts
    12

    OGL render quality

    Hi all,
    is it possible to configure OGL in software so as to get the same render quality on different computers/graphics adapters? It seems obvious that simple software switches like GL_NICEST, GL_LINE_SMOOTH or GL_POLYGON_SMOOTH are not enough. Is there a way to switch to the best render quality without touching the hardware setup?
    Regards
    Mike
    Last edited by towsim; 06-04-2013 at 03:09 AM.
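    For reference, the switches named in the question are typically used like this - a minimal sketch assuming a current legacy GL context (the function name is ours, for illustration only):

    ```c
    #include <GL/gl.h>

    /* Request the legacy smoothing discussed in this thread. Note that
       these calls only *request* smoothing; the spec does not say how,
       or how well, a given driver implements it. */
    void enable_legacy_smoothing(void)
    {
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

        glEnable(GL_POLYGON_SMOOTH);
        glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);

        /* Smoothing has no visible effect without blending enabled. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }
    ```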

  2. #2
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,099
    is it possible to configure OGL in software so as to get the same render quality on different computers/graphics adapters?
    Let's think about this.

    First of all, OpenGL implementations are huge state machines (not so huge nowadays, but still). In effect, you already configure a lot of stuff; antialiasing of primitives in legacy GL is one example (you change the specific state with glEnable() in this case). In legacy GL you have control over a lot of features that are on or off at a given point in time, but you have no control over how the feature you just enabled is implemented - except if you use an open-source software implementation, where you could theoretically alter the GL's behavior. The thing is, a software renderer is slow. So if you want the best in terms of performance, you'll take an implementation that uses the underlying graphics hardware to the fullest extent. In this case, the GL is a black box - except for the Intel DRI driver on Linux, which is open source (I can't tell right now if that's also the case for the Windows driver) - and still, I wouldn't dare fiddle with an implementation unless I were nearly 100% sure about what I was doing.

    Secondly, OpenGL, first and foremost, is a specification. It becomes something concrete and usable only when someone implements it. Such implementations are offered by hardware vendors, e.g. AMD, Intel and NVIDIA, and sometimes as pure software implementations, e.g. Mesa. Implementation details vary - mainly because hardware varies as well and implementations have to account for the specifics of the hardware they communicate with. In an ideal world, every GPU out there would work the same way and we'd have a single, bug-free, super awesome GL implementation. But as always, reality diverges substantially. It doesn't matter how hard you try or how correct your code is, you'll never get the exact same result on different hardware. However, the results are either very, very close or someone is doing something wrong, e.g. due to a driver bug.

    It seems obvious that simple software switches like GL_NICEST, GL_LINE_SMOOTH or GL_POLYGON_SMOOTH are not enough.
    Well, that's why we have multiple antialiasing algorithms available in hardware. You can either use standard multisampling, core since OpenGL 1.3, or go for any post-processing algorithm you can express using shaders, e.g. FXAA, MLAA and many more. All antialiasing algorithms have different trade-offs in regard to performance (mainly ALU ops and bandwidth), memory consumption and visual outcome. However, any of the above is preferable to legacy point, line and polygon smoothing - the mere fact that you can be absolutely sure that any of the above is actually done in hardware should be a convincing enough argument. With the legacy stuff, you can't be sure. On the other hand, you'll need shader-capable hardware and drivers. But really, if you're not stuck with a legacy codebase and are starting to write new code, you should be keelhauled if you still go for legacy GL today.
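    To put a rough number on the memory trade-off mentioned above (a back-of-the-envelope sketch; the resolution and formats below are illustrative assumptions, not from the thread): a multisampled framebuffer stores one color value and one depth/stencil value per sample, per pixel.

    ```c
    /* Rough memory estimate for a multisampled framebuffer: every pixel
       stores `samples` color samples plus `samples` depth/stencil samples. */
    unsigned long long msaa_framebuffer_bytes(unsigned width, unsigned height,
                                              unsigned samples,
                                              unsigned color_bytes,
                                              unsigned depth_bytes)
    {
        return (unsigned long long)width * height * samples
             * (color_bytes + depth_bytes);
    }
    ```

    At 1920x1080 with RGBA8 color (4 bytes) and a 24/8 depth-stencil buffer (4 bytes), 4x MSAA comes to roughly 63 MiB, against about 16 MiB for the single-sampled case - which is why sample counts beyond 4x or 8x get expensive quickly.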

  3. #3
    Junior Member Newbie
    Join Date
    Jun 2013
    Posts
    12
    Thank you for that detailed answer. I think my question is simpler than it looks. There are applications on the market which show perfect render quality while the hardware setup says 'application-controlled' for all the important switches. This is what I am aiming for. You mentioned multisampling as a possibility; I think I will try it to see if it gives the expected result.
    Thank you,
    Mike

  4. #4
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,099
    Ahh... you mean as in controlled by driver settings? I don't trust anything with a potentially high performance penalty (like many AA techniques) hiding behind a control-panel combo box. Plus, what if you don't care what the control panel says - as in the case of a purely post-processing AA approach over which the driver has no control? I don't quite see the "advantage" of being able to force AA off, on or application-controlled when applications, especially modern games, pride themselves on having the most awesome AA algorithm ever, fully customizable inside the game. Personally, I don't give a damn about that kind of setting - plus, it's not cross-platform.

    EDIT: Yes, it's useful for games that don't have native AA support! If a game is already out there, AA support won't be patched in, and you want more quality, driver-side AA is OK.

  5. #5
    Junior Member Newbie
    Join Date
    Jun 2013
    Posts
    12
    I had a little success! What I wanted to achieve was an acceptable render quality at least when the graphics adapter is on factory settings, meaning almost all switches are set to 'application controlled'. The success came from GLEW. Once it was installed and integrated, I was able to set OGL to multisampling with a previously determined pixel format index. The graphics look like a charm on all the different computers. So I can sleep at night again...
    Thank you,
    Mike
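    For readers landing here later: what post #5 describes - using GLEW on Windows to pick a multisampled pixel format - looks roughly like the sketch below. It assumes a dummy context already exists (WGL extension entry points can only be loaded while a context is current), and error handling is omitted; the function name is ours, not from the thread:

    ```c
    #include <GL/glew.h>
    #include <GL/wglew.h>

    /* Ask WGL_ARB_pixel_format for a double-buffered, 4x multisampled
       RGBA format. Returns the pixel format index to pass to
       SetPixelFormat(), or -1 if none matched. */
    int choose_msaa_pixel_format(HDC hdc)
    {
        const int attribs[] = {
            WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
            WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
            WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB,     32,
            WGL_DEPTH_BITS_ARB,     24,
            WGL_SAMPLE_BUFFERS_ARB, 1,  /* multisampled */
            WGL_SAMPLES_ARB,        4,  /* 4x MSAA */
            0                           /* terminator */
        };
        int  format = 0;
        UINT count  = 0;
        if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count))
            return -1;
        return count > 0 ? format : -1;
    }
    ```

    After creating the real context with that format, glEnable(GL_MULTISAMPLE) turns multisampled rasterization on (the spec enables it by default, but being explicit costs nothing).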
