Thread: sampler-Variables in Uniform-Blocks

  1. #21
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    That is exactly it: it would make things slower if one did not care, rather than impossible. That's what I want.

  2. #22
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    That's what a performance trap is: something that looks convenient, but is in reality slow and should never be used. Like immediate mode. Or client-side vertex arrays. Notably, both of those are gone.

    The API is not there to be convenient; it's there to provide access to the hardware, with minimal overhead.

  3. #23
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    The statement "are gone" is somewhat misleading. The learning/optimization curve of OpenGL is not going to be shortened any time soon. I prefer things to be quick to code first and quick to execute later on.

    it's there to provide access to the hardware, with minimal overhead.
    OpenGL is not a hardware driver in my reading. Windows has a great GUI despite the fact that it is an operating system. One can criticize the fact that one cannot get the OS without the GUI, but not that the GUI is shipped with the OS, if you get my meaning.
    Last edited by hlewin; 01-31-2013 at 04:38 AM.

  4. #24
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by hlewin
    The statement "are gone" is somewhat misleading.
    They are - at least for everyone doing modern OpenGL right and caring about performance. BTW, even though GL_ARB_compatibility permits using all the old nonsense, it's just the syntax that's still there. Under the hood, all that crap is emulated using current hardware facilities.

    Quote Originally Posted by hlewin
    I prefer things to be quickly codeable first and quickly to execute later on.
    Since when is something really elaborate quickly codeable in OpenGL? Correct OpenGL usage needs knowledge, effort and in most cases time. It doesn't matter if you save time coding when the result runs several times slower than the semantic equivalent you put more effort into.

    Quote Originally Posted by hlewin
    OpenGL is not a hardware driver in my reading.
    No, OpenGL is a specification. Your OpenGL implementation, however, is part of the driver and it implements an interface to the graphics hardware - hopefully with minimal overhead, like Alfonse suggested.

    Quote Originally Posted by hlewin
    Windows has a great GUI despite the fact that it is an operating system. One can criticize the fact that one cannot get the OS without the GUI, but not that the GUI is shipped with the OS, if you get my meaning.
    I don't know about the others, but I don't get it.

  5. #25
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    They are - at least for everyone doing modern OpenGL right and caring about performance. BTW, even though GL_ARB_compatibility permits using all the old nonsense, it's just the syntax that's still there. Under the hood, all that crap is emulated using current hardware facilities.
    Which is a good thing, as using the old crap makes learning OpenGL quite a lot easier. And as you say, the principles stay roughly the same. For my taste, the compatibility spec does not go far enough to provide a smooth transition from the beginner's tutorials downloadable everywhere to a state-of-the-art application.

    Since when is something really elaborate quickly codeable in OpenGL? Correct OpenGL usage needs knowledge, effort and in most cases time. It doesn't matter if you save time coding when the result runs several times slower than the semantic equivalent you put more effort into.
    It matters, for example, when using the declaratory elements of the language binding. See the example above. When sketching things out I do not want to care about the alignment requirements of glBindBufferRange. That can be optimized once things have been implemented and a bottleneck actually occurs. I feel it's unnecessary to be forced to write hardware-friendly, optimized code in the first place. Who cares about the need for 100, or even 1000, readbacks from the GPU per frame? That is something one needs to care about when writing bleeding-edge stuff, and bleeding-edge only for about 6 months until the next GPU generation comes out. I have no problem wasting 10000 clock cycles per frame. I have a problem wasting work-hours having to cope with offset alignment requirements.
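    To illustrate what I mean, here is a rough, untested sketch of the kind of boilerplate the alignment requirement pushes onto the application: query GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT and round sub-buffer offsets up before calling glBindBufferRange (the helper and parameter names are made up for the example):
    Code:
        /* Sketch only: round per-object offsets up to the implementation's
           uniform-buffer offset alignment before glBindBufferRange. */
        static GLintptr align_up(GLintptr offset, GLint alignment)
        {
            return ((offset + alignment - 1) / alignment) * alignment;
        }

        void bind_object_uniforms(GLuint ubo, GLuint bindingIndex,
                                  GLsizeiptr perObjectSize, int objectIndex)
        {
            GLint alignment = 1;
            glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &alignment);

            /* Objects were written into the buffer at this aligned stride,
               so the computed offset always satisfies the requirement. */
            GLintptr stride = align_up(perObjectSize, alignment);
            glBindBufferRange(GL_UNIFORM_BUFFER, bindingIndex, ubo,
                              objectIndex * stride, perObjectSize);
        }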

  6. #26
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Which is a good thing, as using the old crap makes learning OpenGL quite a lot easier.
    Is the dark side stronger?

    No. Quicker. Easier, more seductive.

    Just because something is easy doesn't make it good. I have never seen a fixed-function-based tutorial really explain how things actually work in the code, what all those parameters to various functions mean and so forth. Whereas you can't write shader-based code without knowing what you're doing.

    Users learn to use gluPerspective without having the slightest clue what it means. They learn to use glTexEnv without knowing what it's doing. They memorize and regurgitate glBlendFunc parameters to achieve some effect without any idea what it is really doing. And all the while, they think they are "learning" computer graphics, when in reality, they're just copy-and-pasting bits of code that worked before into some other place.

    And when they encounter a problem, because the Frankenstein code they've assembled from 20 different tutorials doesn't integrate well, they ask here, without the slightest clue what's broken or how to fix it.

    It may take longer to learn via shaders, and you may not be able to see glamorous results quickly. But when you learn it, you learn it. You aren't just copying bits of code around; you're understanding what you are doing.

  7. #27
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by hlewin
    Which is a good thing, as using the old crap makes learning OpenGL quite a lot easier.
    Nonsense. A lot of the stuff you needed to do with legacy OpenGL simply does not apply to modern OpenGL.

    Quote Originally Posted by hlewin
    When sketching things out I do not want to care about the alignment requirements of glBindBufferRange.
    When I registered on this forum almost 3 years ago it was because I stumbled over the buffer offset alignment for uniform buffers. Ok, so it's not too intuitive. However, when you're doing OpenGL there's stuff that's implementation dependent. Knowing that and how to deal with it is sometimes essential. In any case, there's the spec you can read. And don't tell me you don't have to read other specs or API docs or documentation in general during your workday. If you don't want to read the spec you can ask here or other places and people will help you. Still, nobody's going to change the spec just because some parts of it are an inconvenience to you.

    Quote Originally Posted by hlewin
    I feel it's unnecessary to be forced to write hardware-friendly, optimized code in the first place.
    Who forces you? YOU need to force yourself if you want fast code. By your logic, writing code that uses cache lines well is wasted effort. Or making sure data is properly aligned so memory accesses work properly. Or utilizing SIMD instructions. Or inline assembly. Etc., etc. Oh well... It's fine to first make code correct and then make it fast, but disregarding platform-specific quirks is, to put it diplomatically, simply unwise.

    Quote Originally Posted by hlewin
    Who cares about the need for 100, or even 1000, readbacks from the GPU per frame?
    Ehm, everyone who's not completely insane? Do you have any idea what that many readbacks will do to your program's performance?

    Quote Originally Posted by hlewin
    That is something one needs to care about when writing bleeding-edge stuff.
    So your argument is, unless one writes a high-end renderer for use in next-gen AAA games, performance simply doesn't matter?

    Since we're straying very far from your original proposal, let me finally urge you to consider the following: if you don't want to write high-performance code, that's OK, and if you're happy with the result, good for you. Still, I'm pretty confident that most experienced or semi-experienced OpenGL devs like to make things fast, and they want and need an API that caters to that desire. At least that's the case for me. OpenGL is not designed to provide maximum convenience; it's supposed to provide a means to write high-performance rendering applications, and performance usually comes at a price. This includes decisions at the hardware level which may not be transparent to the application developer but are still necessary to keep performance up. If that means I have to sacrifice some convenience, then sign me up. Wishing for changes to be adopted that result in implementations being slower than their predecessors is simply unacceptable.

    Bringing suggestions to improve OpenGL is always good if they're valid, but your suggestion has been dismissed by several very experienced people (myself not included) during a long discussion. It's time to let it go.

  8. #28
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    595
    This is .. almost fun to watch.

    At any rate, what hlewin wants is already available as an NVIDIA-only extension. As stated before, that extension assumes point-blank that all the GPU needs in the shader when accessing a texture is a 64-bit value. What he fails to grasp is that other hardware may or may not operate that way.

    As a side note, the NVIDIA extension offers several distinct advantages over the glBindTexture jazz:
    1. Avoid glBindTexture and pass the 64-bit address directly. This is the same avoid-the-binding savings that NVIDIA's original bindless extensions offer.
    2. With NVIDIA's bindless texture, the need for a texture atlas utterly disappears. You no longer need to make sure you are using no more than N textures; you can use them all (subject to VRAM room!). What one uses to choose the texture can then come from anything: attributes, or buffer objects {be they uniform or texture buffer objects} (the former being what he wants so badly); see the sketch below.
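
    Roughly, and untested, this is what that looks like with the ARB form of the extension (GL_ARB_bindless_texture; the NVIDIA-only variant uses the ...NV entry points), keeping the 64-bit handle in a uniform block. The buffer and uniform names are made up for the example:
    Code:
        /* Sketch only: sample through a 64-bit handle stored in a UBO,
           assuming GL_ARB_bindless_texture and a GL 4.2+ context. */
        static const char *fragment_src =
            "#version 420 core\n"
            "#extension GL_ARB_bindless_texture : require\n"
            "layout(std140, binding = 0) uniform Material {\n"
            "    sampler2D diffuse;   /* backed by the 64-bit handle */\n"
            "};\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() { color = texture(diffuse, uv); }\n";

        void store_handle_in_ubo(GLuint texture, GLuint ubo)
        {
            /* One-time setup: get a handle and make it resident.  No
               glBindTexture is needed when sampling through the handle. */
            GLuint64 handle = glGetTextureHandleARB(texture);
            glMakeTextureHandleResidentARB(handle);

            /* Write the handle where the std140 block expects it. */
            glBindBuffer(GL_UNIFORM_BUFFER, ubo);
            glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(handle), &handle);
        }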


    In theory one could imagine that an integer computed/determined in a shader could be used to specify what texture unit to use; but I do not really buy that either since it forces an implementation to have a separate thing orthogonal to the fragment shader to do the sampling (which I guess is the case for NVIDIA).

    I'd still like to see NVIDIA's bindless for buffer object data somehow come to core in some form, but I do not think I will; it assumes too much: that, for the data behind a buffer object, a 64-bit value is all one needs.
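
    For reference, a rough, untested sketch of how that 64-bit value is obtained on the API side with GL_NV_shader_buffer_load; the buffer name is made up for the example:
    Code:
        /* Sketch only: make a buffer resident and fetch its raw GPU address,
           assuming GL_NV_shader_buffer_load is available. */
        GLuint64EXT gpu_addr = 0;

        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV,
                                    &gpu_addr);
        /* gpu_addr can then be handed to a shader (e.g. via glUniformui64NV)
           and dereferenced there through the extension's pointer types. */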

  9. #29
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    Quote Originally Posted by kRogue View Post
    At any rate, what hlewin wants is already available as an NVIDIA only extension.
    Not exactly. Bindless texture works because it introduced opaque handles, represented by 64-bit integers, to accomplish getting samplers from buffers. What hlewin wants is for a non-opaque API concept, the texture unit index, to be enough for the shader to create samplers from. Furthermore, bindless textures require one more important additional step: making the texture resident.

    Also, what he wants is for the GL implementation to parse the buffer and automagically translate API values into opaque, implementation-dependent values. That's the nonsense part.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  10. #30
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    595
    Quote Originally Posted by aqnuep View Post
    Not exactly. Bindless texture works because it introduced opaque handles, represented by 64-bit integers, to accomplish getting samplers from buffers. What hlewin wants is for a non-opaque API concept, the texture unit index, to be enough for the shader to create samplers from. Furthermore, bindless textures require one more important additional step: making the texture resident.

    Also, what he wants is for the GL implementation to parse the buffer and automagically translate API values into opaque, implementation-dependent values. That's the nonsense part.
    The need to make it resident I already noted; what he was originally after was samplers in a buffer object, and NVIDIA's bindless does give that. The rest of what he was going on about, a GL implementation needing to check the buffer object and so on, I think was him just getting painted into a corner... you can definitely emulate storing which texture unit to use in a buffer object just by having an additional array of samplers (declared separately, outside the block) indexed by that value, each element corresponding to the texture/sampler pair bound to that unit; see the sketch below.
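
    A rough, untested sketch of that emulation (dynamically uniform indexing of a sampler array needs GL 4.0 / GL_ARB_gpu_shader5; the block, member, and array names are made up for the example):
    Code:
        /* Sketch only: the buffer object stores an index, and the shader uses
           it to pick from an ordinary array of sampler uniforms whose elements
           are bound to texture units 0..15 by the application. */
        static const char *fragment_src =
            "#version 400 core\n"
            "layout(std140) uniform PerDraw {\n"
            "    int texIndex;              /* which texture unit to sample */\n"
            "};\n"
            "uniform sampler2D textures[16];\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() { color = texture(textures[texIndex], uv); }\n";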

    But I confess, the idea of storing which texture unit (instead of which texture) to use in a buffer object sounds almost useless... like Alfonse originally stated, the vast majority of the time the texture unit to use for a sampler uniform is static for the lifetime of a GL program.
