Thread: Further separation of sampler objects

  1. #11 | Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,137
    Quote Originally Posted by Alfonse Reinheart:
    Yeah, that "hitch" is what makes it not backwards compatible.
    At this stage you come across as though you're just looking for excuses to be disagreeable.

    Were VAOs backwards compatible with client-side arrays? Is glVertexAttribPointer backwards compatible with other gl*Pointer calls? Do VBOs modify the meaning of all gl*Pointer calls?

    It doesn't need to be backwards compatible; it's modified behaviour. A simple glEnable is all that's needed to tell the driver: "OK, I'm using this modified behaviour now, so if I call glBindSampler(31, samplerNum), don't assume that you need to fix up the world-of-crazy I've just fed you; give me a nice clean error instead."
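
    To make the suggestion concrete, here is a rough sketch; GL_SEPARATE_SAMPLER_BINDING is a made-up token (no such enum exists in any published spec), standing in for whatever switch would enable the modified behaviour:

    Code:
    /* Hypothetical token, purely for illustration; not in any GL header. */
    #define GL_SEPARATE_SAMPLER_BINDING 0x9999

    /* samplerNum is assumed to have been created earlier via glGenSamplers. */
    GLuint samplerNum;

    /* Opt in to the modified behaviour... */
    glEnable(GL_SEPARATE_SAMPLER_BINDING);

    /* ...after which sampler bindings would use their own separate range,
     * and an unsupported unit would raise a clean error instead of being
     * silently reinterpreted by the driver. */
    glBindSampler(31, samplerNum);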

  2. #12 | Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Quote:
    Were VAOs backwards compatible with client-side arrays? Is glVertexAttribPointer backwards compatible with other gl*Pointer calls? Do VBOs modify the meaning of all gl*Pointer calls?
    These are all backwards compatible changes because old code still works; that's what it means to be backwards compatible. Yes, VAOs do work with client-side arrays; the spec is quite clear on this. Texture storage is backwards compatible with the old glTexImage functions because those functions still exist. Creating a new way to do things is how you maintain backwards compatibility.

    There's a difference between user code doing something (which was impossible before the extension) that changes how a function works, and user code doing nothing at all, which was perfectly legal before and is now broken. glBindSampler takes a texture unit, ranging from 0 to GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1. That's the specification. If you change what glBindSampler takes, such that it now binds to a different range of legal values, perfectly functioning code becomes broken.

    That is the definition of backwards incompatible changes: when old code that worked before stops working.
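
    For reference, this is the contract as ARB_sampler_objects actually specifies it; a minimal sketch, with error checking omitted:

    Code:
    /* The valid range of glBindSampler's first parameter is fixed by the
     * spec: 0 <= unit < GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. */
    GLint maxCombinedUnits;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxCombinedUnits);

    GLuint sampler;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glBindSampler(0, sampler);                    /* always legal         */
    glBindSampler(maxCombinedUnits - 1, sampler); /* legal: the last unit */
    /* glBindSampler(maxCombinedUnits, sampler);     GL_INVALID_VALUE     */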

    Quote:
    It doesn't need to be backwards compatible; it's modified behaviour. A simple glEnable is all that's needed to tell the driver: "OK, I'm using this modified behaviour now, so if I call glBindSampler(31, samplerNum), don't assume that you need to fix up the world-of-crazy I've just fed you; give me a nice clean error instead."
    So, you want a glEnable that switches what the parameter of glBindSampler means. And you consider this to not "add ugliness and confusion to the API"?

    The reason the gl*Pointer-plus-VBOs arrangement is such a terrible API is that the very meaning of a function changes based on information not provided in that function's signature (that, and the need to pretend an integer is a pointer). The reason glActiveTexture/glBindTexture is confusing is that you're using two different functions to do one simple thing: bind a texture to a texture unit. Again, there's the non-local information: glBindTexture's meaning changes based on the most recent glActiveTexture call. And now you're proposing to take a simple, obvious command like glBindSampler and inflict the same API cruft upon it.
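
    For illustration, the VBO case in plain code (vbo is assumed to have been created earlier with glGenBuffers):

    Code:
    /* The last argument of glVertexAttribPointer is interpreted as an
     * offset into whatever buffer is currently bound to GL_ARRAY_BUFFER,
     * information that appears nowhere in the call itself. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);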

    This sort of thinking is exactly how OpenGL got into the API hell it's in now. glActiveTexture was added because it was the easiest, backwards-compatible solution to multitexturing. The whole GL_ARRAY_BUFFER thing was added because it was the easiest, backwards-compatible solution to VBOs. In both cases, they overloaded existing APIs, keying off of a switch from a new API, so that they didn't have to create entirely new functions.

    Isn't that what your whole DSA crusade is about? Making it so that non-local information doesn't affect how commands operate? For someone so gung-ho about wanting DSA everywhere, it's interesting that you're willing to deliberately inflict more of this kind of API on OpenGL.
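
    For anyone unfamiliar with the contrast being drawn: in the DSA style (sketched here with EXT_direct_state_access entry points), every call names the object and unit it operates on, with no hidden selector; tex is assumed to have been created earlier:

    Code:
    /* Bind-to-edit: calls coupled through hidden selector state. */
    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

    /* DSA: everything the call needs is in its own parameter list. */
    glBindMultiTextureEXT(GL_TEXTURE3, GL_TEXTURE_2D, tex);
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,
                           GL_CLAMP_TO_EDGE);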

  3. #13 | Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,137
    I'm not saying I want a glEnable; I'm suggesting it as one possible approach that could solve this, admittedly in an ugly way, but that's better than just being negative about everything. That's what this is all about: identifying whether this is a problem, whether it's worth solving, and talking around possible solutions. Suggesting something positive beats constantly being first to jump in with bad vibes.

  4. #14 | Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Quote:
    I'm not saying I want a glEnable; I'm suggesting it as one possible approach that could solve this
    ... if you didn't actually want it, why would you suggest it? It's like you're saying things but you never actually mean what you're saying. If you suggest something, I'm going to take a wild leap and assume that you actually want what you've suggested and will respond accordingly.

    Quote:
    That's what this is all about: identifying whether this is a problem, whether it's worth solving, and talking around possible solutions.
    But I don't believe it's worth solving. Even if you were to find a way to implement it without sacrificing backwards compatibility or making the API make less sense, I don't believe that the idea itself has any real merit. It's exposing a possible hardware limitation that was always there but never seemed to bother any real applications before now.

    This is a solution looking for a problem. An OpenGL suggestion is useful only if it solves a real problem for users. You have yet to demonstrate that this does. The absolute most it buys you is saving a single texture unit. Running out of texture units per-stage is hardly a pressing concern for OpenGL developers.

    Every piece of hardware has its own idiosyncrasies. We shouldn't be modifying OpenGL for the sole purpose of exposing some limitation on one hardware platform. AMD approved ARB_sampler_objects, just like everyone else; indeed, over half of the credited Contributors to the extension are from AMD. If they felt that it was an onerous burden and wanted to expose a secondary sampler limit, they could have changed it then to match their hardware better.

    AMD seems to have tamed the "world-of-crazy" in their drivers, so what's the problem?
