Thread: The ARB announced OpenGL 3.0 and GLSL 1.30 today

  1. #171
    Senior Member OpenGL Guru knackered's Avatar
    Join Date
    Aug 2001
    Location
    UK
    Posts
    2,833

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

Of course we had all this under the hood, so to speak, with the now-deprecated built-in uniforms, so we are in fact now worse off regarding this problem.
Why didn't they just give us the ability to specify our own named cross-shader uniforms using the same driver path as the built-ins? If the uniform upload happened each time a draw call is made for a particular shader, then so be it - it would still be an acceleration opportunity in the future.
Just plain short-sightedness, no matter which way you look at it.
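The "named cross-shader uniform" idea above can be sketched app-side: one shared value, re-sent to a program only when it is bound and the value has changed since that program last saw it. This is an untested sketch; all names are made up, and `upload()` stands in for glUniform4fv so it runs without a GL context.

```c
#include <assert.h>
#include <string.h>

#define MAX_PROGRAMS 8

static float    shared_value[4];
static unsigned shared_version = 1;          /* bumped on every change     */
static unsigned seen_version[MAX_PROGRAMS];  /* version each program holds */
static int      upload_calls;                /* instrumentation only       */

/* Stand-in for glUniform4fv(location_in[program], 1, v). */
static void upload(int program, const float v[4]) {
    (void)program; (void)v;
    ++upload_calls;
}

/* Change the shared, cross-shader value once, centrally. */
static void set_shared(const float v[4]) {
    memcpy(shared_value, v, sizeof shared_value);
    ++shared_version;
}

/* Called when a program is bound for a draw call: upload only if stale. */
static void bind_program(int program) {
    if (seen_version[program] != shared_version) {
        upload(program, shared_value);
        seen_version[program] = shared_version;
    }
}
```

The point of the version counter is exactly the post's trade-off: the upload may happen once per program per change, but never redundantly, and a driver doing this internally could batch it further.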

  2. #172
    Junior Member Regular Contributor
    Join Date
    Jul 2007
    Location
    Alexandria, VA
    Posts
    211

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    @Timothy Farrar

    Your blog post listing the new core features of OpenGL 3.0 is a nice collection of what most developers actually care about. I expect a similar list from the BOF tomorrow.

    The list would be more useful if it compared the new "features" of 3.0 to DX10 (and 10.1 and 11).

    The BIG problem everyone has is that the 3.0 feature set is not much different from 2.1 + extensions. There's just a slightly greater chance that ATI will put out a driver supporting those features.

    Does this new 3.0 really change the way you're going to develop GL code? Does 3.0 resolve any "fast-path" issues?

    If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?

I think the ARB should have kept evolving the existing OpenGL down the known path while pushing a real redesign in parallel. If they had done this, then maybe we would have had this "3.0" last year (labeled 2.2), and today a new 2.3 along with the *real* 3.0 as well.

I've been on a project that had great visions of recreating itself: faster, better, more X, more Y. It failed not just because it underestimated the time required for a whole rewrite, but because they didn't evolve the current version in parallel. They put the current version into "patch mode" while concentrating on the new version. (It didn't help that all the experienced team members had left the company by that point.)

    My point is that this didn't have to be a failure. This "3.0" should have been out LONG ago, there shouldn't have been The Great Silence and the ARB should own up to these failures.

    You honestly don't see what was lost with this "upgrade"?

  3. #173
    Junior Member Regular Contributor
    Join Date
    Oct 2007
    Location
    Madison, WI
    Posts
    163

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

Vertex texture fetch is definitely not slow on supported ATI hardware, or on GeForce 8 series and beyond. On legacy hardware, yes, but not any more. Also, correct me if I'm wrong here, but uniform buffers couldn't be back-ported to legacy hardware anyway (lack of hardware support).

    Not that I am disagreeing with the usefulness of uniform buffers with respect to fixed non-divergently indexed constants.

    Another thing to consider here is what your performance bottlenecks are. Are you and others actually bottlenecked by your uniform usage? Say you had uniform buffers, would your application run any faster?

  4. #174
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Vertex texture fetch is definitely not slow on supported ATI hardware, as well as Geforce 8 series and beyond.
    It's slower than the 1 cycle it takes to use an actual uniform; therefore, it's slow.

    lack of hardware support
    Sure they can. The implementation lies. They do it all the time.

    You create a buffer object for the purpose of storing uniforms (there's a special "hint" for that). The implementation, instead of allocating video memory, allocates system memory. You upload to it. The implementation then uses that system memory buffer to update the actual uniforms when shaders using that buffer are rendered.

    It's dead simple.
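The emulation path described above can be sketched in a few lines. This is an untested sketch of the idea, not any real driver's code; `UniformBlock`, `ub_upload`, and `ub_flush` are made-up names, and the final copy stands in for the driver's glUniform-style register update.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical driver-side emulation of a uniform buffer on hardware
 * without native support: the "buffer object" is plain system memory. */
typedef struct {
    float *data;      /* system-memory backing store           */
    size_t count;     /* number of floats in the block         */
    int    dirty;     /* set on upload, cleared after flushing */
} UniformBlock;

static UniformBlock *ub_create(size_t count) {
    UniformBlock *ub = malloc(sizeof *ub);
    ub->data  = calloc(count, sizeof(float));
    ub->count = count;
    ub->dirty = 0;
    return ub;
}

/* What the app's glBufferSubData call turns into: a memcpy. */
static void ub_upload(UniformBlock *ub, const float *src, size_t count) {
    memcpy(ub->data, src, count * sizeof(float));
    ub->dirty = 1;
}

/* At draw time, the driver copies the block into the bound program's
 * real uniform registers (modeled here as a plain float array). */
static void ub_flush(UniformBlock *ub, float *program_uniforms) {
    if (ub->dirty) {
        memcpy(program_uniforms, ub->data, ub->count * sizeof(float));
        ub->dirty = 0;
    }
}
```

The app sees uniform-buffer semantics; the hardware only ever sees ordinary uniform updates at draw time, which is the lie being described.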

    Another thing to consider here is what your performance bottlenecks are.
Um, no. The API is clearly wasting time on me constantly re-uploading uniforms that, as far as my code is concerned, haven't changed. Whether it's a significant weight on overall throughput isn't the issue; the issue is that my code has to fake program instancing itself, which wastes both my time and the API's.

And you don't need to profile things when "smart" implementations like nVidia's recompile your shader because you changed a uniform from 0.0 to 0.5. Uniform buffers and program instancing would cut out all of that nonsense.

  5. #175
    Advanced Member Frequent Contributor Mars_999's Avatar
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Timothy Farrar
    Quote Originally Posted by Mars_999
    Can someone explain to me a few items here,

    2. What are you all talking about with glMatrixMode()? And what about using glTranslatef(), glRotatef(), etc.? Do we need to keep track of matrices ourselves now, like DX?
    Looks like most (if not all) fixed-function vertex stuff is going away in the future, including glMatrixMode(), etc. Best to read the spec, page 404 (section E.1, PROFILES AND DEPRECATED FEATURES OF OPENGL 3.0). So if you want to do matrix math, you do it application-side, outside GL. Then either pass the matrix in as a uniform, fetch it via vertex texture fetch, or have it sent to you via a vertex buffer (i.e. if you had one matrix per vertex).

    As for profiles, I think you may find this useful,
    http://www.opengl.org/registry/specs...te_context.txt

    HGLRC wglCreateContextAttribsARB(HDC hDC, HGLRC hshareContext, const int *attribList);

    "If the WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB is set in WGL_CONTEXT_FLAGS_ARB, then a <forward-compatible> context will be created. Forward-compatible contexts are defined only for OpenGL versions 3.0 and later."
    So let me get this cleared up: basically GL 3.0 will be like DX in this regard, as you'll have to keep track of and do the matrix math yourself.
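Yes - the application-side replacement for glTranslatef and friends is just a column-major float[16] you maintain yourself and hand to glUniformMatrix4fv. A minimal sketch of the math (the `mat4_*` names are made up for illustration; column-major layout matches what glUniformMatrix4fv expects with transpose = GL_FALSE):

```c
#include <assert.h>
#include <string.h>

/* Column-major 4x4: element (row r, column c) lives at m[c*4 + r]. */

static void mat4_identity(float m[16]) {
    memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

/* Equivalent of glTranslatef: post-multiply m by a translation, i.e.
 * new column 3 = x*col0 + y*col1 + z*col2 + col3. */
static void mat4_translate(float m[16], float x, float y, float z) {
    for (int i = 0; i < 4; ++i)
        m[12 + i] += m[i] * x + m[4 + i] * y + m[8 + i] * z;
}

/* General product out = a * b, for building your own matrix "stack". */
static void mat4_mul(float out[16], const float a[16], const float b[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            out[c * 4 + r] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
        }
}
```

The finished matrix then goes to the shader with glUniformMatrix4fv(loc, 1, GL_FALSE, m), replacing the deprecated matrix stack entirely.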

  6. #176
    Member Regular Contributor
    Join Date
    May 2002
    Posts
    269

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Korval
    (seriously, I have no idea why anyone cares about integer textures)
    I do care. If you try to use an ordinary texture to store integer data, you are entering a gray area. Say you're using an 8-bit-per-component texture format. To unpack the sampled value to an integer range in the shader, would you multiply it by 255 or by 256? If the fixed-point -> floating-point conversion rules from the spec were your guidance, you would use the former. In practice, you get the wrong result in your shader, and it's the latter that actually works.
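To make the gray area concrete: the spec's fixed-point rule maps a byte b to b/255.0, so the mathematically correct unpack is *255 with rounding; *256 overshoots by b/255 and breaks at b = 255. That the *256 variant is what "actually works" on some hardware is a driver/hardware quirk, not the spec. The spec-side arithmetic can be checked in plain C (no GL; function names are made up):

```c
#include <assert.h>

/* Spec conversion: unsigned byte b becomes b / 255.0. */
static float byte_to_float(unsigned b) { return (float)b / 255.0f; }

/* Spec-correct unpack: scale by 255 and round to nearest integer. */
static unsigned unpack_255(float f) { return (unsigned)(f * 255.0f + 0.5f); }

/* The *256 variant: floor(f * 256); exact for b < 255, wrong at 255. */
static unsigned unpack_256(float f) { return (unsigned)(f * 256.0f); }
```

Which is exactly why GL 3.0's integer textures matter here: an integer fetch skips the fixed-point round trip entirely, so there is no conversion rule to second-guess.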

  7. #177
    Senior Member OpenGL Guru knackered's Avatar
    Join Date
    Aug 2001
    Location
    UK
    Posts
    2,833

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    I don't understand why you're having a hard time accepting this, mars. Your shader might not even use matrices.

  8. #178
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    If you try to use ordinary texture to store integer data
    See right there? That's my issue: why would you want to store integer data in a texture?

    Unless you're doing GPGPU stuff (in which case, a graphics library shouldn't care about your needs).

  9. #179
    Junior Member Regular Contributor
    Join Date
    Oct 2007
    Location
    Madison, WI
    Posts
    163

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by pudman
    The list would be more useful if it compared the new "features" of 3.0 to DX10 (and 10.1 and 11).
    I'm still working on a list of what is missing, which isn't much when you compare against DX10. As for DX11, its spec isn't even finished yet.

    The BIG problem everyone has is that the 3.0 feature set is not much different from 2.1 + extensions. There's just a slightly greater chance that ATI will put out a driver supporting those features.
    Exactly, now that those 2.1 + extensions have been ratified as core, we can finally see driver support from other vendors. This alone is very important.

    Does this new 3.0 really change the way you're going to develop GL code? Does 3.0 resolve any "fast-path" issues?
    Sure it does; we finally have cross-platform support for the majority of current GPU features. There is a tremendous number of things made possible by unified shaders and other related functionality. As for the fast path, I'm personally targeting DX10-level hardware and up, and IMO the fast path on that hardware is very well defined if you are keeping up with the hardware design; simply having hardware support in the API is enough for me. Regardless of API (DX, GL, PSGL, GCM) you are always going to have to profile on the target hardware to know performance.

    If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?
    I think the fact that GL3 provides DX10-level features on XP (assuming ATI and Intel build GL3 drivers for XP) would be enough. For smaller developers, having a larger market share (i.e. adding in Apple as well) could be very important.

    My point is that this didn't have to be a failure. This "3.0" should have been out LONG ago, there shouldn't have been The Great Silence and the ARB should own up to these failures.

    You honestly don't see what was lost with this "upgrade"?
    Personally, I see no reason to complain about that which cannot be changed and just is. Be happy with what you have, and do your best to use it to your advantage.

  10. #180
    Member Regular Contributor
    Join Date
    May 2002
    Posts
    269

Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Korval
    See right there? That's my issue: why would you want to store integer data in a texture?

    Unless you're doing GPGPU stuff (in which case, a graphics library shouldn't care about your needs).
    I'm using a texture to store indices, which I then use to access a certain indexed resource in the shader - a sort of permutation. And I'm using it to render shadows, not GPGPU stuff.
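A CPU-side model of that technique (an untested sketch with made-up names): a small "texture" of integer indices selects entries in an indexed resource table. With GL 3.0 integer textures the shader would do a texelFetch on a usampler2D and get the index back exactly, with none of the 255-vs-256 float round-trip guesswork.

```c
#include <assert.h>

enum { TEX_W = 4, TEX_H = 2 };

/* The "index texture": each texel is an exact integer index. */
static const unsigned char index_tex[TEX_H][TEX_W] = {
    {2, 0, 1, 3},
    {1, 1, 0, 2},
};

/* The indexed resource being permuted into (shadow parameters, say). */
static const float resource_table[4] = {10.0f, 20.0f, 30.0f, 40.0f};

/* CPU equivalent of: resource_table[texelFetch(idx_tex, coord, 0).r] */
static float lookup(int x, int y) {
    return resource_table[index_tex[y][x]];
}
```
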
