Results 41 to 50 of 212

Thread: Official feedback on OpenGL 3.2 thread

  1. #41
    Intern Newbie
    Join Date
    Jun 2003
    Location
    Australia
    Posts
    49

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Heiko
    Quote Originally Posted by dpoon
Noticed the following whilst reading the GLSL 1.50 specs with changes (GLSLangSpec.1.50.09.withchanges.pdf).

    On page 45:
    The fragment language has the following predeclared globally scoped default precision statements:
    precision mediump int;
    precision highp float;

    The float precision predeclaration is new in GLSL 1.50 and is not marked in magenta.
    Are you sure? I already used that in GLSL 1.30.
All I meant in my original post was that in the GLSL 1.50 spec they've added the precision predeclaration for the float type to the global scope of the fragment language. In previous versions of the GLSL spec only the int type was predeclared in the global scope of the fragment language. So in the GLSL 1.50 spec the precision predeclaration for the float type should have been highlighted in magenta.

    GLSLangSpec.Full.1.30.08.withchanges.pdf (page 36):
    The fragment language has the following predeclared globally scoped default precision statement:
    precision mediump int;
    GLSLangSpec.Full.1.40.05.pdf (page 37):
    The fragment language has the following predeclared globally scoped default precision statement:
    precision mediump int;
    GLSLangSpec.1.50.09.withchanges.pdf (page 45):
    The fragment language has the following predeclared globally scoped default precision statements:
    precision mediump int;
    precision highp float;

  2. #42
    Intern Contributor
    Join Date
    Apr 2001
    Location
    Sofia, Bulgaria
    Posts
    65

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by mfort
    It is hard to argue on this. Only driver guys can tell. But I see no reason why driver needs to make a copy. Probably it starts some DMA to copy the data from system memory to GPU memory.
Which will cause glTexImage not to return until the copying is done. Which is another thing that I want to avoid. And then there is glTexSubImage, which depending on how the driver works may stall waiting for a sync or create a copy of the data...

    Quote Originally Posted by mfort
    I am using buffers glBufferData(…, NULL) for streaming several hundreds of megabytes per seconds without problems.
glBufferData(…, NULL) also has a cost - it uses two buffers on the card where in some cases it could use only one (wait for the buffer to be no longer needed, then DMA-copy the data and signal the related sync object). When you "stream hundreds of megabytes per second", this may have an impact.

I also use glBufferData(…, NULL) currently. Which is a pity, as in many cases only a small part of the mesh has changed. I will probably separate the mesh into chunks, among other reasons to avoid sending a million verts when only 1000 have changed. I will also test glBufferSubData vs changing parts of the mapped buffer vs updating the whole buffer. Then the app will selectively use whichever of the three approaches works better in the current case, but hey, isn't that ugly? There has to be a better way to do this.
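For reference, the three update strategies being compared can be sketched as follows. This is a minimal illustration, assuming a current OpenGL 3.x context and a buffer object `vbo` created and sized elsewhere; the function names are made up for the example, and error handling is omitted:

```c
#include <GL/gl.h>
#include <string.h>

/* 1) Orphan the whole buffer: the driver can hand back fresh storage
 *    instead of stalling on a buffer the GPU is still reading from. */
void update_orphan(GLuint vbo, const void *data, GLsizeiptr bufSize)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bufSize, NULL, GL_STREAM_DRAW); /* orphan */
    glBufferData(GL_ARRAY_BUFFER, bufSize, data, GL_STREAM_DRAW);
}

/* 2) Update only the changed subrange; this may stall if the GPU is
 *    still using that region of the buffer. */
void update_subdata(GLuint vbo, GLintptr offset,
                    GLsizeiptr size, const void *data)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, offset, size, data);
}

/* 3) Map just the changed range (ARB_map_buffer_range, core in GL 3.0).
 *    GL_MAP_INVALIDATE_RANGE_BIT tells the driver the old contents of
 *    the range can be discarded, which helps it avoid a sync. */
void update_mapped(GLuint vbo, GLintptr offset,
                   GLsizeiptr size, const void *data)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_RANGE_BIT);
    memcpy(ptr, data, (size_t)size);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}
```

Which of the three wins depends on the driver and on how much of the buffer changed, which is exactly the ugliness being complained about.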

  3. #43
    Junior Member Newbie
    Join Date
    Nov 2007
    Posts
    22

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by GeLeTo
    Quote Originally Posted by mfort
    It is hard to argue on this. Only driver guys can tell. But I see no reason why driver needs to make a copy. Probably it starts some DMA to copy the data from system memory to GPU memory.
Which will cause glTexImage not to return until the copying is done. Which is another thing that I want to avoid. And then there is glTexSubImage, which depending on how the driver works may stall waiting for a sync or create a copy of the data...

    Quote Originally Posted by mfort
    I am using buffers glBufferData(…, NULL) for streaming several hundreds of megabytes per seconds without problems.
glBufferData(…, NULL) also has a cost - it uses two buffers on the GPU where in some cases it could use only one (wait for the buffer to be no longer needed, then DMA-copy the data and signal the related sync object). When you "stream hundreds of megabytes per second", this may have an impact.

I also use glBufferData(…, NULL) currently. Which is a pity, as in many cases only a small part of the mesh has changed. I will probably separate the mesh into chunks, among other reasons to avoid sending a million verts when only 1000 have changed. I will also test glBufferSubData vs changing parts of the mapped buffer vs updating the whole buffer. Then the app will selectively use whichever of the three approaches works better in the current case, but hey, isn't that ugly? There has to be a better way to do this.
I recently coded a library for loading bit-mapped TTF fonts into OpenGL textures (and VBOs). Even loading a single font showed a noticeable difference in loading times between glMapBuffer and glBufferSubData. Running on the latest nVidia drivers, the latter is completely useless for small chunks of data; I'm unsure how well it scales up with larger data chunks.

  4. #44
    Junior Member Newbie
    Join Date
    Nov 2007
    Posts
    22

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by Stephen A
    C# wrappers for OpenGL 3.2 available here. Also usable by VB.Net, F# and the rest of the Mono/.Net family of languages.
It's quite embarrassing when there are better wrappers available for C# than for C++! People seem reluctant to integrate OpenGL 3.1/3.2 into their wrappers and APIs; why is this? Are the changes that difficult for them to make? (That's not me being rude.)

    Quote Originally Posted by Y-tension
Very well done Khronos!! Now let's see some drivers!.. NVIDIA (officially) released 3.1 just a few weeks ago.
    Again, great release!
nVidia have already released OpenGL 3.2 drivers. It's more AMD/ATI that are the problem in pushing the post-3.0 spec.

  5. #45
    Member Regular Contributor
    Join Date
    Nov 2003
    Location
    Czech Republic
    Posts
    317

    Re: Official feedback on OpenGL 3.2 thread

    I feel we should move to another thread.

    Quote Originally Posted by GeLeTo
    Which will cause glTexImage not to return untill the copying is done. Which is another thing that I want to avoid. And then there is glTexSubImage, which depending on how the driver works may stall waiting for a sync or create a copy of the data...
Not sure about glTexImage; I am using it only to "create" the texture. BTW, at least on NV, this is almost a no-op. It does almost nothing. The real hard work is done when I call glTexSubImage for the first time (even with a PBO in use).

glTexSubImage with a PBO is async for sure. It returns immediately (in less than 0.1 ms).


    Quote Originally Posted by GeLeTo
    glBufferData(…, NULL) also has a cost - it uses two buffers on the card where in some cases it could use only one (wait for the buffer to be no longer needed and then - DMA copy the data and signal the related sync object ). When you "stream hundreds of megabytes per second" - this may have an impact.
Yes, it has some cost. But this way you can trade memory for CPU cycles. The driver can use a second buffer to avoid waiting for the PBO until it is available.
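The PBO streaming path being described can be sketched as below. This is a minimal illustration, assuming a current GL context, a texture `tex` already created with glTexImage2D, and pixel data already written into the buffer `pbo`; the function name is made up for the example:

```c
#include <GL/gl.h>

/* When a GL_PIXEL_UNPACK_BUFFER is bound, the "pixels" argument of
 * glTexSubImage2D is an offset into that buffer, not a client pointer,
 * so the call can return immediately and let the DMA transfer proceed
 * asynchronously. */
void upload_via_pbo(GLuint tex, GLuint pbo, GLsizei w, GLsizei h)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); /* back to client memory */
}
```

This matches the observation above: the call itself returns in well under a millisecond because no client-side copy is performed at call time.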

  6. #46
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985

    Re: Official feedback on OpenGL 3.2 thread

I agree that OpenGL is going in the right direction. Especially the ARB_sync extension is nice.
I am pretty surprised, by the way, that ARB_geometry_shader4 is core from now on, because it's a deprecated feature. I think it was put into core just because D3D supports it. I would rather go in the direction of the tessellation engine provided by AMD/ATI since the HD2000 series cards. That is a much more flexible piece of functionality, and it's already (or soon will be) supported by D3D. The same things can be done with it as with geometry shaders, and even much more.
This geometry shader thing is only present because, at the time the HD2000 came out, NVIDIA's G8x cards weren't able to do such a thing.

P.S.: This buffer object performance discussion has gone out of control, by the way, so you'd better continue it in a more appropriate place.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  7. #47
    Senior Member OpenGL Pro Aleksandar's Avatar
    Join Date
    Jul 2009
    Posts
    1,072

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by dpoon
All I meant in my original post was that in the GLSL 1.50 spec they've added the precision predeclaration for the float type to the global scope of the fragment language. In previous versions of the GLSL spec only the int type was predeclared in the global scope of the fragment language. So in the GLSL 1.50 spec the precision predeclaration for the float type should have been highlighted in magenta.
    Have you read GLSLangSpec.1.40.07 (May 1st 2009)?
    GLSLangSpec.1.40.07 (Pg.36)

    The fragment language has the following predeclared globally scoped default precision statements:
    precision mediump int;
    precision highp float;
And I don't know why it is important at all, because...
    GLSLangSpec.1.40.07 (pg.35) / GLSLangSpec.1.50.09 (pg.44)

    4.5 Precision and Precision Qualifiers

Precision qualifiers are added for code portability with OpenGL ES, not for functionality. They have the same syntax as in OpenGL ES, as described below, but they have no semantic meaning, which includes no effect on the precision used to store or operate on variables.
Only Catalyst drivers require precision qualifiers in fragment shaders. But maybe even that will change when OpenGL 3.1 support arrives.
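To illustrate the quoted passage, here is a minimal GLSL 1.50 fragment shader that restates the predeclared defaults explicitly. On desktop GL the precision statements are harmless no-ops; they only acquire meaning if the same shader is compiled for OpenGL ES. The variable names are made up for the example:

```glsl
#version 150

// These two statements restate the predeclared fragment-language
// defaults quoted above; on desktop GLSL they have no semantic effect.
precision highp float;
precision mediump int;

in vec2 texCoord;
out vec4 fragColor;
uniform sampler2D tex;

void main()
{
    fragColor = texture(tex, texCoord);
}
```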

  8. #48
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: Official feedback on OpenGL 3.2 thread

    nVidia have already released openGL 3.2 drivers.
    Beta drivers don't count.

  9. #49
    Advanced Member Frequent Contributor
    Join Date
    Apr 2009
    Posts
    578

    Re: Official feedback on OpenGL 3.2 thread

I am glad that NV_depth_clamp finally got into core (and an ARB backport extension too!). Also we have geometry shaders now.

Too bad that GL_EXT_separate_shader_objects did not make it into core in some form. (And there is a bit that is kind of troubling: in it one writes to gl_TexCoord[], but GL 3.2 says that is deprecated, so does using GL_EXT_separate_shader_objects require a compatibility context?)

Also a little perverse is that context creation now has another parameter, so now we have:

GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB for the attribute GLX_CONTEXT_FLAGS_ARB,
and
GLX_CONTEXT_CORE_PROFILE_BIT_ARB / GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB for the attribute GLX_CONTEXT_PROFILE_MASK_ARB
(and similarly under Windows).

    I also have the fear that for each version of GL, the description of GLX_ARB_create_context/GLX_ARB_create_context_profile will grow, i.e. a case for each GL version >= 3.0 (shudders)
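Concretely, the attribute list being described looks like the sketch below: a request for a GL 3.2 core profile context via GLX_ARB_create_context_profile. This assumes `dpy` and `fbconfig` were obtained elsewhere and that the extension function pointer was resolved through glXGetProcAddress; the fallback #defines use the token values from the extension spec:

```c
#include <GL/glx.h>

#ifndef GLX_CONTEXT_MAJOR_VERSION_ARB
#define GLX_CONTEXT_MAJOR_VERSION_ARB    0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB    0x2092
#define GLX_CONTEXT_PROFILE_MASK_ARB     0x9126
#define GLX_CONTEXT_CORE_PROFILE_BIT_ARB 0x00000001
#endif

/* Prototype from GLX_ARB_create_context. */
typedef GLXContext (*PFNGLXCREATECONTEXTATTRIBSARB)(
    Display *, GLXFBConfig, GLXContext, Bool, const int *);

GLXContext create_core_context(Display *dpy, GLXFBConfig fbconfig,
                               PFNGLXCREATECONTEXTATTRIBSARB createCtx)
{
    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        /* GLX_CONTEXT_FLAGS_ARB could additionally request the
         * forward-compatible bit here. */
        0 /* attribute list is zero-terminated */
    };
    return createCtx(dpy, fbconfig, NULL, True, attribs);
}
```

The wglCreateContextAttribsARB path under Windows takes an equivalent attribute list.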


    So now I dream of:
1) direct state access for buffer objects, i.e. killing off bind-to-edit for buffer objects.
2) decoupling of filter state and texture data, and a direct state access API to go with it.
    3) nVidia's bindless API: GL_NV_shader_buffer_load and GL_NV_vertex_buffer_unified_memory

    nVidia's bindless graphics is pretty sweet IMO.
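For readers unfamiliar with the bindless extensions mentioned, the vertex side can be sketched roughly as below. This is an illustrative sketch only, assuming an NVIDIA driver exposing both GL_NV_shader_buffer_load and GL_NV_vertex_buffer_unified_memory, a filled VBO, and the entry points resolved through the usual extension-loading mechanism:

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Feed vertex attribute 0 (a vec3 position) directly from a buffer's
 * GPU address instead of binding the VBO at draw time. */
void setup_bindless_attrib(GLuint vbo, GLsizeiptr size)
{
    GLuint64EXT addr;

    /* Make the buffer resident and query its GPU address once
     * (GL_NV_shader_buffer_load). */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
    glGetBufferParameterui64vNV(GL_ARRAY_BUFFER,
                                GL_BUFFER_GPU_ADDRESS_NV, &addr);

    /* Point the vertex puller at that address
     * (GL_NV_vertex_buffer_unified_memory). */
    glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
    glEnableVertexAttribArray(0);
    glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float));
    glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, addr, size);
}
```

The appeal is that subsequent draws skip the bind-and-validate work in the driver, which is exactly the overhead the direct-state-access wishes above are also aimed at.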

  10. #50
    Administrator Contributor
    Join Date
    Jan 2002
    Location
    Mt. View, CA
    Posts
    97

    Re: Official feedback on OpenGL 3.2 thread

    Quote Originally Posted by kRogue
    I also have the fear that for each version of GL, the description of GLX_ARB_create_context/GLX_ARB_create_context_profile will grow, i.e. a case for each GL version >= 3.0 (shudders)
    Well, we kinda had to introduce an enabling mechanism to select the profile, once we introduced profiles. If you don't specify a profile or version or context flags in the attribute list, then something (hopefully) reasonable still happens: you get the core profile of the latest version supported by the driver. There is some redundancy in that a forward-compatible compatibility profile is exactly equivalent to a compatibility profile, since nothing is deprecated from the compatibility profile, but I think the attributes all make sense.

    We don't have any more profiles under discussion, if that's a concern. Conceivably OpenGL ES and OpenGL could eventually merge back together through the profile mechanism, but that's a long way into the future if it happens at all.
    Jon Leech
    Khronos API Registrar
    ARB Ecosystem TSG Chair
    GL/EGL/GLES Spec Editor
    ...and suchlike
