
Thread: The ARB announced OpenGL 3.0 and GLSL 1.30 today

  1. #571
    Junior Member Regular Contributor
    Join Date: Oct 2007 · Location: Madison, WI · Posts: 163

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    So what can we expect for GL 3.1? My predictions (read: wild speculation):

    (1.) Some time in 2009, perhaps late Q1?

    (2.) Working NVidia and ATI GL3 drivers. Apple perhaps with Snow Leopard in January (10.6 is going to have OpenCL, right?). Intel perhaps not, since Larrabee isn't out yet.

    (3.) Geometry shaders, texture buffer objects, and instancing extensions get rolled into the core spec.

    (4.) Uniform (parameter) buffer objects as an extension. It's an open question whether this functionality gets supported on older hardware, which needs to patch shaders after uniform changes.

    (5.) Unlikely, but perhaps something to do with texture fetch from multisample textures. Maybe only a vendor extension.

    Other things I would personally like to see in GL.

    (1.) Roll in fence support (NV_fence/APPLE_fence) into the core spec.

    (2.) Support for fences to be shared across contexts!

    (3.) Would be nice to have support for keeping a buffer mapped with MapBufferRange( .. MAP_UNSYNCHRONIZED_BIT .. ) during drawing, so one could keep filling dynamic VBO ring buffers from other threads easily.

    (4.) Would be nice to have the same stays-mapped-during-drawing support for MapBufferRange( .. MAP_UNSYNCHRONIZED_BIT | MAP_READ_BIT .. ). Combine this into a readback ring buffer, with shared-context fences and transform feedback, and GPU readback could become a lot more useful.
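
    For concreteness, a minimal sketch of how the dynamic-VBO ring buffer idiom looks under today's GL 3.0 rules (buffer setup omitted; names and sizes are illustrative, not from any shipping code). The per-write map/unmap pair is exactly what (3) and (4) would eliminate:

        /* Ring-buffer writes with MapBufferRange as GL 3.0 specifies today.
         * Assumes <string.h> plus a GL 3.0 header/loader, and that ring_vbo
         * was created with glBufferData(GL_ARRAY_BUFFER, RING_SIZE, NULL,
         * GL_STREAM_DRAW). */
        #define RING_SIZE (4 * 1024 * 1024)    /* ring capacity in bytes */

        static GLuint   ring_vbo;
        static GLintptr ring_head;             /* next free byte in the ring */

        /* Copy 'bytes' of vertex data into the ring; return its offset. */
        static GLintptr ring_write(const void *src, GLsizeiptr bytes)
        {
            if (ring_head + bytes > RING_SIZE)
                ring_head = 0;  /* wrap; real code must fence so the GPU is done here */

            glBindBuffer(GL_ARRAY_BUFFER, ring_vbo);
            void *dst = glMapBufferRange(GL_ARRAY_BUFFER, ring_head, bytes,
                                         GL_MAP_WRITE_BIT |
                                         GL_MAP_INVALIDATE_RANGE_BIT |
                                         GL_MAP_UNSYNCHRONIZED_BIT);  /* no stall on pending draws */
            memcpy(dst, src, bytes);
            glUnmapBuffer(GL_ARRAY_BUFFER);    /* required before drawing - the step I want to drop */

            GLintptr offset = ring_head;
            ring_head += bytes;
            return offset;
        }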

  2. #572
    Member Regular Contributor
    Join Date: Apr 2006 · Location: Irvine, CA · Posts: 299

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    The "mapped during drawing" thing is not very likely to happen. MapBufferRange was designed to work in at least two different ways in the underlying implementation for certain kinds of traffic.

    So the first way is: the memory the CPU sees while mapped is memory that can be simultaneously observed by the GPU (i.e. GART/AGP mapped or PCIe DMA wired). In a world like that, you *might* be able to make draw-while-mapped work - as long as the draw call knows to flush CPU write caches so the shared RAM has the right bits in it.

    But the second way, specifically in cases where the app has requested write-only with range or buffer invalidation, allows the driver to provide a piece of scratch memory that may have no residence in the actual buffer, as a destination for the writes.

    The magic happens at FlushMappedBufferRange or unmap time - as long as the written bytes are transferred into the GPU-visible buffer before the draw gets underway, all is well. This was set up in this fashion to accommodate systems where fluidly establishing DMA mappings to arbitrary pieces of system memory is cumbersome. It also allows for idioms where you map tiny pieces of a huge buffer: the tiny pieces can be doled out from a ring buffer of scratch space and re-used over time with a fixed DMA mapping.

    But in the second world, you have to have that flush or unmap event happen so the signal of when to transfer the newly written bytes is received by the driver. I suppose this raises the question of "well, would flushing of a range be sufficient on its own, without the actual unmap - turn unmap into a no-op"? - that one I don't have an immediate answer for.
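
    To make that concrete, the explicit-flush path looks roughly like this (a sketch; fill_vertices is a hypothetical stand-in for the app's write, and the offsets/sizes are illustrative):

        /* Second path sketched: write-only map with explicit flushing, so the
         * driver is free to hand back scratch memory and transfer only the
         * flushed ranges into the real buffer. */
        char *p = (char *) glMapBufferRange(GL_ARRAY_BUFFER, 0, 65536,
                                            GL_MAP_WRITE_BIT |
                                            GL_MAP_INVALIDATE_RANGE_BIT |
                                            GL_MAP_FLUSH_EXPLICIT_BIT);

        fill_vertices(p, 4096);                              /* app writes some bytes */
        glFlushMappedBufferRange(GL_ARRAY_BUFFER, 0, 4096);  /* "transfer these now" signal */

        glUnmapBuffer(GL_ARRAY_BUFFER);  /* today the unmap is still required before
                                            the draw; the open question is whether the
                                            flush alone could be made to suffice */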

    I liked the rest of the list though !

  3. #573
    Junior Member Regular Contributor
    Join Date: Feb 2000 · Location: Santa Clara, CA · Posts: 172

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Yes, it's a decent list and probably not far off.

    I'm thinking about the proposal of "flush as good as unmap". We've already gone past my comfort zone (that happened when we introduced the unsynchronized bit - you can blame/thank Rob for pushing hard for this feature), so it's possible that we're no worse off with this proposal. The problem is that, historically, developers don't get this kind of thing right, or they think they do and it breaks on a new driver or chip, forcing us to add heuristics to try to fix the broken app without slowing everyone else down. (In an ideal world the developer would just fix the app so the driver doesn't have to - but the PC space is less than ideal. In this day and age, if you ship a title without a self-patching mechanism, shame on you!)

    The other problem with allowing an app to hold a mapping indefinitely is that it messes with memory management. We cannot relocate a buffer which is mapped.

  4. #574
    Junior Member Regular Contributor
    Join Date: Oct 2007 · Location: Madison, WI · Posts: 163

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Rob Barris
    I suppose this raises the question of "well, would flushing of a range be sufficient on its own, without the actual unmap - turn unmap into a no-op"? - that one I don't have an immediate answer for.
    And this (using flush) is exactly what I had in mind. Map the buffer once; one thread draws using the ring buffer, and other thread(s) can write and flush into that buffer without ever needing an unmap while the primary thread is issuing draw calls. Of course the mapped buffer would also have to be shared across contexts.
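
    A purely hypothetical sketch of the idiom (NOT legal under the current spec, since a buffer cannot stay mapped across draw calls; the synchronization points are only comments because the shared fences I'm asking for don't exist yet):

        /* HYPOTHETICAL - proposed usage, not valid GL 3.0. The ring stays
         * mapped for the application's lifetime; writers only ever flush. */

        /* Setup, once, in any of the shared contexts: */
        char *ring = (char *) glMapBufferRange(GL_ARRAY_BUFFER, 0, RING_SIZE,
                                               GL_MAP_WRITE_BIT |
                                               GL_MAP_UNSYNCHRONIZED_BIT |
                                               GL_MAP_FLUSH_EXPLICIT_BIT);

        /* Writer thread(s), per packet - note: no unmap anywhere: */
        memcpy(ring + offset, verts, bytes);
        glFlushMappedBufferRange(GL_ARRAY_BUFFER, offset, bytes);
        /* ...app-level signal tells the draw thread [offset, offset+bytes) is ready... */

        /* Drawing thread, with the buffer still mapped: */
        glDrawArrays(GL_TRIANGLES, first_vertex, vertex_count);
        /* ...a shared-context fence after the draw would tell writers when
           the range can safely be re-used... */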

  5. #575
    Junior Member Regular Contributor
    Join Date: Oct 2007 · Location: Madison, WI · Posts: 163

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    One thing we would possibly get with this functionality is easier porting of stuff we do on consoles to the PC/Mac.

    Quote Originally Posted by Michael Gold
    The other problem with allowing an app to hold a mapping indefinitely is that it messes with memory management. We cannot relocate a buffer which is mapped.
    It would certainly complicate memory management. I'd guess (with no knowledge of the XP or Vista driver model) that the user-side mapping shouldn't be a tremendous problem in itself, assuming the driver can change the page-table physical mapping (for a fixed user-space address) when it needs to move things around (say, when the user changes the primary display resolution). In the worst case, i.e. the driver is out of usable memory (which should never happen, but must not crash if it does), perhaps the driver could map all the previously mapped user-space pages to a single common junk physical page and then return an error later from a modified version of FlushMappedBufferRange().

  6. #576
    Junior Member Regular Contributor
    Join Date: Jul 2000 · Location: Roseville, CA · Posts: 159

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Michael Gold
    Don't want a deprecated feature to be removed? Speak up!
    While I agree with the decision to deprecate almost everything on the list in Appendix E, there were a few items whose removal didn't make sense to me: quads, alpha test, and the luminance / luminance-alpha / intensity internal formats. I'll try to explain why it's good to have each of these around.

    1) Quads. I use these a lot for particle systems, fire, and other special effects where you end up rendering a bunch of disjoint quads that have their vertex positions recalculated every frame. It just makes sense to dump the four corners of each quad into a buffer and let the hardware render them as quads. Quads are supported in hardware by all Nvidia chips and all recent ATI chips. If quads go away, then I have to either (a) dump six vertices into the vertex buffer for each quad instead of four, or (b) construct an index buffer in addition to the vertex buffer that wasn't necessary before. In both cases, we lose speed and increase memory requirements.
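
    To be concrete about option (b), the index buffer it would take looks like this (a sketch; at least the indices only need building once per buffer size):

        /* Option (b): a one-time index buffer that turns each quad's four
         * corners into two triangles - six indices per quad instead of four
         * vertices per quad. */
        static void build_quad_indices(GLushort *indices, int quad_count)
        {
            for (int q = 0; q < quad_count; ++q) {
                GLushort base = (GLushort)(q * 4);
                GLushort *out = indices + q * 6;
                out[0] = base + 0; out[1] = base + 1; out[2] = base + 2;  /* triangle 1 */
                out[3] = base + 0; out[4] = base + 2; out[5] = base + 3;  /* triangle 2 */
            }
        }

        /* The draw call becomes:
         *   glDrawElements(GL_TRIANGLES, quad_count * 6, GL_UNSIGNED_SHORT, 0);
         * instead of:
         *   glDrawArrays(GL_QUADS, 0, quad_count * 4);                       */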

    2) Alpha test. This one isn't a huge deal, but why should we have to add a texkill instruction to our fragment programs when all hardware already supports a separate alpha test that doesn't eat up an extra cycle in the fragment processor? Taking away alpha test just slows alpha-tested rendering down a little bit without simplifying the driver in any significant way.
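
    For reference, the two forms being compared, sketched (the sampler and varying names are made up):

        /* Fixed-function alpha test: one piece of state, handled by the
         * dedicated per-fragment test stage the hardware already has. */
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);

        /* The GLSL 1.30 replacement: a conditional kill that costs fragment
         * shader instructions instead. */
        static const char *frag_src =
            "#version 130\n"
            "uniform sampler2D tex;\n"
            "in vec2 uv;\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    vec4 c = texture(tex, uv);\n"
            "    if (c.a <= 0.5) discard;  // the 'texkill' standing in for alpha test\n"
            "    color = c;\n"
            "}\n";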

    3) L / LA / I internal formats. I have found these formats to be very convenient, and I know they are supported in all modern hardware through a programmable swizzle/crossbar in the texture units. Consider the following scenario: you're using a fragment shader that reads an RGB color from a texture map. Now you want to change the texture map (perhaps in the middle of a game) while using the exact same shader, but the new texture map is grayscale, so you decide to store it in a single-channel format to save space. If you load it into GL as a luminance texture, the shader continues to work correctly without modification, because the texture unit distributes the single channel to all three RGB channels for free. If you take away the luminance-based formats, you're forced to build a completely different shader for the single-channel case. This creates an extra hassle for the software developer without really simplifying the driver at all. (Setting the texture unit's swizzle register is dirt simple.)
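
    Concretely, the swap that "just works" with luminance formats is the following (w, h, and the pixel pointers are illustrative); the shader samples with texture() either way and always sees RGB:

        /* Color version: texture() returns (R, G, B, 1). */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb_pixels);

        /* Grayscale version, one channel stored: with GL_LUMINANCE8 the
         * texture unit broadcasts the channel so texture() returns
         * (L, L, L, 1) - same shader, a third of the memory. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, w, h, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, gray_pixels);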

    I noticed that all three of these items represent features that are not supported in DirectX 10. Were they deprecated just to achieve some kind of feature parity between the two libraries?

  7. #577
    Junior Member Regular Contributor
    Join Date: Aug 2007 · Location: USA · Posts: 243

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Why not just use a triangle strip with 4 vertices instead of a quad?

    EDIT: I suppose you'd need glMultiDrawArrays or glMultiDrawElements, nevermind...
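
    Something like this is what I meant (a sketch; MAX_QUADS and quad_count are illustrative, and note each strip wants its corners in 0, 1, 3, 2 order rather than GL_QUADS winding):

        /* Disjoint quads drawn as 4-vertex triangle strips in one submission. */
        GLint   first[MAX_QUADS];    /* starting vertex of each quad */
        GLsizei count[MAX_QUADS];    /* always 4 vertices per strip  */

        for (int q = 0; q < quad_count; ++q) {
            first[q] = q * 4;
            count[q] = 4;
        }
        glMultiDrawArrays(GL_TRIANGLE_STRIP, first, count, quad_count);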

    I was also going to mention the geometry shader, but that isn't core.

  8. #578
    Senior Member OpenGL Guru
    Join Date: Mar 2001 · Posts: 3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Michael Gold
    you can blame/thank Rob for pushing hard for this feature
    Thanks, Rob. Though I wish you could have gotten them to approve the fences necessary to guarantee safety...

  9. #579
    Junior Member Regular Contributor
    Join Date: Feb 2000 · Location: Santa Clara, CA · Posts: 172

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    Quote Originally Posted by Timothy Farrar
    I'd guess (with no knowledge of the XP or Vista driver model), that the user side mapping shouldn't be a tremendous problem in itself, assuming the driver can change the page table physical mapping (for a fixed user space address) when it needs to move stuff around (say when user changes primary display resolution).
    In order to do this reliably, moving the data and remapping the PTEs need to happen atomically. I don't see a way to do this for arbitrarily large buffers.

  10. #580
    Senior Member OpenGL Guru
    Join Date: Mar 2001 · Posts: 3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 today

    If quads go away, then I have to either (a) dump six vertices into the vertex buffer for each quad instead of four, or (b) construct an index buffer in addition to the vertex buffer that wasn't necessary before. In both cases, we lose speed and increase memory requirements.
    Unless the hardware can't actually support it anymore and thus emulates it.

    I noticed that all three of these items represent features that are not supported in DirectX 10. Were they deprecated just to achieve some kind of feature parity between the two libraries?
    Not so much feature parity but driver parity. Each step away from the low-level is another step away from making drivers easy to write.
