Thread: Extracting a value from a fragment shader

  1. #1
    Junior Member Newbie
    Join Date
    Sep 2009
    Posts
    14

    Extracting a value from a fragment shader

    Hi all,

    I am implementing HDR light adaptation, but I've run into a small technical problem.
    I need to calculate this value inside a fragment shader, where I also do other work:

    Attachment 1016

    I have the LumAverage value (from a 1x1 sampler2D texture), but how can I export the calculated LumAdapted value so I can use it in the next frame (where it will be LumAdapted_i-1)?

    Thanks for your time.

  2. #2
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    You can use Buffer Objects (http://www.opengl.org/wiki/Buffer_Object) to handle this.
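
    A minimal sketch of one way to do this with a buffer object (one possible reading of the suggestion, not necessarily what was meant; lumPBO, LumAdaptedPrev and program are placeholder names): render the adapted luminance into a 1x1 FBO attachment, copy it into a pixel pack buffer asynchronously, then map the buffer on the next frame and feed the value back to the shader as a uniform.

        // Assumes a GL 3.x context with a function loader already set up.
        GLuint lumPBO;
        glGenBuffers(1, &lumPBO);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, lumPBO);
        glBufferData(GL_PIXEL_PACK_BUFFER, sizeof(GLfloat), NULL, GL_STREAM_READ);

        // End of frame N, with the 1x1 luminance FBO still bound:
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glReadPixels(0, 0, 1, 1, GL_RED, GL_FLOAT, 0);   // async copy into the bound PBO
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

        // Start of frame N+1: map the PBO and pass the value back as a uniform
        // (with the HDR program bound via glUseProgram).
        glBindBuffer(GL_PIXEL_PACK_BUFFER, lumPBO);
        GLfloat* p = (GLfloat*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        GLfloat lumAdaptedPrev = p ? *p : 1.0f;
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glUniform1f(glGetUniformLocation(program, "LumAdaptedPrev"), lumAdaptedPrev);

    The readback lags one frame behind, which is usually fine for adaptation; the render-to-texture approaches below avoid the CPU round trip entirely.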
    @+
    Yannoo

  3. #3
    Advanced Member Frequent Contributor
    Join Date
    Mar 2009
    Location
    Singapore
    Posts
    800
    I would suggest using render-to-texture with an FBO that has two attachments, using the ping-pong technique: while you draw to one, you read from the other, and at each render you swap the read/write pathways (see the sketch below). A Google search for "ping pong FBO" turns up results like this one: http://www.pixelnerve.com/v/2010/07/...ong-technique/
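
    A rough sketch of that setup (lumFBO and lumTex are placeholder names, not anything from the linked article; a GL 3.x context with a function loader is assumed): one FBO with two 1x1 color attachments; each frame you read last frame's adapted luminance from one texture while writing the new value into the other, then swap.

        GLuint lumFBO, lumTex[2];
        glGenFramebuffers(1, &lumFBO);
        glGenTextures(2, lumTex);
        glBindFramebuffer(GL_FRAMEBUFFER, lumFBO);
        for (int i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_2D, lumTex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 1, 1, 0, GL_RED, GL_FLOAT, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, lumTex[i], 0);
        }

        int writeIdx = 0;   // which attachment receives the new LumAdapted
        // Per frame:
        glBindFramebuffer(GL_FRAMEBUFFER, lumFBO);
        glDrawBuffer(GL_COLOR_ATTACHMENT0 + writeIdx);       // write LumAdapted here
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, lumTex[1 - writeIdx]);  // read LumAdapted_i-1 from here
        // ... draw a fullscreen quad with the adaptation shader (1x1 viewport) ...
        writeIdx = 1 - writeIdx;                             // swap read/write pathways
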
    Regards,
    Mobeen

  4. #4
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    You can also use shader image load/store if you've got GL4 AMD/NV hardware for this purpose.
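
    A small sketch of what that could look like (assumes GL 4.2 or ARB_shader_image_load_store; the binding point, uniform names, and the exponential adaptation step are placeholders, so substitute the formula from the attachment):

        #version 420 core
        layout(binding = 0, r32f) uniform image2D lumAdaptState;  // 1x1 image that persists across frames
        uniform sampler2D LumAverageTex;   // the 1x1 average-luminance texture
        uniform float dt;                  // frame time
        uniform float tau;                 // adaptation rate
        out vec4 fragColor;

        void main()
        {
            float lumAverage = texture(LumAverageTex, vec2(0.5)).r;
            float lumPrev    = imageLoad(lumAdaptState, ivec2(0)).r;   // LumAdapted_i-1
            // Placeholder exponential adaptation; use your own formula here.
            float lumAdapted = lumPrev + (lumAverage - lumPrev) * (1.0 - exp(-dt * tau));
            imageStore(lumAdaptState, ivec2(0), vec4(lumAdapted));     // read back next frame
            fragColor = vec4(lumAdapted);  // ... the rest of the tone mapping would go here
        }

    On the application side you'd create the 1x1 GL_R32F texture once, bind it with glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32F), and call glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT) between frames so the store is visible to the next frame's load. Run this as a 1x1 pass (or guard the store) so only one fragment updates the state.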

  5. #5
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    Quote Originally Posted by thokra View Post
    You can also use shader image load/store if you've got GL4 AMD/NV hardware for this purpose.
    Actually, only "OpenGL 3.0 and GLSL 1.30 are required." However I don't have it in my NVIDIA 313.30 driver on Linux. GeForce 8600M GT card, which is 3.x only HW. Need to check what NVIDIA has / hasn't done with beta drivers. Just because it's possible, doesn't mean they did it. EDIT: Last year when NVIDIA announced 4.3 support, they listed some stuff on their OpenGL Driver Support page that would be made available for 3.x as well, but ARB_shader_image_load_store wasn't on the list.
    Last edited by Brandon J. Van Every; 04-26-2013 at 04:17 PM.

  6. #6
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    That's what the extension requires, not the hardware. The hardware needed to do image load/store will also be able to implement all of the other 4.x stuff.

  7. #7
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    Wow, that's a hair splitter. I haven't thought about what it actually does at all; I've just been reading through posts about optimization and on-GPU programming techniques. Is there some HW out there that *could* do it, with only a 3.x API? I'm wondering how many times this hair-splitting distinction comes up in industry practice.

    Ok... reading the extension spec, there's lots of stuff about changing the OGL 3.2 spec to accommodate the extension. Why talk about all that if 4.x HW is going to be required, and thus a 4.x API will be available? I don't get it; are you sure about this "HW distinction" thing you propose? A plain reading would say this *could* work on 3.x HW.
    Last edited by Brandon J. Van Every; 04-26-2013 at 06:40 PM.

  8. #8
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    What "hair splitting distinction"? It's only the difference between "specification" (ie: a paper document) and "hardware" (ie:... hardware). That's not splitting hairs.

    Is it possible for someone to make hardware that doesn't support any other OpenGL 4.x features except for image load/store? Yes. Would anyone actually do it? No. In the real world, hardware is mostly defined by what Direct3D requires. And D3D 11 is what provides image load/store functionality, but it also provides tessellation, more advanced feedback output, and various other OpenGL 4.x features.

    Making D3D 10 hardware that could do image load/store would be a waste of time and silicon, since D3D 10 can't use it and it doesn't implement the other stuff that D3D 11 requires. So only OpenGL users could ever use it.

    The specification says that it requires GL 3.0 because the specification talks about concepts that were introduced in GL 3.0. Therefore, it would be textually impossible to apply the extension to pre-GL 3.0 hardware.

    In short, don't look at the extension specification to try to gauge what hardware supports it. The specification is about text. If you want to know what hardware supports specific functionality, use the OpenGL Extension Viewer's database (although it seems to be somewhat out-of-date, as it claims that nobody supports the more recent extensions).

  9. #9
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    That extension has a huge amount of verbiage, initiated by some people at NVIDIA but with engagement from some AMD people as well, about all sorts of changes that touch 3.x in many places. Why are they bothering to spend the man-hours to hammer out all that language for 3.x, if there's no intent to ever have it actually work on 3.x? I have trouble believing that people are putting in the paid hours "just for kicks," that there was never any intention at any point in industry history to actually do it. It may not have gotten done, it may have gotten killed for political or competitive reasons, but I have trouble with your thesis that only 4.x HW would do this and anything else is a complete waste of time. Some people who worked on this spec clearly thought otherwise and hammered out the language for it. Maybe it turned out to be quixotic, but impossible?

    Surely this wasn't all hammered out just to benefit someone writing a software rasterizer or GL emulator someday?

    Here's a theory: this work started when 3.x was going to be one thing, but it turned into something else, sliding features into the 4.x era, but they didn't re-verbalize the extension work in progress...

    That's one theory I could devise out of several. Maybe I should just try asking the people who worked on the spec, if it continues to concern me someday.

    Anyway, the spec as written is a real mess, and it's hard to tell what it's aiming at.
    Last edited by Brandon J. Van Every; 04-26-2013 at 07:14 PM.

  10. #10
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Why are they bothering to spend the man-hours to hammer out all that language for 3.x, if there's no intent to ever have it actually work on 3.x? I have trouble believing that people are putting in the paid hours "just for kicks," that there was never any intention at any point in industry history to actually do it.
    I have trouble believing that the effort in writing the extension against GL 4.x would be any different from writing it against 3.x. The only "verbiage" that would change would be removing many entries in the "Dependencies on" section. And that's hardly where the spec spends most of its "verbiage".

    Simply put: no greater effort was spent writing this spec against 3.x. It's just not that important, compared to the actual meat of the text.

    I have trouble with your thesis that only 4.x HW would do this and anything else is a complete waste of time.
    You can believe me or not; that won't change the fact that you won't see one iota of hardware that can do image load/store but can't also support GL 4.x.

    Well, you might in the mobile space. But even then, you'd be looking at some new extension written against OpenGL ES 3.0, not this one exactly.

    Here's a theory: this work started when 3.x was going to be one thing, but it turned into something else, sliding features into the 4.x era, but they didn't re-verbalize the extension work in progress...
    Or alternatively, the ARB doesn't care what version something is written against. Again, extension specifications should never be seen as a guide to what versions of OpenGL will ever implement that extension. At best, it's only a guide to what versions could.

    ARB_uniform_buffer_object is written against OpenGL 2.1, even though no such hardware can support it. ARB_blend_func_extended is written against 3.2, but only claims that OpenGL 1.0 and ARB_fragment_shader are required.

    You're just misinterpreting the information in the spec, assuming that it has anything to do with what actual hardware will support it.
