
Thread: Extracting a value from a fragment shader


  1. #1
    Junior Member Newbie
    Join Date
    Sep 2009
    Posts
    14

    Extracting a value from a fragment shader

    Hi all,

    I am implementing HDR light adaptation, but I have run into a small technical problem.
    I need to calculate this value inside a fragment shader, where I also do other work:

    Attachment 1016

    I have the LumAverage value (from a 1x1 sampler2D texture), but how can I export the calculated LumAdapted value so I can use it in the next frame (where it becomes LumAdapted_i-1)?
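
    For reference, a common exponential adaptation step looks like this in GLSL (a minimal sketch; my actual formula is in the attachment above, and the uniform names here are illustrative):

        uniform sampler2D lumAverageTex;   // the 1x1 average-luminance texture
        uniform sampler2D lumAdaptedTex;   // 1x1 texture holding LumAdapted_i-1
        uniform float dt;                  // frame time in seconds
        uniform float tau;                 // adaptation speed

        void main()
        {
            float lumAverage = texture2D(lumAverageTex, vec2(0.5)).r;
            float lumPrev    = texture2D(lumAdaptedTex, vec2(0.5)).r;
            // Move the adapted luminance toward the current average luminance.
            float lumAdapted = lumPrev + (lumAverage - lumPrev) * (1.0 - exp(-dt * tau));
            gl_FragColor = vec4(lumAdapted);
        }

    The open question is where lumAdaptedTex comes from, i.e. how to get this frame's result back out of the shader.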

    Thanks for your time.

  2. #2
    Member Regular Contributor
    Join Date
    Jan 2011
    Location
    Paris, France
    Posts
    250
    You can use Buffer Objects (http://www.opengl.org/wiki/Buffer_Object) to handle this.
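
    For example, a pixel buffer object lets you read the 1x1 result back without an immediate stall (a rough sketch, not tested; it assumes the adapted-luminance pass renders into a currently bound FBO, and the names are illustrative):

        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, sizeof(GLfloat), NULL, GL_STREAM_READ);

        /* after rendering the 1x1 adapted-luminance pass: */
        glReadPixels(0, 0, 1, 1, GL_RED, GL_FLOAT, 0);  /* async copy into the PBO */

        /* later, ideally early in the next frame, to avoid a pipeline stall: */
        GLfloat *lum = (GLfloat *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (lum) {
            GLfloat lumAdapted = *lum;  /* feed this back in as a uniform */
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);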
    @+
    Yannoo

  3. #3
    Advanced Member Frequent Contributor
    Join Date
    Mar 2009
    Location
    Singapore
    Posts
    800
    I would suggest rendering to texture with an FBO that has two attachments, using the ping-pong technique: while you draw to one, you read from the other, and at each render you swap the read/write attachments. A Google search for "ping pong FBO" turns up, for instance, this result: http://www.pixelnerve.com/v/2010/07/...ong-technique/
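
    Something like this (an illustrative sketch; the texture format and setup details depend on your engine):

        GLuint tex[2], fbo;
        int writeIdx = 0;

        /* setup, done once: one FBO plus two 1x1 float textures */
        glGenFramebuffers(1, &fbo);
        glGenTextures(2, tex);
        for (int i = 0; i < 2; ++i) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 1, 1, 0, GL_RED, GL_FLOAT, NULL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        }

        /* each frame: write the new LumAdapted while reading last frame's value */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[writeIdx], 0);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex[1 - writeIdx]);  /* LumAdapted_i-1 */
        /* ... draw a fullscreen quad with the adaptation shader ... */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        writeIdx = 1 - writeIdx;  /* swap read/write for the next frame */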
    Regards,
    Mobeen

  4. #4
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    You can also use shader image load/store for this purpose if you've got GL4 AMD/NV hardware.
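
    Roughly like this (an untested sketch; since only one fragment touches the 1x1 image there are no intra-pass ordering issues, but you still need a glMemoryBarrier between the write and the next frame's read):

        #version 420
        layout(r32f, binding = 0) uniform image2D lumAdaptedImg; // persistent 1x1 image
        uniform sampler2D lumAverageTex;  // 1x1 average-luminance texture
        uniform float dt;                 // frame time in seconds
        uniform float tau;                // adaptation speed

        void main()
        {
            float lumAverage = texture(lumAverageTex, vec2(0.5)).r;
            float prev       = imageLoad(lumAdaptedImg, ivec2(0)).r;
            float adapted    = prev + (lumAverage - prev) * (1.0 - exp(-dt * tau));
            imageStore(lumAdaptedImg, ivec2(0), vec4(adapted));
        }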

  5. #5
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    Quote Originally Posted by thokra View Post
    You can also use shader image load/store if you've got GL4 AMD/NV hardware for this purpose.
    Actually, only "OpenGL 3.0 and GLSL 1.30 are required." However, I don't have it in my NVIDIA 313.30 driver on Linux, on a GeForce 8600M GT, which is 3.x-only hardware. I need to check what NVIDIA has/hasn't done with beta drivers; just because it's possible doesn't mean they did it. EDIT: Last year, when NVIDIA announced 4.3 support, they listed some features on their OpenGL Driver Support page that would be made available for 3.x as well, but ARB_shader_image_load_store wasn't on the list.
    Last edited by Brandon J. Van Every; 04-26-2013 at 04:17 PM.

  6. #6
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    That's what the extension requires, not the hardware. The hardware needed to do image load/store will also be able to implement all of the other 4.x stuff.

  7. #7
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    Wow, that's splitting hairs. I haven't thought about what it actually does at all; I've just been reading through posts about optimization and on-GPU programming techniques. Is there some hardware out there that *could* do it with only a 3.x API? I'm wondering how often this hair-splitting distinction comes up in industry practice.

    Ok... reading the extension spec, there's a lot of material about changing the OpenGL 3.2 spec to accommodate the extension. Why discuss all that if 4.x hardware is going to be required, and thus a 4.x API will be available? I don't get it; are you sure about this "HW distinction" you propose? A plain reading suggests this *could* work on 3.x hardware.
    Last edited by Brandon J. Van Every; 04-26-2013 at 06:40 PM.
