Extracting a value from a fragment shader

Hi all,

I am implementing HDR light adaptation, and I've run into a small technical problem.
I need to calculate this value inside a fragment shader where I do other things too:

[attached image: the light-adaptation formula, computing LumAdapted from LumAverage and the previous frame's LumAdapted]

I have the LumAverage value (from a 1x1 sampler2D texture), but how can I export the calculated LumAdapted value in order to use it in the next frame (it will be LumAdapted_i-1 in the next frame)?

Thanks for your time :wink:

You can use Buffer Objects (http://www.opengl.org/wiki/Buffer_Object) to handle this.
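
For example, a pixel buffer object can pull the 1x1 result back without an immediate stall. A rough, untested sketch; the names are placeholders and it assumes a current GL 2.1+ context:

```c
/* Sketch: copy the 1x1 luminance result into a pixel buffer object so
   the readback can overlap other work; map it on a later frame. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, sizeof(GLfloat), NULL, GL_STREAM_READ);

/* after rendering the 1x1 luminance pass: */
glReadPixels(0, 0, 1, 1, GL_RED, GL_FLOAT, (void *)0); /* goes into the PBO */

/* a frame later, fetch it without stalling: */
GLfloat lumAdapted = 0.0f;
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
GLfloat *p = (GLfloat *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (p) {
    lumAdapted = *p;   /* last frame's value; pass back in as a uniform */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The catch is that the value round-trips through the CPU (a frame late); the render-to-texture approaches below keep it on the GPU.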

I would suggest using render-to-texture with an FBO that has two attachments, using the ping-pong technique: while you draw to one, you read from the other, and at each render you swap the read/write pathways. A Google search for "ping pong FBO" turns up plenty; for instance, I got this result: http://www.pixelnerve.com/v/2010/07/20/pingpong-technique/
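
A minimal sketch of that idea, here with two 1x1 GL_R32F textures each attached to its own FBO (two attachments on one FBO works just as well); `lumTex`, `lumFBO`, and `frame` are placeholder names, and a current GL 3.0+ context is assumed:

```c
/* Ping-pong setup: each frame, write the new LumAdapted into one 1x1
   texture while sampling last frame's value from the other, then swap. */
GLuint lumTex[2], lumFBO[2];
glGenTextures(2, lumTex);
glGenFramebuffers(2, lumFBO);
for (int i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, lumTex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 1, 1, 0, GL_RED, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, lumFBO[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, lumTex[i], 0);
}

/* each frame: */
int w = frame & 1, r = 1 - w;            /* swap read/write roles */
glBindFramebuffer(GL_FRAMEBUFFER, lumFBO[w]);
glViewport(0, 0, 1, 1);
glBindTexture(GL_TEXTURE_2D, lumTex[r]); /* this is LumAdapted_i-1 */
/* ... also bind the 1x1 LumAverage texture, then draw a quad running
   the adaptation shader, which writes out the new LumAdapted ... */
```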

You can also use shader image load/store for this purpose, if you've got GL4 AMD/NV hardware.
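
In that case the shader can keep the running value in a 1x1 r32f image itself. A hedged sketch; the adaptation formula here is the usual exponential step, standing in for the one in the original attachment, and the uniform names are made up:

```c
/* Hypothetical fragment shader (as a C string) persisting LumAdapted
   in a 1x1 r32f image across frames. uAdaptRate would be something
   like 1.0 - exp(-frameTime * tau). */
const char *adaptSrc =
    "#version 420\n" /* or a 3.x version plus the extension directive */
    "layout(r32f, binding = 0) uniform image2D uLumAdapted;\n"
    "uniform sampler2D uLumAverage; /* the 1x1 average-luminance tex */\n"
    "uniform float uAdaptRate;\n"
    "void main() {\n"
    "    float lumAvg  = texelFetch(uLumAverage, ivec2(0), 0).r;\n"
    "    float lumPrev = imageLoad(uLumAdapted, ivec2(0)).r;\n"
    "    imageStore(uLumAdapted, ivec2(0),\n"
    "               vec4(lumPrev + (lumAvg - lumPrev) * uAdaptRate));\n"
    "}\n";
/* Host side: glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE,
   GL_R32F), plus a glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)
   between writing the value and the next frame's read. */
```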

Actually, only "OpenGL 3.0 and GLSL 1.30 are required." However, I don't have it in my NVIDIA 313.30 driver on Linux, on a GeForce 8600M GT card, which is 3.x-only hardware. I need to check what NVIDIA has / hasn't done with beta drivers; just because it's possible doesn't mean they did it. EDIT: Last year, when NVIDIA announced 4.3 support, they listed some stuff on their OpenGL Driver Support page that would be made available for 3.x as well, but ARB_shader_image_load_store wasn't on the list.
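
(For anyone wanting to check their own driver at runtime, a quick probe on a GL 3.0+ context, sketched without error handling; GL headers and a loader are assumed to be set up already:)

```c
#include <string.h>

/* Returns 1 if the driver advertises ARB_shader_image_load_store.
   glGetStringi requires a GL 3.0+ context. */
static int has_image_load_store(void)
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (ext && strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
            return 1;
    }
    return 0;
}
```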

That's what the extension requires, not the hardware. The hardware needed to do image load/store will also be able to implement all of the other 4.x stuff.

Wow, that's a hair-splitter. I haven't thought about what it actually does at all; I've just been reading through posts about optimization and on-GPU programming techniques. Is there some HW out there that could do it with only a 3.x API? I'm wondering how many times this hair-splitting distinction comes up in industry practice.

Ok... reading the extension spec, there's lots of stuff about changing the OpenGL 3.2 spec to accommodate the extension. Why talk about all that if 4.x HW is going to be required, and thus a 4.x API will be available? I don't get it; are you sure about this "HW distinction" thing you propose? A plain reading would say this could work on 3.x HW.

What "hair-splitting distinction"? It's only the difference between "specification" (i.e., a paper document) and "hardware" (i.e., ... hardware). That's not splitting hairs.

Is it possible for someone to make hardware that doesn't support any other OpenGL 4.x features except for image load/store? Yes. Would anyone actually do it? No. In the real world, hardware is mostly defined by what Direct3D requires. And D3D 11 is what provides image load/store functionality, but it also provides tessellation, more advanced feedback output, and various other OpenGL 4.x features.

Making D3D 10 hardware that could do image load/store would be a waste of time and silicon, since D3D 10 can't use it and it doesn't implement the other stuff that D3D 11 requires. So only OpenGL users could ever use it.

The specification says that it requires GL 3.0 because the specification talks about concepts that were introduced in GL 3.0. Therefore, it would be textually impossible to apply the extension to pre-GL 3.0 hardware.

In short, don't look at the extension specification to try to gauge what hardware supports it. The specification is about text. If you want to know what hardware supports specific functionality, use the OpenGL Extension Viewer's database (although it seems to be somewhat out-of-date, as it claims that nobody supports the more recent extensions).

That extension has a huge amount of verbiage, initiated by some people at NVIDIA but with some AMD people engaged as well, about all sorts of changes that include 3.x in many places. Why are they bothering to spend the man-hours to hammer out all that language for 3.x, if there's no intent to ever have it actually work on 3.x? I have trouble believing that people are putting in the paid hours "just for kicks," that there was never any intention at any point in industry history to actually do it. It may not have gotten done, it may have gotten killed for political or competitive reasons, but I have trouble with your thesis that only 4.x HW would do this and anything else is a complete waste of time. Some people who worked on this spec clearly thought otherwise and hammered out the language for it. Maybe it turned out to be quixotic, but impossible?

Surely this wasn't all hammered out just to benefit someone writing a software rasterizer or GL emulator someday?

Here's a theory: this work started when 3.x was going to be one thing, but it turned into something else, sliding features into the 4.x era, and they didn't re-verbalize the extension work in progress...

That's one theory I could devise out of several. Maybe I should just try asking the people who worked on the spec, if it continues to concern me someday.

Anyway, the spec as written is a real mess, and it's hard to understand what it's aiming at.

Why are they bothering to spend the man-hours to hammer out all that language for 3.x, if there's no intent to ever have it actually work on 3.x? I have trouble believing that people are putting in the paid hours "just for kicks," that there was never any intention at any point in industry history to actually do it.

I have trouble believing that the effort in writing the extension against GL 4.x would be any different from writing it against 3.x. The only "verbiage" that would change would be removing many entries in the "Dependencies on" section. And that's hardly where the spec spends most of its "verbiage".

Simply put: no greater effort was spent writing this spec against 3.x. It's just not that important, compared to the actual meat of the text.

I have trouble with your thesis that only 4.x HW would do this and anything else is a complete waste of time.

You can believe me or not; that won't change the fact that you won't see one iota of hardware that can do image load/store but can't also support GL 4.x.

Well, you might in the mobile space. But even then, you'd be looking at some new extension written against OpenGL ES 3.0, not this one exactly.

Here's a theory: this work started when 3.x was going to be one thing, but it turned into something else, sliding features into the 4.x era, and they didn't re-verbalize the extension work in progress...

Or alternatively, the ARB doesn't care what version something is written against. Again, extension specifications should never be seen as a guide to what versions of OpenGL will ever implement that extension. At best, it's only a guide to what versions could.

Uniform_buffer_object is written against OpenGL 2.1, even though no such hardware can support it. ARB_blend_func_extended is written against 3.2, but only claims that OpenGL 1.0 and ARB_fragment_shader are required.

You're just misinterpreting the information in the spec, assuming that it has anything to do with what actual hardware will support it.

Simply put: no greater effort was spent writing this spec against 3.x.

I considered that possibility. To which I say,

Or alternatively, the ARB doesn't care what version something is written against.

it's damn confusing for anyone trying to follow this stuff, all this lack of control over what version of the API is really in play at any given time. I wish the ARB would be more regimented about such things, instead of the "free-for-all" descriptive style I see in this particular spec. It reminds me of another thread about "API" documentation that you and ManDay got into recently. If extension specs aren't "API documentation," it sure would be nice for someone to clean up and produce the real item for the rest of us! This extension is particularly bad among the various ones I've perused; more typically there's a page or two of stuff about a narrowly focused area, and that's that.

But I get your general point: don't rely on theoretical docs to describe actual real-world capabilities. Thankfully, I already believed in that. I'm a little miffed that the 3.x HW on my laptop is even less capable than I thought, and that no HW of a similar era is likely to be any better.
