glGetMinmax, FBOs, and HDR textures

Hello all,
I’m trying to write a High Dynamic Range application that utilises floating point textures attached to FBOs. Ideally I want quick access to important information in the FBO attached texture (like the min, avg, and max) so I can perform a range of selectable effects from tone-mapping to auto-exposure.

For the average I’ve settled on using the highest mipmap level as a quick approximation. The minimum and maximum values have been a little less trivial to obtain. I want to keep reading the texture back from the graphics card as a last resort, and I can’t figure out how to use gl[Get]Minmax on the FBO or its texture.
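
In case it helps to see what I mean, here’s roughly what I do for the average (a trimmed-down sketch; hdrColorTex, texWidth and texHeight are placeholders for my actual variables, and I’m using GLEW for the extension entry points):

  #include <GL/glew.h>
  #include <cmath>
  #include <algorithm>

  // Read back the average colour of the HDR texture via its top (1x1) mip level.
  void readAverageColour(GLuint hdrColorTex, int texWidth, int texHeight, float avg[3])
  {
      glBindTexture(GL_TEXTURE_2D, hdrColorTex);
      glGenerateMipmapEXT(GL_TEXTURE_2D);   // rebuild the mip chain after rendering

      // The highest mip level is 1x1 and holds the box-filtered average.
      int topLevel = (int)std::floor(std::log((float)std::max(texWidth, texHeight)) / std::log(2.0f));
      glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGB, GL_FLOAT, avg);
      // Reading back as GL_FLOAT keeps the full (unclamped) dynamic range.
  }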

Is it possible to easily find the minimum and maximum of a texture (attached to an FBO, if that makes a difference) and to get the values in their full dynamic range? (i.e. not clamped to [0,1] or [0,255]…)

Thanks for the help,
Richiek

I once did an HDR demo, but without advanced exposure control; I just used the average value.

I discovered this glGetMinmax function along with you, and it looks great… but your hardware needs to support GL_ARB_imaging, which doesn’t seem to be widely supported.

You also need to call glMinmax to define a minmax table… but you are right, the documentation is not very explicit. It says the operation applies to “incoming pixels” (I’m not sure exactly what they mean by that).

So I assume that, when you are drawing into the texture attached to the FBO colour attachment point, you do something like this:

glEnable(GL_MINMAX);
glMinmax(GL_MINMAX, GL_RGB[16|32]F_ARB, GL_FALSE);   // set up the minmax table

// bind the FBO and draw into the texture

glGetMinmax(GL_MINMAX, GL_FALSE, GL_RGB, [GL_HALF_FLOAT_ARB|GL_FLOAT], minmaxData);

I put GL_FALSE for the sink parameter because otherwise it looks like the minmax processing would discard the drawing operations into the texture.

And minmaxData is an array to store min/max values.

Thanks for the reply.
I tried the code suggestion but it still doesn’t return anything of use.
Here’s the code snippet:


  glEnable(GL_MINMAX);
  glMinmax(GL_MINMAX, GL_RGB, GL_FALSE);

  renderFullScreenQuad();

  float minmaxData[6] = {0,0,0,0,0,0};
  glGetMinmax(GL_MINMAX, GL_FALSE, GL_RGB, GL_FLOAT, minmaxData);

  cout << "Min: " << minmaxData[0] << ", " << minmaxData[1] << ", " << minmaxData[2] << endl;
  cout << "Max: " << minmaxData[3] << ", " << minmaxData[4] << ", " << minmaxData[5] << endl;

I had to change your GL_RGB[16|32]F_ARB suggestion as it caused an error. Other than that, the code always produces the following output:
Min: 3.40282e+38, 3.40282e+38, 3.40282e+38
Max: -3.40282e+38, -3.40282e+38, -3.40282e+38
… which are the initial values I assume, so nothing seems to be working.

Does anyone have any other suggestions?
Thanks,
Richiek.

What kind of error did you get?

You can’t use GL_RGB as the internal format since you’re doing HDR rendering. You have no other choice: you need GL_RGB16F_ARB or GL_RGB32F_ARB, depending on the precision you want.

The values you get are the defaults that OpenGL sets, i.e. the greatest representable colour value for the minimum and the smallest one for the maximum. If I remember correctly from the spec, this initialization is done by the glMinmax call.

So it seems these values stay unchanged because you set the wrong internal format. You have no choice but to solve this by setting GL_RGB[16|32]F_ARB.
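
Concretely, I mean something like this in your snippet:

  glMinmax(GL_MINMAX, GL_RGB16F_ARB, GL_FALSE);   // or GL_RGB32F_ARB for full precision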

But maybe we are wrong from the beginning; this was a supposition on my part, and it would be great to get help from someone who has already done this.

EDIT:

I have found some example code here.

It is a JOGL code example, but Java syntax is pretty much like C, so it may help you. I have also noticed that they pass true to glGetMinmax whereas I had set false.

I think you should set it to true, like in the JOGL code; apparently this resets all the entries in the minmax table.

The error that occurs for me is: “invalid enumerant”. This only happens when I replace GL_RGB with GL_RGB[16|32]F_ARB and occurs after the glMinmax(…) and glGetMinmax(…) calls.

I will continue to play around with this until I find something that works! (hopefully soon. I’m seriously thinking of abandoning the use of min and max values)

Many thanks,
Richiek.

OK, but re-reading the glGetMinmax reference page I realize I was wrong on one point.

The format parameter in glGetMinmax is the data format, so you are right, it is GL_RGB. But in glMinmax it is the internal format, so there you need GL_RGB16F_ARB or GL_RGB32F_ARB.

So your invalid enumerant error after the glGetMinmax call makes sense.

I have also found that the specification of these minmax operations is given in the OpenGL specification itself (search for “minmax”), since GL_ARB_imaging doesn’t have a separate spec file. So I strongly advise you to read the minmax section of the OpenGL spec.

And, skimming quickly through the spec, I am wondering whether these min/max operations only work on a glReadPixels or glDrawPixels pixel flow… which would not be very efficient.
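
If that is the case, I guess the whole thing would need to look something like this (just a guess on my part, untested, and assuming GL_ARB_imaging is really exposed): the FBO pixels are pushed through the pixel transfer pipeline with glReadPixels so the minmax stage sees them, with sink set to GL_TRUE so the read-back data itself is thrown away.

  #include <GL/glew.h>
  #include <vector>

  // Untested sketch: gather per-channel min/max of the FBO colour attachment
  // by driving its pixels through the imaging pipeline with glReadPixels.
  void readMinMax(int width, int height, float minRGB[3], float maxRGB[3])
  {
      glEnable(GL_MINMAX);
      // Float internal format for the table to avoid clamping (not sure the
      // float formats are accepted here); sink = GL_TRUE means the pixels are
      // consumed by the minmax stage instead of being returned to the client.
      glMinmax(GL_MINMAX, GL_RGB32F_ARB, GL_TRUE);

      // Read from the FBO colour attachment; the transfer goes through minmax.
      std::vector<float> dummy(width * height * 3);
      glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
      glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, &dummy[0]);

      // The format here is the *data* format (GL_RGB); GL_TRUE resets the
      // table entries afterwards, like in the JOGL example.
      float minmaxData[6];
      glGetMinmax(GL_MINMAX, GL_TRUE, GL_RGB, GL_FLOAT, minmaxData);
      glDisable(GL_MINMAX);

      for (int i = 0; i < 3; ++i)
      {
          minRGB[i] = minmaxData[i];       // first three values: minimum RGB
          maxRGB[i] = minmaxData[i + 3];   // last three values: maximum RGB
      }
  }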

I am pretty sure most of the ARB_imaging is not hardware accelerated.

Well, if that’s the case then I suppose I shouldn’t worry about exploring this further, seeing as I was aiming to do this for the sake of having it hardware accelerated.

Thanks for all your efforts. It might be nice to have some closure on this for the sake of principle, but I won’t be so hell-bent on having it as a feature of my application.

Thanks again,
Richiek.

Can someone confirm this?

GL_ARB_imaging has appeared in the GL extension list for some time on GeForce 6, 7 and 8.
ATI do not expose GL_ARB_imaging on their products (tested: R9700, X1600, 4850). I’ll have to recheck with the recent Catalyst 9.2 release, as it brings a load of new GL extensions (GL 3 support).
I’ll add a separate post if it does.

So does GL_ARB_imaging mean all the imaging subset is accelerated? I’d guess and say so (subject to obscure texture formats).

I believe ZbuffeR is right,

Most hardware doesn’t accelerate GL_ARB_imaging.
Just check nVidia SDK 9.5; there is an example using GL_ARB_imaging, and it’s slow as hell :).

Ido

http://www.ziggyware.com/readarticle.php?article_id=226

So you suggest to calculate the min/max values based on ping-ponging between 2 textures?

I already do that with success, but I thought the glGetMinmax() function could do that more efficiently with less overhead…
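
For what it’s worth, my reduction is basically the following (stripped down: the FBO/texture creation, shader compilation and uniform updates are left out, and the function names are just from my own code). Each pass reads a 2x2 block with nearest filtering and halves the resolution, keeping the running minimum in .r and the running maximum in .g; the very first pass writes the HDR luminance into both channels.

  #include <GL/glew.h>

  // Fragment shader for one reduction pass: min of the .r channels and max of
  // the .g channels of a 2x2 block of the source (GL_NEAREST filtering assumed).
  static const char* reduceFrag =
      "uniform sampler2D src;                                  \n"
      "uniform vec2 texelSize;   // 1.0 / source resolution    \n"
      "void main()                                             \n"
      "{                                                       \n"
      "    vec2 uv = gl_TexCoord[0].xy;                        \n"
      "    vec2 o  = 0.5 * texelSize;                          \n"
      "    vec2 a = texture2D(src, uv + vec2(-o.x, -o.y)).rg;  \n"
      "    vec2 b = texture2D(src, uv + vec2( o.x, -o.y)).rg;  \n"
      "    vec2 c = texture2D(src, uv + vec2(-o.x,  o.y)).rg;  \n"
      "    vec2 d = texture2D(src, uv + vec2( o.x,  o.y)).rg;  \n"
      "    float lo = min(min(a.r, b.r), min(c.r, d.r));       \n"
      "    float hi = max(max(a.g, b.g), max(c.g, d.g));       \n"
      "    gl_FragColor = vec4(lo, hi, 0.0, 1.0);              \n"
      "}                                                       \n";

  void renderFullScreenQuad();   // same kind of helper as in the snippet above

  // Ping-pong between two square float render targets, halving the size each
  // pass, until a single texel holds the global min (.r) and max (.g).
  void reduceMinMax(GLuint pingTex[2], GLuint pingFBO[2], int size, float result[2])
  {
      int src = 0;
      while (size > 1)
      {
          int dst = 1 - src;
          size /= 2;
          glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, pingFBO[dst]);
          glViewport(0, 0, size, size);
          glBindTexture(GL_TEXTURE_2D, pingTex[src]);
          // (reduction program is bound and its "texelSize" uniform updated here)
          renderFullScreenQuad();
          src = dst;
      }

      // One texel left; read it back as unclamped floats.
      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, pingFBO[src]);
      float texel[3];
      glReadPixels(0, 0, 1, 1, GL_RGB, GL_FLOAT, texel);
      result[0] = texel[0];   // minimum luminance
      result[1] = texel[1];   // maximum luminance
  }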

Try and benchmark if you have a doubt. It is probably very hardware dependent.

[QUOTE]So you suggest to calculate the min/max values based on ping-ponging between 2 textures?

I already do that with success, but I thought the glGetMinmax() function could do that more efficiently with less overhead…[/QUOTE]
Sorry, my mind reading isn’t working at the moment. I found you a tutorial on the only way of doing it efficiently - no need to thank me… oh, you didn’t.