FBO, FLOAT, HDR

I want to implement HDR in my demo, but I’ve run into some problems.
I’m using an FBO for render-to-texture, currently with NV_texture_rectangle + a float format.

Is it possible to switch to a more elegant approach using ATI_texture_float + ARB_texture_non_power_of_two?

If yes, is this the correct way to do it?
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB_FLOAT16_ATI, 1280, 1024, 0, GL_RGB, GL_FLOAT, NULL);
And then attach this texture as the color target of the FBO?

What’s the deal with

Issues

1. Should we expose a GL_FLOAT16_ATI pixel type so that the 16 bit
   float textures can be directly loaded?
   RESOLUTION:  This will be exposed in a separate extension.

in the ATI_texture_float spec?

Have you tried the ARB float formats?
GL_RGB16F_ARB, GL_RGBA16F_ARB, GL_LUMINANCE16F_ARB, GL_LUMINANCE32F_ARB etc…
They seem to work quite well on a GeForce 6800 NU.

Yeah, it works, thnx.

Didn’t know there was an ARB extension; I still have fairly recent NV texture-format docs on the table, where ATI_texture_float is the latest :slight_smile:

Glad I could be of help :slight_smile:
PS: post a link to your demo whenever you’re done :slight_smile:

Btw, does anyone know a ‘real’ function that converts a 32-bit float to a 16-bit float?
I tried Google, but each time it points me back to 32-bit float to 16-bit int conversion.

I looked at the OpenSG source, but the code seems very complex (lookup tables etc…)

Cheers;
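For reference, a minimal conversion can be written by hand. Here is a sketch (the function name `float_to_half` is mine, and it truncates instead of rounding to nearest; a production version like OpenEXR’s `half` type handles rounding and every edge case more carefully):

```c
#include <stdint.h>
#include <string.h>

/* Convert a 32-bit IEEE float to a 16-bit "half" float (truncating).
   Sketch only: does not round to nearest and drops half denormals'
   extra precision paths beyond a simple shift. */
uint16_t float_to_half(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);                      /* reinterpret the float's bits */

    uint32_t sign = (x >> 16) & 0x8000u;           /* sign bit moved to half position */
    int32_t  exp  = (int32_t)((x >> 23) & 0xFF) - 127 + 15; /* re-bias exponent */
    uint32_t mant = x & 0x007FFFFFu;

    if (exp >= 31)                                 /* overflow, Inf or NaN */
        return (uint16_t)(sign | 0x7C00u | (mant ? 0x200u : 0u));
    if (exp <= 0) {                                /* underflow -> denormal or zero */
        if (exp < -10) return (uint16_t)sign;
        mant |= 0x00800000u;                       /* add the implicit leading 1 */
        return (uint16_t)(sign | (mant >> (14 - exp)));
    }
    return (uint16_t)(sign | ((uint32_t)exp << 10) | (mant >> 13));
}
```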

Originally posted by M/\dm/:
[b]Yeah, it works, thnx.

Didn’t know there was an ARB extension; I still have fairly recent NV texture-format docs on the table, where ATI_texture_float is the latest :slight_smile: [/b]
GL_ARB_texture_float, GL_APPLE_float_pixels and GL_ATI_texture_float use the same tokens, so just check for any of the three extensions (I think the first two can define vertex attributes as well).

Btw, for the 32-bit float -> 16-bit float conversion I don’t need to bother: I just specify the internal format as GL_RGB16F_ARB and upload the data as 32-bit floats (GL_RGB / GL_FLOAT), and OpenGL does the conversion for me :wink:

Originally posted by execom_rt:
Btw, does anyone know a ‘real’ function that converts a 32-bit float to a 16-bit float?

FWIW, if you ever need that function, I once saw that the OpenEXR lib has one. Maybe there are others around though!

Download the NVIDIA SDK and check this file…
“c:\Program Files\NVIDIA Corporation\SDK 8.5\LIBS\inc\half\half.h”

yooyo

I have another question about HDR: how do I get the average luminance?

Currently I render my scene with all the parallax maps, refractions and reflections to a float texture via FBO. Then I use (for now) a uniform parameter, modifiable from within the program, and render a fullscreen quad with this texture and the HDR shader enabled.

My next goal is to get the average luminance from the rendered image so I can get accurate results.

So far I’ve seen these suggestions:

  • Using occlusion queries and killing fragments (in some luminance range) in the shader to build a scene histogram (multiple passes)

  • Reading back to the CPU and doing the calculations there (slow readbacks??? PBO?)

  • Converting the scene from HDR (float tex?) to luminance and downscaling to 1x1

So far I think the 3rd is the best option, but I wonder about its realization.
As far as I understand, I must create a new framebuffer object, bind a luminance texture (float?) to it, and use glGenerateMipmapEXT to get the whole chain when rendering my quad with the float (scene) texture on it.
Then render my quad with the HDR shader, reading the value from the smallest luminance texture in the mipmap chain (how?)???
Or is there a better way??? And how do you achieve smooth transitions, like in Far Cry???
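Whichever GPU scheme you pick, a CPU-side reference is handy for validating it against a readback. A sketch (function and parameter names are mine), using the usual Rec. 709 luminance weights on packed RGB float data:

```c
#include <stddef.h>

/* CPU reference for the average scene luminance.  `pixels` points at
   `count` packed RGB float triples (e.g. a glReadPixels result from the
   float scene texture); returns the mean Rec. 709 luminance. */
float average_luminance(const float *pixels, size_t count)
{
    double sum = 0.0;                         /* accumulate in double for stability */
    for (size_t i = 0; i < count; ++i) {
        const float *p = pixels + i * 3;
        sum += p[0] * 0.2125 + p[1] * 0.7154 + p[2] * 0.0721;
    }
    return count ? (float)(sum / (double)count) : 0.0f;
}
```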

Originally posted by execom_rt:
[b]I looked at the OpenSG source, but the code seems very complex (lookup tables etc…)

Cheers;[/b]
That’s the same code as OpenEXR, which is the gold standard for 16-bit floats. :wink:

I tried to get the average luminance today, but something seems to be wrong: the value changes quite dramatically even with small scene changes.

Currently I’m trying to get it this way:
  • Gen an FBO and attach a float16 texture of size (1,1)
  • Set the viewport to (0,0,1,1)
  • Render a fullscreen quad in ortho with the scene texture applied while the FBO is bound
  • Render a fullscreen quad with the scene tex bound and pass the (1,1) luminance tex to the HDR shader

HDR Shader:

uniform sampler2D col_tex;
uniform sampler2D avg_lum;
uniform float e;

void main ()
{
vec3 pixcol = texture2D(col_tex, gl_TexCoord[0].xy).rgb;
vec3 lum_t = texture2D(avg_lum, vec2(0.0, 0.0)).rgb;
float lum = lum_t.r*0.2125 + lum_t.g*0.7154 + lum_t.b*0.0721;
gl_FragColor.rgb = e/lum*pixcol;
gl_FragColor.a = 1.0;
}

FBO creation routine:

glGenTextures(1, &avg_lum_tex);
glBindTexture(GL_TEXTURE_2D, avg_lum_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, 1, 1, 0, GL_RGB, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fb_lum);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb_lum);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, avg_lum_tex, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Part of the render code (color_tex is a 2D float texture with the scene rendered into it):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb_lum);
glViewport(0, 0, 1, 1);
SetOrtho();
    
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glActiveTexture(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, color_tex);
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
glVertex2f( 1.0f, -1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
glVertex2f( 1.0f,  1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

glViewport(0, 0, scr.x, scr.y);
SetOrtho();
    
glActiveTexture(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, color_tex);
glActiveTexture(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, avg_lum_tex);
glUseProgramObjectARB(ShaderHDR);
glUniform1i(glGetUniformLocation(ShaderHDR, "col_tex"), 0);
glUniform1i(glGetUniformLocation(ShaderHDR, "avg_lum"), 1);
glUniform1f(glGetUniformLocation(ShaderHDR, "e"), HDR_alpha);
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
glVertex2f(-1.0f, -1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
glVertex2f( 1.0f, -1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
glVertex2f( 1.0f,  1.0f);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
glVertex2f(-1.0f,  1.0f);
glEnd();
glUseProgramObjectARB(0);
glActiveTexture(GL_TEXTURE0_ARB);

Try something like this (I’m using an HDR cubemap):

uniform samplerCube s0;
uniform float exposure;

void main ()
{
	vec4 i = textureCube(s0, vec3(gl_TexCoord[0]));

	gl_FragColor.r = 1.0 - 1.0 / (exposure * i.r + 1.0);
	gl_FragColor.g = 1.0 - 1.0 / (exposure * i.g + 1.0);
	gl_FragColor.b = 1.0 - 1.0 / (exposure * i.b + 1.0);
}

Or, better but slower …

	gl_FragColor.r = 1.0 - exp(-exposure * i.r);
	gl_FragColor.g = 1.0 - exp(-exposure * i.g);
	gl_FragColor.b = 1.0 - exp(-exposure * i.b);

Since you can’t have bilinear filtering, I’m also using a ‘bloom’ effect, in order to smooth everything.

Originally posted by M/\dm/:
My next goal is to get the average luminance from the rendered image so I can get accurate results.
Basically, what you should try is :

  • Render the scene into a texture: that’s your ‘Source Texture’.

  • Convert the texture to grayscale and average it in blocks (e.g. a 3x3 block average: read 9 texels and average them).
    Do this 4-5 times to get a well-blurred version (creating a chain of textures like 81x81, 27x27, 9x9, 3x3, 1x1 (the final luminance value)): that’s your ‘Luminance Texture’.

  • Average the ‘Source Texture’ again, but just once, using a 3x3 block average, and compose it with your ‘Luminance Texture’.

  • Additionally, create a bloom pass for a ‘final touch’.

Hope it’s clear enough.
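The reduction step in that chain can be sketched on the CPU like this (single-channel version; the function name is mine). On the GPU, each step would be a fullscreen pass into the next smaller texture, reading the same 9 texels per output pixel in the fragment shader:

```c
/* One 3x3 block-average reduction of a square single-channel image
   (e.g. 81x81 -> 27x27 -> 9x9 -> 3x3 -> 1x1).  `src` is src_w x src_w,
   `dst` must hold (src_w/3) x (src_w/3) floats. */
void downsample3x3(const float *src, int src_w, float *dst)
{
    int dst_w = src_w / 3;
    for (int y = 0; y < dst_w; ++y)
        for (int x = 0; x < dst_w; ++x) {
            float sum = 0.0f;
            for (int j = 0; j < 3; ++j)          /* sum the 3x3 source block */
                for (int i = 0; i < 3; ++i)
                    sum += src[(y * 3 + j) * src_w + (x * 3 + i)];
            dst[y * dst_w + x] = sum / 9.0f;     /* write the block average */
        }
}
```

Chaining this until the size reaches 1x1 leaves the overall image mean in the single remaining texel.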

Look at this web page.

Really good HDR demo with nice effects.

I’ll take a look, thnx.

But can’t I use the linear filtering available for fp16 textures in hardware (6800GT)? Theoretically that should give the same result when minified down to 1x1…
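Arithmetically that reasoning holds: for power-of-two sizes, repeated 2x2 box filtering (what a box-filtered mipmap chain does level by level) yields exactly the overall image mean. A small in-place CPU check of that claim (the function name is mine):

```c
/* Repeatedly 2x2 box-filter a w x w single-channel image down to 1x1,
   as a box-filtered mipmap chain would, and return the remaining texel.
   `w` must be a power of two; `img` is overwritten in place. */
float mipmap_mean(float *img, int w)
{
    for (; w > 1; w /= 2) {
        int hw = w / 2;
        for (int y = 0; y < hw; ++y)
            for (int x = 0; x < hw; ++x)
                img[y * hw + x] =                      /* average one 2x2 block */
                    (img[(2*y)   * w + 2*x] + img[(2*y)   * w + 2*x + 1] +
                     img[(2*y+1) * w + 2*x] + img[(2*y+1) * w + 2*x + 1]) * 0.25f;
    }
    return img[0];
}
```

Whether the hardware’s automatic mipmap generation actually uses an exact box filter on fp16 is a separate question, so it is worth validating the 1x1 level against a readback once.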

You could implement that on the GeForce 6 as a vendor-specific optimisation, and see if it works, of course.
I didn’t try the mipmap generation approach; I use the method cited above. The mipmap generation might be slower than the ‘pixel shader’ version.