
View Full Version : Saving Depth Buffer to Texture



VASIMR
12-03-2012, 07:39 PM
Hello,

I am attempting to save both the fragment and depth buffers to textures for use in subsequent render passes. To verify that the depth buffer is correctly being rendered to the texture, I display the texture on screen (as well as saving it as a TGA); the result is a texture completely populated with values of 1.0. After looking at the documentation and several examples, my code is as follows:



glActiveTexture(GL_TEXTURE0);


// ----- THE DEPTH TEXTURE --------
glGenTextures(1, &depthTextureId);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 780, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);


GLfloat border[] = {1.0f, 0.0f,0.0f,0.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);


glBindTexture(GL_TEXTURE_2D, 0);




// ------ THE RENDER TEXTURE ------
glGenTextures(1, &renderTex);
glBindTexture(GL_TEXTURE_2D, renderTex);


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 780, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);


glBindTexture(GL_TEXTURE_2D, 0);

// ---- setup the FBO ------


glGenFramebuffers(1, &fboHandle);
glBindFramebuffer(GL_FRAMEBUFFER, fboHandle);


glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureId, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTex, 0);



GLenum drawBuffs[] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, drawBuffs);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);


The first pass code is as follows:


glEnable(GL_DEPTH_TEST);


glViewport(0,0, 1024, 780);


// make the FBO active
glBindFramebuffer(GL_FRAMEBUFFER, fboHandle);


// do a gl clear, and set the color to white
glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);


glm::mat4 projection = glm::perspective<float>(90, double(m_ActiveMovie->m_screen.Width())/double(m_ActiveMovie->m_screen.Height()), 0.10, 600.0);


glm::mat4 view = glm::lookAt<float>(glm::vec3(0,0,300), // eye
glm::vec3(0,0,0), // center
glm::vec3(0,1,0) // up
);

/* Draw several objects with shaders that work correctly at around z = +/- 20 */

glBindFramebuffer(GL_FRAMEBUFFER, 0);

glBindTexture(GL_TEXTURE_2D, 0);


glDisable(GL_DEPTH_TEST);


glFlush();
glFinish();





And the resultant textures are rendered to the screen by drawing a quad over the screen and using:


glClearColor( 1.0f, 1.0f, 1.0f, 1.0f );
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Draw some background objects */

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTextureId);

m_Shaders[SWF_RENDER].SetUniformI("Tex1", pVar);

/* Draw quad to screen and swap buffers */

Where pVar lets me switch between the two textures at run time. The renderTex displays correctly; however, as I said, the depthTextureId does not. Additionally, I should mention that the final render shader is using a sampler2D.

If anyone has any ideas as to why this is not working, please let me know. Thanks.

_x57_
12-04-2012, 01:38 AM
I do not know what is going wrong there, since it might be the drawing itself, the texture setup, or the rendering afterwards. I'd suggest you install a copy of gDEBugger from http://www.gremedy.com/ - it can show you the exact contents of your textures (buffers) etc. during execution. Pause the execution before/after rendering and check that the depth buffer is really modified and that the contents are really 1.000000 and not only close to it. It may be perfectly normal that all depth values are > 0.99.
For this I would reduce the buffer size to about 100x100 max to ease the debugging and minimize texture scrolling.

Also, check for glGetError(s) and FBO completeness.
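
For example, a minimal sketch of such a check (assuming a current GL context, with the FBO still bound right after attaching the textures) might be:


GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO incomplete: 0x%x\n", status);

// drain the error queue; glGetError can hold several queued errors
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    printf("GL error: 0x%x\n", err);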

VASIMR
12-04-2012, 12:21 PM
Thanks for the reply. I tried gDEBugger and was able to verify that the depth information is indeed being written to the depth texture. Additionally, as you suggested, the depth values are around 0.989.
With that said, I need to somehow convert that depth texture into an RGB texture with normalized values. I attempted to use glTexParameterf(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE); , which converted the RGB values from 255 to 254; however, the static color buffer is reading it as 255. Also, I should mention that the frustum matrix is set with a near value of 0.1 and a far value of 600, with the majority of the content sitting somewhere around 300.
Any ideas on how I should go about reading the depth information from the depthTex?
Thanks.

tonyo_au
12-04-2012, 04:55 PM
Personally, I have an extra texture of type GL_R32F where I store the depth in the shader to keep it as accurate as possible.

What happens if you use


float buffer[...

glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, buffer);


I would have thought this would keep the full value of the depth buffer. (I believe it is normally a 24-bit int.)

VASIMR
12-04-2012, 05:51 PM
Personally I have an extra texture with type GL_R32F where I store the depth in the shader to keep it as accurate as possible

What happens if you use


float buffer[...

glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, buffer);


I would have thought this would keep the full value of the depth buffer. (I believe it is normally a 24-bit int.)


Thanks for the reply. I attempted to do the following:



float * buffer = new float[1024*780];
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, buffer);
glBindTexture(GL_TEXTURE_2D, spriteTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 1024, 780, 0, GL_R32F, GL_FLOAT, buffer);
glBindTexture(GL_TEXTURE_2D, 0);
delete [] buffer;


This returns an error of GL_INVALID_ENUM on the glTexImage2D step. I also tried the above using GL_R16F. Any suggestions?
Additionally, should there not be a more efficient way to use the depth texture directly (instead of copying it)?

_x57_
12-05-2012, 02:38 AM
I would have thought this would keep the full value of the depth buffer. ( I believe it is normally a 24bit int)


@tonyo_au: The default depth buffer format on most devices should be a 24-bit fixed-point format. You could use a 32-bit float, but be careful with mixing/comparing them.

@VASIMR:
If I understand this correctly, what you want is to copy the depth texture from one texture to another. In general you should not download it to the CPU and then re-upload it to the GPU as you are trying. As you suggested, there is a more efficient way: use glCopyTexImage2D, or even better glCopyTexSubImage2D (if you copy more than once, this avoids reinitializing new glTexImage2D allocations again and again).
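
A rough sketch of that copy (assuming a GL context, with your FBO bound as the read framebuffer; depthCopyTex is a hypothetical destination texture allocated once with a matching GL_DEPTH_COMPONENT glTexImage2D call):


glBindFramebuffer(GL_READ_FRAMEBUFFER, fboHandle);
glBindTexture(GL_TEXTURE_2D, depthCopyTex); // hypothetical destination id
// copy the read framebuffer's depth values into the bound depth texture
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 780);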

However, if you just want your depth texture to stick around a little longer, e.g. to do shadow mapping or something in the shader, there seems to be no need to copy anything at all. Just remember your depth texture id, create a new blank depth buffer, and attach it to your FBO to continue. Bind the remembered depth texture id to a texture unit and use that one to access the texture in the shader. Is that what you want?
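
In code, that attachment swap could look something like this (a sketch, assuming a GL context; newDepthTex is a hypothetical second depth texture allocated the same way as depthTextureId):


glBindFramebuffer(GL_FRAMEBUFFER, fboHandle);
// replace the depth attachment with a fresh texture for further rendering
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, newDepthTex, 0);
// ... render the next pass ...
// the previous depth texture is now free to be sampled
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTextureId);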

VASIMR
12-05-2012, 07:15 AM
@VASIMR:
If I understand this correctly, what you want is to copy the depth texture from one texture to another. In general you should not download it to the CPU and then re-upload it to the GPU as you are trying. As you suggested, there is a more efficient way: use glCopyTexImage2D, or even better glCopyTexSubImage2D (if you copy more than once, this avoids reinitializing new glTexImage2D allocations again and again).

However, if you just want your depth texture to stick around a little longer, e.g. to do shadow mapping or something in the shader, there seems to be no need to copy anything at all. Just remember your depth texture id, create a new blank depth buffer, and attach it to your FBO to continue. Bind the remembered depth texture id to a texture unit and use that one to access the texture in the shader. Is that what you want?

Specifically what I am looking to do is save the depth buffer in RGBF16/F32 (or RF16/F32 format if that will work) so that I can send it to a sampler2D to be used in the edge detection stage of an MLAA pass. Additionally, I wanted to have the ability to discard any pixels that have a depth value corresponding to the far plane (for drawing semi-transparent scenes to a texture - i.e. GUI).
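
For the far-plane discard part, the fragment shader test could be sketched like this (GLSL; DepthTex and TexCoord are hypothetical names for the bound depth sampler and the quad's texture coordinate):


uniform sampler2D DepthTex;
in vec2 TexCoord;

void main()
{
    float d = texture(DepthTex, TexCoord).r;
    // nothing was drawn here; depth still holds the cleared far-plane value
    if (d >= 1.0 - 1e-5)
        discard;
    // ... MLAA edge-detection work would go here ...
}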

tonyo_au
12-05-2012, 02:38 PM
If you want precise work, set up a separate texture in R32F, clear it to 1.0f, and write gl_FragCoord.z into it - this works well for me.

VASIMR
12-05-2012, 03:23 PM
If you want precise work, set up a separate texture in R32F, clear it to 1.0f, and write gl_FragCoord.z into it - this works well for me.
I was considering doing that; however, I don't think it needs to be that precise. As I said, I am only planning on using it for MLAA and for discarding fragments on the far plane. I figured that I could probably save a few function calls by taking the depth buffer directly in that case.

I tried your solution of writing to a separate texture (I attached a depth renderbuffer instead), and am having difficulty writing to it. Would the fragment shader be something like:


layout (location = 0) out vec4 FragColor;
layout (location = 1) out float depthOut;


void main()
{
    depthOut = gl_FragCoord.z;
    ...
    FragColor = col1;
}

(where there is other code in place of the ... depending on which shader I am using - I should probably combine them all into one with a subroutine switch)
This ends up producing a texture completely filled with 1.0, and attempting to replace gl_FragCoord.z with 0.5 ( depthOut = 0.5; ) produces the same result.

tonyo_au
12-05-2012, 06:55 PM
This is basically what I am doing. I actually have a GL_RGBA32 texture, because I collect a few things, so I have a vec4 and store gl_FragCoord.z in one component and other info in the others.
If you bind the texture to an attachment point and do not write to it in the shader, my driver seems to write 0 to each component. I would make sure you only bind it to an attachment point for shaders that write to it.

VASIMR
12-05-2012, 07:26 PM
All of the shaders that I am calling while the FBO is active are writing to location 1; however, the texture doesn't seem to change. I even tried reverting the texture back to RGB8 form, writing vec3(0.5,0.5,0.5) to it in both shaders, yet it still displays 1.0,1.0,1.0. I think the bindings are correct:


glActiveTexture(GL_TEXTURE0);
// ----- THE DEPTH TEXTURE --------
glGenTextures(1, &depthTextureId);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1024, 780, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// ------ THE RENDER TEXTURE ------
glGenTextures(1, &renderTex);
glBindTexture(GL_TEXTURE_2D, renderTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024 /*512*/, 780/*512*/, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// ---- setup the FBO ------
// Create a frame buffer
glGenFramebuffers(1, &fboHandle);
glBindFramebuffer(GL_FRAMEBUFFER, fboHandle);

// create the depth buffer
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 780); //512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);


glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, depthTextureId, 0);



GLenum drawBuffs[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffs);

Any ideas as to why the FBO would be writing to color attachment 0, and not color attachment 1 as well (checked with gdebugger)? Does something specifically need to be enabled to allow for multiple color attachments?

VASIMR
12-05-2012, 11:13 PM
I found the issue. The shader correctly writes to the second texture if I set it to be RGBA as well (RGBA, RGBA8, RGBAF32, RGBF16, etc.). With that said, is this a requirement with FBOs, where all of the GL_COLOR_ATTACHMENTs must be of the same color type?

tonyo_au
12-05-2012, 11:55 PM
They certainly don't have to be the same type - mine aren't - but maybe they have to be vec4?

VASIMR
12-06-2012, 12:57 AM
It seems that you are correct. I changed both the vec tags and the color type at the same time, and I had seen an example fragment shader mixing vec3 and vec4 (a deferred shader which used vec4 for the fragment output and vec3 for the FBO bindings). With that said, I guess the stipulation is that the layout slots corresponding to the FBO bindings must either all be vec3 or all be vec4?

Thanks for the help, everything seems to work now.

tonyo_au
12-06-2012, 04:04 AM
I wonder if some other people can comment on valid combinations of FBO attachments

_x57_
12-07-2012, 03:11 AM
From the GL 4.1 spec, sec. 4.4:
All attachments must have the same layering (1D, 2D, 3D texture, cubemap, ...) and the same sample count, plus the vague restriction

"The combination of internal formats of the attached images does not violate
an implementation-dependent set of restrictions."

Concluding from the first two, I would bet that the format dimensionality (vec4, vec3, ...) is also a restriction for most drivers.

However, the FBO should report FRAMEBUFFER_UNSUPPORTED in this case... did you check this?