Fragment shader applied to a texture attached to an FBO

Hi,

I wrote a fragment shader that lets me dump the z-buffer into the color channels and display it.

Here is the code


void main()
{
    // Map window-space depth back to NDC depth in [-1, 1]
    float ndcDepth =
        (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
        (gl_DepthRange.far - gl_DepthRange.near);
    float clipDepth = ndcDepth / gl_FragCoord.w;
    float z = ((clipDepth * 0.5) + 0.5) * 10.0;
    // Reading gl_FragColor before writing it is undefined, so use a constant alpha
    gl_FragColor = vec4(z, z, z, 1.0);
}

My final aim is to transmit depth values to CUDA.

So I create an FBO and attach a Texture2D to it:


GLuint gen_texture()
{
	GLuint color_texture;
	glGenTextures(1, &color_texture);
	glBindTexture(GL_TEXTURE_2D, color_texture);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

	return color_texture;
}

GLuint  gen_fbo(GLuint textureid)
{
	GLuint my_fbo = 0;
	glGenFramebuffers(1, &my_fbo);
	glBindFramebuffer(GL_FRAMEBUFFER, my_fbo);
	glFramebufferTexture2D(GL_FRAMEBUFFER,
	                       GL_COLOR_ATTACHMENT0,
	                       GL_TEXTURE_2D,
	                       textureid,
	                       0);

	GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
	glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers

	if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
		printf("fail fb\n");

	return my_fbo;

}

but I have no idea how and when I should apply my fragment shader to it.

Any help would be appreciated :)

You don’t “apply” fragment shaders to things. You render with a fragment shader. So the rendering operations that generate depth values are when you need to be using your fragment shader.
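
In code terms it's something like this rough sketch (depth_dump_program and draw_scene() are placeholder names, not from your code):

// A fragment shader runs on whatever you draw while its program is bound;
// there is no separate "apply the shader to the FBO" step.
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo);  // render into the FBO's color texture
glClear(GL_COLOR_BUFFER_BIT);

glUseProgram(depth_dump_program);           // program containing your fragment shader
draw_scene();                               // your usual draw calls

glBindFramebuffer(GL_FRAMEBUFFER, 0);       // back to the default framebuffer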

Sorry, I didn't understand your answer.

[QUOTE]So the rendering operations that generate depth values are when you need to be using your fragment shader.[/QUOTE]

My fragment shader is there to dump depth values to the color channels; it doesn't generate depth.

[QUOTE]My fragment shader is there to dump depth values to the color channels; it doesn't generate depth.[/QUOTE]

I didn’t say that your fragment shader would generate the depth values. I said that your “rendering operations” would.

gl_FragCoord.z is generated by the rasterizer when you render an object.

If all you have is a depth buffer with depth values and you want to convert that into a color buffer (with linearized color values), the fragment shader you posted won't do that, since it never fetches anything from the depth texture you previously rendered to.
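
A pass that does fetch from a depth texture would look something like the sketch below (the sampler and texcoord names here are invented, not from your code):

#version 330

uniform sampler2D depth_tex;  // the depth texture rendered in the first pass
in vec2 uv;                   // texcoords of a fullscreen quad
out vec4 color_out;

void main()
{
    float d = texture(depth_tex, uv).r;  // window-space depth in [0, 1]
    color_out = vec4(d, d, d, 1.0);      // dump it to the color channels
}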

Is that what you’re trying to do?

Yes exactly :)

Now I've changed my fragment shader to this:


layout(location = 0) out vec4 color_out;

void main()
{
    float ndcDepth =
        (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
        (gl_DepthRange.far - gl_DepthRange.near);
    float clipDepth = ndcDepth / gl_FragCoord.w;
    float z = (clipDepth * 0.5) + 0.5;
    // gl_FragColor can't be mixed with user-defined outputs, so use a constant alpha
    color_out = vec4(z, z, z, 1.0);
}

After that, I think this shader renders to my FBO texture:


GLuint  gen_fbo(GLuint textureid)
{
	GLuint my_fbo = 0;
	glGenFramebuffers(1, &my_fbo);
	glBindFramebuffer(GL_FRAMEBUFFER, my_fbo);
	glFramebufferTexture2D(GL_FRAMEBUFFER,
	                       GL_COLOR_ATTACHMENT0,
	                       GL_TEXTURE_2D,
	                       textureid,
	                       0);

	GLenum DrawBuffers[1] = {GL_COLOR_ATTACHMENT0};
	glDrawBuffers(1, DrawBuffers); // "1" is the size of DrawBuffers

	if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
		printf("fail fb\n");

	return my_fbo;

}

GLuint gen_texture()
{
	GLuint color_texture;
	glGenTextures(1, &color_texture);
	glBindTexture(GL_TEXTURE_2D, color_texture);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, win.width, win.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

	return color_texture;
}

But something is strange… as you can see, my glTexImage2D call uses GL_UNSIGNED_BYTE.
When I use glReadPixels, the RGB values are floats. Is this due to the conversion, or is it because I'm not using my FBO at all?

Maybe my question is not clear; if you want me to clarify any point of my problem, it would be a pleasure :)

[QUOTE=tkostas;1279546]But something is strange… as you can see, my glTexImage2D call uses GL_UNSIGNED_BYTE.
When I use glReadPixels, the RGB values are floats. Is this due to the conversion, or is it because I'm not using my FBO at all?[/QUOTE]
The internal (GPU-side) and external (CPU-side) formats don’t have to be the same; the implementation will convert them as required.

glReadPixels() will return the data in whatever format you request. If the type parameter is GL_UNSIGNED_BYTE, you'll get unsigned bytes; if the type parameter is GL_FLOAT, you'll get floats. The texture's internal format doesn't matter (beyond the fact that you'll lose information if you request the data in a format that has either less precision or a smaller range than the internal format).
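
For example (a rough sketch assuming the 1024×768 RGBA texture from earlier in the thread; the buffer names are made up):

// Same pixels, read back twice with two different requested types
unsigned char *bytes  = malloc(1024 * 768 * 4);                  // one byte per channel
float         *floats = malloc(1024 * 768 * 4 * sizeof(float));  // one float per channel

glBindFramebuffer(GL_FRAMEBUFFER, my_fbo);
glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, bytes); // channels in [0, 255]
glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_FLOAT, floats);        // channels in [0.0, 1.0]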

[QUOTE=GClements;1279548]The internal (GPU-side) and external (CPU-side) formats don’t have to be the same; the implementation will convert them as required.[/QUOTE]

You were absolutely right. I finally managed to transfer my texture from OpenGL to CUDA, and the values seem to be the right ones.
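
For reference, the OpenGL-to-CUDA side of the transfer looks roughly like the sketch below (error checking omitted; only the texture id color_texture comes from the code above, everything else is illustrative):

#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Register the GL texture with CUDA once, after creating it
cudaGraphicsResource *resource = NULL;
cudaGraphicsGLRegisterImage(&resource, color_texture, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsReadOnly);

// Each frame: map the texture (GL must be finished writing to it)
cudaGraphicsMapResources(1, &resource, 0);

cudaArray *tex_array = NULL;
cudaGraphicsSubResourceGetMappedArray(&tex_array, resource, 0, 0);

// Copy the RGBA8 texels into linear device memory for a kernel to consume
void *d_pixels = NULL;
cudaMalloc(&d_pixels, 1024 * 768 * 4);
cudaMemcpy2DFromArray(d_pixels, 1024 * 4, tex_array, 0, 0,
                      1024 * 4, 768, cudaMemcpyDeviceToDevice);

cudaGraphicsUnmapResources(1, &resource, 0);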