Reading from a frame buffer in the fragment shader

Hi, I would like to know if it is possible to read values from a frame buffer from a shader. What I want to do is a first pass where I render my geometry with a shader that writes values into a buffer, and then a second pass where another shader reads values from this buffer.
Thank you!

You can’t read from the buffer you are currently writing to. You can, however, render to a render target in a first pass, then in a second pass read the corresponding pixel from it, as long as the second pass writes to a different render target.

This sounds like (if I understood you correctly) exactly what I was looking for. Could you please point me toward some reference or, better yet, a tutorial?
Thank you!

cignox1

You should read FBO specification first:
http://oss.sgi.com/projects/ogl-sample/registry/EXT/framebuffer_object.txt

Tutorial:
http://download.developer.nvidia.com/dev…mebuffer_object

PS: googling for “FBO tutorial” helped me, why didn’t it help you? )))))))

Thank you. Actually, I already use FBOs for shadow mapping and for texture projection. What I meant is that I don’t know how to access an FBO from the shader while rendering onto the normal frame buffer…

Access is simple - as you know, GL won’t let you read from the color buffer (or depth, stencil, accum, etc…) in a shader, so you have to get the data into a texture somehow: via glCopyTexSubImage2D(), or by rendering directly to a texture with a framebuffer object - it doesn’t matter. The goal is to end up with a texture ))

Using a texture was something I hoped to avoid… I find it sad that there is no way to read data from a buffer without going through textures :frowning:
This is what I want to do: currently I pass two shadow maps to the shader, which uses them to calculate the shadow intensity and thus the fragment color. I would like to split this into two passes: in the first, I render the scene passing the two shadow maps to a shader. For each pixel, this shader stores the shadow cast by the first light in R, by the second light in G, and so on with B and A.
In the second pass, this buffer is used to compute the contribution of each light to the fragment.
The advantage is that one can use 4 shadow maps while occupying no texture units during the main pass, leaving them free for texturing. In the worst case, I will use only one texture unit for 4 shadow maps. In addition, one should be able to exploit the first pass for early z culling too, but I don’t know exactly how that works.
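The first pass described above could be sketched like this in a fragment shader (a minimal sketch only; the uniform and varying names `shadowMap0`…`shadowMap3` and `projCoords[]` are illustrative, not from the original post):

```glsl
// First pass: pack the shadow term of up to four lights into the
// RGBA channels of the render target, one 8-bit factor per light.
uniform sampler2DShadow shadowMap0;
uniform sampler2DShadow shadowMap1;
uniform sampler2DShadow shadowMap2;
uniform sampler2DShadow shadowMap3;
varying vec4 projCoords[4];   // per-light projective shadow coordinates

void main()
{
    float s0 = shadow2DProj(shadowMap0, projCoords[0]).r;
    float s1 = shadow2DProj(shadowMap1, projCoords[1]).r;
    float s2 = shadow2DProj(shadowMap2, projCoords[2]).r;
    float s3 = shadow2DProj(shadowMap3, projCoords[3]).r;
    // each channel holds one light's shadow factor
    gl_FragColor = vec4(s0, s1, s2, s3);
}
```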

The problem is that if I have to use a texture, am I not limited to square, power-of-two textures? And how do I get the correct pixel coordinate from within the shader?

I didn’t understand you exactly…

You mean, you have 2 lights and 2 shadowmaps? But a shadowmap is a texture )) So, 4 shadowmaps equal 4 texture units being busy…

Each shadowmap is a texture, so I don’t see any benefit in your explanation… How do you want to access all 4 shadowmaps while only 1 is bound?
I’m sure that it’s impossible, because it’s impossible ))

Or maybe I don’t understand what you were talking about…

And what about texture size limitations…

There is an extension for non-power-of-two textures (actually, there are 2 extensions, but the first one is not so friendly in usage - no mip-maps, no wrapping, and non-normalized texture coordinates):
http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_rectangle.txt
http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_non_power_of_two.txt

The first extension is supported on GeForce2 and above.
The second - on GeForce6 and above.

Tutorials for their usage may be found in the nVidia SDK.

To access them, if they are non-power-of-two (with the second extension), use the usual [0…1] coordinates, but with half-pixel correction, because the 1st texel’s center is not at 0, but at 0.5/width. The same goes for the last texel )) And for all the texels in between ))
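The half-pixel correction mentioned above can be written as a small helper (a sketch, assuming normalized coordinates; the names `shadowBuffer` and `bufferSize` are illustrative):

```glsl
// Fetch texel (x, y) of a width x height 2D texture at its center:
// texel centers sit at (i + 0.5) / size, not at i / size.
uniform sampler2D shadowBuffer;
uniform vec2 bufferSize;       // e.g. vec2(1024.0, 768.0)

vec4 fetchTexel(vec2 pixel)    // pixel in [0, size)
{
    vec2 uv = (pixel + 0.5) / bufferSize;
    return texture2D(shadowBuffer, uv);
}
```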

Well, I’ll try to explain it better: once I have the shadow maps, I render my scene. In the shader, I calculate the shadow value with:

vec3 shadowUV = projCoords[light].xyz / projCoords[light].q;
float mapScale = 1.0 / 1024.0;   // size of one texel (for PCF offsets)
float shadowColor = shadow2D(shadowMap, shadowUV).r;

Then I use shadowColor to modulate the lighting contribution for this light. If I have two lights casting shadows, I have two shadow maps, and so on.

But what if I split this into two steps? In the first step, I compute shadowColor and write it to gl_FragColor. In the second, I read this buffer and use that value to modulate the light contribution. In the first step I can potentially pack four shadows into the four RGBA components: each light’s (each shadow map’s) contribution is stored as an 8-bit value.
In the second pass I only need this buffer. This means that I have all the texture units (minus one for the first pass result, as it seems) free for texturing.
It’s a sort of deferred shading, where one step computes the shadows on each pixel and the other computes the pixel color.
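The second pass could then look up the packed buffer by screen position, e.g. via gl_FragCoord (a sketch only; `screenSize` and `lightColor[]` are illustrative names, not from the original post):

```glsl
// Second pass: read the packed shadow buffer at this pixel and
// modulate each light's contribution by its shadow factor.
uniform sampler2D shadowBuffer;  // RGBA written by the first pass
uniform vec2 screenSize;
uniform vec3 lightColor[4];

void main()
{
    // gl_FragCoord.xy already lies at the pixel center (x + 0.5, y + 0.5),
    // so dividing by the buffer size gives a correctly centered lookup
    vec2 uv = gl_FragCoord.xy / screenSize;
    vec4 shadow = texture2D(shadowBuffer, uv);
    vec3 color = vec3(0.0);
    for (int i = 0; i < 4; ++i)
        color += shadow[i] * lightColor[i];
    gl_FragColor = vec4(color, 1.0);
}
```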

Good, now it works :slight_smile: I’m now able to render up to 4 shadow maps in the first pass and use all of them in the second with only one texture unit, while still being able to use coloured lights :slight_smile: