Render-To-Texture (FBO) and Texel fetch problem

Hi all,

Here’s what I’m trying to do:

  1. Render the color buffer to a texture through an FBO.
    The colors encode the world positions
    of my vertices!
  2. Read back the texture.
    To do this I need to transform my
    vertex coordinates from world to window
    space and from window space to texture space
    (multiply by gl_ModelViewProjectionMatrix and shift
    the coordinates by 0.5 in the x and y directions).
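The world → window half of step 2 can be sketched numerically. This is only an illustration: the 4×4 matrix here is a hypothetical stand-in for whatever gl_ModelViewProjectionMatrix actually contains.

```python
# Sketch of the world -> window transform: multiply by a (hypothetical)
# model-view-projection matrix, then do the perspective divide.
def world_to_ndc(mvp, p_world):
    x, y, z = p_world
    v = (x, y, z, 1.0)  # homogeneous coordinates, w = 1
    clip = [sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4)]
    w = clip[3]
    # Perspective divide gives normalized device coordinates in [-1, 1].
    return (clip[0] / w, clip[1] / w, clip[2] / w)

# With an identity MVP the world point comes through unchanged.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
```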

My problem:
Reading back the values is not accurate: they
sometimes differ by as much as 0.02 per color channel.

What I’ve tried:

  1. Rendered a color gradient and checked the
    texel fetch. That is fine.
  2. Tried different internal representations
    of the texture (32-bit float, for instance).
  3. Tried different window sizes (power-of-two).
  4. Disabled fragment operations (alpha test, etc.).

It seems that I’m stuck for the moment, so does anyone have an idea? Perhaps it has something
to do with the calculation of the window coordinates?

Thanks in advance,

Kind regards,

Ropel

shift coordinates by .5 in x and y direction
Why is that? Did you try without this shift?

0.02 * 256 = 5.12

Well, even with 8-bit color it should be better. Strange.
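For reference, here is the arithmetic behind that estimate (nothing OpenGL-specific, just quantization step sizes; the comparison with a ~6-bit channel is my own aside, not something established in the thread):

```python
# An 8-bit channel stores values in steps of 1/255, so an error of 0.02
# is roughly five quantization steps -- more than 8-bit storage explains.
step_8bit = 1.0 / 255.0            # ~0.0039 per step
steps_in_error = 0.02 / step_8bit  # ~5.1 steps

# Aside: 0.02 is much closer to the step of a ~6-bit channel (as in a
# 16-bit 565 framebuffer, whose green channel quantizes in steps of 1/63).
step_6bit = 1.0 / 63.0             # ~0.0159 per step
```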

In GLSL the point (0,0) is at the center of the screen in normalized window coordinates. Since texture coordinates are clamped to 0 – 1, we need to convert from this window space to texture space like this:

vec2 pTexture = (pWindow+vec2(1.0))/2.0;

Furthermore, we need to land on the middle of a texel for the GLSL lookup, so we also add a really small half-texel offset:

pTexture = pTexture + 0.5*vec2(1.0/window_width, 1.0/window_height);
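The same conversion can be sanity-checked outside the shader; a minimal sketch (the 4×4 texture size here is just an arbitrary example):

```python
# Window coords in [-1, 1] -> texture coords in [0, 1], plus a
# half-texel offset so the lookup lands on a texel center.
def window_to_texture(p_window, width, height):
    s = (p_window[0] + 1.0) / 2.0
    t = (p_window[1] + 1.0) / 2.0
    return (s + 0.5 / width, t + 0.5 / height)

# For a 4x4 texture, the corner (-1, -1) maps to the center of texel (0, 0),
# i.e. (0.125, 0.125), and the screen center (0, 0) to (0.625, 0.625).
corner = window_to_texture((-1.0, -1.0), 4, 4)
```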

This works fine when I test this lookup on a color gradient texture.

Can you explain to me why I should do that shift?
It does not make sense to me yet.

Why don’t you just read back the z-values from the depth-buffer and use those to calculate the world-space positions? Or does this not work in your scenario?

Jan.

Thank you for your reply. That indeed would seem
a logical approach. However, I’d prefer to take the other approach, since then the performance depends on the number of vertices instead of the number of fragments. (Furthermore, it seems to be the most logical approach in my case.)

Regards,

Ropel

Hang on a second. You render to an FBO (the format is of little interest, but you want float in the end…), then you read back, converting from RGB -> window -> texture space?

Couldn’t you do this all in a vertex (or if needed, fragment) program, then just readback the buffer as-is with all the “magic” already done by the GPU?

Just an idea.

Thanks for the suggestion. Thing is, I use the colors in my renderbuffer as a lookup table to verify if my vertex should be rendered.

I managed to track down my problem, which turned out not to be in the RTT and lookup scheme.

Anyway thanks for the help!

Ok, I still have some problems here.
I really hope someone can help me out!

What I want to do is:

  1. RTT using an FBO:
    render to a texture with 8-bit RGB. I need only the color buffer, but for depth ordering we also need to render the depth buffer (24 bits).

  2. Readback using shader. (See my first post)

Since the values I read back are different, I tried to debug:

  1. Created an artificial texture to check whether reading a location from the buffer gives any errors. This works perfectly fine.

  2. To see whether my conversion to texture coordinates is OK, I rendered the 2D positions on top of the texture as white point primitives. The locations are also good.

So what remains is that there is somehow an error in the way the FBO renders the color buffer. I seem to have an error of about 0.02 per color channel, which I think is way too much, even with 8 bits of precision.

Does anyone have an idea why this rendering scheme is not exact?!

Dithering?

Ah, yes, thank you. I tried that earlier, but I guess I put the disable in the wrong place.
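For what it’s worth, here is a toy simulation of what dithering does to a flat color (illustrative only; real hardware dither patterns and bit depths differ). Quantizing a flat 0.5 gray to 63 levels through a 2×2 ordered-dither threshold perturbs individual pixels by a sizable fraction of a step, so neighboring pixels of the same input color read back differently:

```python
# Toy 2x2 ordered dither (Bayer thresholds), quantizing to 63 levels.
BAYER_2X2 = [[0.0,  0.5],
             [0.75, 0.25]]

def dither_quantize(value, x, y, levels=63):
    threshold = BAYER_2X2[y % 2][x % 2]  # per-pixel threshold in [0, 1)
    return int(value * levels + threshold) / levels

# Per-pixel deviation of a flat 0.5 gray after dithered quantization.
errors = [abs(dither_quantize(0.5, x, y) - 0.5)
          for y in range(2) for x in range(2)]
max_error = max(errors)  # larger than the plain 8-bit step of 1/255
```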