Different results when rendering to FBO or the Default Framebuffer

Hello,

My fragment shader gets different values depending on whether I render to an FBO (with either a texture or a renderbuffer attached) or to the default framebuffer. I have set up two draw commands that are identical except for the framebuffer they render to. I have also boiled it down to a vertex array with a single triangle; my vertex shader is a pass-through (not even a projection matrix), and my fragment shader just writes/looks at gl_FragCoord.z.
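
Boiled down even further, the shaders look roughly like this (a simplified sketch; attribute and output names are just placeholders, not my exact code):

    /* Pass-through vertex shader and a fragment shader that just exposes
     * gl_FragCoord.z -- simplified sketch. */
    static const char *vsSrc =
        "#version 330 core\n"
        "layout(location = 0) in vec4 aPosition;   // already in clip space\n"
        "void main() { gl_Position = aPosition; }\n";

    static const char *fsSrc =
        "#version 330 core\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(gl_FragCoord.z); }\n";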

However, gl_FragCoord.z still differs between the two cases. Even the interpolated value of a vertex attribute differs. To be exact, the floats in the FS differ in up to 5(!) bits of the mantissa.

I have read the specs to try to find out whether there are special rasterizer options that depend on the render target, but I can’t find any.

My graphics card is an ATI HD5870 with the latest (Windows) drivers. I don’t know if this problem also occurs on Nvidia cards or under Linux.

How do you determine that the two resulting values are different? For example, you say that gl_FragCoord gets different results; what process do you use to check that?

Also, are you sure that you’re rendering to the exact same size of framebuffer? How are you creating your window?

First, I forgot to mention that I use a 32-bit depth buffer for my OpenGL application. I set pfd.cDepthBits = 32 when creating my window, and this also checks out when I query it with glGetFramebufferAttachmentParameteriv().
For the renderbuffer and the texture I render to, I use GL_DEPTH_COMPONENT32F as the internal format. It doesn’t work with GL_DEPTH_COMPONENT32 (without the F) either.
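
Boiled down, the FBO side looks something like this (placeholder names, error checking and colour attachment setup omitted; the size matches my window):

    GLuint fbo = 0, depthRb = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, 624, 442);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    /* Sanity check: how many depth bits does the attachment really have?
     * (For the default framebuffer the same query works with GL_DEPTH.) */
    GLint depthBits = 0;
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);   /* reports 32 */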

Yes, gl_FragCoord.z is different; x and y are the same.

How do I determine that? In two ways:

  1. First, by glReadPixels(…) or glGetTexImage(…) after rendering my triangle. But I wasn’t completely sure whether the value in the FS gets converted to some other format on its way into the framebuffer, so I also did the following:
  2. When rendering to the default framebuffer, I transmit the information one bit at a time. First I reinterpret the variable in question (a float) as a uint with the GLSL function floatBitsToUint(). Then I paint the pixel under inspection either red or green, depending on whether bit number bitpos is 0 or 1 (bitpos is a uniform variable).
  3. When rendering to the FBO, all I can do is check for exact equality and write two completely different values to the FBO depending on the outcome. After the draw call I can then compare the value in the FBO via glReadPixels(…) again. The equality check is again done after casting to uint (you never know…). A sketch of the bit-painting shader and the read-back is right after this list.
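
To give a better idea, here is a sketch of that bit-painting fragment shader and the FBO read-back (simplified; bitpos is the uniform mentioned above, everything else is a placeholder):

    /* Paints the pixel green if bit 'bitpos' of gl_FragCoord.z is 1, red if 0. */
    static const char *bitProbeFs =
        "#version 330 core\n"
        "uniform uint bitpos;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    uint bits = floatBitsToUint(gl_FragCoord.z);\n"
        "    bool set  = ((bits >> bitpos) & 1u) == 1u;\n"
        "    fragColor = set ? vec4(0.0, 1.0, 0.0, 1.0)   // bit is 1 -> green\n"
        "                    : vec4(1.0, 0.0, 0.0, 1.0);  // bit is 0 -> red\n"
        "}\n";

    /* FBO path: read the depth of the pixel under inspection back as a raw
     * float and compare bit patterns, not float values. */
    GLint px = 0, py = 0;            /* the pixel under inspection */
    GLfloat zFromFbo = 0.0f;
    glReadPixels(px, py, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &zFromFbo);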

The result: the value of the variable in the FS and the value that ends up in the framebuffer are identical down to the last bit. That means the FS itself gets different values depending on the rendertarget.

Well, that was exactly my last hope, but I think I have checked all the possibilities.
I don’t change the viewport between draw calls. When I glGetIntegerv() the viewport directly after window creation I get (624, 442), which are exactly the dimensions I use when creating the renderbuffer or the texture. These numbers also match when I count the pixels of the drawable area of my window.
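
For completeness, that check is nothing fancier than:

    GLint vp[4] = { 0, 0, 0, 0 };
    glGetIntegerv(GL_VIEWPORT, vp);   /* right after window creation this
                                         reports 0, 0, 624, 442 here */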

Try flipping screen-space Y upside down when drawing to one of the rendertargets. And flip again while you’re comparing.
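
One quick way to try it, assuming a pass-through vertex shader like yours, or (if you have GL 4.5 / ARB_clip_control) by moving the window origin instead; placeholder names, of course:

    /* Option A: flip clip-space Y in the vertex shader for one of the passes:
     *     gl_Position = vec4(aPosition.x, -aPosition.y, aPosition.zw);
     *
     * Option B (GL 4.5 / ARB_clip_control): move the window origin instead. */
    glClipControl(GL_UPPER_LEFT, GL_NEGATIVE_ONE_TO_ONE);
    /* ... draw to one of the two rendertargets ... */
    glClipControl(GL_LOWER_LEFT, GL_NEGATIVE_ONE_TO_ONE);   /* back to the default */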

If that produces identical results, then we can talk about why. :wink:

[QUOTE=arekkusu;1279101]Try flipping screen-space Y upside down when drawing to one of the rendertargets. And flip again while you’re comparing.

If that produces identical results, then we can talk about why. ;)[/QUOTE]

What the …?! That worked!

I’ve spent more than 3 days on this and now it’s about such an obscure glClipControl() option?
Even now that I know the solution I can’t find anything on the web.

You have to tell me why it worked! Is it a rounding error because Windows and OpenGL use different points of origin internally?

So, yeah. You’re running into “stupid coordinate system tricks”. Consider these two bits of info:

  1. GL prefers the mathematical “bottom left” origin, while practically every other image API, windowing system, and hardware (CRT scan out, etc) uses upper left. So, internally to the driver, there might be some Y-flipping going on with some drawable surfaces. Exactly how and for which surfaces is “implementation dependent”, so don’t try to guess.

  2. Read the GL spec, about rasterization and attribute evaluation “at pixel center”. How does that really work? There’s GL_SUBPIXEL_BITS of precision, which somehow affects attribute quantization during rasterization. Well, let’s say there are 8 subpixel bits, and your Z coordinate is going to be interpolated “at pixel center”. The actual Y location might change by 1/256 pixel depending on the surface Y origin, producing slightly different rasterization results. Fun, right?!
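
To make that concrete, here’s a toy illustration. It assumes, purely for the sake of the example, 8 subpixel bits and a rasterizer that truncates window coordinates to that grid; real hardware differs in the details, but the effect is the same flavor:

    #include <math.h>
    #include <stdio.h>

    /* Snap a window-space coordinate to an n-bit subpixel grid (by truncation,
     * purely for illustration). */
    static double snap(double v, int subpixel_bits)
    {
        double scale = (double)(1 << subpixel_bits);   /* 256 for 8 bits */
        return floor(v * scale) / scale;
    }

    int main(void)
    {
        const int    bits   = 8;       /* assumed subpixel bits                 */
        const double height = 442.0;   /* window height from this thread        */
        const double y      = 123.3;   /* some Y coordinate during raster setup */

        double direct  = snap(y, bits);                    /* bottom-left origin */
        double flipped = height - snap(height - y, bits);  /* top-left, undone   */

        printf("direct : %.8f\n", direct);    /* 123.29687500 */
        printf("flipped: %.8f\n", flipped);   /* 123.30078125 */
        /* Up to 1/256 of a pixel apart -- enough to nudge the interpolated
         * gl_FragCoord.z in its last few mantissa bits. */
        return 0;
    }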

If you like, you can do another experiment. Try slowly shifting your triangle around in tiny subpixel increments, so the edges crawl along, lighting up different pixels. At a certain subpixel location, the edge just crosses pixel center, and satisfies the rasterization fill rules to light up a pixel. That exact same location might not light up the same pixel, with a surface using an (opaque to you) different Y origin. Good times!

Of course, if exact rasterization is really critical to you, you can always introspect the rasterization behavior up front, and then set up Y flips appropriately.

[QUOTE=arekkusu;1279105]
2) Read the GL spec, about rasterization and attribute evaluation “at pixel center”. How does that really work? There’s GL_SUBPIXEL_BITS of precision, which somehow affects attribute quantization during rasterization. Well, let’s say there are 8 subpixel bits, and your Z coordinate is going to be interpolated “at pixel center”. The actual Y location might change by 1/256 pixel depending on the surface Y origin, producing slightly different rasterization results. Fun, right?![/QUOTE]
In the latest (4.5) OpenGL specification, GL_VIEWPORT_SUBPIXEL_BITS is mentioned exactly once, and only in passing in a very short paragraph. Querying that symbol doesn’t even seem to work in my GL environment.
I feel like this should be more prominent. D3D has the rasterisation rules nailed down more tightly, I think. Link
Nevertheless, what’s really strange, imo, is that the pixel centre isn’t representable exactly. If anything should be exact in the rasterisation process, it should be the centre of the pixel being rasterised, right?

[QUOTE=arekkusu;1279105]
Of course, if exact rasterization is really critical to you, you can always introspect the rasterization behavior up front, and then set up Y flips appropriately.[/QUOTE]

Oh, it’s just inconvenient that this bit of inaccuracy disturbs the nice, slick algorithm I had in mind. I could also just redraw the final image with a cleared depth buffer.
However, since deferred shading is so widespread in the videogame field, and it does need this kind of precision, I wonder why this problem hasn’t come up more often.

Nevertheless, thank you very much for your help.

PS:
An afterthought: I said there were up to 5 bits of difference in the result. Those 5 bits actually make sense when you think about it: a float has a 23-bit mantissa; subtract 10 bits for my window size (~1000) and another 8 (your number) for the subpixel bits, and you get exactly 5!

I’d say it’s because deferred shading implementations usually don’t touch the default framebuffer until the very end of the frame.