Problem with 'edge' of hill and fog

Hi, I have a visual artefact in my scene that I don’t quite understand. Some background information:

In the pre-render pass I write to three buffers:
-color
-normal
-position

Then in the render pass I render a full-screen quad, read those values, and calculate lighting and fog (based on the distance from the camera to the fragment).

Also, to detect the ‘sky’ I now check for zero-length normals. Further, I made the fog color the same as the sky color.
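Roughly, the deferred pass looks like this. This is only a simplified sketch, not my actual shader; the sampler/uniform names and the fog range are placeholders:

#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect colorTex;    // G-buffer: color
uniform sampler2DRect normalTex;   // G-buffer: normal
uniform sampler2DRect positionTex; // G-buffer: world-space position
uniform vec3 camPos;               // camera position
uniform vec3 fogColor;             // same color as the sky

void main()
{
    vec3 color    = texture2DRect(colorTex,    gl_FragCoord.xy).rgb;
    vec3 normal   = texture2DRect(normalTex,   gl_FragCoord.xy).rgb;
    vec3 position = texture2DRect(positionTex, gl_FragCoord.xy).rgb;

    // nothing was written to the normal buffer here, so this pixel is sky
    if (length(normal) == 0.0)
    {
        gl_FragColor = vec4(fogColor, 1.0);
        return;
    }

    // ... lighting ...

    // linear fog based on the distance from the camera to the fragment
    float dist = distance(camPos, position);
    float fogFactor = clamp(dist / 1000.0, 0.0, 1.0); // 1000.0 is an arbitrary fog range
    gl_FragColor = vec4(mix(color, fogColor, fogFactor), 1.0);
}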

Now the following artefact shows on the edges of the hills of my terrain:

[ATTACH=CONFIG]436[/ATTACH]

(Click on the image to see the larger version. You can see in the middle that there is a hard line where the edge of the hill ends and the sky begins. With the ‘non-deferred’ rendering method I did not see this artefact.)

It kind of looks like there is no fog on the edge, or that the fog is combined with a black color… in other words: I’m not sure what is happening there, so I also don’t know how to fix it.

Does anyone have an idea what it could be? Or maybe even a way to fix it?

Thanks!

Are you using nearest/point sampling when reading from those textures?

Yes I am, is that good or bad?

Yes, it’s good.
When sampling on the edge of the mountain it’s important not to read the pixels that belong to the sky, and with linear sampling you would do just that.
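To make that concrete, here is a toy illustration (the texel values are made up, not from your scene) of what a LINEAR fetch from a position buffer can return right on a terrain/sky edge:

void main()
{
    vec3 terrainPos = vec3(120.0, 35.0, -400.0); // position texel written by the terrain
    vec3 skyPos     = vec3(0.0);                 // texel that was never written (cleared to zero)

    // A LINEAR fetch that lands halfway between the two texels returns their blend:
    vec3 blended = mix(terrainPos, skyPos, 0.5); // (60.0, 17.5, -200.0)

    // 'blended' is a position that exists nowhere in the scene, so any fog or lighting
    // computed from it is wrong. NEAREST always returns one texel or the other.
    gl_FragColor = vec4(blended * 0.001, 1.0);   // scaled only so there is something to see
}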

Thanks for the info. I did make sure the textures have the same size as the viewport/window, so it should be a one-to-one pixel readout. But since the filtering of the buffers (as textures for the quad) is not the problem, I’m not sure how to continue investigating.
Any idea what else could be the cause, or how I could find out what goes wrong?

Thanks!

Well, it could be anything, but when I have similar problems I usually render the values from those buffers to get a visual view of their content.
result = max( vec3( 0.0 ), normal.xyz );
…and so on.
But you probably already do that.

Yeah, I’m doing that, but it’s really puzzling. I tried finding the edge problem in one of the source buffers, but if I render those buffers as color values (making sure they are in range), the hills look fine; there are no weird anomalies around the hill edges.
I also thought the buffers might be misaligned, but even if I combine values from different buffers (like a normal and a position) there are still no anomalies around the hill edges.

What I found is that the weird thing is only visible in the ‘volume fog’ that I made. It samples from a 3D texture about 25 times to simulate moving fog.

But this value is based only on the camera position (a uniform) and the position of the fragment.
I also use a distance variable, which is

float dist = distance(camPos,position);

Now the weird thing is: the dist variable doesn’t show anything strange around the hill edges, and the position values themselves are fine too, but the totalFog variable does show the artefact. Yet totalFog IS based only on the camera position and the position of the fragment. I really don’t understand how that is possible.
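For reference, the totalFog calculation is roughly this. It’s a simplified sketch; fogTex, the step count, the scale factors and the time offset are placeholders for what I actually use:

uniform sampler3D fogTex; // 3D noise texture used for the moving fog
uniform vec3 camPos;
uniform float time;

varying vec3 position;    // in the real shader this is read from the position buffer

float volumeFog(vec3 fragPos)
{
    const int numSteps = 25;
    vec3 stepVec = (fragPos - camPos) / float(numSteps);

    float totalFog = 0.0;
    vec3 p = camPos;
    for (int i = 0; i < numSteps; i++)
    {
        p += stepVec;
        // sample the animated 3D noise along the camera-to-fragment ray
        totalFog += texture3D(fogTex, p * 0.01 + vec3(0.0, 0.0, time * 0.05)).r;
    }
    return totalFog / float(numSteps);
}

void main()
{
    gl_FragColor = vec4(vec3(volumeFog(position)), 1.0);
}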

In the following pic you can see ‘totalFog’ encoded in the red channel and ‘dist’ in the green channel. Both are based on camPos and position, but dist is fine and totalFog isn’t.

[ATTACH=CONFIG]438[/ATTACH]

ANY idea what could cause this? Or any idea how I could further investigate? Really stuck here…

thanks!

What texture format do you have for your position-buffer?

The format I use is GL_RGB32F_ARB. I use OSG, so this is what I use:

osg::TextureRectangle* positionsRect = new osg::TextureRectangle;
positionsRect->setTextureSize(screenWidth, screenHeight);
positionsRect->setInternalFormat(GL_RGB32F_ARB);
positionsRect->setSourceFormat(GL_RGB);
positionsRect->setSourceType(GL_FLOAT);
positionsRect->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
positionsRect->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);

Well, it seems that you know what you’re talking about.
Just one question… how do you test that the normal is zero?

I use this:

if (length(normal)==0) //sky
{
	gl_FragColor = vec4(0.90, 0.82, 0.47, 1);
	return;
}

But I noticed the strange effects are also on hilltops that are not on the edge with the sky, but are just hills in the terrain, so it doesn’t seem to be related to the terrain/sky transition…

[QUOTE=STTrife;1251368]I use this:

if (length(normal)==0) //sky
{
    gl_FragColor = vec4(0.90, 0.82, 0.47, 1);
    return;
}

But I noticed the strange effects are also on hilltops that are not on the edge with the sky, but are just hills in the terrain, so it doesn’t seem to be related to the terrain/sky transition…[/QUOTE]

Maybe it won’t solve your problem, but generally, when comparing a floating-point number to zero, it is better to test like this:

#define EPSILON 0.0001 // any small value greater than one ULP

// no need for abs(...) since the result of length() is always non-negative
if (length(normal) < EPSILON) // sky
{
    gl_FragColor = vec4(0.90, 0.82, 0.47, 1);
    return;
}
else
{
    // ...
    // set another color
    // ...
}

Is your G-buffer MSAA? How are you applying fog: per-sample or per-pixel?

Well, there should be no multisampling. I use OSG for rendering, and I read this in an OSG mailing list:

“You have to explicitly enable multisampling for FBOs when setting up the RTT camera, so it should be off by default.”

And indeed, when I add the texture as a render target there are arguments for how many samples it should take, and by default they are set to zero. If I set them to something bigger, I indeed get big problems on the edges of the hills and the sky (it actually starts mixing them).

The fog is per-pixel, calculated in the deferred stage. It has both linear and 3D fog. The linear fog is based on ‘dist’, which is calculated with distance(camPos, position), and that works fine. The 3D fog is also calculated from camPos and position, but this time they are used to sample n times from a 3D texture, and then the edges of the hills give the wrong effect!
That is so weird. If you want, I can post the shader source here. It worked fine in a non-deferred setup. Also, it seems strange that I can’t find the error in any of the buffers, or in the linear fog.

OK, scratch that idea then. I could see how you might produce this with MSAA, but it sounds like that’s not it.

Exactly my thought; it almost HAS to be something like that, because it happens at the edges of objects. Any other ideas or good ways to look into it are appreciated!

I finally found it. For the 3D texture that I use for fog, I set the min filter to LINEAR (NEAREST is also fine). OpenSceneGraph had set it to GL_LINEAR_MIPMAP_LINEAR by default.
Can someone explain what exactly the effect of GL_LINEAR_MIPMAP_LINEAR is? I thought it was only about how a single fragment samples from the texture, but it seems to also affect how the shader works in general, and to somehow involve adjacent pixels. In my (simple) view, when rendering the quad for deferred rendering, the shader program runs exactly once for each pixel, so I don’t see how I could get those artifacts. Can anyone explain this, or link me to a relevant source?

I can help you out a little, having hit/fixed similar problems before. I’ll try to keep this high-level and intuitive so it’s easier to get the gist of it.

When sampling from a texture with a texture access function that uses default LOD selection, and the texture is being minified, the LINEAR_MIPMAP_LINEAR min filter tells the shader to interpolate a result within each of the one or two “closest match” MIP levels, and then to blend those results together based on where the “ideal” MIP level lies between those “closest match” levels.

Now, where “adjacent pixels” comes in is in the computation of the “ideal” and “closest match” levels (this is the default LOD selection piece). Think of neighboring pixels as being processed in parallel on the GPU. When you request default LOD selection, the GPU uses texcoord deltas between adjacent pixels to estimate the texture coordinate derivatives w.r.t. the screen pixels. If the texture coordinates you’re feeding in are not continuous across adjacent pixels, you get huge texture derivatives, and that messes up the default LOD selection.
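To make the “adjacent pixels” part concrete, here is a rough sketch of what default LOD selection boils down to. It is not the exact spec formula, and texSize/texCoord are just placeholder names:

uniform vec2 texSize;    // texture dimensions in texels
varying vec2 texCoord;

void main()
{
    // The derivatives are estimated from texcoord differences between adjacent pixels.
    vec2 dx = dFdx(texCoord * texSize); // texcoord change across one pixel horizontally
    vec2 dy = dFdy(texCoord * texSize); // texcoord change across one pixel vertically

    // Roughly: the LOD is the log2 of the largest rate of change.
    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

    // A discontinuity in texCoord between neighboring pixels gives huge dx/dy,
    // hence a huge lod, i.e. a very coarse (heavily averaged) MIP level.
    gl_FragColor = vec4(vec3(lod * 0.1), 1.0); // visualize the selected LOD
}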

Also note that you don’t need MIPmaps for this problem to occur. Anisotropic texture filtering uses texture derivatives as well and can cause similar problems even when there are no MIPs.

Another example where this “discontinuous texcoords causing huge texture derivatives and artifacts” problem can occur is when looking up into cascaded shadow maps and you’re on a split between two maps. You end up having to take some special steps there to avoid the discontinuity.
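If you do want to keep MIPmaps (or aniso) on a texture that you sample with potentially discontinuous coordinates, one common workaround is to take LOD selection out of the implicit path. A rough sketch in GLSL 1.30+ syntax; fogTex/fogCoord are placeholder names, not something specific to your engine:

#version 130

uniform sampler3D fogTex; // the 3D fog texture
in vec3 fogCoord;         // whatever coordinate you would normally pass to texture()
out vec4 fragColor;

void main()
{
    // Explicit LOD: always sample MIP level 0, so bogus derivatives caused by
    // discontinuous coordinates can never push the lookup onto a coarse MIP level.
    float fogA = textureLod(fogTex, fogCoord, 0.0).r;

    // Or supply the gradients yourself (zero gradients also land on the base level)
    // instead of letting the hardware derive them from adjacent pixels.
    float fogB = textureGrad(fogTex, fogCoord, vec3(0.0), vec3(0.0)).r;

    fragColor = vec4(vec3(fogA), 1.0);
}

Disabling MIPmapping on that texture, as you did with LINEAR/NEAREST, sidesteps all of this; the sketch above only matters if you want to keep MIPs.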

Hm, trying to understand it here; most of it seems to make sense. But about the part on adjacent pixels: the texture coordinates you talk about, are those the UV parameters on the primitive, or are those the values that I pass to the texture3D function in my shader?
If it’s in the shader, what if I don’t use the texture3D function in all pixels (using an if statement)?
But the problem is starting to make sense: so in the case of mipmapping or anti-aliasing, you cannot just assume that your shader program only uses information from a single pixel; calls to the samplers also depend on the samplers called in adjacent pixels (or something like that)?
Thanks for helping me understand this, interesting stuff…

The latter.

[QUOTE=STTrife]If it’s in the shader, what if I don’t use the texture3D function in all pixels (using an if statement)?[/QUOTE]

That can be a problem too. Let me just point you to two good blog posts on this:

[QUOTE=STTrife]But the problem is starting to make sense: so in the case of mipmapping or anti-aliasing, you cannot just assume that your shader program only uses information from a single pixel; calls to the samplers also depend on the samplers called in adjacent pixels (or something like that)?[/QUOTE]

You got it.