Compute Shader MSAA

I’ve run into a problem when doing MSAA in a compute shader with a tiled deferred shading setup. Basically, I’m trying to implement it from the same article everyone else uses, Deferred Rendering for Current and Future Rendering Pipelines, and I seem to be doing something wrong. I’m not resolving the aliasing yet; I’m just trying to detect edges. I have a really simple setup: I start by rendering the geometry into my FBO with multisampled textures, and I calculate the z derivative using dFdx/dFdy, like this:

Prepass fragment shader:

#version 450 

#define SCREEN_WIDTH 1280.0f
#define SCREEN_HEIGHT 720.0f

//Vertex to fragment values
in vec3 v2fNormal;

uniform vec3 diffuseMaterial;

uniform mat4 inverseProjectionMatrix;
uniform mat4 inverseViewMatrix;

layout(location = 0) out vec3 albedo;
layout(location = 1) out vec3 normal;
layout(location = 3) out vec3 positionZGrad;

vec3 unProject(vec2 fragmentPos, float depth)
{
    // Window coordinates -> NDC, then through the inverse projection;
    // the divide by w brings the point into view space
    vec4 point = inverseProjectionMatrix * vec4(fragmentPos.x / SCREEN_WIDTH * 2.0f - 1.0f,
                                                fragmentPos.y / SCREEN_HEIGHT * 2.0f - 1.0f,
                                                2.0f * depth - 1.0f,
                                                1.0f);

    return point.xyz / point.w;
}

vec3 reconstructPosition(vec3 pixelPos)
{
    return unProject(pixelPos.xy, pixelPos.z);
}

void main()
{
    albedo = diffuseMaterial;
    normal = v2fNormal;

    vec3 viewPosition = reconstructPosition(gl_FragCoord.xyz);

    // Screen-space derivatives of the view-space depth
    float dx = dFdxCoarse(viewPosition.z);
    float dy = dFdyCoarse(viewPosition.z);
    positionZGrad = vec3(dx, dy, 0.0f);
}

So I’m in view space, and the output from positionZGrad looks correct: if I render it with positionZGrad = vec3(abs(dx), abs(dy), 0.0f), it looks like this:

http://i.stack.imgur.com/lw2NQ.png (I kept getting an error when I tried to upload the images)

Now, the first thing I do in my compute shader is fetch all the samples and compute their values, like this:


....
SurfaceData surfaceSamples[MSAA_SAMPLES];
ComputeSurfaceDataFromGBufferAllSamples(pixelPosition, surfaceSamples);
....
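
I’ve elided the surrounding code; for context, the entry point around it looks roughly like this, with the functions below declared above it (the sample count, tile size, and image binding here are placeholders rather than my exact values, and SurfaceData just holds the fields those functions fill in):

#version 450

#define MSAA_SAMPLES 4    // placeholder sample count
#define TILE_SIZE 16      // placeholder work group size

layout(local_size_x = TILE_SIZE, local_size_y = TILE_SIZE) in;

layout(binding = 0, rgba8) uniform writeonly image2D finalImage;

struct SurfaceData
{
    vec3 positionView;
    vec3 positionViewDX;
    vec3 positionViewDY;
    vec3 normal;
    vec3 albedo;
};

void main()
{
    // One invocation per pixel; gl_GlobalInvocationID maps to the pixel position
    ivec2 pixelPosition = ivec2(gl_GlobalInvocationID.xy);

    SurfaceData surfaceSamples[MSAA_SAMPLES];
    ComputeSurfaceDataFromGBufferAllSamples(pixelPosition, surfaceSamples);

    // ... lighting and the per-sample shading check happen here ...
}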

Using these functions:

SurfaceData ComputeSurfaceDataFromGBufferSamples(ivec2 pixelPosition, uint sampleIndex)
{
    // Fetch per-sample data from the MSAA textures
    float depthFloat = texelFetch(depthTexture, pixelPosition, int(sampleIndex)).x;
    vec3 normal = texelFetch(normalTexture, pixelPosition, int(sampleIndex)).xyz;
    vec3 albedo = texelFetch(albedoTexture, pixelPosition, int(sampleIndex)).xyz;
    vec3 positionZGrad = texelFetch(positionZGradTexture, pixelPosition, int(sampleIndex)).xyz;

    vec2 gbufferDim = vec2(SCREEN_WIDTH, SCREEN_HEIGHT);

    vec2 screenPixelOffset = vec2(2.0f, -2.0f) / gbufferDim;
    vec4 positionScreen = vec4((pixelPosition.x + 0.5f) / SCREEN_WIDTH * 2.0f - 1.0f,
                               (pixelPosition.y + 0.5f) / SCREEN_HEIGHT * 2.0f - 1.0f,
                               2.0f * depthFloat - 1.0f,
                               1.0f);
    vec4 positionScreenX = positionScreen + vec4(screenPixelOffset.x, 0.0f, 0.0f, 0.0f);
    vec4 positionScreenY = positionScreen + vec4(0.0f, screenPixelOffset.y, 0.0f, 0.0f);

    vec4 positionView = inverseProjectionMatrix * positionScreen;
    positionView /= positionView.w;

    vec4 positionViewDX = (inverseProjectionMatrix * positionScreenX - positionView);
    positionViewDX /= positionViewDX.w;

    vec4 positionViewDY = (inverseProjectionMatrix * positionScreenY - positionView);
    positionViewDY /= positionViewDY.w;

    positionViewDX.z += positionZGrad.x;
    positionViewDY.z += positionZGrad.y;

    SurfaceData data;

    data.positionView = positionView.xyz;
    data.positionViewDX = positionViewDX.xyz;
    data.positionViewDY = positionViewDY.xyz;
    data.normal = normal;
    data.albedo = albedo;

    return data;
}

void ComputeSurfaceDataFromGBufferAllSamples(ivec2 pixelPosition, out SurfaceData surface[MSAA_SAMPLES])
{
    for (uint i = 0; i < MSAA_SAMPLES; ++i)
        surface[i] = ComputeSurfaceDataFromGBufferSamples(pixelPosition, i);
}
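
For completeness, these read the G-buffer as multisampled textures; the sampler uniforms are declared along these lines (the binding points are just an example, not necessarily what I use):

// Multisampled G-buffer inputs; binding points are illustrative
layout(binding = 0) uniform sampler2DMS depthTexture;
layout(binding = 1) uniform sampler2DMS normalTexture;
layout(binding = 2) uniform sampler2DMS albedoTexture;
layout(binding = 3) uniform sampler2DMS positionZGradTexture;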

And then, after doing all of that, I wanted to see if it was working, so I check whether the pixel needs per-sample shading like this:

bool perSampleShading = RequiresPerSampleShading(surfaceSamples);
if (perSampleShading)
    imageStore(finalImage, pixelPosition, vec4(1.0f, 0.0f, 0.0f, 1.0f));

Using this function:

// Check if a given pixel can be shaded at pixel frequency (i.e. they all come from
// the same surface) or if they require per-sample shading
bool RequiresPerSampleShading(SurfaceData surface[MSAA_SAMPLES])
{
    bool perSample = false;

    const float maxZDelta = abs(surface[0].positionViewDX.z) + abs(surface[0].positionViewDY.z);
    const float minNormalDot = 0.99f;        // Allow ~8 degree normal deviations

    for (uint i = 1; i < MSAA_SAMPLES; ++i)
    {
        // Using the position derivatives of the triangle, check if all of the sample depths
        // could possibly have come from the same triangle/surface
        perSample = perSample || abs(surface[i].positionView.z - surface[0].positionView.z) > maxZDelta;

        // Also flag places where the normal is different
        perSample = perSample || dot(surface[i].normal, surface[0].normal) < minNormalDot;
    }
    return perSample;
}

This is pretty much the same as the Lauritzen example: just calculate whether there are deviations in the normals across a pixel’s MSAA samples, and check the depths for discontinuities. The only problem is that the output I get isn’t right at all:

http://imgur.com/cPMaU0r

An entire surface gets flagged, which is weird, and as I move around, different surfaces become flagged; I’m unsure where the problem lies. All my view-space normals/colors/positions are constructed correctly and work elsewhere. However, even if I only check where the normals differ, I still get the same output, which is strange since I’m just rendering 2 cubes and all the surface normals are 45 degrees apart from each other. Also, when I went through the Lauritzen code, he comments that he’s using “view space” positions; however, when I ported his functions to OpenGL I had to divide by w to get the correct view-space positions, so I’m guessing my z derivatives aren’t being used correctly.
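
For reference, here’s my best guess at what a direct GLSL transcription of his ordering would look like: unproject (and divide by w) each offset position first, then subtract, with the z gradient applied to the view-space z before unprojection rather than added afterwards. The ray-scaling helper is my attempt at an equivalent of his ComputePositionViewFromZ, so treat it as a sketch; I haven’t verified it against his HLSL line by line:

// My guess at a GLSL equivalent of the paper's ComputePositionViewFromZ:
// build the view-space ray through the given NDC xy and scale it so its z
// matches the requested view-space z, instead of unprojecting an NDC depth.
vec3 ComputePositionViewFromZ(vec2 positionScreen, float viewSpaceZ)
{
    // Unproject a point on the near plane to get the ray through this pixel
    vec4 ray = inverseProjectionMatrix * vec4(positionScreen, -1.0f, 1.0f);
    ray /= ray.w;

    // Scale the ray so its view-space z equals viewSpaceZ
    return ray.xyz * (viewSpaceZ / ray.z);
}

// This would replace the DX/DY math inside ComputeSurfaceDataFromGBufferSamples:
vec4 positionViewH = inverseProjectionMatrix * positionScreen;
vec3 positionView = positionViewH.xyz / positionViewH.w;

vec3 positionViewDX = ComputePositionViewFromZ(positionScreenX.xy, positionView.z + positionZGrad.x) - positionView;
vec3 positionViewDY = ComputePositionViewFromZ(positionScreenY.xy, positionView.z + positionZGrad.y) - positionView;

The important difference is that the perspective divide happens before the subtraction, whereas in my version above I divide the difference by its own w, which I suspect is part of why the derivatives come out wrong.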

EDIT 1:

There is definitely something wrong with the screen-space derivatives. I’ve done a simple test like this:


// Check if a given pixel can be shaded at pixel frequency (i.e. they all come from
// the same surface) or if they require per-sample shading
bool RequiresPerSampleShading(SurfaceData surface[MSAA_SAMPLES])
{
    bool perSample = false;

    const float maxZDelta = abs(surface[0].positionViewDX.z) + abs(surface[0].positionViewDY.z);
    const float minNormalDot = 0.99f;        // Allow ~8 degree normal deviations

    for (uint i = 1; i < MSAA_SAMPLES; ++i)
    {
        // Using the position derivatives of the triangle, check if all of the sample depths
        // could possibly have come from the same triangle/surface
        // perSample = perSample || abs(surface[i].positionView.z - surface[0].positionView.z) > maxZDelta;

        // Also flag places where the normal is different
        // perSample = perSample || dot(surface[i].normal, surface[0].normal) < minNormalDot;

        if (surface[i].normal != surface[0].normal)
            return true;
    }

    return perSample;
}

It returns this output: http://i.imgur.com/aJ5ZYRr.png

So yeah, the problem seems to be with the derivatives.
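
To make it easier to see which of the two tests fires, the check can be split so each condition reports separately; here’s a debug variant (just a rearrangement of the function above):

// Debug variant: 1 = depth-delta test fired, 2 = normal test fired, 0 = neither,
// so the caller can write a different color per cause.
uint PerSampleShadingCause(SurfaceData surface[MSAA_SAMPLES])
{
    const float maxZDelta = abs(surface[0].positionViewDX.z) + abs(surface[0].positionViewDY.z);
    const float minNormalDot = 0.99f;

    for (uint i = 1; i < MSAA_SAMPLES; ++i)
    {
        if (abs(surface[i].positionView.z - surface[0].positionView.z) > maxZDelta)
            return 1u;
        if (dot(surface[i].normal, surface[0].normal) < minNormalDot)
            return 2u;
    }

    return 0u;
}

Writing red for 1 and green for 2 with imageStore then shows whether the depth delta or the normal comparison is responsible for a flagged pixel.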

I’ve also tried loading the Sponza model and checking the output, and I think something is off with the multisampling. If I just check whether a fragment’s sample normals are not exactly equal, I get this output: http://i.imgur.com/zs91Rz5.png which actually isn’t terrible, but it’s not accurate either. And when I use Lauritzen’s thresholded test,

perSample = perSample || dot(surface[i].normal, surface[0].normal) < minNormalDot;

I get this output: http://i.imgur.com/0bGx5C2.png which says that flat surfaces like walls or the floor have different normals. That should be impossible, and I have no clue what to make of it.
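
One thing that might matter here: != on vectors in GLSL is an exact component-wise comparison, so it only fires when the fetched normals differ bit-for-bit, while the dot() threshold also catches tiny interpolation differences between samples. An epsilon comparison would sit between the two tests; something like this, where the tolerance is just a guess:

// Epsilon-based alternative to the exact != comparison; the tolerance would
// need tuning per scene.
bool NormalsDiffer(vec3 a, vec3 b)
{
    const float eps = 1e-3f;
    return any(greaterThan(abs(a - b), vec3(eps)));
}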