GLSL, reading wrong value inside a fragment shader for a bound depth texture



elect
03-26-2015, 06:36 AM
I am applying a slightly modified version of the classic depth peeling algorithm: I render all the opaque objects first and then use their depth as the minimum depth, because since they are opaque, any fragment deeper than them can be discarded outright.
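Conceptually, the peel test in the transparent passes then gains one extra bound. A minimal sketch of that test (prevDepth and opaqueDepth are hypothetical names for the previous-layer depth and the opaque depth reads):

    // depth of the current transparent fragment
    float d = gl_FragCoord.z;

    // classic peeling: skip fragments at or in front of the previously peeled layer;
    // the modification: also skip anything hidden behind the nearest opaque surface
    if (d <= prevDepth || d > opaqueDepth)
        discard;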
I first tested it on a small test case (https://github.com/elect86/depthPeeling/tree/master/DepthPeeling/src/depthPeeling/dpGl3OfficialOpaque) and it works flawlessly.

Now I am applying this algorithm to my main application, but for some unknown reason it doesn't work and I am going crazy. The main problem is that I keep reading the value 0 from the opaque depth texture bound in the fragment shader of the next stage.

To sum up, this is the FBO for the opaque stuff:


// textures and FBO for the opaque pass
opaqueDepthTexture = new int[1];
opaqueColorTexture = new int[1];
opaqueFbo = new int[1];

gl3.glGenTextures(1, opaqueDepthTexture, 0);
gl3.glGenTextures(1, opaqueColorTexture, 0);
gl3.glGenFramebuffers(1, opaqueFbo, 0);

// depth attachment, allocated as a rectangle texture
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);

gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_DEPTH_COMPONENT32F, width, height, 0,
        GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);

// color attachment
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueColorTexture[0]);

gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_RGBA, width, height, 0,
        GL3.GL_RGBA, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);

// attach both to the FBO
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);

gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_DEPTH_ATTACHMENT, GL3.GL_TEXTURE_RECTANGLE,
        opaqueDepthTexture[0], 0);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_COLOR_ATTACHMENT0, GL3.GL_TEXTURE_RECTANGLE,
        opaqueColorTexture[0], 0);
checkBindedFrameBuffer(gl3);
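The body of checkBindedFrameBuffer is not shown in the post; a minimal sketch of such a check, assuming it just validates the currently bound framebuffer, would be:

    private void checkBindedFrameBuffer(GL3 gl3) {
        // query the status of the framebuffer currently bound to GL_FRAMEBUFFER
        int status = gl3.glCheckFramebufferStatus(GL3.GL_FRAMEBUFFER);
        if (status != GL3.GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("FBO incomplete: 0x" + Integer.toHexString(status));
        }
    }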

Here I just clear the depth (which defaults to 1); I even commented out the opaque rendering:



/**
 * (1) Initialize Opaque FBO.
 */
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);

gl3.glClearColor(1, 1, 1, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);

gl3.glEnable(GL3.GL_DEPTH_TEST);

dpOpaque.bind(gl3);
{
    // EC_Graph.instance.getRoot().renderDpOpaque(gl3, dpOpaque, new MatrixStack(), properties);
}
dpOpaque.unbind(gl3);
And I have double confirmation of this from the following read-back:



FloatBuffer fb = FloatBuffer.allocate(1 * GLBuffers.SIZEOF_FLOAT);
// read the depth of the center pixel from the currently bound FBO
gl3.glReadPixels(width / 2, height / 2, 1, 1, GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, fb);
System.out.println("opaque fb.get(0) " + fb.get(0));


If I change the clear depth to 0.9, for example, I read back 0.9, so this part is ok.
Now I initialize the minimum depth buffer by rendering all the geometry having alpha < 1, and I bind the previous depth texture, the one used in the opaque rendering, to the following sampler:



uniform sampler2D opaqueDepthTexture;

I temporarily switched the rendering of this pass to the default framebuffer:



/**
 * (2) Initialize Min Depth Buffer.
 */
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
gl3.glDrawBuffer(GL3.GL_BACK);
// gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, blendFbo[0]);
// gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);

gl3.glClearColor(0, 0, 0, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);

gl3.glEnable(GL3.GL_DEPTH_TEST);

if (cullFace) {

    gl3.glEnable(GL3.GL_CULL_FACE);
}
dpInit.bind(gl3);
{
    // bind the opaque depth texture to unit 1 and point the sampler uniform at it
    gl3.glActiveTexture(GL3.GL_TEXTURE1);
    gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
    gl3.glUniform1i(dpInit.getOpaqueDepthTextureUL(), 1);
    gl3.glBindSampler(1, sampler[0]);
    {
        EC_Graph.instance.getRoot().renderDpTransparent(gl3, dpInit, new MatrixStack(), properties);
    }
    gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, 0);
    gl3.glBindSampler(1, 0);
}
dpInit.unbind(gl3);
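To rule out the texture itself being empty, one could also read it back directly on the host side. A sketch, not from the original post:

    FloatBuffer depthData = FloatBuffer.allocate(width * height);
    gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
    // read the whole depth attachment back and inspect the center texel
    gl3.glGetTexImage(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, depthData);
    System.out.println("center depth = " + depthData.get(height / 2 * width + width / 2));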



This is the dpInit Fragment Shader:



#version 330

out vec4 outputColor;

uniform sampler2D texture0;
in vec2 oUV;

uniform sampler2D opaqueDepthTexture;
/*
 * Layout {lighting, normal orientation, active, selected}
 */
uniform ivec4 settings;

const vec3 selectionColor = vec3(1, .5, 0);
const vec4 inactiveColor = vec4(.5, .5, .5, .2);

vec4 CalculateLight();

void main()
{
    float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;
    if(gl_FragCoord.z > opaqueDepth) {
        //discard;
    }

    vec4 color = (1 - settings.x) * texture(texture0, oUV) + settings.x * CalculateLight();

    if(settings.w == 1) {

        if(settings.z == 1) {

            color = vec4(selectionColor, color.q);

        } else {

            color = vec4(selectionColor, inactiveColor.w);
        }
    } else {

        if(settings.z == 0) {

            color = inactiveColor;
        }
    }
    outputColor = vec4(color.rgb * color.a, 1.0 - color.a);
    outputColor = vec4(.5, 1, 1, 1.0 - color.a);

    // debug: red if the depth read back is 0, green otherwise
    if(opaqueDepth == 0)
        outputColor = vec4(1, 0, 0, 1);
    else
        outputColor = vec4(0, 1, 0, 1);
}

Ignore the middle; the important part is just at the beginning, where I read the red component of the previous depth texture, and at the end, where I compare against it. The geometry I obtain is red, which means the value I read from the opaqueDepthTexture is 0...
The question is why?
After the dpInit rendering, if I bind the opaqueFbo again and read the depth, it is always the clear depth: 1 by default, or .9 if I cleared it with .9. So that side works.
The problem is really that I read the wrong value in the dpInit FS from a bound depth texture... why?
For clarification, this is the sampler:



private void initSampler(GL3 gl3) {

    sampler = new int[1];
    gl3.glGenSamplers(1, sampler, 0);

    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
    gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
}
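In hindsight, one host-side check that can expose this kind of problem is listing every active uniform together with its declared GLSL type, then comparing sampler types against the targets of the textures actually bound. A sketch; program stands for the linked dpInit program handle:

    int[] activeUniforms = new int[1];
    gl3.glGetProgramiv(program, GL3.GL_ACTIVE_UNIFORMS, activeUniforms, 0);

    int[] length = new int[1];
    int[] size = new int[1];
    int[] type = new int[1];
    byte[] nameBuffer = new byte[128];

    for (int i = 0; i < activeUniforms[0]; i++) {
        gl3.glGetActiveUniform(program, i, nameBuffer.length, length, 0, size, 0, type, 0, nameBuffer, 0);
        String name = new String(nameBuffer, 0, length[0]);
        // e.g. GL_SAMPLER_2D (0x8B5E) vs GL_SAMPLER_2D_RECT (0x8B63)
        System.out.println(name + ": type 0x" + Integer.toHexString(type[0]));
    }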

P.S.: checking all the components, I see the opaqueDepthTexture always reads as (0, 0, 0, 1).

uwi2k2
03-27-2015, 11:17 AM
hi there,

taking a look at your FS, I see that it will cause problems on some GPUs.
There are uniforms that don't have an effect on the final result of the shader,
so those get 'optimized away', which causes problems with the other remaining uniforms...
you should run a test for glErrors after every step of the shader creation and after every glGetUniformLocation call.
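A minimal sketch of such a check (helper name and placement hypothetical, not from the original post):

    private static void checkGlError(GL3 gl3, String where) {
        // drain the GL error queue and report the call site
        int error;
        while ((error = gl3.glGetError()) != GL3.GL_NO_ERROR) {
            System.err.println("GL error 0x" + Integer.toHexString(error) + " after " + where);
        }
    }

    // e.g. right after a location query:
    // int location = gl3.glGetUniformLocation(program, "opaqueDepthTexture");
    // checkGlError(gl3, "glGetUniformLocation(opaqueDepthTexture)");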

cu
uwi

elect
03-31-2015, 12:57 AM

Thanks for the tip, I inserted an additional uniform int debug in order to keep the compiler from optimizing my uniforms away.

I set it at runtime.

Anyway, I found it: in the init FS

uniform sampler2D opaqueDepthTexture;
should be
uniform sampler2DRect opaqueDepthTexture;
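That matches the setup: the texture is allocated and bound as GL_TEXTURE_RECTANGLE, so the GLSL sampler has to be a rectangle sampler too; with a mismatched sampler type the lookup result is undefined, which here evidently showed up as 0. A welcome side effect is that rectangle samplers take unnormalized, window-space coordinates, so the existing lookup works as-is:

    // sampler2DRect is addressed in pixels, so gl_FragCoord.xy needs no division by the texture size
    float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;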

I spent so much time on something so easy...