ATI card and gl_InstanceID problem

So I have a picking shader that changes the colors of instanced items when I select them…

here is the vertex shader

#version 410
layout (location = 0) in vec3 position;
layout (location = 4) in vec4 trans1;
layout (location = 5) in vec4 trans2;
layout (location = 6) in vec4 trans3;
layout (location = 7) in vec4 trans4;

// per-instance model matrix, assembled below from the four instanced vec4 attributes
mat4 transform;
uniform mat4 projCamMat;

flat out int referenceIndex;

void main()
{
	transform[0] = vec4(trans1.x, trans2.x, trans3.x, trans4.x);
	transform[1] = vec4(trans1.y, trans2.y, trans3.y, trans4.y);
	transform[2] = vec4(trans1.z, trans2.z, trans3.z, trans4.z);
	transform[3] = vec4(trans1.w, trans2.w, trans3.w, trans4.w);
	gl_Position = projCamMat * transform * vec4(position, 1.0);
	referenceIndex = gl_InstanceID;
}

and here is the fragment shader

#version 410

// not strictly needed in GLSL 4.10, since gl_PrimitiveID and integer outputs are core
#extension GL_EXT_gpu_shader4 : enable

flat in int referenceIndex;

out uvec3 FragColor;

uniform uint objectIndex;

void main()
{
    FragColor = uvec3(objectIndex, referenceIndex, gl_PrimitiveID + 1);
} 
Basically, all of my objects render their indices in the form of a color to a framebuffer and I read pixels back from that (roughly along the lines of the sketch below).
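For context, the picking pass is driven by something along these lines (pickingFBO, pickingProgram, and the per-object fields are placeholders, not my exact code):

	// bind the picking framebuffer and clear it; integer color attachments
	// are best cleared with glClearBufferuiv rather than glClear
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER, pickingFBO);
	const GLuint zero[4] = { 0, 0, 0, 0 };
	glClearBufferuiv(GL_COLOR, 0, zero);
	glClear(GL_DEPTH_BUFFER_BIT);

	glUseProgram(pickingProgram);
	for (const auto & obj : objects)
	{
		// each object writes its own index through the objectIndex uniform,
		// and each instance gets its index from gl_InstanceID in the instanced draw
		glUniform1ui(glGetUniformLocation(pickingProgram, "objectIndex"), obj.index);
		glBindVertexArray(obj.vao);
		glDrawElementsInstanced(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT,
								nullptr, obj.instanceCount);
	}
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);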

Everything works fine with NVIDIA's compiler: when I click objects they highlight like they should. However, on two ATI cards with completely up-to-date drivers the picking shader doesn't work correctly. It compiles and links (yes, I checked for error messages), but the picking just doesn't work. I did some investigating and found that basically no values are ever given to referenceIndex and objectIndex; they always contain uninitialized values.

I'm not sure, but I think the problem comes from gl_InstanceID not working correctly. So my question is: does anyone see anything else that might cause this problem, and if the problem is gl_InstanceID, does anyone know a fix for it? By the way, the compilers on the ATI cards both report GLSL 4.2.

What draw commands are you actually using? I'm interested in what gl_InstanceID range you should expect in the first place. One potential misunderstanding could be the baseInstance parameter of some draw calls: don't forget that gl_InstanceID always starts from 0, even if you specify a baseInstance > 0.
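For example, with a draw call like the one below, gl_InstanceID in the vertex shader still runs from 0 to 99 even though baseInstance is non-zero (indexCount and the VAO/buffer setup are assumed to already exist):

	glDrawElementsInstancedBaseInstance(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
										nullptr,
										100,   // instance count: gl_InstanceID is 0..99
										50);   // baseInstance: only offsets instanced attribute fetching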

This is probably not the problem in your case, but I want to make sure, considering you said that referenceIndex seems to be uninitialized, which I understand to mean it has a random value for every vertex. Am I right?

Also, what GPU did you test it on?

Thanks for the reply.

So I have checked gl_InstanceID and I don't think it is the problem. I put 0 in all of the FragColor components to check whether I got 0,0,0 in the structure that holds the pixel color… Here is the code where I read the pixel from the framebuffer:

NSPickingFrameBuffer::ObjectIndex NSPickingFrameBuffer::getObjectIndex(int x, int y)
{
    // read back the single pixel under the cursor from the integer color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, FBO);
    glReadBuffer(GL_COLOR_ATTACHMENT0);

    ObjectIndex index;
    glReadPixels(x, y, 1, 1, GL_RGB_INTEGER, GL_UNSIGNED_INT, &index);
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    return index;
}

and here is my ObjectIndex structure

	struct ObjectIndex
	{
		unsigned int objectIndex;
		unsigned int referenceIndex;
		unsigned int primIndex;

		ObjectIndex()
		{
			objectIndex = 0;
			referenceIndex = 0;
			primIndex = 0;
		}
	};

So when I put 0 in all components of FragColor, the index returned by getObjectIndex should also contain all 0's, but instead it contains wrapped-around unsigned int values (like 1.0004e009) that can change depending on where I click. Sometimes the referenceIndex variable is 0, but the objectIndex variable is always a wrapped-around unsigned int value. Like I said before, this function works fine on NVIDIA cards but fails on the AMD Radeon HD 7870 and ATI Radeon HD 5870. It would probably fail on other ATI cards and work on other NVIDIA cards too, but I only have so many video cards.

So now I think there is an error in reading the pixels from the buffer with glReadPixels; that is the only way I can explain getObjectIndex always returning a structure with a wrapped-around unsigned int value in objectIndex, especially since I initialize it to 0.
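For reference, the call site looks roughly like this (pickingBuffer, mouseX/mouseY, windowHeight, and selectObject are placeholders, not my exact code); the y coordinate is flipped because glReadPixels uses a bottom-left origin while mouse coordinates are usually top-left:

	// illustrative call site; window/mouse coordinates are top-left based,
	// glReadPixels expects bottom-left, hence the flip
	NSPickingFrameBuffer::ObjectIndex picked =
		pickingBuffer->getObjectIndex(mouseX, windowHeight - mouseY - 1);

	if (picked.objectIndex != 0)   // treating 0 as "nothing under the cursor"
		selectObject(picked.objectIndex, picked.referenceIndex);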

Any more insights? Thanks a bunch!

Yes, the problem could be with glReadPixels rather than with what actually gets written to the texture by the fragment shader. You can test this by sampling the integer texture in a second pass and outputting something to the screen based on it, to figure out whether the values in the integer texture are correct. That will tell you whether you did everything right in your shaders and the problem only arises when you try to read back the data with glReadPixels.
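A fragment shader along these lines would do for the debug pass (the sampler name is just an example); texelFetch sidesteps any filtering concerns with the integer texture:

	#version 410

	uniform usampler2D pickingTex;   // the picking color attachment, bound as a texture

	out vec4 color;

	void main()
	{
		uvec3 ids = texelFetch(pickingTex, ivec2(gl_FragCoord.xy), 0).xyz;
		// scale the indices into a visible 0..1 range; tweak the divisors
		// to roughly match your index ranges
		color = vec4(vec3(ids) / vec3(16.0, 16.0, 64.0), 1.0);
	}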

I remember I had a similar problem, but I don't remember exactly what the cause was. Try switching to RGBA and play with this code (it works for me on ATI and NVIDIA):

Creating fbo:


	glGenFramebuffers(1, &fboID);
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
	// texture for objects indices
	glGenTextures(1, &textureColorID);
	glBindTexture(GL_TEXTURE_2D, textureColorID); 
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height,
				 0, GL_RGBA_INTEGER, GL_INT, 0);
	glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
						   GL_TEXTURE_2D, textureColorID, 0); 
	// add depth buffer
	glGenTextures(1, &textureDepthID); 
	glBindTexture(GL_TEXTURE_2D, textureDepthID);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height,
				 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
	glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
						   GL_TEXTURE_2D, textureDepthID, 0);

	// restore the default framebuffer 
	glBindTexture(GL_TEXTURE_2D, 0);
	glBindFramebuffer(GL_FRAMEBUFFER, 0);

and for getting indices switch to RGBA:


glReadPixels(x, y, 1, 1, GL_RGBA_INTEGER, GL_UNSIGNED_INT, &pixel);

Don't forget to use an out uvec4 in the FS to match the unsigned integer format. Also try changing the uniform to a plain int if that doesn't help.
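For example, a fragment shader matching the GL_RGBA32UI attachment above might look like this (the alpha component is just padding):

	#version 410

	flat in int referenceIndex;
	uniform uint objectIndex;

	out uvec4 FragColor;

	void main()
	{
		FragColor = uvec4(objectIndex, uint(referenceIndex), uint(gl_PrimitiveID + 1), 0u);
	}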

I just thought I would let everyone know I found the problem and fixed it. I tried blackbee's suggestion of switching RGB to RGBA and checked my code against the posted code. I ended up finding the actual problem thanks to aqnuep's suggestion to sample the texture produced by the picking framebuffer in another pass: the results were okay on my NVIDIA card but messed up on the ATI cards, which narrowed it down to a problem with the texture.

The app I'm making right now is a 3D toolkit for a game (hence the need for a picking framebuffer), so the 3D window might be resized several times, and each time it was resized I deleted and remade my framebuffers.

The problem was that in the destructor of my picking framebuffer class I called glDeleteTextures on the color texture twice and never called it on the depth texture. The two names were similar (texture and texture_D) and I copy-pasted the line and forgot to add _D to the second delete. It didn't produce any errors because I only deleted if the GLuint was not 0, so the second delete call was basically never made. It worked fine on NVIDIA and didn't work on ATI, probably due to the different memory structures of the two card types.
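To illustrate, the corrected destructor looks roughly like this (the class is simplified down to the members mentioned above); the bug was the second glDeleteTextures call pointing at texture again instead of texture_D:

	NSPickingFrameBuffer::~NSPickingFrameBuffer()
	{
		if (texture != 0)
			glDeleteTextures(1, &texture);     // color (integer index) texture
		if (texture_D != 0)
			glDeleteTextures(1, &texture_D);   // depth texture: this delete was missing
		if (FBO != 0)
			glDeleteFramebuffers(1, &FBO);
	}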

So thanks for helping solve the problem!
