
Problem using FrameBuffer for Picking



johnh
11-05-2014, 09:08 PM
I want to use a FrameBuffer to hold an ObjectID and a VertexIndex so I can read the pixel data and determine the object the mouse is over.


I have managed to do this with a 32-bit unsigned-integer FrameBuffer. However, not all hardware supports integer framebuffers (the Intel HD 4000 does not provide a 32-bit buffer, and even a 16-bit buffer does not seem to work correctly), and 16 bits, with a maximum of about 65k IDs, may be too small anyway.


I want to do the same thing using a GL_RGB32F FrameBuffer, but the FrameBuffer ends up with no data (all pixels are (0,0,0)). The following code snippets show my basic setup (in Pascal).


I am developing on NVIDIA hardware with OpenGL 4.4 and the latest drivers.


FrameBuffer setup




// Create the FBO
glGenFramebuffers(1, @fFBOHandle);
glBindFramebuffer(GL_FRAMEBUFFER, fFBOHandle);


// Create the texture object for the primitive information buffer
glGenTextures(fColBufSize, @fColorTexture[0]);


for I := 0 to fColBufSize-1 do
begin
  glBindTexture(GL_TEXTURE_2D, fColorTexture[I]);
  // aMode is GL_RGB32F or GL_RGB16F; allocate the texture once
  glTexImage2D( GL_TEXTURE_2D, 0, aMode, SizeX, SizeY, 0, GL_RGB, GL_FLOAT, nil );
  glFramebufferTexture2D( GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + I, GL_TEXTURE_2D, fColorTexture[I], 0 );
  fBuffer[I] := GL_COLOR_ATTACHMENT0 + I;
end;


glDrawBuffers(fColBufSize, @fBuffer[0]);
glBindTexture(GL_TEXTURE_2D, 0);


// depth
if fIncDepthBuffer then
Begin
glGenTextures(1, @fDepthBuff);
glBindTexture(GL_TEXTURE_2D, fDepthBuff);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, SizeX, SizeY, 0, GL_DEPTH_COMPONENT, GL_FLOAT, Nil);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fDepthBuff, 0);
glBindTexture(GL_TEXTURE_2D, 0);
end;


// Verify that the FBO is complete
aStatus := glCheckFramebufferStatus(GL_FRAMEBUFFER);
if aStatus <> GL_FRAMEBUFFER_COMPLETE then
  Exit; // report aStatus on failure


// Restore the default framebuffer
glReadBuffer(GL_NONE); //for older hardware


glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);



FrameBuffer enable-writing code (run at the start of the render-to-FBO pass; the pass ends with glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)):



Result:=False;
if fFBOHandle = 0 then exit;


glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fFBOHandle);


if ClearBuffers then
begin
  for I := 0 to self.fColBufSize-1 do
    glClearBufferfv(GL_COLOR, I, @BufClearColF); // all zero

  if fIncDepthBuffer then
    glClearBufferfv(GL_DEPTH, 0, @BufClearDepthF) // 1.0
  else
    glClear(GL_DEPTH_BUFFER_BIT);
end;


fIsLinked := True;
Result := True;



FrameBuffer Read Pixels


Var
  PixelID : array [0..2] of GLfloat;  // 3 x 4 bytes
  PixelPos: array [0..2] of GLfloat;  // 3 x 4 bytes


begin
ObjID := 0;
if fFBOHandle = 0 then exit;


PixelID[0]:=0;
PixelID[1]:=0;
PixelID[2]:=0;


glBindFramebuffer(GL_READ_FRAMEBUFFER, fFBOHandle);


if fColBufSize>=1 then
Begin
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, @PixelID[0]);
end;


if fColBufSize>=2 then
Begin
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, @PixelPos[0]);
end;


glReadBuffer(GL_NONE);


//release frame buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);


ObjID := Round(PixelID[0]); // Round rather than Trunc, in case the float lands fractionally below the integer
GripIndex := Round(PixelID[1]);
end;



Vertex Shader




//RO_Arrow_SEL.vert
//-------LAYOUT
layout(location=0) in vec3 VertexPosition ;

layout(std140) uniform WorldUniform
{
vec3 GlobalOffset;
} World;

//----------UNIFORM
uniform mat4 ModelViewProjection;
//---------IN

//---------OUT
out float InstanceID;

void main(void)
{
InstanceID = float(gl_InstanceID);
gl_Position = ModelViewProjection * vec4((VertexPosition - World.GlobalOffset), 1.0) ;
}




Fragment Shader




//RO_Arrow_SEL.frag
///---------UNIFORM
uniform float ObjectID;

//used for selection render

//---------IN
in float InstanceID;

//---------OUT
layout (location = 0) out vec3 FragColor ;



void main(void)
{
    // ObjectID and InstanceID are already floats; no cast needed
    FragColor = vec3(ObjectID, InstanceID, 0.0);
}

GClements
11-05-2014, 10:58 PM
If you only need to store 2 values, GL_RGBA16UI (4 x 16-bit unsigned integers per pixel) would suffice; split each ID into a high word and a low word and store them separately, e.g.


FragColor = uvec4(ObjectID & 0xFFFFU, ObjectID >> 16, InstanceID & 0xFFFFU, InstanceID >> 16);

Or if the GLSL version doesn't support bitwise operations:


FragColor = uvec4(ObjectID % 0x10000U, ObjectID / 0x10000U, InstanceID % 0x10000U, InstanceID / 0x10000U);


Also, you say:

even a 16bit buffer does not seem to work correctly
You are using a 16-bit unsigned integer format (e.g. GL_RG16UI), not signed integer (GL_RG16I) or fixed-point (GL_RG16), right?

And if you need to use 32-bit floats, you still only need 2 channels (i.e. GL_RG32F rather than GL_RGB32F).

johnh
11-06-2014, 11:51 AM
Thanks for the reply.

Yes, the 4-channel 16-bit idea is a good one; I may only need three channels, as the InstanceID will be less than 65k.
Yes, I did use GL_RGB16UI, and it worked OK on my NVIDIA and AMD hardware but failed on the Intel HD 4000 (with recent drivers).

My main concern is that the float approach fails to write any data to the buffer on all of my hardware, and I want to understand why.

Once working with float option I will make a choice of which approach (UI or F) to take and optimize the channel count appropriately. I need something that will work on a good range of hardware, including the Intel HD.

johnh
11-06-2014, 03:01 PM
My problem was that I needed to disable Blending. Lesson learned: watch the current GL state. Blending is not an issue with a UI buffer??

GClements
11-06-2014, 08:36 PM
Blending is not an issue with a UI buffer??
No. Blending is only performed for fixed-point or floating-point colour buffers.
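In johnh's Pascal setup above, the fix amounts to something like the following sketch (an assumption about his code, not quoted from it; one plausible failure mode is that with a vec3 shader output the alpha value used by blending is undefined, so blended output into the picking buffer is unpredictable):

```pascal
// Sketch: disable blending before rendering IDs into the picking FBO.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fFBOHandle);
glDisable(GL_BLEND);  // ID values must replace pixels, not blend with them
// ... render pickable objects ...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glEnable(GL_BLEND);   // restore only if blending was previously enabled
```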

GClements
11-06-2014, 09:01 PM
Yes, I did use GL_RGB16UI, and it worked OK on my NVIDIA and AMD hardware but failed on the Intel HD 4000 (with recent drivers).
Odd. According to Wikipedia, the HD 4000 should support OpenGL 3.3 on Linux, 4.0 on Windows, and 4.1 on Mac. All of those versions require GL_RG32UI and GL_RGBA16UI to be supported for both textures and renderbuffers. 3-component (RGB) formats (including GL_RGB16UI), _SNORM formats and compressed formats must be accepted for textures but not renderbuffers. I'm not sure whether the renderbuffer restrictions are also supposed to apply to textures bound to framebuffers, but it might be worth checking whether 2- or 4-component formats such as GL_RG16UI, GL_RG32UI or GL_RGBA16UI work.

malexander
11-07-2014, 09:28 AM
I believe you need to use GL_RGBA16UI on Intel. I had a similar issue with GL_RGB32I on the HD 4600; using GL_RGBA32I fixed the problem.

Edit: Yes, on Intel you must also disable blending, or it will fail to write to integer framebuffers. I ran into that as well.