Renderbuffer with int values

Hello.
My question is: how can I make a renderbuffer store int values written by the fragment shader?
I’m attempting to output int values from the fragment shader in the following manner.

FragmentShader (glsl):

#version 400 core

flat in int pass_objectId; // integer fragment shader inputs must be flat-qualified

out int out_Color;

void main(void){

	out_Color = 10;

}

and save those values into a renderbuffer as follows:
ObjectBuffer.java (Java with LWJGL):


public float getId(int x, int y){
	// Read data back from the renderbuffer
	GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, idBuffer);
	IntBuffer bytes = BufferUtils.createIntBuffer(1);
	GL11.glReadPixels(x, y, 1, 1, GL11.GL_RED, GL11.GL_INT, bytes);
	GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, 0);
	System.out.print(bytes.get());
	return 0;
}

private int createIdData(int width, int height){
	//Create renderbuffer
	int id = GL30.glGenRenderbuffers();
	GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, id);
	GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL11.GL_RED, width, height);
	GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL30.GL_RENDERBUFFER, id);
	GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, 0);
	return id;
}

As the above code shows, I’m using LWJGL for OpenGL. However, when I attempt to read int values using GL11.glReadPixels(), it only returns 0 for every fragment drawn. (When I click on an area that has no fragment data, I get Integer.MAX_VALUE = 2^31-1 = 2147483647.) When I change the output type of the fragment shader to float and make it return a floating-point value (e.g. 0.1), I can read that value from the renderbuffer as expected.

For starters, you’re not specifically requesting an unnormalized (raw) integer renderbuffer internal format.

GL_RED is what’s called a base internal format. Multiple specific “sized internal formats” share this same base internal format. The base internal format just says which components are present; it says nothing about what type those components are. They could be float, signed int, unsigned int, signed normalized fixed-point, or unsigned normalized fixed-point, and beyond that could have a varying number of bits per component.

You want to choose a specific sized internal format that you know your GL implementation supports for renderbuffers. A sized internal format specifies both the type and interpretation (float, normalized/unnormalized signed/unsigned int, etc.) as well as the bits per component. Try GL_R32I. The GL 3.0 spec requires that it be supported as a valid renderbuffer format; see “Required Renderbuffer Formats” in the OpenGL 3.0 spec.
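Something like this should do it. This is just an untested sketch of your createIdData() with that one change applied, using LWJGL’s GL30 constants:

private int createIdData(int width, int height){
	// Create a renderbuffer with a sized signed-integer internal format
	int id = GL30.glGenRenderbuffers();
	GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, id);
	// GL_R32I = one 32-bit signed int channel (instead of the base format GL_RED)
	GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL30.GL_R32I, width, height);
	GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL30.GL_RENDERBUFFER, id);
	GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, 0);
	return id;
}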

Also, check for GL errors with glGetError() (see the OpenGL Wiki pages “OpenGL Error” and “How do I check for GL errors?”).
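For example, a tiny helper like this (just a sketch; checkGlError is my name for it, not an LWJGL function) can be called after suspect GL calls:

private static void checkGlError(String where){
	// glGetError must be called in a loop, since several error flags can be queued
	int err;
	while((err = GL11.glGetError()) != GL11.GL_NO_ERROR){
		System.err.println("GL error 0x" + Integer.toHexString(err) + " at " + where);
	}
}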

Also, when you do the glReadPixels, I think you’re going to want to use GL_RED_INTEGER instead of GL_RED. GL_RED/GL_RG/GL_RGB/GL_RGBA actually imply a normalized (fixed-point) format; to get unnormalized (raw) integers back, add the _INTEGER suffix.
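Putting both fixes together, your getId() read-back would look something like this (untested sketch, assuming the GL_R32I attachment above and your existing idBuffer field):

public int getId(int x, int y){
	GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, idBuffer);
	IntBuffer pixel = BufferUtils.createIntBuffer(1);
	// GL_RED_INTEGER + GL_INT reads back the raw (unnormalized) integer value
	GL11.glReadPixels(x, y, 1, 1, GL30.GL_RED_INTEGER, GL11.GL_INT, pixel);
	GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, 0);
	return pixel.get(0);
}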

[QUOTE=Dark Photon;1286934]…[/QUOTE]

It actually works that way! Thank you so much for your help! I never would have figured out that those are the values needed to get the result I want. I spent days searching for an answer but couldn’t find one. Much appreciated! <3