Does the fragment shader only output normalized values?

I transfer unsigned bytes into the vertex shader and then forward them to the fragment shader, because I want to save these unsigned bytes into the render buffer attached to COLOR_ATTACHMENT0, so that I can read them back in the host program later.

I have tried various approaches, and found that only the following method works.

I have to normalize these unsigned bytes to [0, 1], either by letting OpenGL do it or by dividing them by 255 manually in the fragment shader. Only then can I successfully read the unsigned byte values back in my program.

Does the fragment shader only output values that are normalized between 0 and 1?

The range of outputs is dictated by the internal format of the colour buffer.

A list of the various formats can be found in table 2 of the glTexImage2D manual page. All of the formats in table 2 should be usable for textures used as colour buffers, and for renderbuffers (the compressed formats cannot be used for either).

If the colour buffer is an unsigned, normalised fixed-point (any of the legacy formats such as GL_RGB or GL_RGBA, or any of the sized types which lack a suffix), then the outputs are clamped to the 0…1 range. This is likely to be the case for the default framebuffer; if you want some other format, you need to use an FBO and attach a renderbuffer or texture with the appropriate format as the colour buffer.

If the colour buffer is a signed, normalised fixed-point (anything with a “_SNORM” suffix), then the outputs are clamped to the -1…1 range.

If the colour buffer is a signed or unsigned integer type (anything with an “I” or “UI” suffix), the values will be converted to integer and stored.

If the colour buffer is a floating-point type (anything with an “F” suffix), the values will be stored as-is.
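For reference, the fixed-point cases can be sketched in plain C (no GL required; the helper names here are my own, not part of any API):

```c
/* Sketch of how a fragment output is stored in an 8-bit unsigned
 * normalized buffer (e.g. GL_RGB8): clamp to [0, 1], then scale
 * to [0, 255] and round. */
unsigned char store_unorm8(float v)
{
    if (v < 0.0f) v = 0.0f;   /* clamp */
    if (v > 1.0f) v = 1.0f;
    return (unsigned char)(v * 255.0f + 0.5f);   /* scale and round */
}

/* For a signed normalized format (e.g. GL_RGB8_SNORM):
 * clamp to [-1, 1], then scale to [-127, 127] and round. */
signed char store_snorm8(float v)
{
    if (v < -1.0f) v = -1.0f;
    if (v >  1.0f) v =  1.0f;
    return (signed char)(v * 127.0f + (v >= 0.0f ? 0.5f : -0.5f));
}
```

The integer ("I"/"UI") and float ("F") formats skip this clamp-and-scale step entirely: the output is converted to the integer type or stored as-is.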

Hi GClements,

I appreciate your help. I carefully read your post and “Table 2. Sized Internal Formats” on the reference page. Yes, I attached the render buffer to a custom framebuffer object.

Since I am dealing with unsigned bytes, I think GL_RGB8 is the proper internal format to use when setting the render buffer storage:

glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);

Then, when setting the vertex attribute (the pipe through which the values are fed), I have to use

glVertexAttribPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, NULL );

It is quite strange, isn’t it? I would have guessed I should use GL_FALSE, so that the values passed into the shaders would be in [0, 255] instead of [0, 1].

Please correct me if I have made any mistakes. Thanks in advance.

God bless you.

BTW, I also just experimented with GL_RGB8UI as the internal format and GL_FALSE in glVertexAttribPointer. That configuration does not work either.

It depends. GL_RGB8 is an unsigned, fixed-point format. If you want to write 8-bit unsigned integers, you should use GL_RGB8UI.

The glVertexAttribPointer() call above (with normalized = GL_TRUE) will convert the unsigned bytes to floats in the range 0.0 to 1.0. If you set the normalized parameter to GL_FALSE, they will instead be converted to floats in the range 0.0 to 255.0.
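To make the two conversions concrete, here is a plain-C sketch (the helper names are illustrative; no GL context is needed):

```c
/* normalized == GL_TRUE: a byte b reaches the shader as b / 255.0 */
float attrib_normalized(unsigned char b)
{
    return (float)b / 255.0f;
}

/* normalized == GL_FALSE: a byte b reaches the shader as (float)b */
float attrib_unnormalized(unsigned char b)
{
    return (float)b;
}
```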

If you want them to be passed to the shader as unsigned integers, you should use glVertexAttribIPointer() instead (note the extra “I” in the name).

The first versions of GLSL required that all of the data flowing through the shaders (vertex attributes, in/out variables, framebuffer contents) was “real” (fixed/floating-point) rather than “integer”, although integers could be used for uniforms, loop counters, etc. Full support for integer data was added later (on a similar note, the first versions of GLSL didn’t support bitwise and/or/xor/not, shifts, or the modulo (%) operator). Similarly, earlier versions of OpenGL always treated textures as containing fixed-point values; integer formats were added much later.

As a consequence of this, the “default” handling of integer is to treat it as fixed-point, and convert it to/from floating-point. If you want the data to be treated as integers throughout, you have to specifically request it by using newer functions (e.g. glVertexAttribIPointer) or formats (those with “I” or “UI” suffixes).
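As a sketch (assuming a VBO of tightly packed unsigned bytes is bound and the attribute lives at location 1), the three ways of feeding the same byte data are:

```c
/* Fixed-point path, shader declares: in vec3 a;  values arrive as 0.0 .. 1.0 */
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, NULL);

/* Fixed-point path, unnormalized: values arrive as 0.0 .. 255.0 */
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, NULL);

/* Integer path, shader declares: in uvec3 a;  values arrive as 0 .. 255.
 * Note the extra "I" and the absence of a "normalized" parameter. */
glVertexAttribIPointer(1, 3, GL_UNSIGNED_BYTE, 0, NULL);
```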

Dear GClements ,

I sincerely appreciate your help and detailed instructions. I feel quite embarrassed because I could not get the right result even while trying to follow the “right” path.

Here is the vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexFaceID;

uniform mat4 MVP;

flat out vec3 FaceID;

void main()
{
    FaceID = VertexFaceID;
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}

This is the fragment shader:
#version 400

flat in vec3 FaceID;

layout( location = 0 ) out vec4 FragColor;

void main() {
    FragColor = vec4(FaceID.x/255, FaceID.y/255, FaceID.z/255, 1.0);
}

When I use GL_RGB8 as the render buffer internal format, and use
glVertexAttribPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, NULL );
to specify the vertex shader attribute, I could get the correct result as:

[ATTACH=CONFIG]509[/ATTACH]

However, when I use GL_RGB8UI and
glVertexAttribIPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, 0, NULL );

I get an all-zero result, which is incorrect.

[ATTACH=CONFIG]510[/ATTACH]

Your vertex shader input types must match your vertex attributes. If you use glVertexAttribIPointer, that means you are feeding an integer attribute. It must be a uint, int, uvec or ivec type. It cannot be a float or vec.

Similarly, your fragment shader output types must match the image format they’re destined for. Floating point image formats (which include normalized integer formats like GL_RGB8) take floating-point output values. Integer image formats take integer output variables. So if you want to write to a GL_RGBA8UI image, you must use a uvec4 as the output variable.

Also, don’t render to (most) 3-component image formats. If you need to write 3-components, render to a 4-component format. Most of the 3-component formats are not required for use as render targets.
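Put together, a fragment shader intended for a GL_RGBA8UI colour buffer might look like this sketch (the variable names are illustrative):

```glsl
#version 400

flat in uvec3 FaceID;

// Integer image format => integer output variable.
layout(location = 0) out uvec4 FragFaceID;

void main()
{
    // No division by 255: the raw 0..255 values are written directly.
    FragFaceID = uvec4(FaceID, 255u);
}
```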

Hi Alfonse Reinheart,

I sincerely appreciate your answer, and am sorry for the late feedback.

I tested your idea by changing the type of the face ID from vec3 to uvec3 and rendering the values into the default framebuffer, so that I could visually confirm whether the values are passed into the shaders correctly.
Here is the vertex shader:
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in uvec3 VertexFaceID;

uniform mat4 MVP;

flat out uvec3 FaceID;

void main()
{
    FaceID = VertexFaceID;
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}

This is the fragment shader:
#version 400

flat in uvec3 FaceID;

layout( location = 0 ) out vec4 FragColor;

void main() {
    FragColor = vec4(FaceID.x/255, FaceID.y/255, FaceID.z/255, 1.0);
}

Unfortunately, there is some improvement in the result, but it is still not correct.

[ATTACH=CONFIG]513[/ATTACH]

BTW, please click the image I posted to enlarge it; otherwise you cannot see the difference.

Doesn’t GLSL follow C/C++ in that if you divide two integers you get integer division? In that case you want to divide by 255.0 to get floating point division.
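Indeed, GLSL inherits C’s integer division. A plain-C illustration of the pitfall (function names are mine):

```c
/* Dividing an unsigned integer by the integer literal 255
 * performs integer division, exactly as in C: */
unsigned int face_div_int(unsigned int id)
{
    return id / 255;       /* 200 / 255 == 0 -- the fraction is lost */
}

/* Dividing by 255.0f promotes the operand to float first: */
float face_div_float(unsigned int id)
{
    return id / 255.0f;    /* 200 / 255.0f is about 0.784 */
}
```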

Dear carsten neumann, I appreciate your contribution. By making the following change,

FragColor = vec4(FaceID.x/255.0f, FaceID.y/255.0f, FaceID.z/255.0f, 1.0);

the default framebuffer shows me the CORRECT result, proving that the unsigned bytes are successfully passed from the host program to the vertex shader, and then from the vertex shader to the fragment shader.

However, I further found that uvec3 cannot be used as the output type for a custom framebuffer.

Here are the details of my testing process:
1). Set GL_RGB8UI as the custom render buffer storage internal format:

glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8UI, width, height);

2). Set uvec3 as the fragment output type in the fragment shader:
#version 400

flat in uvec3 FaceID;
layout( location = 0 ) out uvec3 FragFaceID;
void main() {
    FragFaceID = FaceID;
}

3). Read the pixels in the custom render buffer:

framebuffer_.bind();
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, glViewportWidth_, glViewportHeight_, GL_RGB, GL_UNSIGNED_BYTE, src_);

Debugging the code, I found that the data I read back are all zeros.

On the contrary, if I use GL_RGB8 in step 1, and normalize the data and use vec3 in step 2,

#version 400

flat in uvec3 FaceID;
layout( location = 0 ) out vec3 FragFaceID;

void main() {
    FragFaceID = vec3(FaceID.x/255.0f, FaceID.y/255.0f, FaceID.z/255.0f);
}

we get the correct result in step 3.

It seems that my assumption, “the fragment shader only outputs normalized values”, turns out to be right. But I really hope somebody can tell me that it is not true, and show me where my assumption goes wrong.

You should read all of the documentation about integer processing.

You’ll need to use GL_RGB_INTEGER (not GL_RGB) when moving integer data around, like with glReadPixels.
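For example, reading back a GL_RGB8UI attachment might look like this sketch (buffer sizing and error checking omitted; `pixels`, `width`, and `height` are illustrative):

```c
/* GL_RGB_INTEGER tells GL that the client-side data is raw integers,
 * not normalized colours; GL_RGB alone requests a normalized read and
 * fails against an integer attachment. */
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGB_INTEGER, GL_UNSIGNED_BYTE, pixels);
```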

BTW, following Alfonse Reinheart’s comments, I also tested a 4-component format. I still got all zeros when GL_RGBA8UI is used in step 1 and uvec4 in step 2.

Of course you get all zeros. You’re passing a floating-point value between 0 and 1 to an integer. Zero or one is the only reasonable outcome of that.

If you wanted the numbers 0 to 255, then you shouldn’t be dividing by 255.

[QUOTE=arekkusu;1254600]You should read all of the documentation about integer processing.

You’ll need to use GL_RGB_INTEGER (not GL_RGB) when moving integer data around, like with glReadPixels.[/QUOTE]

Hi arekkusu, thanks for hitting the point. Using GL_RGB_INTEGER in step 3, I got the CORRECT result. I had actually read the latest version of the OpenGL Programming Guide; the book says the format could be GL_RGB or GL_RGB_INTEGER, and the word ‘or’ made me believe that the two formats are equivalent.

Now the bubble of my assumption has burst. I am very happy about that.

Finally, many, many thanks to all of you. You have taught me a lot. :biggrin-new:
