Page 1 of 2 — Results 1 to 10 of 15

Thread: Does the fragment shader only output normalized values?

  1. #1 ayongwust_sjtu (Junior Member, joined Aug 2013)

    I pass unsigned bytes into the vertex shader and then forward them to the fragment shader as outputs, because I want to store these unsigned bytes in the renderbuffer attached to COLOR_ATTACHMENT0 so that I can read them back in the host program later.

    I tried various approaches, and found that only the following method works.

    I have to normalize these unsigned bytes to [0, 1], either by letting OpenGL do it or by dividing them by 255 manually in the fragment shader. Only then can I successfully read the unsigned byte values back in my program.

    Does the fragment shader only output values normalized between 0 and 1?

  2. #2 GClements (Regular Contributor, joined Jun 2013)
    Quote Originally Posted by ayongwust_sjtu:
    Does the fragment shader only output values normalized between 0 and 1?
    The range of outputs is dictated by the internal format of the colour buffer.

    A list of the various formats can be found in table 2 of the glTexImage2D manual page. All of the formats in table 2 should be usable for textures used as colour buffers, and for renderbuffers (the compressed formats cannot be used for either).

    If the colour buffer is an unsigned, normalised fixed-point (any of the legacy formats such as GL_RGB or GL_RGBA, or any of the sized types which lack a suffix), then the outputs are clamped to the 0..1 range. This is likely to be the case for the default framebuffer; if you want some other format, you need to use an FBO and attach a renderbuffer or texture with the appropriate format as the colour buffer.

    If the colour buffer is a signed, normalised fixed-point (anything with a "_SNORM" suffix), then the outputs are clamped to the -1..1 range.

    If the colour buffer is a signed or unsigned integer type (anything with an "I" or "UI" suffix), the values will be converted to integer and stored.

    If the colour buffer is a floating-point type (anything with an "F" suffix), the values will be stored as-is.
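    To make the last three cases concrete, here is a sketch of how the fragment shader's output variable would be declared for each class of colour buffer. These are hypothetical declarations, not from the thread, and only one of them would appear in any given shader:

```glsl
// Unsigned normalised fixed-point attachment (e.g. GL_RGBA8):
// float outputs, clamped to the 0..1 range on write.
layout(location = 0) out vec4 FragColor;

// Signed / unsigned integer attachment (e.g. GL_RGBA8I / GL_RGBA8UI):
// integer outputs, stored without any normalisation.
layout(location = 0) out ivec4 FragColorI;   // for "I" formats
layout(location = 0) out uvec4 FragColorUI;  // for "UI" formats

// Floating-point attachment (e.g. GL_RGBA32F):
// float outputs, stored as-is (no clamping).
layout(location = 0) out vec4 FragColorF;
```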

  3. #3 ayongwust_sjtu (Junior Member)
    Hi GClements,
    Thank you for your help. I carefully read your post and "Table 2. Sized Internal Formats" on the reference page. Yes, I attached the renderbuffer to a custom framebuffer object.

    Since I am dealing with unsigned bytes, I think GL_RGB8 is the proper internal format to use when setting the renderbuffer storage.

    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, width, height);

    Then, when specifying the vertex attribute (the channel the values are fed through), I have to use

    glVertexAttribPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, NULL );

    This seems strange, doesn't it? I would have guessed I should use GL_FALSE, so that the values passed into the shaders would be in [0, 255] instead of [0, 1].

    Please correct me if I have made any mistakes. Thanks in advance.

    God bless you.

  4. #4 ayongwust_sjtu (Junior Member)
    By the way, I have just also experimented with GL_RGB8UI as the internal format and GL_FALSE in glVertexAttribPointer. That configuration does not work either.

  5. #5 GClements (Regular Contributor)
    Quote Originally Posted by ayongwust_sjtu:
    Since I am dealing with unsigned bytes, I think GL_RGB8 is the proper internal format I should use in setting the render buffer storage.
    It depends. GL_RGB8 is an unsigned, fixed-point format. If you want to write 8-bit unsigned integers, you should use GL_RGB8UI.

    Quote Originally Posted by ayongwust_sjtu:
    Then, when specifying the vertex attribute (the channel the values are fed through), I have to use

    glVertexAttribPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, NULL );

    This seems strange, doesn't it? I would have guessed I should use GL_FALSE, so that the values passed into the shaders would be in [0, 255] instead of [0, 1].
    The above call will convert unsigned bytes to floats in the range 0.0 to 1.0. If you set the normalized parameter to GL_FALSE, they will be converted to floats in the range 0.0 to 255.0.

    If you want them to be passed to the shader as unsigned integers, you should use glVertexAttribIPointer() instead (note the extra "I" in the name).

    The first versions of GLSL required that all of the data flowing through the shaders (vertex attributes, in/out variables, framebuffer contents) was "real" (fixed/floating-point) rather than "integer", although integers could be used for uniforms, loop counters, etc. Full support for integer data was added later (on a similar note, the first versions of GLSL didn't support bitwise and/or/xor/not, shifts, or the modulo (%) operator). Similarly, earlier versions of OpenGL always treated textures as containing fixed-point values; integer formats were added much later.

    As a consequence of this, the "default" handling of integer data is to treat it as fixed-point and convert it to/from floating-point. If you want the data to be treated as integers throughout, you have to specifically request that by using newer functions (e.g. glVertexAttribIPointer) or formats (those with "I" or "UI" suffixes).

  6. #6 ayongwust_sjtu (Junior Member)
    Dear GClements,

    I sincerely appreciate your detailed instructions. I am quite embarrassed that I could not get the right result even though I tried to follow the "right" path.

    Here is the vertex shader:
    #version 400

    layout (location = 0) in vec3 VertexPosition;
    layout (location = 1) in vec3 VertexFaceID;

    uniform mat4 MVP;

    flat out vec3 FaceID;

    void main()
    {
        FaceID = VertexFaceID;
        gl_Position = MVP * vec4(VertexPosition, 1.0);
    }

    This is the fragment shader:
    #version 400

    flat in vec3 FaceID;

    layout( location = 0 ) out vec4 FragColor;

    void main()
    {
        FragColor = vec4(FaceID.x/255, FaceID.y/255, FaceID.z/255, 1.0);
    }

    When I use GL_RGB8 as the render buffer internal format, and use
    glVertexAttribPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, NULL );
    to specify the vertex shader attribute, I could get the correct result as:

    [Image: Unsigned Bytes (Correct).jpg]

    However, when I use GL_RGB8UI and
    glVertexAttribIPointer( (GLuint)1, 3, GL_UNSIGNED_BYTE, 0, NULL );

    I get an all-zero result, which is incorrect.

    [Image: Unsigned Bytes (Wrong).jpg]

  7. #7 Alfonse Reinheart (Senior Member, OpenGL Guru)
    Your vertex shader input types must match your vertex attributes. If you use `glVertexAttribIPointer`, that means you are feeding an integer attribute. It must be a `uint`, `int`, `uvec` or `ivec` type. It cannot be a `float` or `vec`.

    Similarly, your fragment shader output types must match the image format they're destined for. Floating point image formats (which include normalized integer formats like GL_RGB8) take floating-point output values. Integer image formats take integer output variables. So if you want to write to a GL_RGBA8UI image, you must use a `uvec4` as the output variable.

    Also, don't render to (most) 3-component image formats. If you need to write 3-components, render to a 4-component format. Most of the 3-component formats are not required for use as render targets.
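    Putting these rules together, a fragment shader targeting a GL_RGBA8UI attachment might look like this. This is a hypothetical sketch adapting the FaceID shader above to integer types; the alpha value 255u is an arbitrary filler:

```glsl
#version 400

flat in uvec3 FaceID;

// An integer format (GL_RGBA8UI) requires an integer output variable.
layout(location = 0) out uvec4 FragColor;

void main()
{
    // No normalisation: write the raw 0..255 values directly.
    FragColor = uvec4(FaceID, 255u);
}
```

    Note that reading such an attachment back with glReadPixels would then use the GL_RGBA_INTEGER format (with GL_UNSIGNED_BYTE) rather than GL_RGBA.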

  8. #8 ayongwust_sjtu (Junior Member)
    Hi Alfonse Reinheart,

    I sincerely appreciate your answer, and I am sorry for the late feedback.

    I tested your idea by changing the type of FaceID from vec3 to uvec3 and rendering the values into the default framebuffer, so that I could visually confirm whether the values are passed into the shaders correctly.
    Here is the vertex shader:
    #version 400

    layout (location = 0) in vec3 VertexPosition;
    layout (location = 1) in uvec3 VertexFaceID;

    uniform mat4 MVP;

    flat out uvec3 FaceID;

    void main()
    {
        FaceID = VertexFaceID;
        gl_Position = MVP * vec4(VertexPosition, 1.0);
    }

    This is the fragment shader:
    #version 400

    flat in uvec3 FaceID;

    layout( location = 0 ) out vec4 FragColor;

    void main()
    {
        FragColor = vec4(FaceID.x/255, FaceID.y/255, FaceID.z/255, 1.0);
    }

    Unfortunately, there is some improvement in the result, but it is still not correct.

    [Image: Unsigned Bytes (Wrong1).jpg]

  9. #9 ayongwust_sjtu (Junior Member)
    By the way, please click the images I posted to enlarge them; otherwise you cannot see the difference.

  10. #10 Advanced Member (Frequent Contributor, joined Apr 2010)
    Doesn't GLSL follow C/C++ in that dividing two integers gives integer division? In that case you want to divide by 255.0 to get floating-point division.
