OpenGL.org

Thread: simple shader -- how to use unsigned chars for colors?

  1. #1
    Intern Newbie
    Join Date
    Nov 2010
    Posts
    40

    simple shader -- how to use unsigned chars for colors?

    Hi, I am just learning OpenGL ES 2.0, and I am trying to use an array of color values for my vertex colors. Rather than use floats, I wanted to use unsigned chars for the RGB values to save memory.
    It seems to work but the shader compile is producing this warning:

    WARNING: 0:25: Overflow in implicit constant conversion, minimum range for lowp float is (-2,2)
    Here is the shader code I am using. I assume that the divide by 255 is producing the warning? What is the correct way to do this? Surely passing floats for colors is not a better option?
    Thanks
    Bob



    attribute vec4 position;
    attribute vec3 normal;
    attribute lowp vec4 color;

    varying lowp vec4 colorVarying;

    uniform mat4 modelViewProjectionMatrix;
    uniform mat3 normalMatrix;

    void main()
    {
        vec3 eyeNormal = normalize(normalMatrix * normal);
        vec3 lightPosition = vec3(0.0, 0.0, 1.0);

        float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

        colorVarying = (color / 255.0) * nDotVP;

        gl_Position = modelViewProjectionMatrix * position;
    }



  2. #2
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33
    First thing I want to say: don't worry about optimization until you have it working already.

    That aside, shaders aren't really my strong point, but from what I can see, the line with the colorVarying assignment looks a bit messy.

    You're passing in attribute lowp vec4 color. vec4 is a vector of four floats, and if it's lowp, the range is -2 to 2. I'm pretty sure that when you pass it unsigned bytes via glVertexAttribPointer (make sure you use parameter GL_UNSIGNED_BYTE), it's already converted to lowp float.

    You divide this small float by 255.0 (definitely not a lowp float) and then multiply by nDotVP, which, even though its value is probably small enough, you declared as float rather than lowp float.

    I think if you drop the division and make sure nDotVP is a lowp float, it will work.
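    The suggestion above might look something like this in the vertex shader (just a sketch, assuming the attribute arrives already normalized to the 0.0 - 1.0 range by the GL):

    ```glsl
    // declared lowp so the whole expression stays in the lowp range
    lowp float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    // no divide by 255.0 -- color is already 0.0 to 1.0
    colorVarying = color * nDotVP;
    ```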

  3. #3
    Intern Newbie
    Join Date
    Nov 2010
    Posts
    40
    It works, except that it gives me that warning. Yes I am passing unsigned chars via the attrib pointer call, but how do I specify that in the shader? I was thinking that the 'lowp' would indicate bytes instead of floats, but maybe that is wrong.

    Because the RGB values range from 0 to 255, I've got to divide by 255 to get them into the 0.0 - 1.0 range that the frag shader wants. Without the divide, I end up with color values in the range 0.0 to 255.0, which comes out basically always white because anything over 1.0 is clamped.

    Anyone know? Thanks for the help!
    Bob

  4. #4
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33
    Quote Originally Posted by bsabiston View Post
    Because the RGB values range from 0 to 255, I've got to divide by 255 to get them in the 0.0 - 1.0 range that the frag shader wants. Without the divide, I end up with color values on the range 0.0 to 255.0, which is basically always white because anything over 1 is treated that way.
    Is that theoretical, or have you actually tested it? In my experience, OpenGL converts it before it ever gets to the shader; that's why you have to specify GL_UNSIGNED_BYTE for glVertexAttribPointer, otherwise it will convert incorrectly. In your case, it's converting from normalized unsigned byte/char (8 bits, 0x00 to 0xFF) to lowp vec4/float (8 bits, 0.0f to 1.0f).

    You say that if you don't divide, you'll have values up to 255.0, but if your GLSL compiler is taking the hint (which it should if you're using OpenGL ES 2.0), you can only fit values from -2.0f to 2.0f anyway.

    EDIT: I just remembered something that sounds like it would cause your problem. Are you calling

    glVertexAttribPointer(color, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(vertex_struct), color_pointer);

    or something similar? The GL_TRUE parameter is what tells it to convert to 0.0f to 1.0f float.
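    To illustrate what that normalized conversion does, here is a plain-C sketch of the arithmetic the GL performs for you (normalize_ubyte is a hypothetical helper name, not an actual GL function):

    ```c
    #include <stdio.h>

    /* With normalized = GL_TRUE, the GL maps each unsigned byte c
     * to c / 255.0f before the shader ever sees it, so 0x00 becomes
     * 0.0f and 0xFF becomes 1.0f. */
    static float normalize_ubyte(unsigned char c)
    {
        return c / 255.0f;
    }

    int main(void)
    {
        printf("%f %f %f\n",
               normalize_ubyte(0),
               normalize_ubyte(128),
               normalize_ubyte(255));
        /* prints: 0.000000 0.501961 1.000000 */
        return 0;
    }
    ```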
    Last edited by Aestivae; 12-07-2012 at 07:48 AM.

  5. #5
    Intern Newbie
    Join Date
    Nov 2010
    Posts
    40
    Yes, that was it! I did not know what that argument was for, before.

    Thanks,
    Bob
