Problem with bit shifting

Hi,
I have a problem with some bit shifting in a shader. I'm trying to convert gl_PrimitiveID into a vec4 so I can use it as a color, but the result is not what I expect: the object is rendered white (every component of the vec4 ends up >= 1.0).
I want to use the shader to render every triangle in a unique color (for color picking), but I don't know if this is the best way to do it. Maybe it would be better to build a 1D texture that holds an RGBA value for every ID, but then I would need more memory.

#version 150
#extension GL_EXT_gpu_shader4 : enable

in int gl_PrimitiveID;

in vec4 color;

out vec4 gs_FragColor[2];

void main(void)
{
    float R = float((gl_PrimitiveID | 0xFF000000)) / 255.0;
    float G = float((gl_PrimitiveID | 0x00FF0000) << 8) / 255.0;
    float B = float((gl_PrimitiveID | 0x0000FF00) << 16) / 255.0;
    float A = float((gl_PrimitiveID | 0x000000FF) << 24) / 255.0;

    gs_FragColor[0] = color;
    gs_FragColor[1] = vec4(R, G, B, A);
}

When I try to compile it, I get an error at the line where R is calculated, saying that int cannot be converted to uint.

GLSL will not implicitly convert a signed integer to an unsigned integer. gl_PrimitiveID is a signed integer, while a hexadecimal literal that does not fit in a signed int (such as 0xFF000000) is an unsigned integer. You must first convert gl_PrimitiveID to an unsigned integer.

The code above is quite wrong; it does exactly the opposite operations (it ORs a mask in instead of ANDing one out, and shifts left instead of right) :slight_smile:

There, fixed:


void main(void)
{
    float R = float((gl_PrimitiveID >>  0) & 255) / 255.0;
    float G = float((gl_PrimitiveID >>  8) & 255) / 255.0;
    float B = float((gl_PrimitiveID >> 16) & 255) / 255.0;
    float A = float((gl_PrimitiveID >> 24) & 255) / 255.0;

    gs_FragColor[0] = color;
    gs_FragColor[1] = vec4(R, G, B, A);
}

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.