
View Full Version : Suggestion on cyclic data passage.



CaptainSnugglebottom
05-14-2016, 01:32 PM
Apologies if this is a repeat question.

I am trying to create a neat way of detecting whether an object is being looked at. I plan to achieve this by rendering the relevant objects to a framebuffer object (or a texture) while all objects are being drawn. Then I will check the pixel at the mouse location in that framebuffer. If the pixel color (used as a key) is in the map of control-enabled objects in the current scene, the engine will switch the control register of that object.

Now, the issue is that I assign a unique control color value to each object, derived from its unique ID. The ID is an unsigned int, and when it gets assigned it gets chopped up into a vector of chars to be used as the control color vector. I can load the values into a uniform vec3 in my shader; however, I realize that GLSL uses floats between 0 and 1 to represent colors. Will I experience any issues when sending a vec3 of unsigned chars into GLSL to be converted to a vec3 of floats, then read back as an RGB vec3 of unsigned chars? Will the color vector converted back and forth be the same, or will there be differences?
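For what it's worth, the CPU-side chopping you describe can be sketched like this (a minimal sketch in plain C; the helper names are made up, and it assumes the ID fits in 24 bits):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: chop a 24-bit object ID into three color bytes. */
void id_to_rgb(uint32_t id, uint8_t rgb[3]) {
    rgb[0] = (id >> 16) & 0xFF;   /* red   = high byte   */
    rgb[1] = (id >> 8)  & 0xFF;   /* green = middle byte */
    rgb[2] =  id        & 0xFF;   /* blue  = low byte    */
}

/* Normalize the bytes to [0,1] floats, as glUniform3f expects for a vec3. */
void rgb_to_uniform(const uint8_t rgb[3], float v[3]) {
    for (int i = 0; i < 3; ++i)
        v[i] = rgb[i] / 255.0f;
}

/* Recombine bytes read back from the framebuffer into the original ID. */
uint32_t rgb_to_id(const uint8_t rgb[3]) {
    return ((uint32_t)rgb[0] << 16) | ((uint32_t)rgb[1] << 8) | rgb[2];
}
```

The round-trip question then comes down to whether the division by 255 on the way in and the multiplication by 255 on the way out recover the exact bytes, which the replies below address.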

GClements
05-14-2016, 08:56 PM
Depending upon which version of OpenGL you require, there may not be any need to perform the conversion.

If you require OpenGL 3.0 or later, you can just use e.g. a GL_R32UI texture where each pixel holds a single 32-bit unsigned integer.

Otherwise, provided that you're using desktop OpenGL and not OpenGL ES, it's possible to convert a 24-bit integer to a vec3, store it in a GL_RGB8 texture, and reliably reconstruct the original value. You just need to ensure that the calculations are robust against rounding errors rather than assuming that arithmetic is exact.

CaptainSnugglebottom
05-14-2016, 09:38 PM
Depending upon which version of OpenGL you require, there may not be any need to perform the conversion.

If you require OpenGL 3.0 or later, you can just use e.g. a GL_R32UI texture where each pixel holds a single 32-bit unsigned integer.

Otherwise, provided that you're using desktop OpenGL and not OpenGL ES, it's possible to convert a 24-bit integer to a vec3, store it in a GL_RGB8 texture, and reliably reconstruct the original value. You just need to ensure that the calculations are robust against rounding errors rather than assuming that arithmetic is exact.

But how would one do that in GLSL (version 4.3)? Will the interfacing in the fragment shader be any different (from the standard vec4 float)?

I just realized that there's no easy way to send a vector of characters, so I have to do one of the following:
1) send an integer into the fragment shader and then separate it into bytes inside the shader
2) convert the value into 3 floats and load them all separately; this reduces the load on the shader but increases the load on the CPU. Either way, the issue is that the conversions might add error to the float values, so colorOut might not equal colorIn.

This brings me to my next question: is there any way to use integers in the fragment shader to write color? So instead of using a float vec4 for the fragment color, I could use an unsigned int vec4.


Sorry for so many questions. It's kind of new territory for me, and the OpenGL SuperBible doesn't quite go into this much depth.

GClements
05-15-2016, 05:03 PM
But how would one do that in GLSL (version 4.3)? Will the interfacing in the fragment shader be any different (from the standard vec4 float)?

In GLSL 3.3 and later, gl_FragColor and gl_FragData only exist in the compatibility profile (i.e. they're legacy features). You're supposed to explicitly declare fragment shader outputs, e.g.


layout(location=0) out vec4 frag_color;


If you're writing a single integer, you'd declare the variable accordingly, e.g.


layout(location=0) out uint object_id;
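On the application side, the matching setup and readback might look roughly like this (an untested sketch, not a complete program: it assumes a current OpenGL 3.0+ core context, and fbo_width, fbo_height, mouse_x, and mouse_y are placeholders):

```c
/* Sketch only -- requires a current OpenGL 3.0+ context and error checking. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* Integer textures must use non-filtering sampling. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, fbo_width, fbo_height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* ... render the picking pass, with the shader writing object_id ... */

/* Read the single pixel under the mouse (GL's origin is bottom-left). */
GLuint picked_id = 0;
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(mouse_x, fbo_height - 1 - mouse_y, 1, 1,
             GL_RED_INTEGER, GL_UNSIGNED_INT, &picked_id);
```

With this approach the ID never touches floats at all, so the round-trip question goes away entirely.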




I just realized that there's no easy way to send a vector of characters, so I have to do one of the following:
1) send an integer into the fragment shader and then separate it into bytes inside the shader
2) convert the value into 3 floats and load them all separately; this reduces the load on the shader but increases the load on the CPU. Either way, the issue is that the conversions might add error to the float values, so colorOut might not equal colorIn.

GLSL doesn't have "characters", but it does have signed and unsigned integers, and vectors of those.

As for conversion between integers and floats, rounding error exists, but it doesn't necessarily present a problem. You can convert an integer to a float then back to the original integer so long as the intermediate float has sufficient precision. For desktop GLSL (which always uses IEEE-754 single precision), a float (or a component of a float vector) can represent any integer between -2^24 and +2^24 exactly.
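That 2^24 bound is easy to check on the CPU, since desktop GLSL floats behave like C's IEEE-754 float (a minimal sketch; the function name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Does this unsigned integer survive a round trip through a 32-bit float?
 * IEEE-754 single precision has a 24-bit significand, so every integer of
 * magnitude up to 2^24 is exactly representable; beyond that, gaps appear. */
int survives_float_round_trip(uint32_t n) {
    float f = (float)n;
    return (uint32_t)f == n;
}
```

In particular, 2^24 + 1 = 16777217 is the first unsigned integer that does not survive: it rounds to the neighbouring float 16777216.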

Similarly, arithmetic operations on floats may introduce rounding error if the correct result isn't exactly representable. But the magnitude of the rounding error is limited to +/- the magnitude of the least significant bit (possibly to half that, but I wouldn't rely upon it).

CaptainSnugglebottom
05-15-2016, 10:24 PM
In GLSL 3.3 and later, gl_FragColor and gl_FragData only exist in the compatibility profile (i.e. they're legacy features). You're supposed to explicitly declare fragment shader outputs, e.g.


layout(location=0) out vec4 frag_color;


If you're writing a single integer, you'd declare the variable accordingly, e.g.


layout(location=0) out uint object_id;



GLSL doesn't have "characters", but it does have signed and unsigned integers, and vectors of those.

As for conversion between integers and floats, rounding error exists, but it doesn't necessarily present a problem. You can convert an integer to a float then back to the original integer so long as the intermediate float has sufficient precision. For desktop GLSL (which always uses IEEE-754 single precision), a float (or a component of a float vector) can represent any integer between -2^24 and +2^24 exactly.

Similarly, arithmetic operations on floats may introduce rounding error if the correct result isn't exactly representable. But the magnitude of the rounding error is limited to +/- the magnitude of the least significant bit (possibly to half that, but I wouldn't rely upon it).

Hey, thanks for the input. I used the legacy GLSL outputs while I was studying OpenGL, but now that I'm approaching multi-layered rendering I realize they've got to go. I might ask a few questions about that later as well.

As for the pixel color, I'll just use floats. I realized (and also tested) that a float bounded between 0 and 1 can easily represent all 256 values and be converted back and forth without data being lost to rounding. I'll just use that for now, until I figure out how to do multi-layer rendering with each layer rendered into 5 different renderbuffers before doing the whole deferred lighting thing. I might lose some performance on the float-integer conversion, but that's not my concern right now.

Thanks.