color channel: float and 8 bit

Hello there.
I’ve seen that the color passed into the shaders was specified as a vec3, i.e. 3 floats, one per RGB channel.

  1. Isn’t this a waste of memory? Is the hardware driver really using the float resolution (precision) for pixel LED output?
    If the final output has 24-bit precision:

  2. Can someone tell me who performs the conversion, and at what moment the color is converted from 3 x float to 3 x 8 bit for display on the screen?

  3. In order to save memory, isn’t it worth:
    a. specifying the color using a 1-pixel 1D texture?
    b. using 3 x GLubyte and glVertexAttribPointer() with GL_UNSIGNED_BYTE as the type and GL_TRUE for normalized (or manually dividing each by 255 in the shader)?
    c. another way to pass directly from 3 x 8 bit (the RGB space of the input color) to… 3 x 8 bit (the output)?

I suppose it would be useful to work with larger precision when interpolating color from one vertex to another, but I don’t need that since I use one color per primitive.

I’ve seen that the color passed into the shaders was specified as a vec3, i.e. 3 floats, one per RGB channel.

Where?

There are many ways to get a color into a shader, and from the shader’s side, pretty much all of them will look like 3 floats. That doesn’t mean the source data actually are 3 IEEE-754 32-bit floating-point numbers.

It is very common to use normalized integers, either in texture image formats or in vertex formats. While many tutorials will provide colors in vertex formats with 3 floats, this is usually for the sake of simplicity, not a suggestion of how things should actually be handled in serious applications.

Isn’t this a waste of memory?

In the general sense, no. See below.

Is the hardware driver really using the float resolution (precision) for pixel LED output?

No. Generally, you’ll get 8 bits per color channel of precision.

However, this does not mean that you don’t want more bits for intermediate computations. Just because your final value will be squashed to 8 bits per color channel doesn’t mean you don’t need more bits in the middle.

I’m not going to get into the details, but high dynamic range rendering basically requires more than 8 bits of precision. Not at the end, but it requires light intensities (at the very least) to have absolute magnitudes larger than 1.0.
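To illustrate (a hedged sketch of my own; lightContribution is a made-up input), the intermediate math can produce values well above 1.0, which only get squashed into [0, 1] at the very end:

// Hypothetical fragment shader, written as a C++ string literal (GLSL inside).
const char* hdrFragmentShader = R"GLSL(
#version 330 core
in vec3 lightContribution;               // assumed input; may exceed 1.0
out vec4 fragColor;
void main()
{
    vec3 hdr = lightContribution * 4.0;  // intermediate magnitudes above 1.0
    vec3 ldr = hdr / (hdr + vec3(1.0));  // Reinhard-style squash into [0, 1)
    fragColor = vec4(ldr, 1.0);          // only now is 8 bits per channel enough
}
)GLSL";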

specifying the color using a 1 pixel 1D texture?

No. If you’re talking about some form of palette, that’s always a losing proposition in terms of compression. S3TC, BPTC, or ASTC will generally beat paletting in image quality, data size, and performance simultaneously.

using 3 x GLubyte and glVertexAttribPointer() with GL_UNSIGNED_BYTE as the type and GL_TRUE for normalized (or manually dividing each by 255 in the shader)?

You should not manually divide it in the shader. But yes, this is a common means of specifying a color via the vertex format.

Note, however, that the shader will still see the value as a vec3. The whole point of the vertex format is that the system will decide on its own how to convert the data for you (using dedicated hardware or shader logic, depending on the hardware’s capabilities).
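A minimal sketch of that approach (my example, not from the thread; the attribute locations and the Vertex struct are illustrative):

// Interleaved vertex: 2 position floats + 3 normalized color bytes.
// Assumes GL headers and <cstddef> (for offsetof) are included.
struct Vertex {
    GLfloat pos[2];
    GLubyte rgb[3];   // 0..255 per channel; the shader sees [0, 1]
};

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, pos));
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE,   // GL_TRUE = normalize
                      sizeof(Vertex), (void*)offsetof(Vertex, rgb));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

// Vertex shader: layout(location = 1) in vec3 color;
// {128, 128, 255} arrives as roughly {0.502, 0.502, 1.0}.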

Thank you for the detailed explanation and references!
From what you’re saying and from what I’ve read in the articles, I came to the conclusion that it’s best for me to use the GL_RGB8UI constant, but I don’t know where to specify it, i.e. in which function. Can you please give me an example?

From what you’re saying and from what I’ve read in the articles, I came to the conclusion that it’s best for me to use the GL_RGB8UI constant

No, those are unsigned integers, not [i]normalized[/i] unsigned integers.

Furthermore, that’s a texture image format, not something you use for a vertex format. Granted, you haven’t made it clear which one you’re trying to use at the moment. Image formats are specified for textures when you are creating storage for them.

What I’m trying to do is draw some (upscaled) points of different colors at random coords on the screen, and I thought of specifying each point’s color through vertex attributes, i.e. triplets of unsigned integers for a 24-bit color. This “project” is a bit tedious for me as I’m a beginner and still struggling with which gl* functions to call, and when.
I thought I’d use glVertexAttribIPointer() as I said above, but I don’t know how to use GL_RGB8UI to tell it that I need non-normalized color information, i.e. {128, 128, 255}.
How can I go about it?

I thought I’d use glVertexAttribIPointer() as I said above, but I don’t know how to use GL_RGB8UI to tell it that I need non-normalized color information, i.e. {128, 128, 255}.

First, as I explained, GL_RGB8UI is a texture image format. It has nothing to do with vertex formats; those are completely different things specified by completely different APIs.

Second, why do you need non-normalized colors? Colors are a good 80% of the reason why we use integer normalization to begin with. Your use case doesn’t seem to merit the use of integers here.

I think you’ve become confused. Your use case strongly suggests that you want normalized color values, but you claim to not want them normalized. If your use case truly needs these colors to not be normalized, then you should be able to explain why.

Third, glVertexAttribIPointer cannot perform integer normalization. As stated on the previously linked OpenGL wiki page, integer normalization is for when data is stored as an integer but seen as if it were a float. glVertexAttribIPointer is for data that is stored as an integer and that you want to be seen as an integer. That’s why there’s no parameter for it; you’re feeding integer data as integers, so the concept of normalization just doesn’t apply.

Fourth, if you are going to use glVertexAttribIPointer, you must use [var]ivec[/var] or [var]uvec[/var] in your shader as the corresponding input value.
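To make the contrast concrete (a sketch with placeholder stride/offset values):

// Stored as integers, seen as integers: the shader declares `in uvec3 color;`.
// There is no normalized parameter at all.
glVertexAttribIPointer(1, 3, GL_UNSIGNED_BYTE, stride, offset);

// Stored as integers, seen as normalized floats in [0, 1]:
// the shader declares `in vec3 color;`. Note the extra GL_TRUE.
glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, stride, offset);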

Thank you for staying with me so far. Yes, I’m confused indeed. I want neither more nor less than to draw a point and a simple way to input its color into the shaders (varying/uniform?) in the form of RGB where each channel has a value in [0, 255], just as any color is commonly expressed.
So basically I want to draw a few points on the screen, for instance a green point, and I want to pass its color into the shaders as (0, 190, 0) and not as (0f, 0.745098f, 0f), for reasons of readability (obvious), memory (I only need 24 bits instead of 96 bits) and (possibly, though doubtful) precision (due to rounding). But you said:

the shader will still see the value as a vec3. The whole point of the vertex format is that the system will decide on its own how to convert the data for you
So I want GLSL to treat (0, 190, 0) as a color, as channel values, i.e. a green value of 190, and not alter the “shade” in the slightest. This, instead of the undesired case of treating the triplet as general numbers, possibly converting it to float and making it lossy.
That’s why I came up with the idea of using a 1-pixel 1D texture, thinking that the color (0, 190, 0) would occupy 24 bits and at the same time be 100% preserved. But you don’t recommend this, and it is, I admit, bad practice.

Here is where I’m puzzled:
What’s the most straightforward way of inputting a color into the shaders losslessly and without using unnecessary memory? How do you advise me to draw the points?
I suspect you’ll say (and if you do, then I’ll do it like that):
glVertexAttribPointer(ind, 3, GL_UNSIGNED_SHORT, GL_TRUE, str, off)

RGB where each channel has a value in [0, 255], just as any color is commonly expressed.

And that’s the source of your confusion. See, you’ve learned that colors range from [0, 255].

They don’t. That’s just what colors look like when stored as normalized integer bytes. Colors actually lie on the floating-point range [0, 1] (for the purposes of this conversation). Which is exactly what the normalized byte range [0, 255] maps to.

The shader will see floating-point values on the range [0, 1]. That’s the whole point of normalized integers.

So I want GLSL to treat (0, 190, 0) as a color, as channel values, i.e. a green value of 190, and not alter the “shade” in the slightest. This, instead of the undesired case of treating the triplet as general numbers, possibly converting it to float and making it lossy.

Converting from normalized integers to floats is not lossy.
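If you want to convince yourself, here is a standalone check (my own sketch, not part of the thread): every 8-bit value maps to a distinct float and converts back to exactly the same byte.

#include <cassert>
#include <cmath>

int main()
{
    // OpenGL maps an unsigned normalized byte v to v / 255.0f;
    // converting back recovers the original byte for all 256 values.
    for (int v = 0; v <= 255; ++v) {
        float f = v / 255.0f;
        assert((int)std::lround(f * 255.0f) == v);  // round-trip is exact
    }
}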

glVertexAttribPointer(ind, 3, GL_UNSIGNED_SHORT, GL_TRUE, str, off)

Your data is 8-bit bytes, not 16-bit shorts. Also, you should pad out your data structure to be 4-byte aligned; pass 4 values instead of 3. The shader can still use a [var]vec3[/var] if you want.
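So the corrected call, keeping your variable names, would look something like:

// 4 normalized bytes (RGB plus one padding byte) keep the data 4-byte
// aligned; the shader input can still be declared as vec3.
glVertexAttribPointer(ind, 4, GL_UNSIGNED_BYTE, GL_TRUE, str, off);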

Thank you again.
Yes, I meant 8-bit unsigned (from your link, GL_UNSIGNED_BYTE) but I wrote GL_UNSIGNED_SHORT.

About padding, I speculate: the hardware works with power-of-2-sized memory blocks (maybe even a minimum of 4 bytes), so the 4th value is on the house.
But still, I can’t find a use for a 4th value. In addition, I’m afraid that the memory on the client side, i.e. the buffer in RAM, would still be occupied (not freed) after the data has been transferred to video RAM…

But still, I can’t find a use for a 4th value.

You don’t have to. Padding represents wasted space.

In addition, I’m afraid that the memory on the client side, i.e. the buffer in RAM, would still be occupied (not freed) after the data has been transferred to video RAM…

I’m not really sure what you mean here. The “memory on the client side” is your memory. You allocated it, so you need to delete it when you’re finished.

You’re still confused. Do you want it to treat the input as a colour or as a triple of integers? A colour is actually a triple of “reals” in the range 0…1, although a common representation is as 8-bit unsigned fixed-point values. Note: fixed-point, not “integer”. Use of integers in client code is an artefact of most languages not having a distinct fixed-point type, meaning that you have to specify the representation rather than the value.

If you want to ensure that values which end up in the framebuffer are identical to the inputs, you have to create and bind a FBO whose colour attachment is a GL_RGB8UI (or GL_RGBA8UI) texture or renderbuffer, and access that via a uvec3 (or uvec4) “out” variable.

The default framebuffer isn’t guaranteed to be 24-bpp, and the implicit declarations of gl_FragColor and gl_FragData are as vec4 (i.e. floating-point). So simply passing colours into the shader as integers isn’t enough to ensure they stay integers.
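A rough sketch of that integer-framebuffer setup (my example; assumes GL 4.2+ for glTexStorage2D, and width/height are placeholders):

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);  // integer format

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// Matching fragment shader output (GLSL):
//   out uvec4 fragColor;
//   fragColor = uvec4(0u, 190u, 0u, 255u);  // passes through as integers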

In any case, if you’re trying to ensure that values are passed through as integers, you either have a rather “niche” requirement, or you’re doing something wrong. In most cases you would want to use floats for everything except storage (where fixed-point will save space).

you either have a rather “niche” requirement, or you’re doing something wrong
I’m doing it the wrong way. I’ll stick to floats then.

[QUOTE]In addition, I’m afraid that the memory on the client side, i.e. the buffer in RAM, would still be occupied (not freed) after the data has been transferred to video RAM…

I’m not really sure what you mean here. The “memory on the client side” is your memory. You allocated it, so you need to delete it when you’re finished. [/QUOTE]
I’m not aware of the inner workings of OpenGL/the client, but what I meant was: when you create, for instance, a vertex buffer as an array in the client and pass it to OpenGL. You’ll probably laugh reading this, but I speculate that OpenGL creates a corresponding array of bytes in video memory (it doesn’t matter when, but probably at the time of calling glBufferData()), so the point is that there would be 2 data sets with the same data.
But I haven’t seen anyone anywhere free up the array after passing it to OpenGL. And I wouldn’t do it anyway, since in most cases I only need to update the data without creating a new buffer.

You have the patience of a professor, just from the replies so far. It’s more than I expected, and I’ve learned a great deal from your replies.

I’m doing it the wrong way. I’ll stick to floats then.

I’m not sure why. You understood it back when I explained it. Then GClements came along with his “fixed-point” stuff (technically a legitimate phrase, but not correct OpenGL terminology), and now you don’t get it.

Just forget everything he wrote :wink:

I’m not aware of the inner workings of OpenGL/the client, but what I meant was: when you create, for instance, a vertex buffer as an array in the client and pass it to OpenGL. You’ll probably laugh reading this, but I speculate that OpenGL creates a corresponding array of bytes in video memory (it doesn’t matter when, but probably at the time of calling glBufferData()), so the point is that there would be 2 data sets with the same data.

When I said “your memory”, I didn’t mean “addressable on the CPU”. I meant “you told OpenGL to allocate it, you told OpenGL what to fill it with, and it won’t go away until you tell OpenGL to delete it.”

The money you put in a bank account is still your money, even though it’s not physically in your possession right now. A buffer object is like that.

But you are right that immediately after calling glBufferData, there are two copies of your data. One behind the pointer you gave to glBufferData, and one in your buffer object’s storage.
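For example (a sketch; buildVertexData is a hypothetical helper), the client-side copy can be freed as soon as glBufferData returns:

{
    std::vector<GLubyte> vertexData = buildVertexData();

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // OpenGL copies the data into the buffer object's storage here.
    glBufferData(GL_ARRAY_BUFFER, vertexData.size(), vertexData.data(),
                 GL_STATIC_DRAW);
}   // vertexData is destroyed here; the buffer object's copy lives on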

But I haven’t seen anyone anywhere free up the array after passing it to OpenGL.

By “the array”, I’m going to assume you’re talking about “the pointer passed to glBufferData”.

Well, most tutorial code doesn’t allocate that memory directly anyway. They usually use C-style arrays, like this:


glm::vec4 IAmAnArray[] = { ... };

Those are either stack variables or globals. Either way, C++ will clean them up for you.

You only need to be concerned about freeing memory that you directly allocated.

If you were talking about the buffer object itself, yes, most tutorials are kinda… stupid about that. When your application destroys the OpenGL context, all objects in OpenGL will also be destroyed. So most tutorials ignore the whole memory management thing in favor of just dropping it on the floor.

Of course, such tutorials forget that the entire point of a tutorial is to teach something. Like, for example, managing OpenGL memory objects.
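For completeness, explicit cleanup is just a matter of deleting what you created (a minimal sketch with placeholder object names):

// Delete OpenGL objects when you no longer need them,
// instead of relying on context destruction to mop up.
glDeleteBuffers(1, &vbo);
glDeleteVertexArrays(1, &vao);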

That’s just what colors look like when stored as normalized integer bytes. Colors actually lie on the floating-point range [0, 1] (for the purposes of this conversation). Which is exactly what the normalized byte range [0, 255] maps to.

Use of integers in client code is an artefact of most languages not having a distinct fixed-point type

I feel like I was born in a jungle and then saw civilization :slight_smile:

By “the array”, I’m going to assume you’re talking about “the pointer passed to glBufferData”.

Yes.
Now I do understand. Both about the format and the memory allocation.

Thank you again, and I forgot to thank GClements the first time.