Unnormalized unsigned bytes as vertex attributes

Hi,

My question is: is it allowed to call VertexAttribPointer with a type of GL_UNSIGNED_BYTE but with the normalized flag set to GL_FALSE? If yes, are the attributes then usable from GLSL as e.g. a vec4, in the same way as if they had been normalized? I think the answer should be yes, since you couldn’t use them as e.g. a uvec4.

I ask because if I had to pass e.g. bone indices as vertex attributes, I would have several possibilities:

  1. Pass them as unsigned integers and use them in GLSL as uvec4.
  2. Pass them as normalized unsigned bytes, scale the vec4 up to 0-255 and then convert it to a uvec4.

The disadvantage of the first is that it consumes more memory and bandwidth; the disadvantage of the second is that it is a bit nasty and needs extra computation plus a conversion back to integer. I still think the second is better from a performance point of view, as computational power is usually less critical than bandwidth.
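To make option 2 concrete, I imagine something like this (an untested sketch; the attribute location, stride/offset and the GLSL names are made up for illustration):

```c
#include <GL/glew.h>  /* or any loader exposing GL 3.x entry points */

/* Option 2 sketch: 4 bone indices per vertex stored as normalized
   unsigned bytes, so they arrive in the shader as floats in 0.0..1.0. */
void setup_option2(GLuint boneIndexLoc, GLsizei stride, const void *offset)
{
    glVertexAttribPointer(boneIndexLoc, 4, GL_UNSIGNED_BYTE,
                          GL_TRUE, /* normalized: 0..255 becomes 0.0..1.0 */
                          stride, offset);
    glEnableVertexAttribArray(boneIndexLoc);

    /* The GLSL side would then have to undo the normalization:
         in vec4 inBoneIndices;                            // 0.0 .. 1.0
         uvec4 bones = uvec4(inBoneIndices * 255.0 + 0.5); // back to 0..255
    */
}
```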

If the above-mentioned possibilities are valid, there would be one or two more options:

  3. Pass them as unnormalized unsigned bytes and use them in GLSL as vec4. This way the upscale can be avoided, but I would still need to convert to uvec4 (see the sketch after this list).
  4. Pass them as unnormalized unsigned bytes and use them in GLSL as uvec4.
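For illustration, option 3 would presumably be set up like this (an untested sketch with assumed names; whether GL actually accepts this combination is exactly my question):

```c
#include <GL/glew.h>  /* or any loader exposing GL 3.x entry points */

/* Option 3 sketch: same byte data, but with normalized = GL_FALSE the
   byte value 37 would arrive in the shader as 37.0, so no upscale. */
void setup_option3(GLuint boneIndexLoc, GLsizei stride, const void *offset)
{
    glVertexAttribPointer(boneIndexLoc, 4, GL_UNSIGNED_BYTE,
                          GL_FALSE, /* unnormalized: 0..255 stays 0.0..255.0 */
                          stride, offset);
    glEnableVertexAttribArray(boneIndexLoc);

    /* GLSL side:
         in vec4 inBoneIndices;               // 0.0 .. 255.0
         uvec4 bones = uvec4(inBoneIndices);  // bytes are exact in float
    */
}
```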

I’m pretty sure that #4 is invalid, and #3 also seems very unlikely to be valid; however, I did not find anything in the spec about such situations (most probably because I wasn’t reading it carefully enough).

So my question is whether these are valid, and how you would pass the bone indices if you were me.

I did not receive any comments on my questions.
Is this question so stupid that it is not worth answering, or has simply nobody tried any of these?

Definitely not stupid. I was assuming the latter (I haven’t done this before either). Also, it hasn’t even been two days; not everyone is on the board every day.

From the spec (and inferring a bit), my guess is that if you want an integer to go in without any float conversion, you use VertexAttribIPointer and ivec*/uvec* in the shader. Otherwise, just use VertexAttribPointer / vec*. And for doubles, VertexAttribLPointer / dvec*. But as to the tradeoffs in practice… (?)
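In code, I’d expect that mapping to look something like this (untested sketch; the attribute locations are made up):

```c
#include <GL/glew.h>  /* or any loader exposing GL 3.x/4.x entry points */

/* Hypothetical attribute locations, purely for illustration. */
enum { INT_LOC = 0, FLOAT_LOC = 1, DOUBLE_LOC = 2 };

void setup_attribs(void)
{
    /* Integers passed through untouched: shader input is ivec4/uvec4. */
    glVertexAttribIPointer(INT_LOC, 4, GL_UNSIGNED_INT, 0, (void *)0);

    /* Converted to float (normalized or not): shader input is vec4. */
    glVertexAttribPointer(FLOAT_LOC, 4, GL_FLOAT, GL_FALSE, 0, (void *)0);

    /* Doubles (GL 4.1 / ARB_vertex_attrib_64bit): shader input is dvec4. */
    glVertexAttribLPointer(DOUBLE_LOC, 4, GL_DOUBLE, 0, (void *)0);
}
```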

My question is: is it allowed to call VertexAttribPointer with a type of GL_UNSIGNED_BYTE but with the normalized flag set to GL_FALSE? If yes, are the attributes then usable from GLSL as e.g. a vec4, in the same way as if they had been normalized?

Yes, that is my understanding. In the spec, search for VertexAttribPointer and look at the table in that section. Also, read the next-to-last paragraph on the previous page, in particular the part starting at “can be handled in one of three ways”.

  1. Pass them as unsigned integers and use them in GLSL as uvec4.
  2. Pass them as normalized unsigned bytes, scale the vec4 up to 0-255 and then convert it to a uvec4.
  3. Pass them as unnormalized unsigned bytes and use them in GLSL as vec4. This way the upscale can be avoided, but I would still need to convert to uvec4.
  4. Pass them as unnormalized unsigned bytes and use them in GLSL as uvec4.

Based on spec reading only (and ignoring the “unnormalized” mention in your last bullet), these options all look valid.

1 = VertexAttribIPointer / uvec4 input
2 = VertexAttribPointer / vec4 input / normalize = true
3 = VertexAttribPointer / vec4 input / normalize = false
4 = VertexAttribIPointer / uvec4 input

The first and last are effectively the same. If by #4 you really, literally meant using VertexAttribPointer with normalize = false to feed a uvec4 input, then I don’t get the impression GL will accept that (but I haven’t tried it). I mean, why would you ask it to convert to float only to turn around and convert back to int when populating the input in the shader?

Unquestionably, option #1 “looks” like the cheapest with the fewest needless conversions and wasted shader math. But in practice… (?)
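Concretely, for #1 I’d expect something like this (again an untested sketch; names are placeholders):

```c
#include <GL/glew.h>  /* or any loader exposing GL 3.x entry points */

/* Option 1 sketch: 32-bit unsigned ints consumed directly as a uvec4.
   No conversions at all, but 16 bytes per vertex just for bone indices. */
void setup_option1(GLuint boneIndexLoc, GLsizei stride, const void *offset)
{
    glVertexAttribIPointer(boneIndexLoc, 4, GL_UNSIGNED_INT, stride, offset);
    glEnableVertexAttribArray(boneIndexLoc);

    /* GLSL side:  in uvec4 inBoneIndices; */
}
```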

Yeah, sorry. I’ve been doing OpenGL for more than ten years, but I only started actively participating in the forum community in the last few months, and since then I’ve been pretty addicted to it, so sorry if I was too impatient.

I am aware of that. After revisiting the specification based on your response, I figured out that I had missed table 2.5, which states that unsigned bytes can be used with VertexAttribIPointer. That actually answers my question.

My whole misunderstanding came from assuming that VertexAttribIPointer allows only 32-bit signed and unsigned integer arrays, but it seems I got that completely wrong.
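So if I read table 2.5 correctly, the whole bone-index case collapses to a single call; something like this (untested, names are mine):

```c
#include <GL/glew.h>  /* or any loader exposing GL 3.x entry points */

/* Resolved approach sketch: 4 bone indices per vertex stored as unsigned
   bytes and consumed directly as integers -- small footprint, no float
   round trip. Attribute name and location are assumptions. */
void setup_bone_indices(GLuint boneIndexLoc, GLsizei stride, const void *offset)
{
    glVertexAttribIPointer(boneIndexLoc, 4, GL_UNSIGNED_BYTE, stride, offset);
    glEnableVertexAttribArray(boneIndexLoc);

    /* GLSL side:  in uvec4 inBoneIndices;  // values arrive as 0..255 */
}
```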

I usually read just the extension specifications, as they concentrate more on the actual functionality and are easier to read. However, that means you don’t see the big picture.

Anyway, thanks for the response.

That makes sense.

I usually read just the extension specifications, as they concentrate more on the actual functionality and are easier to read.

Same here :wink: I got into that habit years ago, partly for the same reason, but also because back then the GL core wasn’t moving much. Now, however, with so much going core so fast, I start with the GL spec rather than searching the extension specs at least half the time.

The one thing I now really avoid using the spec for (and so nearly always search/grep the extension specs for first) is finding GL symbols. For instance (random example): if you bring up the GL 4.1 core spec in acroread and search for COMPRESSED_TEXTURE_FORMATS, you come up with … nothing! Now look at pg. 385. It’s right there in the middle of the page! The search won’t match the underscores in those tables for some reason. And I don’t know of a regex PDF search tool that would let me kludge around this.

Yes, I ran into the same problem a week or two ago. I could not find VERTEX_ARRAY_BINDING and wrote to the forum to ask why, and Alfonse explained to me that the PDF search simply does not work for those tables.

Doh, try replacing underscores with spaces.

Ah! Good tip. Thanks! All is not what it seems, apparently…