Vertex specification with signed integers



silikone
11-27-2017, 10:51 AM
The wiki suggests using integers for certain vertex specifications in order to save bandwidth. However, given the asymmetric range of signed integers (e.g. -32768 to 32767 for 16 bits), does this not create an imbalance in precision between positive and negative values when normalized? How would one then, for example, create a perfectly symmetrical object with an origin of 0 using integer 3D coordinates?

mhagain
11-27-2017, 11:58 AM
How would one then for example create a perfectly symmetrical object with an origin of 0 by using integer 3D coordinates?

You don't; like any other performance optimization this involves a tradeoff, and in this case part of the tradeoff is a loss of precision. It's up to you to decide whether the loss of precision is acceptable for your use case, or - looking at it another way - whether a theoretical performance gain (you may not even be bottlenecked on vertex bandwidth) is more important to you than full precision. Only you can answer those questions.

Alfonse Reinheart
11-27-2017, 01:13 PM
Signed normalized integers do not have a bias. Post-OpenGL 4.2 (https://www.khronos.org/opengl/wiki/Normalized_Integer#Signed), all signed normalized integers use an even distribution around 0, with MIN clamped so that it maps to the same -1.0 as MIN+1.

Indeed, even before then, they didn't have a bias. You simply couldn't represent 0 exactly.

silikone
11-27-2017, 04:13 PM
Indeed, even before then, they didn't have a bias. You simply couldn't represent 0 exactly.

Ah yes, that seems to be the case. I wonder if omitting the normalization and doing some scaling magic would be a feasible workaround for <4.2

GClements
11-27-2017, 07:53 PM
Ah yes, that seems to be the case. I wonder if omitting the normalization and doing some scaling magic would be a feasible workaround for <4.2

Just use the same conversion regardless of version; that's what most existing code will be doing (unintentionally).

Any code which used signed normalised values prior to 4.2, and which didn't force the use of a specific version, will exhibit subtly different behaviour depending upon the OpenGL version. That's assuming that the implementations actually followed the specification, rather than the specification having been adjusted to match practice.

Data generated for versions prior to 4.2 will, when used with 4.2 or later, result in the model becoming larger by a factor of roughly 1.000015 and offset by roughly -0.000015 (for 16-bit components; the relative error is larger for 8-bit). Except in the specific case where you have objects which are very close together relative to their size, the difference won't be noticeable. E.g. if the data was generated by measuring a physical object, the difference between versions corresponds to an accuracy of 15 microns for an object a metre across. Most physical measurements are nowhere near that accurate.