Thread: Vertex specification with signed integers

  1. #1
    silikone, Junior Member Newbie (Join Date: Oct 2017; Posts: 13)

    Vertex specification with signed integers

    The wiki suggests using integers for certain vertex attributes in order to save bandwidth. However, given the asymmetry of two's-complement signed integers, doesn't this create unequal precision between positive and negative values when normalized? How would one then, for example, create a perfectly symmetrical object centered at the origin using integer 3D coordinates?
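    For reference, roughly the kind of setup I mean (a minimal sketch; the struct layout and attribute index are just illustrative):

    Code:
    /* Positions stored as 16-bit signed integers, normalized to
     * [-1, 1] by the GL when the attribute is fetched. */
    #include <GL/glew.h>

    typedef struct {
        GLshort position[3];  /* quantized object-space position */
        GLshort pad;          /* keeps the stride 4-byte aligned */
    } PackedVertex;

    void setup_packed_position_attrib(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* GL_TRUE requests signed-normalized conversion on fetch. */
        glVertexAttribPointer(0, 3, GL_SHORT, GL_TRUE,
                              sizeof(PackedVertex), (const void *)0);
        glEnableVertexAttribArray(0);
    }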

  2. #2
    Senior Member OpenGL Pro (Join Date: Jan 2007; Posts: 1,728)
    Quote Originally Posted by silikone
    How would one then, for example, create a perfectly symmetrical object centered at the origin using integer 3D coordinates?
    You don't. Like any other performance optimization, this involves a tradeoff, and in this case part of the tradeoff is a loss of precision. It's up to you to decide whether that loss is acceptable for your use case, or, looking at it another way, whether a theoretical performance gain (you may not even be bottlenecked on vertex bandwidth) is more important to you than full precision. Only you can answer those questions.

  3. #3
    Alfonse Reinheart, Senior Member OpenGL Lord (Join Date: May 2009; Posts: 5,932)
    Signed normalized integers do not have a bias. As of OpenGL 4.2, all signed normalized integers use an even distribution around 0, with MIN mapping to the same value as MIN+1 (both convert to -1.0).

    Indeed, even before then, they didn't have a bias. You simply couldn't represent 0 exactly.
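    In other words, for 16-bit values (a sketch of the two rules; the function names are mine):

    Code:
    #include <stdint.h>

    /* Pre-4.2 rule: f = (2c + 1) / (2^16 - 1).
     * Symmetric around 0, but c = 0 maps to 1/65535, never exactly 0. */
    float snorm16_to_float_pre42(int16_t c)
    {
        return (2.0f * (float)c + 1.0f) / 65535.0f;
    }

    /* 4.2+ rule: f = max(c / (2^15 - 1), -1.0).
     * c = 0 maps exactly to 0; -32768 and -32767 both map to -1.0. */
    float snorm16_to_float_42(int16_t c)
    {
        float f = (float)c / 32767.0f;
        return f < -1.0f ? -1.0f : f;
    }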

  4. #4
    silikone, Junior Member Newbie (Join Date: Oct 2017; Posts: 13)
    Quote Originally Posted by Alfonse Reinheart
    Indeed, even before then, they didn't have a bias. You simply couldn't represent 0 exactly.
    Ah yes, that seems to be the case. I wonder if omitting the normalization and doing some scaling magic would be a feasible workaround for pre-4.2 versions.
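    Something along these lines, perhaps (just a sketch, assuming GL 3.0+ for integer attributes; the attribute name is mine):

    Code:
    void setup_raw_int_attrib(void)
    {
        /* Note the I in glVertexAttribIPointer: the attribute arrives
         * in the shader as a raw ivec3 with no normalization applied,
         * so the conversion in the shader behaves the same on every
         * version. Stride 0 assumes tightly packed positions. */
        glVertexAttribIPointer(0, 3, GL_SHORT, 0, (const void *)0);
        glEnableVertexAttribArray(0);
    }

    /* Vertex shader (GLSL 1.30+): */
    static const char *vs_snippet =
        "#version 130\n"
        "in ivec3 packed_position;\n"
        "void main() {\n"
        "    // Reproduce the 4.2+ rule by hand; max() handles -32768.\n"
        "    vec3 p = max(vec3(packed_position) / 32767.0, vec3(-1.0));\n"
        "    gl_Position = vec4(p, 1.0);\n"
        "}\n";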

  5. #5
    Senior Member OpenGL Guru (Join Date: Jun 2013; Posts: 2,516)
    Quote Originally Posted by silikone
    Ah yes, that seems to be the case. I wonder if omitting the normalization and doing some scaling magic would be a feasible workaround for pre-4.2 versions.
    Just use the same conversion regardless of version; that's what most existing code will be doing (unintentionally).

    Any code which used signed normalised values prior to 4.2 and which didn't force the use of a specific version will exhibit subtly different behaviour depending upon the OpenGL version. That's assuming the implementations actually followed the specification, rather than the specification having been adjusted to match practice.

    Data generated for versions prior to 4.2 will result in the model becoming larger by a factor of 1.000015 and offset by -0.000015 when used with 4.2 or later (for 16-bit data, relating the old (2c+1)/65535 rule to the new c/32767 rule gives a scale of 65535/65534 and an offset of -1/65534). Except in the specific case where you have objects which are very close together relative to their size, the difference won't be noticeable. E.g. if the data was generated by measuring a physical object, the difference between versions corresponds to an accuracy of 15 microns for an object a metre across. Most physical measurements are nowhere near that accurate.
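    A quick numerical check of those figures (a sketch assuming 16-bit data and the two conversion rules given earlier in the thread):

    Code:
    #include <stdio.h>

    int main(void)
    {
        /* Express the 4.2+ result as a linear function of the pre-4.2
         * result: f_new = f_old * 65535/65534 - 1/65534. */
        double scale  = 65535.0 / 65534.0;  /* ~1.0000153 */
        double offset = -1.0 / 65534.0;     /* ~-0.0000153 */
        printf("scale = %.7f, offset = %.7f\n", scale, offset);

        /* Spot-check against the direct conversions. */
        for (int c = -32767; c <= 32767; c += 32767) {
            double f_old = (2.0 * c + 1.0) / 65535.0;   /* pre-4.2 */
            double f_new = c / 32767.0;                 /* 4.2+   */
            printf("c=%6d  old=%+.7f  new=%+.7f  old*scale+offset=%+.7f\n",
                   c, f_old, f_new, f_old * scale + offset);
        }
        return 0;
    }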
