View Full Version : Are unsigned byte indices supported in hardware?
11-03-2010, 08:16 AM
Does anyone know if GL_UNSIGNED_BYTE indices are supported natively in hardware? Or are they converted to GL_UNSIGNED_SHORT by the driver, so they don't actually result in any memory savings?
D3D 11 only supports unsigned shorts and ints. See IASetIndexBuffer (http://msdn.microsoft.com/en-us/library/ff476453%28v=VS.85%29.aspx). Perhaps D3D doesn't support unsigned bytes because they are not supported in hardware, and GL supports them because historically, GL always has.
11-03-2010, 08:48 AM
I am not sure we can really speak of memory savings when comparing GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT. GL_UNSIGNED_BYTE implies that the number of vertices is so small that even if we use GL_UNSIGNED_SHORT where GL_UNSIGNED_BYTE would do, the extra bandwidth cost is insignificant.
All in all, I guess it's converted, but I don't really know.
11-03-2010, 09:02 AM
I knew someone was going to say that ;).
I agree the savings are usually insignificant, but I am writing something (text, not code) and want to be as precise as possible.
It is good practice to always use the smallest datatype, but ultimately the conversion could add overhead, especially if the indices are dynamic.
Sounds like the safe bet is to go with unsigned shorts and not bother with bytes. Anyone agree/disagree?
11-03-2010, 09:08 AM
Sounds like the safe bet is to go with unsigned shorts and not bother with bytes
That's what I would have said. :p
Also, I don't really agree with "It is good practice to always use the smallest datatype". Alignment is really important, so I would add "subject to the appropriate alignment".
11-03-2010, 09:11 AM
Across the range of AMD graphics cards, ushorts are the best option for both performance and memory savings.
11-03-2010, 09:14 AM
How are ubytes handled on AMD graphics cards? Cast to ushort, or something else?
11-03-2010, 03:02 PM
So it sounds like if someone is designing an engine, they probably shouldn't even expose unsigned byte indices, since it is unlikely that there are _any_ performance or memory benefits. They can always be added to the engine later without breaking backwards compatibility.
I see this as different than not exposing something like double precision vertex attributes, which are supported in hardware now. I doubt there will ever be motivation to add hardware support for unsigned byte indices.
11-03-2010, 05:23 PM
I've found as a general rule of thumb that if it's not supported by D3D then it's not supported in hardware. This is a result of D3D's approach of only supporting functionality that's in the hardware and not software-emulating anything. It's not always the case, of course, but it does apply much more often than not.
Of course, there may be some hardware that does support GL_UNSIGNED_BYTE indices (the iPhone is one possibility that springs to mind), but for general usage GL_UNSIGNED_SHORT should always be preferred.
A secondary bonus of preferring GL_UNSIGNED_SHORT (and in this case it's vs 32-bit indexes) is that it will prevent you from going over the maximum hardware-supported index count most of the time. There is still hardware out there where this is limited, and OpenGL will fall back to software emulation if you do go over it.
Powered by vBulletin® Version 4.2.2 Copyright © 2016 vBulletin Solutions, Inc. All rights reserved.