Data type and performance

As is well known, we can use different data types (int, float, double) to define vertex coordinates.
The OpenGL driver/implementation keeps the scene description as a vertex/primitive list in its memory.

The question is: do the memory consumption and performance of the OpenGL driver depend on the data type used?

Thanks in advance

First of all, you should not use double for anything: doubles are not supported in hardware, will be converted to floats anyway, and therefore only cost performance with no precision benefit. Since all modern GPUs use IEEE floats internally, floats are usually a good choice. It is also possible that GPUs natively accept vertex attributes as signed/unsigned bytes (typically used for normals and colors), but this is only a guess on my side. If the GPU cannot read the type directly, the conversion will be performed in the driver, resulting in a performance loss.
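To make that concrete, here is a minimal sketch (assuming an OpenGL 2.0+ context loaded via GLEW, a bound VBO, and a shader that uses generic attribute locations 0 and 1; the struct and function names are just for illustration) of feeding positions as floats and colors as normalized unsigned bytes, both of which the GPU can typically consume without driver-side conversion:

```c
#include <GL/glew.h>
#include <stddef.h>   /* offsetof */

/* Hypothetical interleaved vertex layout: float positions, byte colors. */
typedef struct {
    GLfloat position[3];   /* 12 bytes - consumed natively by the GPU    */
    GLubyte color[4];      /*  4 bytes - normalized to [0,1] by the GPU  */
} Vertex;                  /* 16 bytes per vertex, nicely aligned        */

static void setup_attributes(void)
{
    /* Attribute 0: three floats, no normalization needed. */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), (void *)offsetof(Vertex, position));
    /* Attribute 1: four unsigned bytes, normalized to 0.0..1.0. */
    glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                          sizeof(Vertex), (void *)offsetof(Vertex, color));

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
}
```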

Yes, it does. With bigger data types, memory consumption increases and performance might decrease. There is an additional catch that is more significant: if you use a format that is not supported by the hardware, the driver will have to convert the data, which can significantly reduce performance. The hit is especially big if you use such a format together with vertex buffer objects.
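On the memory side, a rough sketch (assuming a GLEW-loaded context; the vertex count and function name are made up for illustration) of uploading tightly packed float positions into a VBO. The same mesh stored as GLdouble would need twice the storage and, on typical hardware, a driver-side conversion on top:

```c
#include <GL/glew.h>

#define VERTEX_COUNT 1024   /* illustrative mesh size */

static GLuint upload_positions(const GLfloat positions[VERTEX_COUNT * 3])
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* 1024 vertices * 3 components * 4 bytes = 12 KiB as GL_FLOAT;
       the same data as GL_DOUBLE would occupy 24 KiB. */
    glBufferData(GL_ARRAY_BUFFER,
                 VERTEX_COUNT * 3 * sizeof(GLfloat),
                 positions, GL_STATIC_DRAW);
    return vbo;
}
```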

I can second that. The performance hit for non-optimal types can be “much” larger when using VBOs (server-side vertex arrays) as compared to client-side vertex arrays.