Using GL_BYTE for normal array

I know that some ATI cards will fall back to software rendering if a 24-bit color array is used. Does anyone know whether using GL_BYTE as the normal array's data type will cause the same problem? I would prefer to do without a generic vertex attribute, if possible.

Erm, AFAIK it won't drop to software mode, but it will definitely be slow (I tested that about half a year ago).

It's better to use GL_UNSIGNED_BYTE with 4 components and unpack the data into the [-1, 1] range manually in the shader. Maybe you'll even find some useful data that you can store in the fourth component.
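
Something like this is what I mean (just a rough sketch, not tested code; "program" and "normals" are placeholders for whatever you already have, and I'm assuming the packed normals go into a generic attribute called "packedNormal"):

    /* C side: 4 unsigned bytes per normal; GL normalizes them to [0, 1]. */
    GLint normalLoc = glGetAttribLocation(program, "packedNormal");
    glVertexAttribPointer(normalLoc, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, normals);
    glEnableVertexAttribArray(normalLoc);

    /* Vertex shader: expand [0, 1] back to [-1, 1]. */
    const char *vsSnippet =
        "attribute vec4 packedNormal;                               \n"
        "varying vec3 normal;                                       \n"
        "void main() {                                              \n"
        "    normal = normalize(packedNormal.xyz * 2.0 - 1.0);      \n"
        "    /* packedNormal.w is still free for extra data */      \n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}                                                          \n";

With normalized set to GL_TRUE, the byte values 0..255 arrive in the shader as 0..1, so the * 2.0 - 1.0 gives you the usual [-1, 1] range for the normal.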

Jan.