64-bit integer vertex attribute

GLSL 4.20 doesn’t have native 64-bit integer support. We can get around this on Nvidia hardware (using NV_vertex_attrib_integer_64bit), but otherwise we have to emulate it. These types exist in OpenCL though (long and ulong), so why can’t we have them in the GL (especially now that we have double-precision floating point) if the hardware (seemingly) supports it?
Cheers

Who says that the hardware supports it? Just because something is in a language doesn’t mean that it exists in hardware. OpenCL may require long and ulong to be 64-bits, but that doesn’t mean a compiler for a specific platform won’t convert those into multiple integer registers and convert operations on them into multiple integer operations.
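
To make that concrete, here is roughly what such lowering looks like if you wrote it by hand in GLSL 4.00+ (just a sketch, not what any particular compiler emits): a 64-bit unsigned add built out of 32-bit operations, with the value held in a uvec2.


// Sketch only: 64-bit unsigned add emulated with 32-bit operations.
// The uvec2 layout (low word in .x, high word in .y) is a convention, not GL.
uvec2 add64(uvec2 a, uvec2 b)
{
    uint carry;
    uvec2 r;
    r.x = uaddCarry(a.x, b.x, carry); // low words; carry is 0 or 1
    r.y = a.y + b.y + carry;          // high words plus the carry
    return r;
}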

I am curious as to what rendering work you’re doing that needs 64-bit integers.

True. I rather wanted to mention that AMD’s Evergreen GPUs (and newer) and Nvidia GPUs support 64-bit types natively: we can manipulate double-precision floats, so why not long integers?

An example: I have an implementation of a quadtree (fusion, fission, and rendering of quadrilateral patches) that runs on the GPU, and I need a bitfield to store some information for each node. The number of subdivisions depends on the number of bits I have, and 32 bits aren’t enough in some cases, so I’m currently emulating a 64-bit key.
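
To give an idea, a stripped-down sketch of that kind of emulation (the helper names and the uvec2 layout are just my convention: low 32 bits in .x, high 32 bits in .y):


// Sketch of a 64-bit node key emulated with a uvec2.
uvec2 keySetBit(uvec2 key, uint bit)   // bit in [0, 63]
{
    if (bit < 32u) key.x |= (1u << bit);
    else           key.y |= (1u << (bit - 32u));
    return key;
}

bool keyTestBit(uvec2 key, uint bit)
{
    uint word = (bit < 32u) ? (key.x >> bit) : (key.y >> (bit - 32u));
    return (word & 1u) != 0u;
}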

I rather wanted to mention that AMD’s Evergreen GPUs (and newer) and Nvidia GPUs support 64-bit types natively: we can manipulate double-precision floats, so why not long integers?

Well, not all of those support doubles natively. Some of the lower-end ones use multiple instructions to emulate double-precision support.

More importantly, there is a difference between double-precision operations and 64-bit integer operations. Just because a double is 64 bits wide doesn’t mean you can use double math opcodes to do 64-bit integer arithmetic: a double only has a 52-bit mantissa, so it can’t even represent every 64-bit integer exactly.

I’m not saying that any one of them does or doesn’t. But hardware double-precision support does not imply 64-bit integer support.

Okay, now that it has been made clear that integers and floats are handled differently on GPU hardware (^^), I’d like to go back to the main subject:
native 64-bit integers in GLSL -> good or bad idea?

Some of my thoughts on the API modifications.
One solution would be to add the long keyword to the grammar before the base type, like we can do in C++ or like we already have for signed integers in OpenCL:


long int a64BitSignedInteger;
long uint a64BitUnsignedInteger;
long ivec2 a64BitSignedIntegerVector; 
long uvec3 a64BitUnsignedIntegerVector;
etc. 

On the GL side, add support for the GL_INT and GL_UNSIGNED_INT enums to the type parameter of the function


void VertexAttribLPointer( uint index, int size, enum type,
sizei stride, const void *pointer );

This would allow sending 64-bit integer vertex attributes.
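
On the application side, usage would then look something like this (hypothetical, of course: today VertexAttribLPointer only accepts GL_DOUBLE, and the buffer layout and names below are just an example):


/* Hypothetical usage of the proposal above: one 64-bit unsigned key per
 * vertex.  GL_UNSIGNED_INT is reused here to mean "64-bit unsigned" because
 * of the L entry point, as proposed; this is NOT valid in current GL. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/glcorearb.h>
#include <stdint.h>

static void setup_key_attrib(GLuint vbo, GLuint index)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);        /* tightly packed uint64_t keys */
    glEnableVertexAttribArray(index);
    glVertexAttribLPointer(index, 1, GL_UNSIGNED_INT,
                           (GLsizei)sizeof(uint64_t), (const void *)0);
}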

Finally, add functions for uniform variables. This is the tricky part: we could add a new suffix such as lu or li, or perhaps add new functions for 64-bit variables such as


void UniformL{1234}{if}( int location, T value );
void UniformL{1234}ui( int location, T value );
etc.
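
From the application side, the second option could look like this (these entry points do not exist anywhere; the prototypes are only written out so the sketch is self-contained, and the uniform name is invented):


/* Proposed entry points -- NOT real GL; declared here only for illustration. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/glcorearb.h>

void glUniformL1ui(GLint location, GLuint64 value);
void glUniformL1i (GLint location, GLint64 value);

static void set_64bit_uniforms(GLuint prog)
{
    GLint keyLoc = glGetUniformLocation(prog, "u_rootKey"); /* invented name */
    glUseProgram(prog);                 /* glUniform* affects the current program */
    glUniformL1ui(keyLoc, (GLuint64)1 << 40);  /* proposed 64-bit unsigned uniform */
}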

In my opinion, the second solution would make the GL a bit more consistent for 64-bit variable specification in GLSL, since a dedicated function would exist for vertex attribs AND uniforms.

I’d now like to hear some opinions on these ideas: what do you think?

I think it would be fine if there were hardware support. But otherwise, it’d be a lie.