GL_AMD_name_gen_delete is bad?

I was perusing the extension list and found http://www.opengl.org/registry/specs/AMD/name_gen_delete.txt

It doesn’t seem particularly useful. Why would you want to unify name generation?
Also, the classical glGenTextures and its relatives let you request many names at once. Why would you ever need that?

It would be better to have
textureID = glCreateTexture();
or
glCreateTexture(&textureID);

or if you want to unify
textureID = glCreateName(GL_TEXTURE);
or
glCreateName(GL_TEXTURE, &textureID);

Generating a texture ID is not cost-free. It probably involves some synchronization with the GL server thread, similar to any glGet* call. So it is better to generate 100 texture IDs in one go to amortize the cost. This is at least true with NVIDIA drivers.
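
A minimal sketch of that amortization pattern (the pool size and the NewTextureName helper are made up for illustration):

static GLuint Pool[100];
static GLsizei Used = 100;            // start "empty" so the first call refills

GLuint NewTextureName(void)
{
    if (Used == 100)                  // pool exhausted: one glGenTextures call
    {
        glGenTextures(100, Pool);     // pays the sync cost once per 100 names
        Used = 0;
    }
    return Pool[Used++];              // fast path: no GL call at all
}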

I think it opens up some opportunities, but so far this extension is useless.

A glGenNames capable of generating multiple names of any object types would make it much more relevant.

GLenum const Type[] = {GL_TEXTURE, GL_VERTEX_PROGRAM, GL_FRAGMENT_PROGRAM, GL_BUFFER, GL_BUFFER, GL_BUFFER};
GLsizei const Count = sizeof(Type) / sizeof(GLenum);

std::vector<GLuint> MeshNames(Count, 0);
glGenNamesSet(Count, Type, &MeshNames[0]);

glDeleteNamesSet(Count, Type, &MeshNames[0]);

This would build strong semantics for the OpenGL programmer based on program design, but also give the drivers a hint for possible optimisations.

There is something flawed in the drivers if getting a handle is a costly procedure.

The 4.1 spec says:

The command
void GenBuffers( sizei n, uint *buffers );
returns n previously unused buffer object names in buffers.
These names are marked as used, for the purposes of GenBuffers only, but they acquire buffer state only when they are first bound with BindBuffer, just as if they were unused.
A buffer object is created by binding a name returned by GenBuffers to a buffer target.

so all the driver is supposed to be doing is reallocating memory to increase the size of the relevant name table.
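
In code, that deferred creation is just the familiar gen-then-bind sequence:

GLuint buffer;
glGenBuffers(1, &buffer);                   // name reserved, no object yet
glBindBuffer(GL_ARRAY_BUFFER, buffer);      // buffer object created here
glBufferData(GL_ARRAY_BUFFER, 1024, NULL, GL_STATIC_DRAW);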
But NVIDIA has direct state access, and objects can be accessed before their first bind, so the driver either needs to pre-create all of the objects or add code to every DSA call to detect the first use of a name and create its object.
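
For example, EXT_direct_state_access lets you give a freshly generated name storage without ever binding it, so the object has to spring into existence inside the DSA call:

GLuint buffer;
glGenBuffers(1, &buffer);                                  // never bound
glNamedBufferDataEXT(buffer, 1024, NULL, GL_STATIC_DRAW);  // object must be created here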

A call to tell the driver how many slots it needs to reserve in each of its name tables, followed by individual ‘Create’ calls, would be more consistent with how object-oriented languages work and would fit well with the DSA concept, but it is unlikely the ARB would make such a fundamental change.
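
Something along these lines, with entirely hypothetical entry points just to illustrate the idea:

GLuint Textures[100];

glReserveNames(GL_TEXTURE, 100);       // hypothetical: grow the name table once

for (int i = 0; i < 100; ++i)
    Textures[i] = glCreateTexture();   // hypothetical: object exists immediately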

More important is having DSA in core and something like NV_shader_buffer_load to minimise cache misses by using GPU addresses instead of names as the handles.
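
With NV_shader_buffer_load that looks roughly like this: make the buffer resident, query its GPU address once, and use the address as the handle from then on:

GLuint buffer;
GLuint64EXT address;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, 1024, NULL, GL_STATIC_DRAW);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &address);
// 'address' now identifies the storage directly; no name-table lookup needed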

Except NV would need to check in every DSA call anyway, since EXT_dsa is written against 2.1, where the glGen* calls are optional.
I could be wrong though.
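
For reference, a 2.1 context allows this, so the driver already has to cope with names it never handed out:

GLuint buffer = 42;                     // never obtained from glGenBuffers
glBindBuffer(GL_ARRAY_BUFFER, buffer);  // still creates buffer object 42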

This ‘glGen*’ paradigm is silly imo. Also, I haven’t seen a single serious application that generates names in bulk.