glVertexAttribIPointer

Minor point, in case we ever refactor the API: glVertexAttribIPointer is a bit of a wart. If you have generic batch launch code (vtx attrib binds, etc.), you have to use a separate API for this, which doesn’t even have the same signature, so you’re more likely to eat the cost of an “if” per vtx attrib to select the right call (potential branch prediction miss).


  if ( integer )
    glVertexAttribIPointer( index, size, type, stride, ptr );
  else
    glVertexAttribPointer( index, size, type, normalized, stride, ptr );

Would have been better had this been handled like the normalized flag (i.e. added as a flag on a new API call): ignored if your input type isn’t a fixed-point value, but used if it is. Then you could just call one API regardless. Alternatively, handle this internally based on the shader input type (e.g. uvec4 vs. vec4) and not even have an arg.
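
Purely hypothetically, something like this (this entry point does not exist; it is just to show what the one-call version could look like):

  /* Hypothetical only - one entry point, where "integer" behaves like
     "normalized": ignored whenever it doesn't apply to the input type. */
  glVertexAttribPointerUnified( index, size, type, normalized, integer, stride, ptr );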

A middle ground solution isn’t as pretty – wrap AttribIPointer with a function that has the same signature as AttribPointer, and then select from a 2-element function-pointer array based on an integer flag. That avoids the “if”, but imposes another layer of function call on the IPointer path.
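
For what it’s worth, a rough sketch of that middle ground (the wrapper and table names are made up; assumes GLEW-style headers):

  #include <GL/glew.h>

  typedef void (APIENTRY *AttribPointerFunc)( GLuint index, GLint size, GLenum type,
                                              GLboolean normalized, GLsizei stride,
                                              const GLvoid *ptr );

  /* Same signature as glVertexAttribPointer; "normalized" is simply ignored. */
  static void APIENTRY vertexAttribIPointerWrap( GLuint index, GLint size, GLenum type,
                                                 GLboolean normalized, GLsizei stride,
                                                 const GLvoid *ptr )
  {
    (void) normalized;
    glVertexAttribIPointer( index, size, type, stride, ptr );
  }

  static AttribPointerFunc attribPointerTable[2];

  static void initAttribPointerTable( void )
  {
    attribPointerTable[0] = glVertexAttribPointer;     /* float / normalized path */
    attribPointerTable[1] = vertexAttribIPointerWrap;  /* pure integer path       */
  }

  /* ...then the batch launch loop needs no "if": */
  attribPointerTable[ integer ? 1 : 0 ]( index, size, type, normalized, stride, ptr );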

Ditto glVertexAttribIFormatNV()

:)

Useless parameters aren’t exactly pretty either (though there is some precedent for this in GL).

That would be a very good solution API-wise, imo.
The problem is, it would discard the HW feature of int -> float conversion (w/o normalization).

This could of course be collapsed into a single function that says what is needed explicitly, without O(n) boolean parameters – specify the target format in the function as well, with tokens encoding the conversion, say NORMALIZED_FLOAT, or whatever.

Or do it like DX does and allow a UNORM/SNORM ‘type’ (at least that’s my understanding of how it works there).
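
Roughly what that looks like on the D3D11 side, if I remember it right – the format token alone decides normalized-float vs. raw integer for the same 4 x ubyte data:

  #include <d3d11.h>

  D3D11_INPUT_ELEMENT_DESC elems[] =
  {
    /* fed to a float4 shader input, normalized to 0..1:  */
    { "COLOR",        0, DXGI_FORMAT_R8G8B8A8_UNORM, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    /* fed to a uint4 shader input, no conversion at all: */
    { "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT,  0, 4, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  };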

Well, you forgot that even if you don’t have to do the branching to choose between the two cases, the driver eventually will (besides dozens of other conditionals). I don’t say it’s nice that we have separate VertexAttribPointer, VertexAttribIPointer and VertexAttribLPointer, but justifying rewriting them because of the potential overhead of an “if” statement is, hmm, rather optimistic. The driver has orders of magnitude more work to do when any of these functions gets called, so eliminating that “if” won’t really help (even assuming that it won’t just turn into another “if” in the driver).

Though, point taken, it’s not nice that we have three versions of these functions. Actually, if VertexAttribPointer had been designed in the first place to handle integer and double attributes properly, the other entry points wouldn’t even be necessary.

VAOs should generally make this a moot point.
Other than that, as agnuep mentioned, drivers will generally have a lot more branching than this single “if”, so a minor optimization like this on the user-side front end can’t really improve performance.

Considering that drivers must check whether every GLenum parameter is legal, sometimes using pretty long switch statements, I wouldn’t bother about the performance of one “if”. The advice is… just use VAOs.

[QUOTE=kyle_;1237538]

That would be a very good solution API-wise, imo.
The problem is, it would discard the HW feature of int -> float conversion (w/o normalization).[/quote]

I don’t think so. Let me clarify. The idea would be that the convert-to-float Y/N behavior (currently selected by calling Pointer vs. IPointer) would be driven completely by whether the shader input attribute was declared float or not. For instance, float/vec2/… -> YES. uint/uvec2/… -> NO.

For the float input attribute case, you’d still have the “normalized” flag which would tell the driver/GPU whether to do the fixed-point normalize or not when converting to float (e.g. ubyte 0…255 -> 0…1). For the integer input attribute case, the “normalized” flag would be ignored.
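
To make that concrete with today’s API (the attrib locations and GLSL names here are arbitrary, just for illustration):

  /* GLSL: layout(location = 0) in vec4 color;
     -> convert ubytes to float, normalize 0..255 -> 0..1          */
  glVertexAttribPointer( 0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, (const void *) 0 );

  /* GLSL: layout(location = 1) in uvec4 blendIndex;
     -> no conversion; there is no "normalized" argument at all    */
  glVertexAttribIPointer( 1, 4, GL_UNSIGNED_BYTE, 0, (const void *) 0 );

  /* Under the proposal, the GLSL declarations alone would pick these paths. */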

This could of course be collapsed into a single function that says what is needed explicitly, without O(n) boolean parameters – specify the target format in the function as well, with tokens encoding the conversion, say NORMALIZED_FLOAT, or whatever.

Could. It just seems to me that it doesn’t make sense to (for instance) pass uint bits into a float var or float bits into a uint var, so why not make the convert-to-float operation automatic depending on the input type.

[QUOTE=Ilian Dinev;1237546]VAOs should generally make this a moot point.
Other than that, as agnuep mentioned, drivers will generally have a lot more branching than this single “if”, so a minor optimization like this on the user-side front end can’t really improve performance.[/QUOTE]
Probably true. Can’t do much about that besides lazy-state setting. Just trying to keep the app code driving it clean and lean. And re VAOs, I get more bang for the buck with bindless and streaming VBOs.

Yeah, VAOs are strangely not the best performers on important implementations, even after so many years of being a must-use object in core >_< . (Though vs. bindless it’s somewhat understandable.)

Bindless state (the stuff set by glVertexAttrib*FormatNV) is also VAO state. So there’s nothing preventing you from using both.
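
Something like this, if memory serves (NV_vertex_buffer_unified_memory; assumes gpuAddr/bufSize were obtained earlier via glMakeBufferResidentNV / glGetBufferParameterui64vNV):

  glBindVertexArray( vao );
  glEnableClientState( GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV );
  glEnableVertexAttribArray( 0 );
  glVertexAttribFormatNV( 0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof( float ) );
  glBufferAddressRangeNV( GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, gpuAddr, bufSize );
  glBindVertexArray( 0 );
  /* Per the point above, the format and address-range state is captured by
     the VAO, so later draws only need glBindVertexArray( vao ). */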

Do VAOs make bindless significantly slower?

Also, it’s interesting to follow the etymology of these sorts of things.

glVertexAttribIPointer descends from EXT_gpu_shader4. All of the “contact” listings come from NVIDIA persons, so they either developed almost entirely on their own or had a major part in it.

Generally, NVIDIA doesn’t do something API-wise without a good reason. And note that there is no equivalent framebuffer binding method for integer image buffers. Those deduce whether they’re for integer buffers from the fragment shader output values alone.

This leads me to suspect that there is, or at least was at the time, a good hardware-based reason for having the user specify this info up-front: that vertex processing hardware of the day needed to know whether it was dealing with integers or floats, and fetching this information from the shader object would have been slower or otherwise impaired performance.

Note, however, that D3D does not require this up-front knowledge. So if NVIDIA wanted to make this distinction explicit and up-front in the API, they probably had a good reason for it. Especially since they did this again with NV_shader_buffer_shared_memory, which was their own extension.

I can understand why they wanted to make a new function for integer parameters rather than create new type enumerators. I have no idea at all why they made a new function for doubles; that’s just dumb.

In the testing I did a few years ago, bindless alone was faster than VAOs + bindless.

(Can’t find the threads I posted about this using the search on the new forums to save my life though. Searching for keyword bindless by me reveals only 2 hits.)

And even that case was limited to static VBOs (the best case for VAOs – you don’t need to reload them!). However, I have largely flipped to streaming VBOs, and there VAOs are more an annoyance than anything else: many little pieces of memory causing cache misses, which you’re having to reload anyway. Might as well store the actual 64-bit “handles” in my batch object rather than a 32-bit handle to another memory block containing the 64-bit “handles”. The whole point of bindless is to get away from the binding (memory-chasing cache misses), and glBindVertexArray is not bindless.
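
i.e. conceptually something like this (made-up structs, just to show the difference in indirection):

  /* With a VAO: the batch holds a 32-bit name; the draw then chases
     driver-side memory to get at the real attrib state.               */
  typedef struct BatchVAO
  {
    GLuint vao;
    /* ... */
  } BatchVAO;

  /* With bindless: the batch holds the 64-bit GPU addresses directly,
     fed straight to glBufferAddressRangeNV at draw time.              */
  typedef struct BatchBindless
  {
    GLuint64EXT attribAddr[16];   /* one per enabled vertex attrib */
    GLsizeiptr  attribLen [16];
    /* ... */
  } BatchBindless;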

Simple. glVertexAttribPointer(GL_DOUBLE) converts all doubles into floats at the vertex fetch stage and maps to float shader inputs. glVertexAttribLPointer(GL_DOUBLE) doesn’t do any conversion.
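
In call form (the locations and the vec3/dvec3 pairing are just for illustration):

  /* shader: in vec3 posF;  - the doubles get converted to float at fetch: */
  glVertexAttribPointer( 0, 3, GL_DOUBLE, GL_FALSE, 0, (const void *) 0 );

  /* shader: in dvec3 posD; - no conversion, full double precision:        */
  glVertexAttribLPointer( 1, 3, GL_DOUBLE, 0, (const void *) 0 );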

Why not simply determine inside the driver if the variable is 64-bit wide? I mean, the index of the attribute is unique and its type is known. This shouldn’t be too much of a miracle. Also, there seems to be no error generated when you use the double version to specify data for a floating point vertex attribute.

I am not sure, but it seems to me the idea was to make vertex shader and vertex attrib state as independent of each other as possible, so that drivers don’t have to look at the vertex shader every time vertex attribs need to be set up.