VAO in GL4.x?



Osbios
06-02-2014, 01:16 PM
So currently I'm trying to get an overview of all the ways to specify vertex data layouts, and I still don't know how much to use or avoid VAOs.
I'm trying to make a class with a unified interface. I would use VAOs for the layout only and then switch buffers all the time, meaning I would not make a VAO for each layout-buffer combination.
I think that would kill most of the performance benefits VAOs may have, except with ARB_vertex_attrib_binding, where I can switch all buffer bindings with a single call.
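The separation I mean can be sketched like this, assuming a GL 4.3 context (or ARB_vertex_attrib_binding); the attribute indices, the interleaved vec3-position/vec2-uv layout, and the name `someBuffer` are purely illustrative:

```c
/* Sketch: one VAO stores only the vertex FORMAT; the buffer behind it
 * is swapped per draw with a single call. Assumes a current GL 4.3
 * context; "someBuffer" is a placeholder buffer object name. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

/* Layout, described once: attribute 0 = vec3 position, attribute 1 = vec2 uv,
 * both sourced from binding point 0 of the VAO. */
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);

glEnableVertexAttribArray(1);
glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, 3 * sizeof(float));
glVertexAttribBinding(1, 0);

/* Later, per draw: replace the buffer at binding point 0 in one call,
 * without touching the format state. */
glBindVertexBuffer(0, someBuffer, 0, 5 * sizeof(float));
```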

Also, I read that beginning with some core GL version like 3.1 or 4.0 you must use VAOs, but I could not find it in the specs so far. Maybe somebody could point me to it?

So should I change my design to use a VAO for each buffer combination, or only use them if ARB_vertex_attrib_binding is available? Or am I forced to use them with some core versions?

reto.koradi
06-02-2014, 11:51 PM
Using VAOs is required in the core profile. This is listed in Appendix E of the specs, starting with 3.0: "The default vertex array object (the name zero) is also deprecated."
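In practice that means a core-profile app needs at least one VAO bound before any vertex specification or draw call; a minimal sketch (variable name illustrative):

```c
/* In a core-profile context, object 0 is not a valid VAO: drawing or
 * calling glVertexAttribPointer with VAO 0 bound generates
 * GL_INVALID_OPERATION. Create and bind one dummy VAO at startup. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
```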

Osbios
06-04-2014, 07:16 PM
Apparently AMD drivers like VAOs way more than NVIDIA's. NVIDIA goes so far as to still support the 0 VAO even in core. But it seems I don't need to care about that anymore!

According to Valve's report about the L4D2 GL port, VAOs were slower on ALL drivers than using ARB_vertex_attrib_binding without them. My Google research tells me that ARB_vertex_attrib_binding seems to be supported by all drivers, meaning Intel (at least for Iris since March) and Mesa, too. I actually didn't expect that. This wide support of the extension resolves so many headaches and makes development much more elegant. :)

I will only use a single VAO instead of the 0 binding if the context requires it.

Addition:
Looks like Ivy Bridge, but not Sandy Bridge, GPUs support the extension in the case of the closed Intel driver. (http://downloadmirror.intel.com/23713/eng/ReleaseNotes_GFX_3496_32.pdf)
The older GPUs are so weak that I don't think using plain old glVertexAttribPointer will hurt performance that much on the CPU side.
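For those older GPUs the fallback is the classic path, where the pointer call captures both the layout and the currently bound buffer, so every buffer switch repeats the calls; a hedged sketch with an assumed interleaved vec3-position/vec2-uv layout and a placeholder buffer name `buf`:

```c
/* Fallback for drivers without ARB_vertex_attrib_binding:
 * glVertexAttribPointer records the buffer bound to GL_ARRAY_BUFFER at
 * call time, so switching buffers means re-specifying the pointers. */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)(3 * sizeof(float)));
```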