More BS from Alfonse (seconded). When you start your senseless NVidia bashing, know that you’ve lost the argument.
It’s funny. The NVIDIA version of the extension is inconsistent with standard GLSL principles, namely that everything in a program gets baked in at link time. So it is very clear that NVIDIA did not value consistency with standard GLSL practice.
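To be concrete about what I mean by link-time baking: here is a rough sketch of the NV_transform_feedback model, where the set of recorded varyings is chosen after the program has been linked, by location rather than by name. This is from memory of the NV spec; prog, the helper function, and "worldPos" are just placeholders.

/* Sketch of the NV_transform_feedback model: pick recorded varyings post-link. */
GLuint prog = create_and_compile_program();   /* placeholder helper */

/* Flag the varying as active before linking so the compiler keeps it around. */
glActiveVaryingNV(prog, "worldPos");
glLinkProgram(prog);

/* After linking: query a location and hand it straight to the driver.
   No relink is needed to change this selection later. */
GLint loc = glGetVaryingLocationNV(prog, "worldPos");
glTransformFeedbackVaryingsNV(prog, 1, &loc, GL_INTERLEAVED_ATTRIBS_NV);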
But if you want more evidence, look no further than EXT_separate_shader_objects. This extension cannot be used with user-defined varyings; it must use the built-in varyings, virtually all of which are not available in core GLSL 1.40 or above. So NVIDIA created an extension that is not just inconsistent with existing GLSL practice, it is 100% incompatible with it.
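For anyone who hasn’t read that spec, here is roughly what it forces you to write (the shader strings are my own illustration; the entry points are from the EXT spec). Because each stage lives in its own program object, data has to flow between stages through built-in varyings like gl_TexCoord, which are exactly the things core GLSL 1.40 removed:

/* Each stage is its own single-stage program. */
static const char *vs_src =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;   /* built-in varying; no user-defined 'out' allowed */\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

static const char *fs_src =
    "uniform sampler2D tex;\n"
    "void main() { gl_FragColor = texture2D(tex, gl_TexCoord[0].xy); }\n";

GLuint vs_prog = glCreateShaderProgramEXT(GL_VERTEX_SHADER, vs_src);
GLuint fs_prog = glCreateShaderProgramEXT(GL_FRAGMENT_SHADER, fs_src);

/* Mix and match stages from independent programs at draw time. */
glUseShaderProgramEXT(GL_VERTEX_SHADER, vs_prog);
glUseShaderProgramEXT(GL_FRAGMENT_SHADER, fs_prog);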
NV_vertex_array_range. It exposes memory in a way that is fundamentally antithetical to how OpenGL has ever operated. Something similar could be said for NV_shader_buffer_load and NV_vertex_buffer_unified_memory.
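If that sounds abstract, this is roughly what using NV_vertex_array_range looks like (Windows entry points; the size, frequency hints, and vertex data names are made up). The extension hands the application a raw pointer into driver-allocated memory and sources vertex data straight out of it, with the app responsible for synchronization, rather than going through any GL-managed object:

/* Ask the driver for 1 MB of AGP/video memory; the hints here are illustrative. */
void *mem = wglAllocateMemoryNV(1 << 20, 0.0f, 0.0f, 1.0f);

/* Tell GL to source vertex arrays directly out of that raw memory range. */
glVertexArrayRangeNV(1 << 20, mem);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

/* The application writes vertices into `mem` itself and flushes before drawing. */
memcpy(mem, vertices, num_vertices * 3 * sizeof(GLfloat));   /* vertices/num_vertices are placeholders */
glFlushVertexArrayRangeNV();

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, mem);        /* a plain client pointer, not a buffer object */
glDrawArrays(GL_TRIANGLES, 0, num_vertices);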
I can keep going. You may like these extensions. I may like these extensions. But that doesn’t change the fact that NVIDIA has a history of making extensions that do not do things the way OpenGL has done them. This is not a statement about whether I think NVIDIA’s way is better or worse; it is simply pointing out the truth.
but is trumped by efficiency and flexibility.
Both of you seem to mistake facts for value judgments. All I said was that the reason the ARB used this method for transform feedback was that it was consistent with existing GLSL practice. I did not state or imply whether I think that is a good idea, or whether consistency could or could not, in this instance, be trumped by other concerns. I am simply stating what their reasons most likely were for implementing it as they did.
NVidia’s GL implementation rocks. It rocks in MS-Windows. It rocks in Linux. It works. The people behind it are wonderful, bugs get fixed when I have reported them. They have, in my eyes, pushed GL forward.
It’s funny how nobody said anything contrary to that. I don’t know how you get from “NVIDIA has written quite a few extensions that are inconsistent with existing OpenGL practice” to “NVIDIA is crap.”
Yes, I like NVIDIA’s GL implementation. However, I also live in a world where ATI has plenty of good hardware out there too, and a pretty good OpenGL implementation to go with it. So vendor lock-in is not something I’m interested in.
there is absolutely nothing inconsistent about changing what varying are to be recorded.
I found this in the specification:
Yes, that does make the current transform feedback inconsistent with existing GL practice. But simply removing this prohibition would make it consistent with GL practice. Taking it to where NVIDIA did would be similarly inconsistent, if a bit more flexible.
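For contrast with the NV sketch above, this is the current ARB/core model that the prohibition comes from: the recorded-varying list is program state that only takes effect at link time, so changing what gets recorded between passes means paying for a relink. (prog and the varying names are placeholders.)

const char *pass1[] = { "worldPos" };
glTransformFeedbackVaryings(prog, 1, pass1, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                         /* the setting does nothing until this link */
/* ... capture pass 1 ... */

const char *pass2[] = { "worldPos", "normal" };
glTransformFeedbackVaryings(prog, 2, pass2, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                         /* relink again just to change what gets recorded */
/* ... capture pass 2 ... */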