OK! Maybe you're right, but I still think it's useful as an optimization, because sometimes you need per-triangle data or the like, and in such cases you either have to store redundant data or use something like a buffer texture indexed by the primitive ID, which I suspect would hurt performance a bit. Anyway, if NVIDIA had support for it, maybe it's not a big deal to bring it back, or someone should come up with something similar, because my rendering engine could really take advantage of it.

Originally Posted by Alfonse Reinheart
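For what it's worth, the buffer-texture fallback I mean looks roughly like this in a fragment shader. This is only a minimal sketch assuming GL 3.2 / GLSL 1.50; the uniform and output names are made up:

```glsl
#version 150

// Hypothetical buffer texture holding one RGBA32F texel per triangle,
// filled on the CPU side via a TBO (glTexBuffer with GL_RGBA32F).
uniform samplerBuffer triangleData;

out vec4 fragColor;

void main()
{
    // gl_PrimitiveID is the index of the triangle being rasterized,
    // so it can address the per-triangle record directly.
    vec4 perTriangle = texelFetch(triangleData, gl_PrimitiveID);
    fragColor = perTriangle;
}
```

It works, but every fragment pays for the extra fetch, which is why I'd still prefer a real per-primitive attribute path.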
Hmm, I don't think so. New OpenGL versions shouldn't just expose already-existing hardware functionality; they should also look forward. Otherwise OpenGL will always lag behind DX, releasing core functionality only after the hardware for it is already out.
Yes, but DX11-class hardware will support DX10.1 as well, and it will be out soon, so it's time for OpenGL to support such features (like tessellation, for example, which ATI has supported for a long time).
My vision for OpenGL's future is that the specification should already be out when hardware supporting it appears. This is achievable, because the ARB is a strong cooperation between vendors. Microsoft has already achieved this, so why shouldn't OpenGL?
I have concerns about the attitude of most people working with OpenGL, because they are NVIDIA supporters. Of course, I know why this is so: NVIDIA has always had the best OpenGL support. Maybe I'm the only one who believes ATI/AMD can also be an excellent choice for OpenGL. Anyway, if we only care about what NVIDIA supports and ignore at least the second-biggest player in the desktop 3D world, then OpenGL will just become NVIDIA's "proprietary" API.
Off-topic, but two more points for ATI: they have quite good drivers nowadays, and they really have more raw horsepower, which I appreciate when running heavyweight shaders.