An open source GLU replacement library, much more modern than GLU:
float matrix[16], inverse_matrix[16];  /* 4x4 matrices, column-major */
glhTranslatef2(matrix, 0.0f, 0.0f, 5.0f);
glhScalef2(matrix, 1.0f, 1.0f, -1.0f);
/* inverse_matrix must be computed before it is uploaded */
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Very true and accurate. I think the whole point of the core profile wasn't to make things easier for OpenGL users but for OpenGL implementors.
In my opinion, neither driver implementers nor even users should be involved in the specification. It should be designed and maintained only by academia: computer graphics professors and some well recognized names in the industry.
I guess menzel needs to get another round ...
Originally Posted by Janika
In my opinion, driver implementers should not be involved in the specification, or even users. It should only be designed and maintained by Academia, computer graphics professors, and some well recognized names in the industry.

If you'd let the professors I got to know specify OpenGL, then it's goodbye OpenGL. Leaving the implementors out of the picture is simply not a good idea. At all. If some professor who has never written software that surpasses the simple examples used in lectures, and I bet there are a lot of those out there, is put in charge, how is he supposed to anticipate whether a specification can actually be implemented? IMHO, most people in academia (unless they have experience as a GPU driver developer) simply don't have the experience you need. I'm not saying that there aren't capable people in academia, but specifications on that scale need to be done by people who have experience with software engineering on that scale. Writing a little shadow mapping demo or something to show stupid students how it's done in principle isn't going to cut it. I can say without a doubt that at my university most of us surpassed the OpenGL facilities of our professors before our studies were over, and I suspect this is true in many cases, if not most. Does that make us fit for specifying or implementing OpenGL? Hell no.
You know which people bear well recognized names in the industry? The promoting members of the ARB ...
What does knowledge of GPU details have to do with API design? You are mixing two different things: drivers and APIs. When we talk about API design, we talk at a higher, more abstract level that should serve a certain purpose regardless of hardware details. And remember, not all hardware works the same. An API is exposed functionality versus how it works underneath. Of course there are other aspects, like what's possible on current hardware, but these are too general to be limited to driver developers alone.
And remember, we base new designs on existing designs; otherwise how could we have a "future suggestions" forum where non-driver-developers suggest many useful and doable features (based on what we already know from another API on the same hardware)? The point is: let's have academia, researchers, and PhDs with a lot of software engineering experience come up with a nice, flexible design, and I'm not talking about your professors or mine. There are scientists in the field who can come up with the best designs ever!!! Rule of thumb: never let a hardware engineer do a software engineer's job, though the opposite is possible.
Originally Posted by Janika
What does knowledge of GPU details have to do with API design?

Performance. Just look at OpenGL 1.1 as an example.
How many features of 1.1 were never implemented in consumer hardware (at least, until they could be implemented via shaders internally)? You've got accumulation buffers and selection buffers, at the very least; any attempt to use these features was basically the kiss of death as far as performance was concerned. In the early days, falling off the "fast path" was ridiculously easy in OpenGL.
Indeed, it wasn't until 1999 that OpenGL implementations could do transform and lighting (T&L) in hardware; until then, all of that T&L was done on the CPU, where skilled programmers could probably do it just as well if not faster. From OpenGL's initial release until the GeForce 256, vertex T&L was a liability to performance, not a benefit.
Look at how tortured a bad API design can get. The glTexEnv nonsense was extended and extended until, at the combiner level, it was just a really horrible way to specify an assembly language for fragment shading. At one point in time, glTexEnv was a good idea, but it wasn't very extensible and eventually led to horrible things.
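For flavor, here is a sketch of the state juggling that even a trivial "texture times vertex color" required once glTexEnv grew the combiner machinery (enums from the ARB_texture_env_combine era, later core in OpenGL 1.3; this needs a current GL context and is shown only for illustration):

```c
#include <GL/gl.h>

/* Fixed-function combiner setup: modulate the bound texture with
   the incoming fragment color.  One state call per operand. */
void setup_modulate_combiner(void)
{
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,  GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    /* In a fragment shader, all of the above is a single line of
       math: sample the texture and multiply by the color. */
}
```

And that is the easy case; chaining several texture units through GL_PREVIOUS is where it turned into an assembly language written in enums.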
Any hardware-based API needs input from the hardware makers themselves. You always want the lowest level API to be as close to the hardware as possible while still providing a reasonable abstraction. The people who best know how to make an abstraction are the people who make the hardware. Or at least have detailed knowledge of it.
OpenGL has been there before and it didn't work.
Separation of OpenGL specification from how hardware actually worked directly led to the API becoming more and more irrelevant and baroque, and was also a huge contributing factor to the rash of GL_VENDOR1_do_it_this_way, GL_VENDOR2_do_it_that_way and GL_VENDOR3_do_it_t'other_way extensions to meet the requirements of people who were actually developing and releasing programs that used OpenGL. At the same time as that was going on, D3D was tying itself closer and closer to how hardware worked and having OpenGL's ass on a plate as a result.
What you're proposing would be a return to the days when drivers would advertise GL_ARB_texture_non_power_of_two but not actually support it in hardware. Yes, this really happened, and you had no way of knowing until you got that sudden crunch back to under 1 fps. You were obviously not around back then, but let me safely assure you: nobody who was wants to go back there.
So, given that it didn't work before, given that the D3D approach is what has already been established in the field to be what actually works (and if you think GL core context creation is ugly you should see what D3D7 and earlier were like...), given all the good work done in the past few years to put OpenGL back into a position where it can at least try to be competitive again, and given that you completely fail to provide any compelling reason why an approach that didn't work before should be any different now (and why all that good work should be undone), it has to be said that your position on this lacks any substance.
And who says that the hardware designers do the driver implementation? Do you think NVIDIA, AMD, Intel and so on only employ hardware specialists? You need both parties.

Originally Posted by Janika
There are scientists in the field who can come up with the best designs ever!!!

I seriously doubt that this is true. The best designs ever come from people who have years or decades of experience designing and implementing industrial-strength software, not out of the ivory tower ...
A question from a novice who knows nothing and wants to learn from experts.
If I'm linking to OpenGL32.DLL at run time using LoadLibrary, how can I use wglGetProcAddress, or load other core profile functions and extensions?
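The usual pattern is a sketch like the following (assuming a GL context is already current, since wglGetProcAddress only works with a current context): pull wglGetProcAddress itself out of the DLL with GetProcAddress, use it for extensions and post-1.1 core functions, and fall back to GetProcAddress for the OpenGL 1.1 entry points, which wglGetProcAddress does not return. The helper name `get_gl_function` is just for illustration.

```c
#include <windows.h>
#include <GL/gl.h>

typedef PROC (WINAPI *PFNWGLGETPROCADDRESS)(LPCSTR);

/* Resolve a GL entry point from a manually loaded opengl32.dll.
   wglGetProcAddress covers extensions and post-1.1 core functions;
   it returns NULL for the 1.1 functions exported directly from the
   DLL, so fall back to GetProcAddress for those. */
void *get_gl_function(HMODULE opengl32, const char *name)
{
    static PFNWGLGETPROCADDRESS wgl_get = NULL;
    if (!wgl_get)
        wgl_get = (PFNWGLGETPROCADDRESS)
            GetProcAddress(opengl32, "wglGetProcAddress");

    void *p = wgl_get ? (void *)wgl_get(name) : NULL;
    if (!p)  /* likely a core 1.1 function, exported directly */
        p = (void *)GetProcAddress(opengl32, name);
    return p;
}

/* Usage sketch (context must already be current):
   HMODULE lib = LoadLibraryA("opengl32.dll");
   void (*glGenBuffers_)(GLsizei, GLuint *) =
       (void (*)(GLsizei, GLuint *))get_gl_function(lib, "glGenBuffers");
*/
```

Note that wglGetProcAddress results are in principle specific to the pixel format of the context that was current when you queried them.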