Update Website to Show Specific Functionality Visually, Not Just Textually

There are a lot of extensions, and although there is textual documentation, it is sometimes hard to find the right information; you have to imagine how best to use these features and how the end result would look. Most of the functions relate to graphical operations or other calculations, and you either have to test them manually to see the results or do in-depth research. This is very time-consuming when there is a lack of adequate examples demonstrating usage techniques or the outcome. I’d like to see a categorized, clickable thumbnail picture next to each extension on the website, visually indicating how the extension and its functions work and their overall impact when used, not just in a textual sense. For comparing extensions there really isn’t anything right now, and it’s sometimes hard to visualize how one approach would achieve a desired effect versus using something else to do the same thing.

For example:
If there is a blend function extension and a core feature that handles the same thing, I want to scroll through a list of extensions and click an image next to the extension name to see what benefit the blend function extension would have over the core version of the function, perhaps with an additional thumbnail showing a few different uses for the function, and a snapshot image displaying performance results in framerate and time (ms).

My point being: sometimes actually seeing the impact of something visually can help with your programming tasks and your choices about what is best to implement for the best performance.

Even if there are a number of tutorials and examples online, you get to a certain point where the information simply stops. Then you run into an issue and go to a forum or have to contact someone knowledgeable, and if that doesn’t happen and you can’t find reference information you can work with immediately, that’s where the development-time setback comes in.

Another example would be the Wiki. Sure, it’s great for some of the beginning steps, such as context creation and a few basic samples, and sites like NeHe provide many good samples, as do books and other documentation online. More often than not, however, they cover only a limited number of core functions and extensions, leave the bulk of further simplified or advanced topics uncovered, and rely on multiple third-party solutions to achieve results that may already be included in the current version of the OpenGL standard. The information becomes dated.

So, yes, I think some photos for extra visual reference, and performance comparisons between similarly named functions, might help a lot and save some development time. This could begin with future releases of OpenGL, covering all functions and working up the extension ladder.

Displayed on the website could be something such as:
FUNCTION NAME: Photo1, Photo2, Photo3, Photo4, Photo5

Photos 1 through 4 could show some of the main basic operations the function performs or is used to perform; Photo 5 would contain something such as the FPS and per-call time (ms) performance values of calling/using the function.

Also, if a function does not perform a graphical operation and is not normally displayed graphically, such as a loading function or a separate mathematical-only operation or check, then a graphical sample from an application can still be used, but with the same simple performance information as for the graphical functions.
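To give an idea of where the Photo 5 numbers could come from, here is a minimal sketch of measuring FPS and frame time; it assumes a GLFW application (glfwInit already called) and is only one way of doing it:

    #include <stdio.h>
    #include <GLFW/glfw3.h>

    /* Call once per frame inside the render loop: derives the Photo 5
       style numbers from how long the previous frame took. */
    void report_frame_stats(void)
    {
        static double last = 0.0;
        double now = glfwGetTime();    /* seconds since glfwInit */
        if (last > 0.0) {
            double dt = now - last;
            printf("%.1f FPS, %.3f ms/frame\n", 1.0 / dt, dt * 1000.0);
        }
        last = now;
    }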

OpenGL does not define performance. And there is no way to quantify performance in any absolute way, since it depends on the hardware and the implementation. Lastly, there is no way to know how individual implementations will perform certain operations, so there is no way to determine what the “best practices” for something are.

The only extensions that matter these days (besides the two that can’t go into core) are the core-ARB extensions and maybe ARB_shading_language_include. And since core-ARB extensions are identical in every way to the core feature, they are, well, identical. So there’s no need to document them separately.

Also, however nice this might be, who’s going to do it? It’s hard enough work for the ARB to document the behavior of every piece of OpenGL state. Now you want them to generate a picture that is descriptive of every piece of OpenGL state?

I’ve invested a substantial quantity of time on the Wiki, on my own tutorials, and on personal projects for OpenGL. What you’re talking about is an order of magnitude more complex than even that. You could have 3 people working on this for a whole year and still not get it done.

The ARB can’t even scrape together a conformance test suite for OpenGL.

Asking for this is like asking the bum on the street to start a business. Yes, it’d be great if that could just magically happen, but there’s such a massive gulf between those two things that asking for it is rather naive.

Personally, a long time ago I began my own projects called OpenGL.NET and WebGL, well before the newer ones I’m not familiar with came out. I knew a really nice guy named The Fiddler (“Stephen”) from the earlier specs, and worked on some earlier things like a universal mesh format and KT3D. I also worked with DirectX and vector graphics. Just now coming back to that OpenGL.NET work after time with Canvas 2D and 3D, I am restarting the conversion in VB.NET on Windows 8, and maybe will do some Linux on microcomputers or boards such as the Raspberry Pi, and iOS devices. Most of your HTML5 will work on any machine, and it is WebGL that suffers more from performance problems, not OpenGL.

OpenGL 101: when you look up anything about OpenGL, you will find it defined, even on the Khronos website, as “OpenGL - The Industry’s Foundation for High Performance Graphics”; hence, a foundation for performance.

With all the hardware and technical development, I don’t underestimate a webmaster’s ability to take a few snapshots of working applications using the features to show off an existing or future spec; it’s no different from when a new product rolls out and you demonstrate its new capabilities.

Yes, I know the software controls the hardware and the hardware is designed to meet the needs of the software alike, but to say OpenGL doesn’t define performance is to say it’s not useful as a high-performance graphics API, which it most certainly is. From my own experience: I’ve programmed in over 48 programming languages, with a long history alongside pioneers, well over 14 years of programming, like some out there with 17 years or more of experience. I can’t remember everything, but I do know that adding some pictures to an existing website is something any webmaster could do with basic HTML and a few extra hyperlinks leading from thumbnails to bigger pictures. Taking a snapshot with two text labels on screen, with a performance query counter built into the OS showing the current FPS rate and ms difference, is simple; compared to the years of development of OpenGL so far, it is child’s play. This is definitely not hard work at all, unless you are locked into specific languages with a lack of conversion information, as I have been with OpenGL’s extension declarations and usage in VB.NET. But OpenGL is a high-performance graphics API. You used to be able to use the API directly in the browser, in OCX controls, loaded directly from downloadable DLLs, and server-side in ASP.NET; the early WebGL work I was doing dates from back then, until the ball dropped and somebody made browser security and object declaration tags stricter.

I tend to think of OpenGL’s API as building blocks (a foundation), and over time the building blocks and methods for doing things change or improve, especially as new ways of doing things appear.

Telling them not to do it would be like saying multitexturing of a mesh would not have been worth adding, or shading. Back then, before we could define those extra texture slots and texture indices/names/IDs, things were very limiting, and everything took much longer with many more lines of code.
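For reference, those extra texture slots are what glActiveTexture exposes today. A minimal sketch of binding two textures to one mesh, assuming GLEW (or a similar loader) provides the entry points and that tex0/tex1 were created earlier:

    #include <GL/glew.h>

    /* Bind two textures to separate texture units for multitextured
       drawing; tex0 and tex1 are names created earlier with glGenTextures. */
    void bind_multitexture(GLuint tex0, GLuint tex1)
    {
        glActiveTexture(GL_TEXTURE0);      /* first texture slot */
        glBindTexture(GL_TEXTURE_2D, tex0);

        glActiveTexture(GL_TEXTURE1);      /* second texture slot */
        glBindTexture(GL_TEXTURE_2D, tex1);
    }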

We live in modern times, no longer the early computer-graphics era, but even so the core of OpenGL is useful. To maintain a higher level of competitiveness and performance, even the most minor changes need to be made, and some additions that improve the overall experience of working with OpenGL can go a long way without impacting the hardware end much, if at all.

We’ve come a long way from defining geometry and setting solid colors manually. Even if it seems today’s developers are spoiled with higher-level languages, the advancements shouldn’t be ignored; they should be fully displayed and understood, kept as simple and clear as possible.

The history of OpenGL itself is tied to Silicon Graphics; hence, when you see the reference and the linkage, SGI always comes to mind.

When it was decided to release the first OpenGL API standard and it rolled out, the goal was to “guarantee that it has wide application deployment”. Being platform-independent helps achieve this, since it can be implemented on multiple systems under its open standard definition. That decision almost didn’t come to be, but it became public thanks to the first person who managed the very first OpenGL release to the public.

Anyway, back to the feature suggestion for the website and future consideration. No, it’s not hard to add this at all; yes, it can be done; it would be nice if someone wanted to, and if not, maybe I will if I have some extra time, which I probably won’t. And no, not all functions with similar names are the same: when you start making hundreds of calls, using higher-precision numbers, or creating rather than reusing multiple objects, and the code block inside the function is not fully optimized, you will notice bottlenecks, and things add up, especially on weaker hardware.

If one version of a function says 1.2, another says 1.3, and another 2.0, all performing the same operation, it would be nice to know which one is really better or achieves the exact desired results before having to manually code examples to re-test the same thing over and over using different methods. It’s like ripping the engine or the electrical wiring out of your car over and over again until you find what works best, and that’s what is time-consuming. I’d rather spend the time on actual usage and advancements with the updated functionality than on unnecessary troubleshooting of repetitive tasks.
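If someone did want to gather those numbers, the harness would not need to be elaborate. A minimal sketch, where path_old and path_new are hypothetical stand-ins for the two variants being compared and GLFW supplies the clock:

    #include <stdio.h>
    #include <GLFW/glfw3.h>

    /* Hypothetical stand-ins for the two variants being compared. */
    extern void path_old(void);
    extern void path_new(void);

    /* Average many iterations; one call alone is below the clock's resolution. */
    static double bench_ms(void (*fn)(void), int iterations)
    {
        double start = glfwGetTime();
        for (int i = 0; i < iterations; ++i)
            fn();
        glFinish();    /* drain the GPU so completed work is timed, not queued work */
        return (glfwGetTime() - start) * 1000.0 / iterations;
    }

    void compare_paths(void)
    {
        printf("old path: %.4f ms/call\n", bench_ms(path_old, 10000));
        printf("new path: %.4f ms/call\n", bench_ms(path_new, 10000));
    }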

I don’t underestimate a webmaster’s ability to take a few snapshots of working applications using the features to show off an existing or future spec

OK, let’s rewind two months. The OpenGL 4.3 spec is about to be released. By your reasoning, we also need to release a bunch of screenshots for the new features in the various extension specifications and the core spec.

Alright, we need some working applications… oh right, there are no working applications of OpenGL 4.3 features. No developers have even seen the 4.3 spec, so there can’t be applications yet. There aren’t even 4.3 drivers at that point. Hell, there aren’t 4.3 drivers now, two months later; the closest we get is some betas from NVIDIA.

Without drivers, you can’t have applications. And without applications, you can’t have screenshots. But you can have a text specification, which describes 100% of the new functionality.

So what do you want? Screenshots of applications that haven’t been written yet because there are no drivers? Or something that’s actually possible to make?

Now, let’s fast-forward 6 months. There are now actual 4.3 drivers. Granted, they have bugs, but we’ll pretend that they implement the spec perfectly. How many people have actually written “working applications using the features”?

A few hobbyists. Groovenet probably has updated his suite of example programs for GL features. But that’d be about it. Any actual professional applications? No. Unless those hobbyists have covered every element from every feature and written profiling tests for them, it’s not going to be nearly enough for what you want. It will be at least 6 months before any actual product ships with support for any 4.3 feature, and in all likelihood much longer than that.

Will all of those applications cover every feature that was added? Unlikely; I don’t know of any application that uses dual-source blending outside of hobbyists.

Furthermore, you didn’t ask for what “working applications” do. You asked for:

Photos 1 through 4 could show some of the main basic operations the function performs or is used to perform; Photo 5 would contain something such as the FPS and per-call time (ms) performance values of calling/using the function.

What “working applications” will do all of that? If I write some code that uses a compute shader, will that code show everything that compute shaders do? Obviously that’s too big to show in a screenshot. What about ARB_vertex_attrib_binding? That only changes the API, not the functionality, so you’d probably want performance numbers. Except… that requires both a version that uses VAB and one that doesn’t. In general, unless I’m writing an application that specifically tests the performance difference between them, I won’t have an application that does both, since VAB and old-style vertex arrays do the same thing.
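To make that concrete, here is roughly what the two equivalent setups look like; this is only a sketch, assuming a loader such as GLEW, attribute index 0, and a tightly packed vec3 position:

    #include <GL/glew.h>    /* GL 4.3 or ARB_vertex_attrib_binding required */

    /* Classic path: vertex format and source buffer fused into one call. */
    void setup_old_style(GLuint buf)
    {
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 3, (void *)0);
        glEnableVertexAttribArray(0);
    }

    /* VAB path: the format is declared once; buffers can then be swapped
       without respecifying it. Same functionality, different API. */
    void setup_vab(GLuint buf)
    {
        glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
        glVertexAttribBinding(0, 0);
        glBindVertexBuffer(0, buf, 0, sizeof(float) * 3);
        glEnableVertexAttribArray(0);
    }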

The only way to do what you ask for would be to explicitly write applications for the sole purpose of generating the information you’re looking for. This information is not out there somewhere waiting to be collated. You can’t just download some programs, take some screenshots, and stick them up on a site. The information must be created. Which means that the ARB (or some interested party) would have to write the applications that generate this info.

I can’t remember everything, but I do know that adding some pictures to an existing website is something any webmaster could do with basic HTML and a few extra hyperlinks leading from thumbnails to bigger pictures. Taking a snapshot with two text labels on screen, with a performance query counter built into the OS showing the current FPS rate and ms difference, is simple; compared to the years of development of OpenGL so far, it is child’s play.

Again, the hard part is getting the images, not putting them on the web.

Also, notice that the OpenGL specification is a PDF, and the extension specifications are .txt files. Neither format is well known for being easy to dump images into.

If one version of a function says 1.2, another says 1.3, and another 2.0, all performing the same operation, it would be nice to know which one is really better or achieves the exact desired results before having to manually code examples to re-test the same thing over and over using different methods. It’s like ripping the engine or the electrical wiring out of your car over and over again until you find what works best, and that’s what is time-consuming. I’d rather spend the time on actual usage and advancements with the updated functionality than on unnecessary troubleshooting of repetitive tasks.

You seem to be operating under the delusion that there exists some knowledge out there where we can know exactly how fast every function is on every piece of hardware, and thus we can know the right order to put our function calls in to get the fastest possible performance in achieving a certain effect.

Performance doesn’t work like that. How fast a particular piece of code executes depends on a large number of factors. Among them are CPU cache issues, GPU pipeline stalls, and so on.

Even if we magically had some profiling suite that could perform simple performance tests, the tests would ultimately be artificial. A decent starting point for performance, perhaps, and a way to identify easily avoidable performance traps. But it is only a starting point, not an end point; if maximum performance is your goal, then you’re going to need to profile your actual application, not some artificial test.
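For what it’s worth, the GPU-side half of such profiling is usually done with timer queries (core since GL 3.3, ARB_timer_query before that). A minimal sketch, assuming a loader such as GLEW:

    #include <GL/glew.h>

    /* Measure how long the GPU spends executing a stretch of commands. */
    GLuint64 gpu_time_ns(void (*draw)(void))
    {
        GLuint query;
        GLuint64 ns = 0;

        glGenQueries(1, &query);
        glBeginQuery(GL_TIME_ELAPSED, query);
        draw();                              /* the commands being measured */
        glEndQuery(GL_TIME_ELAPSED);

        /* Blocks until the result is ready; fine for profiling runs,
           not something to leave in a shipping render loop. */
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
        glDeleteQueries(1, &query);
        return ns;
    }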

Also, I again remind you no such profiling suite exists. And you’re now asking the people who can’t get a conformance test together to make one.

I completely agree with Alfonse!

Without any intention to offend you, Knight Chat X, what you asked for is not feasible.
When saying “not feasible” I mean it is so hard to implement that nobody will bother to do it.

Besides, a great deal of functionality is not really “screen-shotable”: texture buffer objects, integer textures, uniform buffer objects, all those conveniences added to GLSL, image load/store, on and on and on. Taking a screenshot of any of these in action is a WTF… like asking what the color green smells like.
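To see why, consider a minimal uniform buffer object setup. This sketch (assuming GLEW and binding point 0) is the whole feature in action, and none of it puts a distinguishable pixel on screen:

    #include <GL/glew.h>

    /* Create a UBO and attach it to binding point 0. Nothing here
       produces an image you could take a screenshot of. */
    GLuint make_ubo(const void *data, GLsizeiptr size)
    {
        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, size, data, GL_DYNAMIC_DRAW);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
        return ubo;
    }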

What might be nice is example code using such features linked to the docs… and guess what: many GL extensions have an examples section.

On the subject of performance: each GPU has its own way of doing things, and given multiple (good) ways of getting the same thing done, which is faster as often as not depends on the GPU architecture, bandwidth, compute capabilities, etc. There was a time when trig functions were best done via texture lookup for performance; now you just use cos/sin, as that is faster in just about every way possible… heck, there are embedded GPUs that can do cos, sin, etc. in one or two freaking clock cycles (at reduced accuracy). It all depends on the GPU, GL state, driver version, and so on.

Well paint me purple and put me on a high stool, but I’m agreeing with Alfonse too.

Just want to highlight the “foundation for high performance” part here. The key is, yes, “foundation”, but what it means is that the OpenGL specification is intended to outline a mechanism by which high performance may be achieved; the actual achievement (or not) of that high performance is up to individual implementations.

This is where you seem to be getting a bit muddled.

What you call “OpenGL” actually consists of three main components (four if you count your program): the graphics hardware, the driver, and the OpenGL specification. Of those, only the last is what “OpenGL” really is. You say that “the software controls the hardware and hardware is designed to meet the needs of the software alike”, but in reality it’s more like a man driving a mule. Sure, if the man does things right (or gets lucky) the mule will go where he wants it to, but sometimes that mule is just going to dig in its heels and refuse to budge. Likewise with OpenGL: the software is only capable of controlling the hardware insofar as the hardware is capable of doing what the software wants done. No matter what you do in software, if the hardware can’t do it, then the hardware won’t do it.

So: OpenGL doesn’t define performance because OpenGL cannot define performance. The same OpenGL version must run on multiple tiers of hardware, all the way from a crappy little integrated chip in a netbook up to serious render farms. OpenGL can’t promise anything to do with performance on those terms; it can’t promise you a billion FPS on the netbook. OpenGL doesn’t even promise hardware acceleration, because that’s the job of the implementation, not of the specification.