Enforce speeds and focus on 2D! and More

Enforce certain speed standards for rendering/memory transfers, ensure all required functionality is in the core, and ensure graphics cards aren't permitted to make high-pitched noises. (I've noticed this with some high-end games like StarCraft 2 in non-game/loading or title mode.)

Before I go any further I should point out that whilst other open-source projects may be able to achieve these on their own, it takes a lot more knowledge to use them, and they cannot work in unison with OpenGL. (Depending on which paragraph below you look at.)

Maths - write lots upon lots of maths. OpenGL should be able to implement the best open standard for computer maths ("OpenCMath" or whatever), and should challenge other people's work if it conflicts with the open standard, i.e. if it doesn't allow you to expand into that area. I'm mainly referring to mouse picking, which I cannot do.

A proper 2D standard, for extremely fast rendering of pixmaps.

I've been trying to utilise framebuffers and PBOs and whatnot on my computer to get 2D to work, but it just doesn't want to work. OpenGL is supposed to be a graphics library; what use is it if it's incomplete? (Hardly any fast 2D facilities and, as just stated, nothing direct; at the moment I'm using rectangles and billboards.) It might be my fault, though, since I haven't initialised OpenGL with GLEW on Linux (I can't remember for Windows - I think I did, even though it still doesn't work). I don't like GLEW.

PNG/JPEG/Targa/etc. - image file loaders for all formats, plus advanced options for tiles/animations, to give OpenGL the most optimal format for processing the image data (instead of relying on other people to just keep re-implementing this). Also, stop shutting out non-professionals, since OpenGL's biggest arena, or future arena, won't be Windows! Allow people to make programs and not crap.

Console mode standards. Console mode as in being able to access multiple lines and see where the current user is; this could become extremely useful, especially if hardware were wired into it. Or a type of OS, which brings up the greatness of MS-DOS: I hate wasting resources on additional programs when I'm in-game, and it would make it harder for people to hack into computers, with access to only certain things. (Launch from Windows or Linux into an "Open-DOS" or "Gaming-DOS" - and no, I'm not referring to single-threading; how's OpenCL doing?)

More source-code error-gathering functionality, and perhaps another project for ensuring correct OpenGL implementation in source code (a source-code parser), and don't allow developers to wreck their machines (through the parser).

Far easier and more cross-platform, even if it's not native to OpenGL - perhaps a new way to access graphics, without having to rely on Xlib or Windows to let you into the system? The Open-DOS option sounds champion; that way you could keep it as it is as well.

Cut out computer memory (for graphics) so that games have a much faster load-up time. Think about a bypass system: if everything has to go through a CPU, then you could either utilise the GPU or a secondary CPU for audio/video streaming directly into video memory, not via computer memory.

Lay some foundation for new developers. My game machine - www.fobbix.com/projects/foxy_game_machine.php - has a class called StoreBuffer that's very easy to render with using shaders. Obviously I have how to get into fullscreen mode on Linux, and a few other quirks in there; as long as you don't try to release it all together or within a collective space, I'm more than happy for people to reuse parts of my game machine for free.

Anyway, just a few thoughts that would make OpenGL far better than DirectX (even though for the most part it'll just be bringing the extras up to the standard of DirectX, and obviously surpassing them). I've always liked the simple nature of OpenGL, but at the moment it's a false economy.

OpenGL can become the #1 standard; at the moment on Linux it's the ONLY standard. If I was only interested in Windows I would be using DirectX.

Again thanks for your time.

This is probably the most pointless suggestion list I’ve ever seen on this forum :frowning:

Is it pointless to actually want OpenGL to work properly, and to be able to do anything in real time with graphics? Maybe I'm going too far with another dedicated OS here, but it would be loads better. As for loading up games, I personally hate waiting a few minutes for fully-featured games to start, so if there's a way to get rid of that it would be great. (Memory copying takes time, lots of time.)

Pointless as in “I do not see your point”.
All your suggestions are highly confined to the 2D genre, and have nothing to do with GL.
"memory copying takes its time, lots of time" - in fact, no. An in-memory plain copy runs at several GB per second. Load time depends on disk I/O, decompression, GPU tiling of textures, etc.

This one is actually interesting.
Unfortunately, Khronos struggles to provide functional conformance tests, so performance tests most likely won't happen in the foreseeable future.

I see the use of functional conformance tests, but performance tests? Seriously? Demands on how the hardware behaves (noise etc.) and how fast it is should not be part of an API spec…

Why? The only reason for conformance tests is to have high-quality implementations of GL. Performance of the driver is also a contributing factor in its quality.

So you want OpenGL to have fast memory, more memory and to not make noise? And that will make OpenGL #1 compared to DirectX?

I think you are confusing API with hardware.

Please read “What is OpenGL” in the FAQ section :
http://www.opengl.org/wiki/FAQ#What_is_OpenGL.3F

Yes, but no API specification should include any specific performance requirements.

  1. It’s difficult to specify performance requirements.
  2. It would disallow certain IHVs from implementing the GL.
  3. It would disallow software implementations of the GL.
  4. D3D does not have it either.

You assume that such tests/requirements would be absolute, which doesn't have to be true.

  1. Yes, it's difficult; that's one of the reasons not to do it (at least before other problems are solved, and it looks like there is a lot of time until then).
  2. Not really; I imagine this could be done by vendors declaring their performance targets up front, just like they do with the GL version supported.
  3. Yes, that's very unfortunate, but what is practical about a software implementation of GL? Mesa can run trivial shaders on a teapot in a small window, but that's about it. No one will play Doom 3 on a software GL renderer.
  4. Windows Hardware Certification Kit (HCK) FAQ - Windows 8.1 HCK | Microsoft Learn

Microsoft Windows XP Display Driver Model (XPDM) drivers cannot get a Windows logo.

| Logo | Area | Requirement |
| Basic | Video driver | WDDM driver |
| Basic | Color depth | 32 BPP |
| Basic | GPU generation | DirectX v9 or later |
| Premium | Memory allocated for graphics | 64 MB at 1024x768 native resolution; 128 MB at 1024x768+ native resolution; 256 MB at 1600x1200+ native resolution |
| Premium | Texture update bandwidth | 2 GB/second |
| Premium | Polygon counts | ~1.5 triangles/second |

1.5 triangles/second probably isn't that hard to reach ;), but that's not the point here. I don't know if this is all there is in the WLK either.

Performance requirements for certain functions/methods: hardware vendors might try hard to get certain OpenGL functions running fast but then fall short with different/alternative functions. You could just add it as an additional OpenGL speed stamp. (As a simple example: render 1000 smooth-shaded triangles/quads/polygons/etc. at a minimum of x. OpenGL does have the timing capacity. It should also be pointed out that there's a difference in performance between Windows and Linux, so OpenGL could have speed stamps for different OSes.)

My main point is that OpenGL leaves it a bit too open for hardware vendors to mess up the performance of OpenGL.

And to V-man: I am referring to loads more above, like dealing with images/animations. If glX is a part of OpenGL, or released very closely alongside it, then surely OpenGL could come up with image-loading facilities easily. Rendering extremely fast/directly in 2D is important in any game; most have HUDs and GUIs.

As a question: I've always been thinking about rendering the 2D stuff to one buffer and the 3D game world to another (at the same time), then just copying the 3D under the 2D and flipping the image to the screen, but I don't know whether that would be faster. Not that I'd be able to do that, since I'm using C++ with OpenGL and I don't know anything about graphics parallel processing.

Anyway, as a bottom line for the actual API spec: OpenGL could really do with ensuring new developers have a much easier time with OpenGL functionality, even if it means revamping 2D. It shouldn't be too difficult and frustrating to program with OpenGL (which is exactly how I've found it).

Again, OpenGL is an API spec and should care about defining the functionality, not the hardware. This Windows logo for hardware isn't part of the D3D specs either…

Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in a wrong direction. Neither is a problem of the spec!
There is one problem with the spec: it grew over the last years and GPU generations and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (but not by much, because of the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong address for your criticism.

That's really beside the point. I'm not even aware of a 'D3D spec', if such a beast exists - it's apparently not needed. A document on its own isn't really worth all that much (unless it's everything produced) when you have tests that everyone needs to pass. And saying that the WLK isn't part of D3D may even be technically true, but what does it change? Every HW vendor wants to pass it anyway. The tests and requirements I have in mind don't even need to be part of the GL PDF spec, but instead a 'GL logo kit' or whatever you want to name it.

[QUOTE=menzel;1237186]Again, OpenGL is an API spec and should care about defining the functionality, not the hardware. This Windows logo for hardware isn't part of the D3D specs either…

Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in a wrong direction. Neither is a problem of the spec!
There is one problem with the spec: it grew over the last years and GPU generations and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (but not by much, because of the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong address for your criticism.[/QUOTE]
"3D programming close to the metal isn't easy" - I didn't know we were programming with a low-level library? OpenGL, I'm sure, is actually quite high-level, and once the code is in place it is easy to program with OpenGL. It's only the maths that's problematic, and really, for programmers, most of the time it's a case of "don't reinvent the wheel". And reading the first bit: OpenGL is easy to learn if you were taught it.

The thing you're not getting is that everything oriented around or associated with OpenGL does, in effect, have a massive impact on how people view OpenGL. Is OpenGL practical? If you were in charge, the answer would be no. OpenGL should ALWAYS be pushing the graphical boundaries for games; surely this is the objective of OpenGL - I'm sure I've read as much.

I do agree about the books; they should be loads better than they are, especially for beginners to OpenGL. (They mislead and don't start out with easy stuff in the first few chapters, nor do they give simple, useful source-code snippets, and they rely too heavily on GLUT - all you'd really need is one additional project to act as an engine, and then get rid of all the unnecessary/messy code in there. Worst of all, they have a general tendency to utilise deprecated functionality or cover unnecessary garbage first; they can be boring to read; and some have the entire core spec located within the book???)

Thinking about it: if framebuffer objects are good enough for 2D, then OpenGL should require the spec to allow an unlimited number of them. The blit function should have a few more cousins, to allow for maximum 2D features, and perhaps a few states (unrelated to the main blit, which can be kept as is), and FBOs should be strongly encouraged. Again, if that's true then the opposite is also true: perhaps other 2D functionality shouldn't be present. Put very simply, it should be very clear and concise.

OpenGL should have better parallel processing within the spec (I'm thinking about two OpenGL contexts here, and perhaps a third for either loading or updating resources, with the ability to share memory, etc.). OpenGL should not rely on extensions, especially if those extensions pertain only to ATI or NVIDIA; OpenGL should enforce cohesion.

On the second-to-last paragraph: SDL is quite old and hasn't changed much? Point being that if you write the functionality properly and completely in the first place, then you only need a few minor adjustments in the long run. I think SDL's 2D is very good, and perhaps it's what OpenGL should have, instead of making 2D rocket science in OpenGL.

OpenGL is quite low-level, yes. It’s a little bit higher than D3D, which forces the programmer to go more hands-on with some of the messier aspects of the hardware, but much much lower than e.g. GDI.

…and speaking of hardware…

Something like this is not going to happen. In graphics API land hardware capabilities rule; if hardware is just not able to have infinite amounts of FBOs (because you can’t fit an infinite amount of resources into a coupla GB of video RAM) then it doesn’t matter a sweet damn what a hypothetical future spec may or may not say - you just can’t have infinite amounts of FBOs. OpenGL tried this route before, it tried to build up something where hardware details didn’t matter so much and the programmer was expected to just trust the driver to do the right thing. It didn’t work and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still felt even today). That’s not a pleasant experience for a programmer and not a pleasant experience for an end-user - if you’re writing production code that is going to result in a publicly released program that gets used on a wide variety of machines then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you the programmer to make appropriate judgement calls.

I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you’re actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more, and lets you throw draw calls around without having to sweat the details of the hardware (several such already exist), but don’t go looking for it in OpenGL itself.

OpenGL is unified and high-level. Low-level would be found in the MS-DOS days, or that lib I can't remember the name of that people use for consoles. The distinction between high and low is quite unimportant. How OpenGL initialises is poor; it's not low-level, it's just not present - same for image loading.

[QUOTE=mhagain;1237197]Something like this is not going to happen. In graphics API land hardware capabilities rule; if hardware is just not able to have infinite amounts of FBOs (because you can’t fit an infinite amount of resources into a coupla GB of video RAM) then it doesn’t matter a sweet damn what a hypothetical future spec may or may not say - you just can’t have infinite amounts of FBOs. OpenGL tried this route before, it tried to build up something where hardware details didn’t matter so much and the programmer was expected to just trust the driver to do the right thing. It didn’t work and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still felt even today). That’s not a pleasant experience for a programmer and not a pleasant experience for an end-user - if you’re writing production code that is going to result in a publicly released program that gets used on a wide variety of machines then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you the programmer to make appropriate judgement calls.

I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you’re actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more, and lets you throw draw calls around without having to sweat the details of the hardware (several such already exist), but don’t go looking for it in OpenGL itself.[/QUOTE]
So how many textures can a card store? I'm merely suggesting here that unless there's a reason for FBOs to be capped, they shouldn't be capped at all. I'm not too sure on existing standards, but I have read somewhere that you're only guaranteed at least 1 FBO on Linux, or something like that. On top of that, I was trying to use FBOs for all my 2D stuff but it just wasn't working for one reason or another (they also have a really lacklustre feature set, in the sense of only having blit), so I gave up, and now I'm using rectangle textures for 2D, which no doubt isn't as direct/fast as it could and should be - which is sloppy on OpenGL's part. (I've also tried PBOs with no luck either.)

I think the last part of your first paragraph actually shows that you support what I'm suggesting: quality control, essentially. Personally I fail to see what you're getting at there; I'm trying to get OpenGL to tighten up and ensure the hardware vendors follow suit, so software developers don't get nasty shocks - or, more so, the end user. Having varied results from the same functionality is poor.

Are you trying to state that I can do 3D in GDI? GDI can't realistically work alongside OpenGL, can it? (In parallel/unison.) And I'm also asking for more software/state-based options for FBOs for 2D. Also, according to you, you would have to learn two graphics libraries in order to successfully utilise OpenGL. I thought OpenGL got out of that whole "I'M JUST A 3D GRAPHICS LIBRARY, K?"

OpenGL isn't hardware-dependent, so what are you talking about??? Quite clearly what you've just stated is allowing hardware to become independent again? That would be the worst thing ever, and would probably mean the imminent death of OpenGL.

Proper, realistic, useful, FAST, clear and concise 2D is long overdue.

What is it about people and weird names?

Think about your question for a moment.
Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
Some textures are mipmapped and others are not.

Now let me ask you a question : So, how many cars can we park in that driveway?

What on earth are you talking about?
Stop throwing around nonsense. You are going to confuse some newcomers, and the next thing we know they'll be making pointless suggestions here.

The beginners forum and this forum are already filled with enough nonsense.

I broadly support the concept of some form of performance testing for drivers, but this needs to be viewed in context. I as a developer want to know that if I do X the driver is or is not going to pull some shenanigans behind the scenes that will knock me off the fast path. Outside of forums, talking to other people, and running tests and heuristics myself, I frequently have no way of knowing this; centralized testing with a centrally maintained driver database and performance notes for each hardware/driver combo would help out a lot here. Enforcement - no. Knowledge - yes.

Given the ARB’s resources and mission constraints I freely admit that this is more a pipe-dream than anything else, but one can always dream.

I’m sorry to say this but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version, but otherwise it is very hardware-dependent. Unless you’re at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.

It’s worth noting here that OpenGL itself doesn’t specify hardware dependence; it is, after all, “a software interface to graphics hardware … that may be accelerated”. But individual OpenGL implementations (outside of pure software implementations that are worthless for real-world high-performance use) are provided by hardware vendors, and so you have built-in hardware-dependence from the outset. You can’t use an NVIDIA driver with AMD graphics hardware, can you?

I read two things from your posts, and one of them is that you seem to share a common misconception that the majority of what OpenGL does is happening in the software domain, with perhaps only the final blit-to-screen happening in hardware. That’s not the case at all - the function of OpenGL is to tell your graphics hardware to draw stuff, but it’s your graphics hardware that actually does the drawing, all the way from individual vertexes and triangles to the final screen output. If your graphics hardware can’t do what you tell it to, guess what? It’s not going to do it (what happens next depends on individual drivers). The other one is that OpenGL is not a good choice for the kind of work you want to be able to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special-case of 3D after all) - that’s not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.

[QUOTE=V-man;1237211]Think about your question for a moment.
Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
Some textures are mipmapped and others are not.

Now let me ask you a question : So, how many cars can we park in that driveway?[/QUOTE]
You fail to see the point.

[QUOTE=V-man;1237211]What on earth are you talking about?
Stop throwing around nonsense. You are going to confuse some new comers and next thing we know they’ll be making pointless suggestions here.

The beginners forums and this forum is already filled with enough nonsense.[/QUOTE]
I've read such things somewhere or other (it might have been something closely related, though), and OpenGL does have minimums on quite a few things.

I’m sorry to say this but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version, but otherwise it is very hardware-dependent. Unless you’re at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.

I would probably favour hardware-specific code being slower and much less supported across the board; the point of those extensions is to bring new ideas from the vendors into the OpenGL spec.

I read two things from your posts, and one of them is that you seem to share a common misconception that the majority of what OpenGL does is happening in the software domain, with perhaps only the final blit-to-screen happening in hardware. That’s not the case at all - the function of OpenGL is to tell your graphics hardware to draw stuff, but it’s your graphics hardware that actually does the drawing, all the way from individual vertexes and triangles to the final screen output. If your graphics hardware can’t do what you tell it to, guess what? It’s not going to do it (what happens next depends on individual drivers). The other one is that OpenGL is not a good choice for the kind of work you want to be able to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special-case of 3D after all) - that’s not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.

OpenGL is emulated in hardware; the point is that when you call RenderARedBlock, it renders a red block. OpenGL is a high-end graphics API; OpenGL should be more than capable of handling 2D at real-time rates (with respect to very fast 2D, not just relying on uber graphics cards that have to process needlessly more data because of a sloppy OpenGL standard).

Anyway, my dinner's ready, so cya.

This is nonsense. The OpenGL spec cannot mandate that a hardware-specific extension or hardware-specific anything runs slower. Hardware vendors implement OpenGL in their drivers; if a hardware vendor decides that their vendor-specific extension is going to run fast, then their vendor-specific extension will run fast. As for hardware-specific code in general - are you really demanding that calls such as glActiveTexture should run slower? I think you’re failing to understand the hardware-specific nature of these things, because, yes, glActiveTexture has hardware-specific dependencies - the values of GL_MAX_TEXTURE_COORDS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS which are determined by the hardware.

You’ve got it completely backwards. Hardware doesn’t know or care whether it’s OpenGL, D3D, or something else entirely different. Your OpenGL driver converts calls into some vendor-specific format that the hardware understands, then sends them on to the hardware for actually drawing stuff. At the most basic level hardware is just a bunch of logic gates and other gubbins; there’s nothing in hardware that knows or understands a glBlendFunc call, for example. There is a hardware blending unit, and your glBlendFunc call gets translated by the driver into something that sets parameters for the blending unit. What that “something” is is none of your business; it’s entirely hardware-dependent and that “something” is allowed to vary across different hardware, it’s entirely a property of the hardware, and is no different for any API. The blending unit itself doesn’t care about OpenGL, much the same way that your CPU doesn’t care about the OS it’s running.

OpenGL is implemented by the driver, which is software, not hardware, and that’s where it starts and ends. Beyond that point hardware takes over and OpenGL is completely irrelevant to any further discussion of what does or doesn’t happen.

That’s the whole point here. You can ask for new features to be added to OpenGL until you turn blue, but if those new features don’t exist in hardware, or can’t be mapped to something that does exist in hardware, then you are wasting your time. What you’re asking for here is very domain-specific. You want higher-level 2D features and support. High-level 2D features and support do not exist in hardware, and the whole thrust of recent OpenGL versions has been to move the API to a closer and more sensible mapping to how hardware actually works. So if you’ve chosen OpenGL for this, then you’ve made a bad choice. You don’t want OpenGL, you want a high-level 2D API instead.