Enforce speeds and focus on 2D! and More



fobbix
05-03-2012, 09:18 AM
Enforce certain speed standards for rendering/memory transfers, ensure all required functionality is in the core, and ensure graphics cards aren't permitted to make high-pitched noises. (I've noticed this with some high-end games like StarCraft 2 in non-game/loading or title screens.)

Before I go any further, I should point out that while other open-source projects may be able to achieve some of these on their own, it takes a lot more knowledge to use them, and they cannot work in unison with OpenGL (depending on which paragraph below you look at).

Maths - write lots upon lots of maths. OpenGL should implement the best standard, OpenCMath (Computer Math) or whatever, and should challenge other people's work if it conflicts with the open standard, i.e. if it doesn't allow you to expand into that area. I'm mainly referring to mouse picking, which I cannot do.

A proper 2D standard, for extremely fast rendering of pixmaps.

I've been trying to utilise framebuffers and PBOs and whatnot on my computer to get 2D to work, but it just doesn't want to work. OpenGL is supposed to be a graphics library; what use is it if it's incomplete? (Hardly any fast 2D facilities, and, as just stated, nothing direct; at the moment I'm using rectangles and billboards.) It might be my fault, though, since I haven't initialised OpenGL with GLEW on Linux (I can't remember for Windows - I think I did, even though it still doesn't work). I don't like GLEW.

PNG/JPEG/Targa/etc. - all image file loaders, plus advanced options for tiles/animations, to give OpenGL the most optimal format for processing the image data (instead of relying on other people to just keep re-implementing this). Also, stop turning away non-professionals, since OpenGL's biggest arena, or future arena, won't be Windows! Allow people to make programs and not crap.

Console mode standards. Console mode as in being able to access multiple lines and where the current user is; this could become extremely useful, especially if hardware was wired into it. Or a type of OS - which brings up the greatness of MS-DOS, because of the simple fact that I hate wasting resources on additional programs when I'm in-game, and it would make it harder for people to hack into computers, with access to only certain things. (Launch from Windows or Linux into Open-DOS or Gaming-DOS - and no, I'm not referring to single-threading etc. How's OpenCL doing?)

More source-code error-gathering functionality, and perhaps another project for ensuring correct OpenGL usage in source code (a source-code parser) - and don't allow developers to wreck their machines (through the parser).

Far easier and more cross-platform access, even if it's not native to OpenGL - perhaps a new way to access graphics, without having to rely on Xlib or Windows to let you into the system? The Open-DOS option sounds champion; that way you could keep it as it is as well.

Cut computer memory out of the graphics path so that games have a much faster load-up time. Think about a bypass system: if everything has to go through a CPU, then you could utilise the GPU or a secondary CPU for audio/video streaming directly into video memory, rather than from computer memory.

Lay some foundations for new developers. My game machine - www.fobbix.com/projects/foxy_game_machine.php - has a class called StoreBuffer that's very easy to use for rendering with shaders. Obviously I have how to get into fullscreen mode on Linux, and a few other quirks, in there; as long as you don't try to release it all together or within a collective space, I'm more than happy for people to reuse parts of my game machine for free.

Anyway, just a few thoughts that would make OpenGL far better than DirectX (even though for the most part it'll just be bringing the extras up to the standard of DirectX and then obviously surpassing them). I've always liked the simple nature of OpenGL, but at the moment it's a false economy.

OpenGL can become the #1 standard; at the moment, on Linux, it's the ONLY standard. If I was only interested in Windows I would be using DirectX.

Again, thanks for your time.

aqnuep
05-03-2012, 10:19 AM
This is probably the most pointless suggestion list I've ever seen on this forum :(

fobbix
05-03-2012, 11:24 AM
This is probably the most pointless suggestion list I've ever seen on this forum :(
Is it pointless to actually want OpenGL to work properly, and to be able to do anything in real time with graphics? Maybe I'm going too far with another dedicated OS here, but it would be loads better. And as for loading up games: personally I hate waiting a few minutes for fully-featured games to start, so if there's a way to get rid of that, it would be great. (Memory copying takes its time - lots of time.)

ZbuffeR
05-04-2012, 05:57 AM
Pointless as in "I do not see your point".
All your suggestions are highly confined to a 2D genre and have nothing to do with GL.
"Memory copying takes its time - lots of time" - in fact, no. An in-memory plain copy runs at several GB per second. The time taken depends on disk I/O, decompression, GPU tiling of textures, etc.

kyle_
05-04-2012, 06:21 AM
Enforce certain speed standards for rendering/memory transfers, ensure all required functionality is in the core

This one is actually interesting.
Unfortunately, Khronos struggles to provide functional conformance tests, so performance tests most likely won't happen in the foreseeable future.

menzel
05-04-2012, 06:58 AM
I see the use of functional conformance tests, but performance tests? Seriously? Demands on how the hardware behaves (noise etc.) and how fast it is should not be part of an API spec...

kyle_
05-04-2012, 08:45 AM
Why? The only reason for conformance tests is to have high-quality implementations of GL. Performance of the driver is also a contributing factor in its quality.

V-man
05-04-2012, 09:16 AM
Enforce certain speed standards for rendering/memory transfers, ensure all required functionality is in the core, and ensure graphics cards aren't permitted to make high-pitched noises. (I've noticed this with some high-end games like StarCraft 2 in non-game/loading or title screens.)


So you want OpenGL to have fast memory, more memory and to not make noise? And that will make OpenGL #1 compared to DirectX?

I think you are confusing API with hardware.

Please read "What is OpenGL" in the FAQ section :
http://www.opengl.org/wiki/FAQ#What_is_OpenGL.3F

aqnuep
05-04-2012, 09:17 AM
Yes, but no API specification should include any specific performance requirements.
1. It's difficult to specify performance requirements.
2. It would disallow certain IHVs from implementing the GL.
3. It would disallow software implementations of the GL.
4. D3D does not have it either.

kyle_
05-04-2012, 09:50 AM
You assume that such tests/requirements would be absolute, which doesn't have to be true.
1. Yes, it's difficult; that's one of the reasons not to do it (at least until other problems are solved, and it looks like there is a lot of time until then).
2. Not really; I imagine this could be done by vendors declaring their performance targets upfront, just like they do with the GL version supported.
3. Yes, that's very unfortunate, but what is practical about a software implementation of GL? Mesa can run trivial shaders on a teapot in a small window, but that's about it. No one will play Doom 3 on a software GL renderer.
4. http://msdn.microsoft.com/en-us/library/windows/hardware/gg463054.aspx



Microsoft Windows XP Display Driver Model (XPDM) drivers cannot get a Windows logo.



Logo    | Area                          | Requirement
--------+-------------------------------+----------------------------------------
Basic   | Video driver                  | WDDM driver
Basic   | Color depth                   | 32 BPP
Basic   | GPU generation                | DirectX v9 or later
Premium | Memory allocated for graphics | 64 MB at 1024x768 native resolution;
        |                               | 128 MB at 1024x768+ native resolution;
        |                               | 256 MB at 1600x1200+ native resolution
Premium | Texture update bandwidth      | 2 GB/second
Premium | Polygon counts                | ~1.5 triangles/second

1.5 triangles/second probably isn't that hard to reach ;), but that's not the point here. I don't know if this is all there is in the WLK either.

fobbix
05-04-2012, 10:18 AM
Performance requirements for certain functions/methods: hardware vendors might try hard to get certain OpenGL functions running fast, but then fall short on different/alternative functions. You could just add it as an additional OpenGL speed stamp. (As a simple example: render 1000 smooth-shaded triangles/quads/polygons/etc. at a minimum rate of x. OpenGL does have the timing capacity. It should also be pointed out that there's a difference in performance between Windows and Linux, so OpenGL could have speed stamps for different OSes.)

My main point is that OpenGL leaves it a bit too open for hardware vendors to mess up the performance of OpenGL.

And to V-man: I am referring to loads more above, like dealing with images/animations. If GLX is a part of OpenGL, or released very close to it, then surely OpenGL could come up with image-loading facilities easily. Rendering extremely fast/directly in 2D is important in any game; most have HUDs and GUIs.

As a question: I've always been thinking about rendering the 2D stuff to one buffer and the 3D game world to another (at the same time), then just copying the 3D into the 2D and flipping the image to the screen - but I don't know whether that would be faster. Not that I'd be able to do that, since I'm using C++ with OpenGL and I don't know anything about parallel graphics processing.

Anyway, as a bottom line for the actual API spec: OpenGL could really do with ensuring new developers have a much easier time with OpenGL functionality, even if it means revamping 2D. It shouldn't be as difficult and frustrating to program with OpenGL as I've found it.

menzel
05-04-2012, 11:15 AM
Again, OpenGL is an API spec and should care about defining the functionality, not the hardware. This Windows logo for hardware isn't part of the D3D specs either...

Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in the wrong direction. Neither is a problem of the spec!
There is one problem with the spec: it grew over the years and GPU generations, and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (but not by much, because of the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong address for your criticism.

kyle_
05-04-2012, 11:47 AM
Again, OpenGL is an API spec and should care about defining the functionality, not the hardware. This Windows logo for hardware isn't part of the D3D specs either...

That's really beside the point. I'm not even aware of a 'D3D spec', if such a beast exists - it's apparently not needed. A document on its own isn't really worth all that much (unless it's everything produced) when you have tests that everyone needs to pass. And saying that the WLK isn't part of D3D may even be technically true, but what does it change? Every HW vendor wants to pass it anyway. The tests and requirements I have in mind don't even need to be part of the GL PDF spec - they could be a 'GL logo kit' or whatever you want to name it.

fobbix
05-05-2012, 12:53 AM
Again, OpenGL is an API spec and should care about defining the functionality, not the hardware. This Windows logo for hardware isn't part of the D3D specs either...

Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in the wrong direction. Neither is a problem of the spec!
There is one problem with the spec: it grew over the years and GPU generations, and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (but not by much, because of the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong address for your criticism.
"3D programming close to the metal isn't easy" - i didnt know we were programming in a low level library? OpenGL im sure is actually quite high and when the code is in place it is easy to program with OpenGL. Its only the maths that are problematic, and really for programmers most of the time there is a case of don't reinvent the wheel. And reading the first bit OpenGL is easy to learn if you were taught it.

The thing you're not getting is that everything oriented around or associated with OpenGL does, in effect, have a massive impact on how people view OpenGL. Is OpenGL practical? If you were in charge, the answer would be no. OpenGL should ALWAYS be pushing the graphical boundaries for games; surely that is the objective of OpenGL - I'm sure I've read as much.

I do agree about the books; they should be loads better than they are, especially for beginners to OpenGL. (They mislead and don't start out with easy stuff for the first few chapters, nor do they give simple, useful source-code snippets, and they rely too heavily on GLUT - all you'd really need is one additional project to act as an engine, and then get rid of all the unnecessary/messy code in there. Worst of all, they have a general tendency to utilise deprecated functionality or cover unnecessary garbage first; then they can be boring to read, and some have the entire core spec located within the book???)

Thinking about it, if framebuffer objects are good enough to utilise for 2D, then OpenGL should enforce the spec to allow an infinite number of them. The blit function should have a few more cousins to allow for maximum 2D features, and perhaps a few states (unrelated to the main blit, which can be kept as-is), and FBOs should be strongly encouraged. Again, if that's true, then the opposite is also true: perhaps other 2D functionality shouldn't be present. Put very simply, it should be very clear and concise.
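
(For reference, the blit that does exist today is glBlitFramebuffer, from GL 3.0 / EXT_framebuffer_blit - a minimal sketch, where fbo, width and height are assumed to be an already-complete FBO and its dimensions:)

// Copy an offscreen FBO's colour attachment to the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height,   // source rectangle
                  0, 0, width, height,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);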

OpenGL should have better parallel processing within the spec (I'm thinking of two OpenGL contexts here, and perhaps a third for either loading or updating resources, with the ability to share memory, etc.). OpenGL should not rely on extensions, especially if those extensions pertain only to ATI or Nvidia; OpenGL should enforce cohesion.

On the second-to-last paragraph: SDL is quite old and hasn't changed much? The point being that if you write the functionality properly and completely in the first place, then you only need a few minor adjustments in the long run. I think SDL's 2D support is very good, and perhaps it's what OpenGL should have, instead of making 2D rocket science in OpenGL.

mhagain
05-05-2012, 02:56 AM
"3D programming close to the metal isn't easy" - i didnt know we were programming in a low level library? OpenGL im sure is actually quite high.....

OpenGL is quite low-level, yes. It's a little bit higher than D3D, which forces the programmer to go more hands-on with some of the messier aspects of the hardware, but much, much lower than e.g. GDI.

...and speaking of hardware...


Thinking about it, if framebuffer objects are good enough to utilise for 2D, then OpenGL should enforce the spec to allow an infinite number of them.....

Something like this is not going to happen. In graphics-API land, hardware capabilities rule; if hardware is just not able to have infinite amounts of FBOs (because you can't fit an infinite number of resources into a couple of GB of video RAM), then it doesn't matter a sweet damn what a hypothetical future spec may or may not say - you just can't have infinite amounts of FBOs. OpenGL tried this route before: it tried to build up something where hardware details didn't matter so much and the programmer was expected to just trust the driver to do the right thing. It didn't work, and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still being felt even today). That's not a pleasant experience for a programmer and not a pleasant experience for an end-user. If you're writing production code that is going to result in a publicly released program that gets used on a wide variety of machines, then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you, the programmer, to make appropriate judgement calls.

I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you're actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more and lets you throw draw calls around without having to sweat the details of the hardware (several such frameworks already exist), but don't go looking for it in OpenGL itself.

fobbix
05-05-2012, 05:47 AM
OpenGL is quite low-level, yes. It's a little bit higher than D3D, which forces the programmer to go more hands-on with some of the messier aspects of the hardware, but much, much lower than e.g. GDI.
OpenGL is unified high-level. Low-level would be found in the MS-DOS days, or in that lib I can't remember that people use for consoles. The distinction between high and low is quite unimportant. How OpenGL initialises is poor; that's not low-level, it's just not present - same for image loading.


Something like this is not going to happen. In graphics-API land, hardware capabilities rule; if hardware is just not able to have infinite amounts of FBOs (because you can't fit an infinite number of resources into a couple of GB of video RAM), then it doesn't matter a sweet damn what a hypothetical future spec may or may not say - you just can't have infinite amounts of FBOs. OpenGL tried this route before: it tried to build up something where hardware details didn't matter so much and the programmer was expected to just trust the driver to do the right thing. It didn't work, and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still being felt even today). That's not a pleasant experience for a programmer and not a pleasant experience for an end-user. If you're writing production code that is going to result in a publicly released program that gets used on a wide variety of machines, then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you, the programmer, to make appropriate judgement calls.

I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you're actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more and lets you throw draw calls around without having to sweat the details of the hardware (several such frameworks already exist), but don't go looking for it in OpenGL itself.
So how many textures can a card have stored? I'm merely suggesting here that unless there's a reason for FBOs to be capped, they shouldn't be capped at all. I'm not too sure on existing standards, but I have read somewhere that there's only guaranteed to be at least 1 FBO on Linux, or something like that. On top of that, I was trying to use FBOs for all my 2D stuff, but it just wasn't working for one reason or another (it also had a really lacklustre set of features, in terms of only having blit), so I gave up, and now I'm using rectangle textures for 2D, which no doubt isn't as direct/fast as it could and should be - which is sloppy on OpenGL's part. (I've also tried PBOs, with no luck either.)

I think the last part of your first paragraph actually shows that you support what I'm suggesting - quality control, essentially. Personally, I fail to see what you're getting at there; I'm trying to get OpenGL to tighten up and ensure the hardware vendors follow suit, so software developers don't get nasty shocks - or, more so, the end-user doesn't. Having varied results for the same functionality is poor.

Are you trying to state that I can do 3D in GDI? GDI can't realistically work alongside OpenGL, can it? (In parallel/unison.) I'm also asking for more software/state-based options for FBOs for 2D. Also, according to you, you would have to learn two graphics libraries in order to successfully utilise OpenGL. I thought OpenGL got out of that whole "I'M JUST A 3D GRAPHICS LIBRARY, OK???" business.

OpenGL isn't hardware-dependent, so what are you talking about??? Quite clearly, what you've just stated would allow hardware to become independent again. That would be the worst thing ever, and would probably mean the imminent death of OpenGL.

Proper, realistic, useful, FAST, clear and concise 2D is long overdue.

What is it about people and weird names?

V-man
05-05-2012, 09:10 AM
So how many textures can a card have stored?
Think about your question for a moment.
Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
Some textures are mipmapped and others are not.

Now let me ask you a question : So, how many cars can we park in that driveway?



I'm merely suggesting here that unless there's a reason for FBOs to be capped, they shouldn't be capped at all. I'm not too sure on existing standards, but I have read somewhere that there's only guaranteed to be at least 1 FBO on Linux, or something like that.

What on earth are you talking about?
Stop throwing around nonsense. You are going to confuse some newcomers, and the next thing we know they'll be making pointless suggestions here.

The beginners forum and this forum are already filled with enough nonsense.

mhagain
05-05-2012, 09:47 AM
<snip>

I broadly support the concept of some form of performance testing for drivers, but this needs to be viewed in context. As a developer, I want to know whether, if I do X, the driver is going to pull some shenanigans behind the scenes that will knock me off the fast path. Outside of forums, talking to other people, and running tests and heuristics myself, I frequently have no way of knowing this; centralized testing, with a centrally maintained driver database and performance notes for each hardware/driver combo, would help out a lot here. Enforcement - no. Knowledge - yes.
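
(The kind of heuristic I mean can be as simple as a GL timer query around a draw batch - a minimal sketch, assuming GL 3.3 / ARB_timer_query; drawScene() is a placeholder for whatever is being measured:)

GLuint q;
glGenQueries(1, &q);
glBeginQuery(GL_TIME_ELAPSED, q);
drawScene();                                       // workload under test
glEndQuery(GL_TIME_ELAPSED);
GLuint64 ns = 0;
glGetQueryObjectui64v(q, GL_QUERY_RESULT, &ns);    // blocks until the GPU is done
printf("GPU time: %.3f ms\n", ns / 1.0e6);
glDeleteQueries(1, &q);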

Given the ARB's resources and mission constraints, I freely admit that this is more a pipe dream than anything else, but one can always dream.

I'm sorry to say this, but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version; otherwise it is very hardware-dependent. Unless you're at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.

It's worth noting here that OpenGL itself doesn't specify hardware dependence; it is, after all, "a software interface to graphics hardware ... that may be accelerated". But individual OpenGL implementations (outside of pure software implementations that are worthless for real-world high-performance use) are provided by hardware vendors, so you have built-in hardware dependence from the outset. You can't use an NVIDIA driver with AMD graphics hardware, can you?

I read two things from your posts. One is that you seem to share a common misconception that the majority of what OpenGL does happens in the software domain, with perhaps only the final blit-to-screen happening in hardware. That's not the case at all: the function of OpenGL is to tell your graphics hardware to draw stuff, but it's your graphics hardware that actually does the drawing, all the way from individual vertices and triangles to the final screen output. If your graphics hardware can't do what you tell it to, guess what? It's not going to do it (what happens next depends on individual drivers). The other is that OpenGL is not a good choice for the kind of work you want to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special case of 3D, after all) - that's not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.

fobbix
05-05-2012, 11:23 AM
Think about your question for a moment.
Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
Some textures are mipmapped and others are not.

Now let me ask you a question : So, how many cars can we park in that driveway?
You fail to see the point.


What on earth are you talking about?
Stop throwing around nonsense. You are going to confuse some newcomers, and the next thing we know they'll be making pointless suggestions here.

The beginners forum and this forum are already filled with enough nonsense.
I've read such things somewhere or other (it might have been something closely related, though), and OpenGL does have minimums for quite a few things.


I'm sorry to say this, but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version; otherwise it is very hardware-dependent. Unless you're at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.
I would probably favour hardware-specific code being slower and much less supported across the board; the point of those extensions is to bring new ideas from the vendors into the OpenGL spec.


I read two things from your posts. One is that you seem to share a common misconception that the majority of what OpenGL does happens in the software domain, with perhaps only the final blit-to-screen happening in hardware. That's not the case at all: the function of OpenGL is to tell your graphics hardware to draw stuff, but it's your graphics hardware that actually does the drawing, all the way from individual vertices and triangles to the final screen output. If your graphics hardware can't do what you tell it to, guess what? It's not going to do it (what happens next depends on individual drivers). The other is that OpenGL is not a good choice for the kind of work you want to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special case of 3D, after all) - that's not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.
OpenGL is emulated in hardware; the point is that when you call RenderARedBlock, it renders a red block. OpenGL is a high-end graphics API; it should be more than capable of handling 2D at real-time rates (in respect of very fast 2D, not just relying on uber graphics cards that have to process needlessly more data because of a sloppy OpenGL standard).

Anyway, my dinner's ready, so cya.

mhagain
05-05-2012, 11:59 AM
I would probably favour hardware-specific code being slower and much less supported across the board

This is nonsense. The OpenGL spec cannot mandate that a hardware-specific extension, or hardware-specific anything, runs slower. Hardware vendors implement OpenGL in their drivers; if a hardware vendor decides that their vendor-specific extension is going to run fast, then their vendor-specific extension will run fast. As for hardware-specific code in general - are you really demanding that calls such as glActiveTexture should run slower? I think you're failing to understand the hardware-specific nature of these things, because, yes, glActiveTexture has hardware-specific dependencies - the values of GL_MAX_TEXTURE_COORDS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, which are determined by the hardware.
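
(Those values are queried at runtime, which is exactly the hardware dependence in question - a minimal sketch:)

GLint maxCoords = 0, maxUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_COORDS, &maxCoords);               // compatibility profile only
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);
printf("texture coords: %d, combined texture image units: %d\n", maxCoords, maxUnits);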


OpenGL is emulated in hardware

You've got it completely backwards. Hardware doesn't know or care whether it's OpenGL, D3D, or something else entirely. Your OpenGL driver converts calls into some vendor-specific format that the hardware understands, then sends them on to the hardware for actually drawing stuff. At the most basic level, hardware is just a bunch of logic gates and other gubbins; there's nothing in hardware that knows or understands a glBlendFunc call, for example. There is a hardware blending unit, and your glBlendFunc call gets translated by the driver into something that sets parameters for the blending unit. What that "something" is is none of your business; it's entirely hardware-dependent, and that "something" is allowed to vary across different hardware. It's entirely a property of the hardware, and is no different for any API. The blending unit itself doesn't care about OpenGL, much the same way that your CPU doesn't care about the OS it's running.

OpenGL is implemented by the driver, which is software, not hardware, and that's where it starts and ends. Beyond that point hardware takes over and OpenGL is completely irrelevant to any further discussion of what does or doesn't happen.

That's the whole point here. You can ask for new features to be added to OpenGL until you turn blue, but if those new features don't exist in hardware, or can't be mapped to something that does exist in hardware, then you are wasting your time. What you're asking for here is very domain-specific: you want higher-level 2D features and support. High-level 2D features and support do not exist in hardware, and the whole thrust of recent OpenGL versions has been to move the API to a closer and more sensible mapping to how hardware actually works. So if you've chosen OpenGL for this, then you've made a bad choice. You don't want OpenGL; you want a high-level 2D API instead.

Janika
05-05-2012, 03:26 PM
I think many here have confused themselves over what a high- vs. low-level API is.

When we say high-level 3D API, we usually refer to something like the old Direct3D retained mode, Java3D, the Ogre3D "engine" :), RenderMan, or even scene-graph libraries such as OpenSceneGraph.

However, both OpenGL and Direct3D are low-level - very low-level - 3D APIs. Both abstract the graphics hardware functionality, but at different levels of abstraction.

OpenGL is more abstract than Direct3D, but this does not mean you get more control over the functionality, or that the other API is less or more capable. Usually, less abstraction leads to more headache and extra housekeeping work for the API user, as in the case of Direct3D lost devices... and other window set-up code.

I agree that OpenGL would benefit a lot if it had support for accelerated 2D functionality, whatever is provided by the hardware, like Direct2D :)

Unfortunately OpenGL is giving up several features to its competitor, but it's still winning! It's the GOD of Computer Graphics!

fobbix
05-06-2012, 02:36 AM
This is nonsense. The OpenGL spec cannot mandate that a hardware-specific extension, or hardware-specific anything, runs slower. Hardware vendors implement OpenGL in their drivers; if a hardware vendor decides that their vendor-specific extension is going to run fast, then their vendor-specific extension will run fast. As for hardware-specific code in general - are you really demanding that calls such as glActiveTexture should run slower? I think you're failing to understand the hardware-specific nature of these things, because, yes, glActiveTexture has hardware-specific dependencies - the values of GL_MAX_TEXTURE_COORDS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, which are determined by the hardware.
Well, it might be experimental and might work extremely well on a few cards, but it won't be running any faster than most calls? Or, put better, it will just be some additional thing that most software devs will overlook (on purpose). OpenGL presumably guarantees a minimum value of GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS to inform software devs; it would be the same thing, or irrelevant, if it was Nvidia and ATI demanding the minimums, but either way it has an impact on software developers - an impact that OpenGL should care about or have an interest in.


You've got it completely backwards. Hardware doesn't know or care whether it's OpenGL, D3D, or something else entirely. Your OpenGL driver converts calls into some vendor-specific format that the hardware understands, then sends them on to the hardware for actually drawing stuff. At the most basic level, hardware is just a bunch of logic gates and other gubbins; there's nothing in hardware that knows or understands a glBlendFunc call, for example. There is a hardware blending unit, and your glBlendFunc call gets translated by the driver into something that sets parameters for the blending unit. What that "something" is is none of your business; it's entirely hardware-dependent, and that "something" is allowed to vary across different hardware. It's entirely a property of the hardware, and is no different for any API. The blending unit itself doesn't care about OpenGL, much the same way that your CPU doesn't care about the OS it's running.

OpenGL is implemented by the driver, which is software, not hardware, and that's where it starts and ends. Beyond that point hardware takes over and OpenGL is completely irrelevant to any further discussion of what does or doesn't happen.
Actually, I've got it right - look up the word "emulated". Hardware is the approximation of OpenGL. (It runs to the OpenGL spec and may add its own extensions beyond that.) OpenGL to low-level software to hardware can be considered like C to ASM to binary. The point here is that hardware vendors conform to the OpenGL spec. I'm pretty sure OpenGL doesn't convert the code to low-level; that's the job of the hardware vendors, to my knowledge. (If you're referring to the chicken and egg, then it's irrelevant (to how OpenGL came about).) OpenGL is all about conformance (being cross-platform), and that's why it's very bad for OpenGL to be hardware-specific, or for software developers to utilise hardware-specific code (excluding consoles). And for the above reasons, this is why I call OpenGL high-level.

^ Actually, I'm a bit wrong there, because programs get compiled into binary, and all executables to my knowledge work on the same OS across all sorts of computers. But it's still the same thing anyway.


That's the whole point here. You can ask for new features to be added to OpenGL until you turn blue, but if those new features don't exist in hardware, or can't be mapped to something that does exist in hardware, then you are wasting your time. What you're asking for here is very domain-specific: you want higher-level 2D features and support. High-level 2D features and support do not exist in hardware, and the whole thrust of recent OpenGL versions has been to move the API to a closer and more sensible mapping to how hardware actually works. So if you've chosen OpenGL for this, then you've made a bad choice. You don't want OpenGL; you want a high-level 2D API instead.
You're saying newer hardware can't handle fast 2D? Beyond that, what you've just stated is that Xlib is a better candidate for 2D than OpenGL, which, given all its gibberish, is both true and untrue.

Blitting, or simply copying image data from A to B, is one of the most basic functions/methods possible, and you're saying modern-day graphics cards can't handle this??? On top of that, simple stuff like having a 2D rendering process, or a 2D drawing function that goes through certain states (a bitmask for whether to render or not, or even just through the shader) and then just gets mapped to 0.0 to 1.0 in both x and y on the screen - why would this be impossible for hardware developers to emulate?

The main point is: if OpenGL tomorrow turned around and stated that for the next version they're going to add speedy, clear & concise 2D, yada yada, to the core, then perhaps existing OpenGL drivers might not be able to follow suit and won't be able to support future versions (which is very unlikely). However, when that happens, all the future graphics card vendors will say "OK, OpenGL demands this, so we're going to have to come up with the hardware to support this" (presuming it's reasonable enough) in future cards. Plus, by not allowing hardware vendors to get away with cheap imitations of hardware versions backed by very slow software, they could incorporate the performance stamps - and there's probably something already there anyway.

For image loading, allowing the hardware vendors to take over with simplistic calls (that don't take away any functionality) would enable hardware code to be smarter about how that data gets from the hard drive to video memory. (Most of it will just conform, for the most part, to the CPU (software), as in OpenGL or in GLU, not available for software devs to call.) But if these were simplistic high-level functions, then bypassing RAM would be possible, which would make loading textures/pixel data into video memory far faster. (So: simplistic calls for the OpenGL spec, and advanced workings for the hardware - it would be up to the hardware how to retrieve img.tga from the hard drive.) And think about built-in graphics cards and CrossFire. Of course, a hardware vendor could just do this, but then their workings wouldn't work with already-released software.

I might not understand too much about OpenGL, but I do understand simple logistics.

aqnuep
05-06-2012, 12:01 PM
Actually, I've got it right - look up the word "emulated". Hardware is the approximation of OpenGL. (It runs to the OpenGL spec and may add its own extensions beyond that.) OpenGL to low-level software to hardware can be considered like C to ASM to binary. The point here is that hardware vendors conform to the OpenGL spec. I'm pretty sure OpenGL doesn't convert the code to low-level; that's the job of the hardware vendors, to my knowledge. (If you're referring to the chicken and egg, then it's irrelevant (to how OpenGL came about).) OpenGL is all about conformance (being cross-platform), and that's why it's very bad for OpenGL to be hardware-specific, or for software developers to utilise hardware-specific code (excluding consoles). And for the above reasons, this is why I call OpenGL high-level.

^ Actually, I'm a bit wrong there, because programs get compiled into binary, and all executables to my knowledge work on the same OS across all sorts of computers. But it's still the same thing anyway.

While it's true that hardware is designed with OpenGL and D3D in mind, the hardware still has its own command set. OpenGL or D3D API calls are just mapped appropriately to the corresponding hardware functionality if it exists, or are emulated using the supported hardware features, as in the case of most compatibility profile features.

From what you say, it seems that you have no basic understanding of how GPUs work. First do some research, then come back.


You're saying newer hardware can't handle fast 2D? Beyond that, what you've just stated is that Xlib is a better candidate for 2D than OpenGL, which, given all its gibberish, is both true and untrue.

Blitting, or simply copying image data from A to B, is one of the most basic functions/methods possible, and you're saying modern-day graphics cards can't handle this??? On top of that, simple stuff like having a 2D rendering process, or a 2D drawing function that goes through certain states (a bitmask for whether to render or not, or even just through the shader) and then just gets mapped to 0.0 to 1.0 in both x and y on the screen - why would this be impossible for hardware developers to emulate?

Current hardware can handle 2D ultra-fast - as fast as 3D (considering 2D is just a special case of 3D, as was mentioned before). Just because you can't write an application that renders 2D elements fast doesn't mean it's not possible. Even for very complex 3D scenes, current GPUs can easily render at hundreds of FPS. With a properly written 2D renderer you should get at least the same with any GPU on the market. If you failed to do so, blame yourself, not the OpenGL spec.


For image loading, allowing the hardware vendors to take over with simplistic calls (that don't take away any functionality) would enable hardware code to be smarter about how that data gets from the hard drive to video memory. (Most of it will just conform, for the most part, to the CPU (software), as in OpenGL or in GLU, not available for software devs to call.) But if these were simplistic high-level functions, then bypassing RAM would be possible, which would make loading textures/pixel data into video memory far faster. (So: simplistic calls for the OpenGL spec, and advanced workings for the hardware - it would be up to the hardware how to retrieve img.tga from the hard drive.) And think about built-in graphics cards and CrossFire. Of course, a hardware vendor could just do this, but then their workings wouldn't work with already-released software.

OpenGL is for accessing 3D graphics hardware, not for accessing the hard drive or for reading your favourite image format. There are tons of libraries out there that accomplish that. OpenGL is not a 3D rendering engine; it is an API for accessing graphics hardware. Just accept this and move on.

aqnuep
05-06-2012, 12:05 PM
Btw, the FBO, which you've mentioned a couple of times in your posts, is simply an API concept, not a hardware feature. Hardware is able to write the results of rendering to arbitrary video memory locations, which can be your texture, your renderbuffer, or your default framebuffer. The rest is just an abstraction that the spec, and thus the driver, provides so that you don't have to deal with the intricate details of how to program the hardware in order to, e.g., render to a texture or to multiple textures, etc.
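
(That abstraction in practice - a minimal render-to-texture sketch, assuming a GL 3.0+ context; the 512x512 size is arbitrary:)

// Create a texture and attach it as the colour target of an FBO.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete\n");
// ...render here; the results land in tex...
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer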

fobbix
05-06-2012, 02:21 PM
Current hardware can handle 2D ultra-fast - as fast as 3D (considering 2D is just a special case of 3D, as was mentioned before). Just because you can't write an application that renders 2D elements fast doesn't mean it's not possible. Even for very complex 3D scenes, current GPUs can easily render at hundreds of FPS. With a properly written 2D renderer you should get at least the same with any GPU on the market. If you failed to do so, blame yourself, not the OpenGL spec.
What if I wanted tens of thousands of units (with 2D images) whilst having a 3D map? It's true, though, that they'd probably have to be billboards to have the correct depth scaling applied (even though you could get around that; rendering 2D with 3D might have some side effects, though). Still, it's very annoying for HUDs and GUIs, which are 99% 2D, and from a latest-generation-FPS perspective the HUD should have the least performance penalty possible. And then there's the fact that you might want to modify textures in real time.

My biggest complaint about OpenGL is that it doesn't work like it's supposed to on my computer. I think the biggest problem I've had is with PBOs: I loaded some in, rendered them, and then tried loading some more in - a reasonable request, but OpenGL doesn't like it; the framerate just starts crashing the more I load in. I hate it, and it's not my fault. So I had to change to FBOs, but then textures either wouldn't render to or from them, so now I've changed to rectangle textures and it's working fine - but I know it's not direct, I know it's not as fast as it should be, and I know I've wasted so much time on the other things because they were presented to me as great for 2D (by books and the OpenGL spec).
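
(For what it's worth, the usual cure for that kind of stall is to double-buffer the PBOs and orphan each one before writing, so the driver never blocks on a buffer that's still in flight - a sketch, where pbo[2], bufSize, tex, w and h are assumed to exist already and fillPixels() is a placeholder; in real code, prime both buffers before the first frame:)

static int frame = 0;
GLuint upload = pbo[frame % 2];         // CPU writes into this one...
GLuint source = pbo[(frame + 1) % 2];   // ...while this one feeds the texture

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, upload);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufSize, NULL, GL_STREAM_DRAW); // orphan old storage
void* p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
fillPixels(p);                          // write the new pixel data
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, source);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0); // 0 = offset into the bound PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
++frame;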

I'll be honest: at first I thought it was me. I've messed around a lot with the software image loader I created (breaking and fixing the confusing-looking code), and I've done an animation loader (which is quite hard), but I feel that OpenGL should already have these functions embedded. At the moment I can only use Targas - not exactly a problem, and perhaps a slight blessing. I find there are too many variables for OpenGL to go wrong on, especially with poor book examples and not enough show-how (something even as simple as forgetting to enable 2D textures in the startup), and when you do come across a problem it's very hard to track down. Also, with my current graphics card I have to call glFinish() every frame (to stop the high-pitched noise). I'll be honest, I first heard this with DirectX - interesting how I've come over to OpenGL. (I couldn't solve it; every time I went into fullscreen it was there, emitting from my computer, and thinking it was something I'd done, I was looking for errors in my code.) But it requires that I put that function in.


OpenGL is for accessing 3D graphics hardware, not for accessing the hard drive or for reading your favourite image format. There are tons of libraries out there that accomplish that. OpenGL is not a 3D rendering engine; it is an API for accessing graphics hardware. Just accept this and move on.
So OpenGL doesn't communicate with the CPU? And if there are tons of libraries out there, then why doesn't OpenGL call out to them and get them incorporated into the spec, or into the GLU spec, anyway?

I've made my points on this forum and I've spent a lot of free time here. It's up to OpenGL whether they take an interest or not, but I really want those 2D capabilities, and I believe I've made some very good suggestions, even if some of them don't apply to the spec itself, or not immediately anyway (like OpenGL having its own operating system, so it can engage games and high-end 3D suites with 100% of the resources, stamp out the cross-platform misapps, and give game/software developers a very easy time, with most of the specs combined and presented in a professional, complete manner). More so, it wouldn't matter which OS you chose to launch into the OpenGL OS from, which for me is a very big thing. (I don't like Windows 7 and the pricing is extortionate; it should be 10 per year indefinitely, or until they discontinue the OS - and the funny thing is that it would cost people more and MS would make more. And they wouldn't have to try to sell crappy new nanny-like OSes.)

Tech for me has moved on; I want to see some very big and positive changes, so I can create some very good games as a designer. !!! Before anybody mentions UDK or Unity !!! - I'm trying to create a great RTS game (2.5D: 2D units and a 3D map with an entirely 2D GUI/HUD, grrr), but I don't have the manpower, and the mouse-picking problem has stopped me dead, so I'm continuing on with my gaming zone until I hear some news from multiple places (relating to multiple subjects).

I've also been going to the gym quite a bit as well, and as you've probably guessed, no, I'm not employed, lol. I should be, but I have bigger ambitions and don't want to be limited. Hopefully I'll hear some good news soon.

It's been nice'ish talking to you all. Thank you for your views.

ZbuffeR
05-06-2012, 03:10 PM
My biggest complaint about OpenGL is that it doesn't work like it's supposed to on my computer.
I guess this is the actual problem you have with OpenGL.
That is the responsibility of the implementation (i.e. Nvidia or AMD/ATI or Intel etc., not the Khronos Group).

Can you be specific about your current hardware/driver/OS?


Also, with my current graphics card I have to call glFinish() every frame (to stop the high-pitched noise). I'll be honest, I first heard this with DirectX - interesting how I've come over to OpenGL. (I couldn't solve it; every time I went into fullscreen it was there, emitting from my computer, and thinking it was something I'd done, I was looking for errors in my code.) But it requires that I put that function in.
glFinish() will totally kill your performance, especially with async texture data transfers.
The noise can come from the card getting hot and spinning the fan up to avoid melting. A starting point to avoid unnecessary work on the card is to use vsync, with *SwapInterval*(1) instead of (0).
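
(Concretely, that means setting the swap interval through the platform extension - a minimal sketch; the function pointers must be fetched at runtime and availability varies by driver:)

#ifdef _WIN32
// WGL_EXT_swap_control
typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT) wglSwapIntervalEXT(1);   // wait for vsync on each SwapBuffers
#else
// GLX_SGI_swap_control
typedef int (*PFNGLXSWAPINTERVALSGIPROC)(int);
PFNGLXSWAPINTERVALSGIPROC glXSwapIntervalSGI =
    (PFNGLXSWAPINTERVALSGIPROC)glXGetProcAddress((const GLubyte*)"glXSwapIntervalSGI");
if (glXSwapIntervalSGI) glXSwapIntervalSGI(1);
#endif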

Alfonse Reinheart
05-06-2012, 03:21 PM
I guess this is the actual problem you have with OpenGL.

No, really, his problem with OpenGL is that he wants to "blit" sprites rather than rendering sprites the correct way - that is, with textured quads, the way every other application that uses OpenGL (or D3D) for 2D sprite drawing does it.
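
(For reference, the textured-quad path in its simplest legacy, fixed-function form - a sketch; spriteTex, screenW/screenH, and the sprite rectangle x, y, w, h are assumed to exist:)

// Pixel-aligned 2D projection, y-down like most sprite engines.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screenW, screenH, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// One sprite at (x, y), size w x h.
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x,     y);
glTexCoord2f(1, 0); glVertex2f(x + w, y);
glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
glTexCoord2f(0, 1); glVertex2f(x,     y + h);
glEnd();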

fobbix
05-06-2012, 04:29 PM
I guess this is the actual problem you have with OpenGL.
That is the responsibility of the implementation (i.e. Nvidia or AMD/ATI or Intel etc., not the Khronos Group).

Can you be specific about your current hardware/driver/OS ?
ATI Radeon HD 5700 Series/Windows 7 Home X86 and Linux - Ubuntu X86

Driver Packaging Version 8.85-110419a-118908C-ATI
Catalyst™ Version 11.5
Provider ATI Technologies Inc.
2D Driver Version 8.01.01.1152
2D Driver File Path /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
Direct3D Version 7.14.10.0833
OpenGL Version 6.14.10.10750
Catalyst™ Control Center Version 2010.0825.2146.37182

Primary Adapter
Graphics Card Manufacturer Powered by ATI
Graphics Chipset ATI Radeon HD 5700 Series
Device ID 68B8
Vendor 1002

Subsystem ID 2991
Subsystem Vendor ID 1682

Graphics Bus Capability PCI Express 2.0
Maximum Bus Setting PCI Express 2.0 x16

BIOS Version 012.019.000.013
BIOS Part Number 113-HD577XZNFB4-113-C01201-021
BIOS Date 2010/04/14

Memory Size 1024 MB
Memory Type GDDR5

Core Clock in MHz 850 MHz
Memory Clock in MHz 1200 MHz
Total Memory Bandwidth in GByte/s 76.8 GByte/s


Motherboard - EP43T-USB3 (Gigabyte)
CPU - Intel Core 2 Quad - Q9550


I don't rule out that it was probably something I was doing wrong myself, but the possibilities are endless; I can't be certain about these things. Do you have some test code, or perhaps something I can compile in C++ with VS 2010 or Code::Blocks?



glFinish() will totally kill your performance, especially with async texture data transfers.
The noise can come from the card getting hot and spinning the fan up to avoid melting. A starting point to avoid unnecessary work on the card is to use vsync, with *SwapInterval*(1) instead of (0).
The noise is a high-pitched frequency that starts immediately when certain programs are run (when a frame is called before the previous frame has finished rendering; it's definitely to do with graphics rendering). It's not from the fan, and I have had similar problems with other graphics cards; actually, I'm pretty sure even my mum's computer does it as well. Anyway, thanks for the info; it could help a lot.

Anyway, it's quite late and I have to get up early, so cya.

V-man
05-06-2012, 07:19 PM
A high-pitched noise happens when there is a coil vibrating. It happens, ZbuffeR. I have a PC that emits a high-pitched noise when I use Linux (Ubuntu), but under Win XP, for some reason, it doesn't. The noise was coming from a coil near my CPU. Also, when the CPU did some work, the noise would cease. Normally the manufacturer should design the coil properly, or put some silicone or hot-melt glue over it to dampen the noise.
I have no idea why it was fine under Win XP.

l_belev
05-14-2012, 09:11 AM
The coil near the CPU is part of the CPU's power supply system. Semiconductor switches pump current into the coil at a frequency which depends on the load. When the CPU is not idle, it consumes more power, and the coil works at a higher frequency, beyond the human hearing limit (ultrasound). That's why, when your CPU has work to do, the coil goes silent. As for XP, it probably always keeps the CPU busy enough to consume enough power to keep the coil silent.
The coil produces sound because it is not glued together rigidly enough to withstand the magnetic forces that try to bend its parts, so they vibrate a bit.

Sorry for the off-topic, but since other people had already talked about it, and it is one of my other interests, I decided to join :)