Part of the Khronos Group
OpenGL.org


Thread: Enforce speeds and focus on 2D! and More

  1. #11
    Junior Member Newbie
    Join Date
    May 2012
    Posts
    12
    Performance requirements for certain functions/methods: hardware vendors might try hard to get certain OpenGL functions running fast but then fall short on different/alternative functions. You could just add it as an additional OpenGL speed stamp. (As a simple example: render 1000 smooth-shaded triangles/quads/polygons/etc. at a minimum of x. OpenGL does have the timing capacity. It should also be pointed out that there's a difference in performance between Windows and Linux, so OpenGL could have speed stamps for different OSes.)

    My main point is that OpenGL leaves it a bit too open for hardware vendors to mess up the performance of OpenGL.

    And to V-man: I am referring to loads more above, like dealing with images/animations. If GLX is a part of OpenGL, or released very closely alongside it, then surely OpenGL could come up with image-loading facilities easily. Rendering extremely fast/directly in 2D is important in any game; most have HUDs and GUIs.

    As a question, I've always thought about rendering 2D stuff to one buffer and the 3D game world to another (at the same time), then just compositing the 3D onto the 2D and flipping the image to the screen, but I don't know whether that would be faster. Not that I'd be able to do that, since I'm using C++ with OpenGL and I don't know anything about parallel graphics processing.

    Anyway, as a bottom line for the actual API spec: OpenGL could really do with ensuring new developers have a much easier time with OpenGL functionality, even if it means revamping 2D. It shouldn't be this difficult and frustrating to program with OpenGL (which is exactly how I've found it).

  2. #12
    Member Regular Contributor
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    325
    Again, OpenGL is an API spec and should concern itself with defining functionality, not hardware. The Windows hardware logo isn't part of the D3D specs either...

    Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in the wrong direction. Neither is a problem with the spec!
    There is one problem with the spec: it grew over the years and GPU generations and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (though not by much, for the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

    I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong place for your criticism.

  3. #13
    Member Regular Contributor
    Join Date
    Apr 2009
    Posts
    268
    Quote Originally Posted by menzel View Post
    Again, OpenGL is an API spec and should concern itself with defining functionality, not hardware. The Windows hardware logo isn't part of the D3D specs either...
    That's really beside the point. I'm not even aware of a 'D3D spec', if such a beast exists - it's apparently not needed. A document on its own isn't really worth all that much (unless it's everything produced) when you have tests that everyone needs to pass. And saying that the WLK isn't part of D3D may even be technically true, but what does it change? Every hardware vendor wants to pass it anyway - the tests and requirements I have in mind don't even need to be part of the GL PDF spec; they could instead be a 'GL logo kit' or whatever you want to name it.

  4. #14
    Junior Member Newbie
    Join Date
    May 2012
    Posts
    12
    Quote Originally Posted by menzel View Post
    Again, OpenGL is an API spec and should concern itself with defining functionality, not hardware. The Windows hardware logo isn't part of the D3D specs either...

    Yes, OpenGL isn't easy to learn, but that's because 3D programming close to the metal isn't easy. Also, there are too many outdated and too few up-to-date books and tutorials, which often lead in the wrong direction. Neither is a problem with the spec!
    There is one problem with the spec: it grew over the years and GPU generations and carries old concepts and backwards compatibility. A clean, new API might be smaller and easier to learn (though not by much, for the other reasons!), but we just can't create a completely new API + drivers every two years and throw everything else away.

    I understand that you have some suggestions which would make your life easier, but these aren't ideas which should get implemented into the OpenGL standard - it's just the wrong place for your criticism.
    "3D programming close to the metal isn't easy" - I didn't know we were programming in a low-level library? I'm sure OpenGL is actually quite high-level, and once the code is in place it is easy to program with OpenGL. It's only the maths that is problematic, and for programmers most of the time it's a case of "don't reinvent the wheel". And on the first bit: OpenGL is easy to learn if you were taught it.

    The thing you're not getting is that everything oriented around or associated with OpenGL has a massive impact on how people view OpenGL. Is OpenGL practical? If you were in charge, the answer would be no. OpenGL should ALWAYS be pushing the graphical boundaries for games - surely that is the objective of OpenGL; I'm sure I've read as much.

    I do agree about the books; they should be a load better than they are, especially for beginners to OpenGL. (They mislead and don't start out with easy stuff in the first few chapters, nor do they give simple, useful source-code snippets, and they rely too heavily on GLUT - all you'd really need is one additional project to act as an engine, and then get rid of all the unnecessary/messy code in there. Worst of all, they have a general tendency to use deprecated functionality or cover unnecessary garbage first. They can also be boring to read, and some have the entire core spec located within the book???)

    Thinking about it, if frame buffer objects are good enough for 2D then the spec should allow an unlimited number of them. The blit function should have a few more cousins to allow for maximum 2D features, and perhaps a few states (unrelated to the main blit, which can be kept as is), and FBOs should be strongly encouraged. And if that's true, then the opposite also holds: perhaps other 2D functionality shouldn't be present. Put very simply, it should be very clear and concise.

    OpenGL should have better parallel processing within the spec (I'm thinking of two OpenGL contexts here, and perhaps a third for either loading or updating resources, with shared memory etc.). OpenGL should not rely on extensions, especially if those extensions pertain only to ATI or NVIDIA; OpenGL should enforce cohesion.

    On the second-to-last paragraph: SDL is quite old and hasn't changed much? The point being that if you write the functionality properly and completely in the first place, then you only need a few minor adjustments in the long run. I think SDL's 2D support is very good, and perhaps it's what OpenGL should have, instead of making 2D rocket science in OpenGL.

  5. #15
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,183
    Quote Originally Posted by fobbix View Post
    "3D programming close to the metal isn't easy" - I didn't know we were programming in a low-level library? I'm sure OpenGL is actually quite high.....
    OpenGL is quite low-level, yes. It's a little bit higher than D3D, which forces the programmer to go more hands-on with some of the messier aspects of the hardware, but much much lower than e.g. GDI.

    ...and speaking of hardware...

    Quote Originally Posted by fobbix View Post
    Thinking about it, if frame buffer objects are good enough for 2D then the spec should allow an unlimited number of them.....
    Something like this is not going to happen. In graphics API land, hardware capabilities rule; if hardware is just not able to have infinite numbers of FBOs (because you can't fit an infinite amount of resources into a coupla GB of video RAM) then it doesn't matter a sweet damn what a hypothetical future spec may or may not say - you just can't have infinite numbers of FBOs. OpenGL tried this route before: it tried to build up something where hardware details didn't matter so much and the programmer was expected to just trust the driver to do the right thing. It didn't work, and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still felt even today). That's not a pleasant experience for a programmer and not a pleasant experience for an end user - if you're writing production code that is going to result in a publicly released program used on a wide variety of machines, then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you, the programmer, to make appropriate judgement calls.

    I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you're actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more, and lets you throw draw calls around without having to sweat the details of the hardware (several such already exist), but don't go looking for it in OpenGL itself.

  6. #16
    Junior Member Newbie
    Join Date
    May 2012
    Posts
    12
    Quote Originally Posted by mhagain View Post
    OpenGL is quite low-level, yes. It's a little bit higher than D3D, which forces the programmer to go more hands-on with some of the messier aspects of the hardware, but much much lower than e.g. GDI.
    OpenGL is unified and high-level. Low-level would be found in the MS-DOS days, or in that library I can't remember the name of that people use for consoles. The distinction between high and low is quite unimportant. How OpenGL initialises is poor - it's not low-level, it's just not present; the same goes for image loading.

    Quote Originally Posted by mhagain View Post
    Something like this is not going to happen. In graphics API land, hardware capabilities rule; if hardware is just not able to have infinite numbers of FBOs (because you can't fit an infinite amount of resources into a coupla GB of video RAM) then it doesn't matter a sweet damn what a hypothetical future spec may or may not say - you just can't have infinite numbers of FBOs. OpenGL tried this route before: it tried to build up something where hardware details didn't matter so much and the programmer was expected to just trust the driver to do the right thing. It didn't work, and the end result was unpredictable, unexpected behaviour and sudden nasty fallbacks to software emulation (not to mention a lot of poor-quality tutorials, the baleful influence of which is still felt even today). That's not a pleasant experience for a programmer and not a pleasant experience for an end user - if you're writing production code that is going to result in a publicly released program used on a wide variety of machines, then you absolutely need reasonable assurance that this kind of thing is not going to happen. And that means that details of the hardware need to be exposed in a manner that allows you, the programmer, to make appropriate judgement calls.

    I suspect that overall what you want and need is not an API like OpenGL. I mentioned GDI before - I suspect that this kind of API is what you want and need, and is what you're actually asking for. Sure, a higher-level framework could be built on top of OpenGL (or even D3D) that abstracts away the hardware-specific stuff somewhat more, and lets you throw draw calls around without having to sweat the details of the hardware (several such already exist), but don't go looking for it in OpenGL itself.
    So how many textures can a card store? I'm merely suggesting here that unless there's a reason for FBOs to be capped, they shouldn't be capped at all. I'm not too sure on existing standards, but I have read somewhere that there's only guaranteed to be at least 1 FBO on Linux, or something like that. On top of that, I was trying to use FBOs for all my 2D stuff but it just wasn't working for one reason or another (they also had a really lacklustre feature set, in terms of only having blit), so I gave up and now I'm using rectangle textures for 2D, which no doubt isn't as direct/fast as it could and should be - which is sloppy on OpenGL's part. (I've also tried PBOs with no luck.)

    I think the last part of your first paragraph actually shows that you support what I'm suggesting - essentially quality control. Personally I fail to see what you're getting at there; I'm trying to get OpenGL to tighten up and ensure the hardware vendors follow suit, so software developers - or, more so, the end user - don't get nasty shocks. Having varied results from the same functionality is poor.

    Are you trying to state that I can do 3D in GDI? GDI can't realistically work alongside OpenGL, can it? (In parallel/unison.) I'm also asking for more software/state-based options for FBOs for 2D. Also, according to you, you would have to learn two graphics libraries in order to successfully utilise OpenGL. I thought OpenGL got out of that whole "I'M JUST A 3D GRAPHICS LIBRARY, K???" business.

    OpenGL isn't hardware-dependent, so what are you talking about??? Quite clearly what you've just stated would allow OpenGL to become hardware-dependent again? That would be the worst thing ever, and would probably mean the imminent death of OpenGL.

    Proper, realistic, useful, FAST, clear and concise 2D is long overdue.

    What is it about people and weird names?

  7. #17
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264
    Quote Originally Posted by fobbix View Post
    So how many textures can a card store?
    Think about your question for a moment.
    Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
    The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
    Some textures are mipmapped and others are not.

    Now let me ask you a question : So, how many cars can we park in that driveway?


    Quote Originally Posted by fobbix View Post
    im just merely suggesting here that unless theres a reason for FBO's to be capped, then they shouldnt be capped at all. Im not too sure on existing standards but i have read somewhere that theres guaranteed to only be at least 1 FBO in linux or something like that,
    What on earth are you talking about?
    Stop throwing around nonsense. You are going to confuse some newcomers, and the next thing we know they'll be making pointless suggestions here.

    The beginners' forum and this forum are already filled with enough nonsense.
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  8. #18
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,183
    Quote Originally Posted by fobbix View Post
    <snip>
    I broadly support the concept of some form of performance testing for drivers, but this needs to be viewed in context. I, as a developer, want to know whether doing X will cause the driver to pull some shenanigans behind the scenes that knock me off the fast path. Outside of forums, talking to other people, and running tests and heuristics myself, I frequently have no way of knowing this; centralized testing, with a centrally maintained driver database and performance notes for each hardware/driver combo, would help out a lot here. Enforcement - no. Knowledge - yes.

    Given the ARB's resources and mission constraints I freely admit that this is more a pipe-dream than anything else, but one can always dream.

    I'm sorry to say this but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version, but otherwise it is very hardware-dependent. Unless you're at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.

    It's worth noting here that OpenGL itself doesn't specify hardware dependence; it is, after all, "a software interface to graphics hardware ... that may be accelerated". But individual OpenGL implementations (outside of pure software implementations that are worthless for real-world high-performance use) are provided by hardware vendors, and so you have built-in hardware-dependence from the outset. You can't use an NVIDIA driver with AMD graphics hardware, can you?

    I read two things from your posts, and one of them is that you seem to share a common misconception that the majority of what OpenGL does is happening in the software domain, with perhaps only the final blit-to-screen happening in hardware. That's not the case at all - the function of OpenGL is to tell your graphics hardware to draw stuff, but it's your graphics hardware that actually does the drawing, all the way from individual vertexes and triangles to the final screen output. If your graphics hardware can't do what you tell it to, guess what? It's not going to do it (what happens next depends on individual drivers). The other one is that OpenGL is not a good choice for the kind of work you want to be able to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special-case of 3D after all) - that's not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.

  9. #19
    Junior Member Newbie
    Join Date
    May 2012
    Posts
    12
    Quote Originally Posted by V-man View Post
    Think about your question for a moment.
    Some textures can be 256 x 64, some can be 1 x 1. Some can be 4096 x 100.
    The format can be GL_RGBA8 or even a floating point format such as GL_RGBA32F.
    Some textures are mipmapped and others are not.

    Now let me ask you a question : So, how many cars can we park in that driveway?
    You fail to see the point.

    Quote Originally Posted by V-man View Post
    What on earth are you talking about?
    Stop throwing around nonsense. You are going to confuse some newcomers, and the next thing we know they'll be making pointless suggestions here.

    The beginners' forum and this forum are already filled with enough nonsense.
    I've read such things somewhere or other (it might have been something closely related, though), and OpenGL does have minimums on quite a few things.

    Quote Originally Posted by mhagain View Post
    I'm sorry to say this but OpenGL both is and is not hardware-dependent. You only need to look at vendor-specific extensions to know this. You only need to look at the various GL_MAX_ values you can query with glGet to know this. You only need to look at trying to run GLSL on a TNT2 to know this. OpenGL can only strive to be hardware-independent within the bounds of a given GL_VERSION and any minimum features and values that may be specified for that version, but otherwise it is very hardware-dependent. Unless you're at a very raw beginner level, or unless you have some very serious misunderstandings about things, you should already know this.
    I would probably favour hardware-specific code being slower and much less supported across the board; the point of those extensions is to bring new ideas from the vendors into the OpenGL spec.

    Quote Originally Posted by mhagain View Post
    I read two things from your posts, and one of them is that you seem to share a common misconception that the majority of what OpenGL does is happening in the software domain, with perhaps only the final blit-to-screen happening in hardware. That's not the case at all - the function of OpenGL is to tell your graphics hardware to draw stuff, but it's your graphics hardware that actually does the drawing, all the way from individual vertexes and triangles to the final screen output. If your graphics hardware can't do what you tell it to, guess what? It's not going to do it (what happens next depends on individual drivers). The other one is that OpenGL is not a good choice for the kind of work you want to be able to do. You want to focus on predominantly 2D stuff, and - while OpenGL can certainly do this (2D is just a special-case of 3D after all) - that's not what OpenGL really sets out to do. You need a specialized 2D API that has the capabilities you want instead.
    OpenGL is emulated in hardware; the point is that when you call RenderARedBlock, it renders a red block. OpenGL is a high-end graphics API and should be more than capable of handling 2D at real-time rates (with respect to very fast 2D, not just relying on uber graphics cards that have to process needlessly more data because of a sloppy OpenGL standard).

    Anyway, my dinner's ready, so cya.

  10. #20
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,183
    Quote Originally Posted by fobbix View Post
    I would probably favour hardware-specific code being slower and much less supported across the board
    This is nonsense. The OpenGL spec cannot mandate that a hardware-specific extension - or hardware-specific anything - runs slower. Hardware vendors implement OpenGL in their drivers; if a hardware vendor decides that their vendor-specific extension is going to run fast, then their vendor-specific extension will run fast. As for hardware-specific code in general - are you really demanding that calls such as glActiveTexture should run slower? I think you're failing to understand the hardware-specific nature of these things, because, yes, glActiveTexture has hardware-specific dependencies: the values of GL_MAX_TEXTURE_COORDS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, which are determined by the hardware.

    Quote Originally Posted by fobbix View Post
    OpenGL is emulated in hardware
    You've got it completely backwards. Hardware doesn't know or care whether it's OpenGL, D3D, or something else entirely. Your OpenGL driver converts calls into some vendor-specific format that the hardware understands, then sends them on to the hardware, which actually draws stuff. At the most basic level hardware is just a bunch of logic gates and other gubbins; there's nothing in hardware that knows or understands a glBlendFunc call, for example. There is a hardware blending unit, and your glBlendFunc call gets translated by the driver into something that sets parameters for the blending unit. What that "something" is is none of your business; it's entirely a property of the hardware, it's allowed to vary across different hardware, and it's no different for any API. The blending unit itself doesn't care about OpenGL, much the same way that your CPU doesn't care about the OS it's running.

    OpenGL is implemented by the driver, which is software, not hardware, and that's where it starts and ends. Beyond that point hardware takes over and OpenGL is completely irrelevant to any further discussion of what does or doesn't happen.

    That's the whole point here. You can ask for new features to be added to OpenGL until you turn blue, but if those new features don't exist in hardware, or can't be mapped to something that does exist in hardware, then you are wasting your time. What you're asking for here is very domain-specific. You want higher-level 2D features and support. High-level 2D features and support do not exist in hardware, and the whole thrust of recent OpenGL versions has been to move the API to a closer and more sensible mapping to how hardware actually works. So if you've chosen OpenGL for this, then you've made a bad choice. You don't want OpenGL, you want a high-level 2D API instead.
    Last edited by mhagain; 05-05-2012 at 01:50 PM.
