
Thread: OpenGL 3 Updates

  1. #91
    Junior Member Regular Contributor
    Join Date
    Apr 2004
    Posts
    228

    Re: OpenGL 3 Updates

    Quote Originally Posted by ector
    Seems like a slower, more annoying version of asking questions to me.

    Can I do it this way? No.
    Can I do it this way? No.
    Can I do it this way? No.
    Can I do it this way? No.
    Can I do it this way? Yes, and you just did it.
    vs
    What does the hardware support? This and that.
    Okay, I do it that way. Fine.
    DirectX's approach is to add parallel APIs just for the questions.
    These query APIs come with their own set of tokens, structures, etc., which roughly correspond to the main APIs but are still very different from them - great complexity for the user.
    These parallel APIs also require matching updates every time the main (functional) APIs change - very cumbersome and error-prone.
    In practice, Microsoft fails to keep the query APIs up to date and adequate. There are many essential questions one might want to ask that the API can't answer.

    This approach is apparently very user-unfriendly, because in practice there are very few DirectX applications that actually use these query APIs.

    As for whether it is slower or faster, that's irrelevant, because one generally queries the hardware capabilities only once, during application startup.
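
    For what it's worth, this is roughly what that once-at-startup query looks like on the DirectX side (a minimal D3D9-style sketch of mine, not from the post; error handling omitted and the particular caps/formats are just examples):
    Code :
    // Assumes a valid IDirect3D9* d3d obtained from Direct3DCreate9(D3D_SDK_VERSION).
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
    BOOL hasAniso = (caps.RasterCaps & D3DPRASTERCAPS_ANISOTROPY) != 0;

    // A separate, parallel query API with its own tokens for format support:
    BOOL fp16Target = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_RENDERTARGET, D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F));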

  2. #92
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: OpenGL 3 Updates

    If I understand correctly, you first create a format object and then use that format object to bind to an image object. There is no guarantee that the bind will succeed either (e.g., supported format, insufficient resources). Then I have to delete the format and try one with a lesser demand on the system. Same boat different paddle.
    That's not how it works.

    In order to create an image in the first place, you need to have a format object. If you cannot create a particular image object for implementation-defined reasons (as opposed to programmer error), then you either use a fallback format object (unlikely, since your code was probably depending on something intrinsic to that format) or you throw an error.

    As for binding, the way all image attachments (framebuffer objects and program environment objects) work now is that as part of the creation of those attachment objects, you specify a format object that all images that get bound to it must use. So, the only reason an image bind to an FBO or PEO would ever fail is because the system could not load all of the images into video memory at the same time. Any format incompatibilities would have been detected as a failure to create the FBO or PEO.
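
    To make that flow concrete, here is a rough sketch of what it looks like. None of these entry points are real -- the GL3 object API hasn't been published, so every name below (glCreateImageFormat, glCreateImage, glCreateFramebuffer, glFramebufferAttachImage) is invented purely for illustration:
    Code :
    // Hypothetical names only; the real GL3 API has not been published.
    GLformat fmt = glCreateImageFormat(/* RGBA8, 2D, mipmapped, filterable, ... */);
    if (!fmt)
        useFallbackFormatOrFail();   // creation fails only for implementation-defined reasons

    GLimage img = glCreateImage(fmt, width, height);   // images always reference a format object

    // The FBO is created against the same format object, so format
    // incompatibilities surface here, at creation time ...
    GLframebuffer fbo = glCreateFramebuffer(fmt);

    // ... leaving "the images don't all fit in video memory" as essentially
    // the only way a later attach can fail.
    glFramebufferAttachImage(fbo, COLOR_ATTACHMENT0, img);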

    Everything I do is pretty much controlled by the demands of my users and I don't have the luxury of telling them that something is not supported. I have to provide an alternative -- period.
    Well, tough. The preponderance of OpenGL users would rather have a "query" mechanism that actually works instead of one that gives false positives or false negatives. The GL 3.0 version of your program will be more complicated. For the rest of us, our lives will be dramatically simplified.

    Once again, an inefficient algorithm to determine what went wrong after the fact.
    Please stop harping about this "inefficiency". Nobody cares about minor inefficiency in startup code.

    Even creating a format object doesn't guarantee that a particular size image will successfully bind.
    ... so? An image failing to bind to an FBO/PEO for implementation-dependent reasons (as opposed to using the wrong format object, which is a program error) is about the least likely thing to happen in a program. I imagine that most code wouldn't even catch the error.

    Take a 256MB card. In order to fail to bind a VAO/PEO/FBO set of state, the sum total of all buffer objects and image objects used by all three stages must be > 256MB (probably a bit less than 256, but never mind that now). You could do it with a small number of 4096x4096x32-bit textures.
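
    As a quick sanity check on that figure (the arithmetic is mine, not part of the original post):
    4096 * 4096 * 4 bytes = 64 MiB per texture, so 4 * 64 MiB = 256 MiB.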

    But consider this: what would D3D do? What would OpenGL 2.1 do? I don't know D3D, but I doubt it has any mechanism other than, "Hey, that didn't work." GL 2.1 would give rise to an error on rendering. So GL 3.0 isn't particularly better in this rare circumstance.

  3. #93
    Member Regular Contributor
    Join Date
    May 2001
    Posts
    348

    Re: OpenGL 3 Updates

    Quote Originally Posted by Korval
    But it is. Discard takes shader time; at least one opcode's worth. Alpha test is free; it happens alongside the depth test. On hardware that doesn't support actually halting on discard, this will be slower than alpha test.
    The cost of a single scalar comparison is getting insignificant given current hardware and shader complexity developments. It's also dwarfed by the impact fragment masking has on early Z optimizations.

    Both DX10 and OpenGL ES 2.0 dropped alpha test. Both were heavily influenced by IHVs. There's no reason to believe alpha test has a future as a fixed-function hardware feature.


    Quote Originally Posted by knackered
    because they have suddenly become high priority for me, and looking at recent threads, they have for lots of others going the unified shader route.
    ...and it should be a doddle, as it's already in ES.
    OpenGL ES supports loading binary shaders created by an offline compiler. There's no API for retrieving a binary blob from the driver yet, though. The latter is arguably more important if the number of hardware variations is high.
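
    For reference, the load side that ES 2.0 already has looks roughly like this (a minimal sketch; GL_MY_VENDOR_BINARY_FORMAT is a placeholder for the vendor-specific format token, and blobData/blobSize are assumed to come from the offline compiler's output):
    Code :
    // Load a precompiled shader blob produced by an offline compiler (OpenGL ES 2.0).
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderBinary(1, &shader, GL_MY_VENDOR_BINARY_FORMAT, blobData, blobSize);
    // ...attach and link as usual. There is no call going the other direction,
    // i.e. reading a compiled binary back out of the driver.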

  4. #94
    Senior Member OpenGL Pro
    Join Date
    May 2000
    Location
    Naarn, Austria
    Posts
    1,102

    Re: OpenGL 3 Updates

    FWIW, if I know that a particular filtering mode is not supported at all for a particular graphics card, why should I even bother attempting to create a format object with that filtering mode? THAT is a better alternative.
    Ok, and how exactly do you find out that information? I mean, in enough detail that none of the problems I described in my previous post appear?

    You're right, you may have to test a lot of combinations if you want to know exactly what combinations are allowed. I never said that's not true.

    Let's just assume there were an API that lets you query which filter modes are supported, and another API that lets you query which formats are supported. Then let's assume an implementation supports filtering mode A only with format B, and filtering mode C with format D, but not A with D or B with C.

    When I query support for format B, filtering A and filtering C respectively, what should the result be? Yes/Yes/Yes? That would be a lie, because if I try to use format B with filtering C it would fail. No/No/No? That would be a lie, too, because format B with filtering A would have worked. If there are X possible combinations, you may have to test X combinations in the worst case; it's as simple as that.

    Please give me a concrete suggestion for how a simpler query scheme than format objects would work. You just say "that's not good", but you fail to provide a better alternative.

    If you don't understand the difference between n! and (n-1)!, then you can't possibly understand the problem.
    I wonder if you understand what you're talking about. n! is not the number of combinations, it's the number of permutations. The number of possible combinations of n properties with m values each is m^n, and that's usually a lot less than n! (but still too much to test everything).
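
    For a concrete feel of the difference (the numbers are mine): with n = 10 yes/no properties,
    2^10 = 1,024 combinations, whereas 10! = 3,628,800 permutations.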

    Please, when trying to be smart and telling other people how incompetent they are, at least check that you are correct; otherwise it could get embarrassing.

    Future proofing has nothing to do with having or not having a decent query system.
    "Future proofing" has everything to do with it. Any system that does not easily extend to future hardware is bad, because OpenGL should continue to work with future hardware. The old system of querying some arbitrary limits was not future proof, and we all know the result.

    And again, please enlighten us with your genius idea for a decent query system. If you don't have a better idea, you have no right to complain.

  5. #95
    Senior Member OpenGL Guru knackered's Avatar
    Join Date
    Aug 2001
    Location
    UK
    Posts
    2,833

    Re: OpenGL 3 Updates

    ok, just this once....
    Code :
    glCreateProgramObject(...)
    glCreateShaderObject(...)
    glShaderSource(...)
    glCompileShader(...)
    glAttachObject(...)
    glLinkProgram(...)
    glUseProgramObject(...)
    // proposed entry point: glGetProgramObjectBinary(void *ptr, uint32 *size)
    uint32 size = 0;
    glGetProgramObjectBinary(NULL, &size);   // query the required size
    void* ptr = malloc(size);
    glGetProgramObjectBinary(ptr, &size);    // fetch the compiled binary
    FILE* blobFile = fopen("blob.blob", "wb");
    fwrite(ptr, 1, size, blobFile);
    fclose(blobFile);
    there you go, free of charge - and no tea needed.
    Knackered

  6. #96
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: OpenGL 3 Updates

    Quote Originally Posted by Korval
    My point is that alpha test cannot emulate the functionality of discard, not the other way around.
    Fine, I was talking on an algorithmic level. If your algorithm uses one, it can be adapted to the other, but it may require more work to work around the limitations of alpha test.
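
    For illustration (a minimal GL 2.x-era sketch of mine; the 0.5 threshold is arbitrary), the adaptation in the simple cut-out case is mechanical:
    Code :
    // Fixed-function alpha test:
    glAlphaFunc(GL_GEQUAL, 0.5f);
    glEnable(GL_ALPHA_TEST);

    // The same cut-out expressed with discard in the fragment shader:
    const char* fs =
        "uniform sampler2D tex;\n"
        "void main() {\n"
        "    vec4 c = texture2D(tex, gl_TexCoord[0].xy);\n"
        "    if (c.a < 0.5) discard;\n"
        "    gl_FragColor = c;\n"
        "}\n";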

    So that they could remove those shader opcodes and replace them with an alpha test.
    Which would be counterproductive in the vast majority of the cases.

    Alpha test is a per-sample operation.
    Alpha test is a per-fragment operation.

    Not without making reference to hardware that has dynamic branching and early termination support.
    Which modern hardware has to various degrees. You can't ignore this any more than you can ignore early-z and hi-z optimizations, even though it's irrelevant from a spec point of view.

    Some of us actually still care about R300/R400 class hardware. It's going to be around a while, and having support for it would be nice. Also, they're a lot more sensitive to performance due to number of opcodes.
    Well, I'm reasonably confident (it was a while since I checked this) that the R400 generation kills the texture lookups for dead quads. Performance-wise we're talking about saving one opcode at best. Bloating a new API with legacy functionality for that seems like a waste far beyond what's normally accepted for supporting legacy hardware.

    I didn't say that you should leave it on all the time, but it certainly isn't as expensive as a shader opcode.
    It's typically far more expensive than a shader opcode.

  7. #97
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: OpenGL 3 Updates

    Quote Originally Posted by Jan
    I always got the impression that discard is by far the most evil per-fragment operation.
    The most evil operation is depth output. It not only kills both hi-z and early-z, it also kills z-compression and thus negatively affects the performance of later rendering passes to the same screen area. Alpha test and discard are equally evil to the pipeline, but not as bad as depth output. They work fine with hi-z, but not with early-z unless you disable depth and stencil writes. Generally dynamic branching is preferable if you have decent coherency and you don't strictly need to kill writes to the color, depth and stencil buffers.
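
    To make the "depth output" case concrete (GLSL 1.10-era syntax; the particular value written doesn't matter, any write to gl_FragDepth has this effect):
    Code :
    // Writing gl_FragDepth forces the real depth test to run after the shader,
    // which is what defeats hi-z/early-z and z-compression as described above.
    const char* fsDepthOutput =
        "void main() {\n"
        "    gl_FragColor = vec4(1.0);\n"
        "    gl_FragDepth = gl_FragCoord.z + 0.001;\n"
        "}\n";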

  8. #98
    Senior Member OpenGL Guru Humus's Avatar
    Join Date
    Mar 2000
    Location
    Stockholm, Sweden
    Posts
    2,345

    Re: OpenGL 3 Updates

    Quote Originally Posted by Eric Lengyel
    Will ATI's OGL3 driver support XP?
    I don't have any up-to-date plans since I'm not working there anymore, but the GL driver used in Vista (which is a total from-scratch rewrite) works in XP too for all hardware from R300 and up. However, it hasn't been publicly exposed there yet, and I don't know when it will be. I believe the latest Linux drivers are using that driver, though. I believe the reason it's not exposed in XP is that the legacy driver is more compatible with current games and applications. However, the R600 generation uses that driver on XP too, and a few select applications use the new driver on XP as well.

    Anyway, I take it for granted that there will be no GL3 work done on the legacy driver, so GL3 support will have to be in the new driver. I don't know what the plans are, but it would surprise me if ATI didn't expose GL3 on XP.

  9. #99
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: OpenGL 3 Updates

    You can't ignore this any more than you can ignore early-z and hi-z optimizations
    Yes, but all hardware targeted by GL 3.0 will have early-z and some form of coarser z-culling. Not all 3.0 targeted hardware has dynamic branching.

    the R400 generation kills the texture lookups for dead quads.
    Say what? Since when? If this is true, why isn't this information well known?

    Bloating a new API with legacy functionality for that seem like a waste far beyond what's normally accepted for supporting legacy hardware.
    It's hardly bloat. At the very least, it isn't significant bloat.

    Generally dynamic branching is preferable if you have decent coherency
    And are dealing with hardware that has dynamic branching support, rather than the R300/R400/NV30, which have to execute both sides of the branch and pick one or the other at the end.
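
    Roughly speaking, on that class of hardware a branch like "if (cond) x = a(); else x = b();" ends up as something along these lines (illustrative only; a(), b(), cond and x are placeholders, not the output of any real compiler):
    Code :
    float resultA = a();                 // both sides are always evaluated ...
    float resultB = b();
    float x = cond ? resultA : resultB;  // ... and one result is selected at the end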

  10. #100
    Intern Contributor
    Join Date
    Feb 2003
    Posts
    85

    Re: OpenGL 3 Updates

    Quote Originally Posted by Overmind
    I wonder if you understand what you're talking about. n! is not the number of combinations, it's the number of permutations. The number of possible combinations of n properties with m values each is m^n, and that's usually a lot less than n! (but still too much to test everything).
    You are correct. For yes/no values where order is not important, the total number of possible combinations is 2^n. However, you can't argue that eliminating one feature cuts the number of possible failures in half.

    Quote Originally Posted by Overmind
    And again, please enlighten us with your genius idea for a decent query system. If you don't have a better idea, you have no right to complain.
    There's nothing genius about telling a developer that a particular feature is not supported:

    Format_t *ptFormat = 0;

    if (unsigned long ulFeatures = GetSupportedFeatures())
    {
        if (!ptFormat && (GL_SUPPORTS_FEATURE_X & ulFeatures))
        {
            // Try to create an X-capable format ...
        }
        if (!ptFormat && (GL_SUPPORTS_FEATURE_Y & ulFeatures))
        {
            // No luck so far, try for a Y-capable format ...
        }
    }

    if (!ptFormat)
    {
        // Do this in software
    }

    This is a very broad-brush query and does not eliminate the need for fallback logic to handle incompatible or unavailable feature combinations -- it simply narrows the search for a valid format.

    As for complaints that D3D or GL2 "lie" about a particular "feature" or that the query information is out of date: the feature bits are provided by the driver either way. Format creation success offers no additional security or benefit; it's just that the developer is forced to discover the lack of functionality after the fact. Even with a simple query for supported features, the application still has to create the format and handle all exceptions and failures - there is no argument from me on that.

    I will do what I have to do but I don't have to like it. Harping over.

    -- peace
