
Thread: The ARB announced OpenGL 3.0 and GLSL 1.30 today

  1. #181
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    Exactly, now that those 2.1 + extensions have been ratified as core, we can finally see driver support from other vendors. This alone is very important.
    No, it is not.

    Intel won't be supporting jack. They've never had good OpenGL support, and GL "3.0" isn't going to change that. Reliance on Intel's GL implementation is flat-out stupid.

    ATi may claim support for OpenGL, but it is a minefield. You never know when an ATi driver will crash or choke on some shader. Worse still, you never know when it will choke on some shader after you ship.

    Sure it does; we finally have cross-platform support for a majority of the current GPU features.
    No, those are features. They don't resolve fast-path issues.

    I'm personally targeting DX10 level and up
    Well golly gee willikers, isn't that nice for you. The rest of us recognize that there are millions of DX9 cards out there that need support too.

    and IMO the fast path on that hardware is very well defined if you are keeping up with the hardware design
    Really? Then does ATi's hardware support normalized unsigned shorts as vertex attributes? Does nVidia's? How many attributes can be separated in different vertex buffers on their hardware? What "hardware design" should we be keeping up with to answer these questions?

    I think the fact that GL3 provides DX10-level features on XP (assuming ATI and Intel build GL3 drivers for XP) would be enough.
    Vista marketshare is only going one way: up. That fact might have been useful a year ago, or two years ago. But that ship has sailed.

    For smaller developers, having a larger market share (i.e. adding in Apple as well) could be very important.
    But that would be much more expensive, which smaller developers can't afford. They're doing good to test on XP and Vista. You'd be asking them to test on XP, Vista, and MacOS X. Not to mention having to develop for MacOS X to begin with.

    Be happy for what you have, and do the best to use it to your advantage.
    That's like saying that it's OK that you were promised a steak dinner and are given dog poo. At least you aren't starving; after all, you've got that nice dog poo.

  2. #182
    Junior Member Regular Contributor
    Join Date
    Jul 2007
    Location
    Alexandria, VA
    Posts
    211

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    Quote Originally Posted by pudman
    If you are a developer on Windows what would be the deciding factors for choosing OpenGL over D3D?
    I think the fact that GL3 provides DX10-level features on XP (assuming ATI and Intel build GL3 drivers for XP) would be enough. For smaller developers, having a larger market share (i.e. adding in Apple as well) could be very important.
    Let me rephrase: is cross platform compatibility the ONLY reason you stay with OpenGL? Can you enumerate the features in OpenGL (we're talking programming features AND hardware-supported features) that you'd be without if you switched to D3D?

    OpenGL will always be useful for its cross platform nature, there's no disputing that. Regardless of whether the ARB continues to drag its heels, we multiplatform folk have no alternative. Surely you can convince me that I'm missing something?

  3. #183
    Junior Member Regular Contributor
    Join Date
    Aug 2007
    Location
    Adelaide, South Australia
    Posts
    206

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    Quote Originally Posted by Timothy Farrar
    "ByteCode GLSL, " - If you really get into the internals of both the order and (especially) newer hardware you will see that a common byte code for all vendors is a bad idea because hardware is way too different. All vendors would still have to re-compile and re-optimize into the native binary opcodes. So all you would be saving is parsing strings into tokens which really isn't much of a savings.
    Quote Originally Posted by ektor
    You're forgetting the #1 advantage of the token approach: updated drivers will not break your shader compiles. Currently, nVidia can fix a bug in its parsing that renders a previously parsable program illegal, or a program can be illegal on ATI but legal on nVidia while using just basic features but slightly out-of-spec syntax. This problem, which is a major one, SIMPLY DOES NOT EXIST on D3D and is the major reason why D3D has you compile the shaders into tokens first.
    That's why I put bytecode and binaries as separate items; I want BOTH.
    ByteCode GLSL would just parse the strings into a more compact format that is still a high-level language (something like P-code would be ideal).
    As ektor said, the main advantage is that you don't get unexpected syntax errors when the customer recompiles it with a different driver.
    Other advantages are:
    2/ Some optimisations can be done at the tokenisation stage, such as dead code removal and combining variables with different scope into one register.
    3/ The bytecode is smaller and faster to load (especially for those that have hundreds of shaders).
    4/ Those of us who prefer languages that are not 'C' can write our own front-end in our language of choice.
    5/ The hardware vendors only need to write the back-end compiler.
    6/ The load-time or run-time compilation will be slightly faster.
    7/ As the bytecode has been pre-optimised, you will get a better assessment of which hardware it will run on.
    8/ The source code is not distributed, so those who like trade secrets can make it harder for others to see what they did.

    Quote Originally Posted by Timothy Farrar
    Due to all the different hardware, shaders in the form of pre-compiled binaries really only makes sense in the form of caching on a local machine after compile
    That's exactly what I want it for: I want the driver to pre-compile the shaders at installation and not have to recompile them until the hardware or driver changes.
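    For concreteness, here is a minimal sketch of what that install-time caching could look like from the application side. It assumes glGetProgramBinary/glProgramBinary-style entry points (nothing like this existed in GL when this thread was written; a mechanism along these lines was later standardized as ARB_get_program_binary), and the file handling is purely illustrative.

    Code :
    // Sketch only: assumes ARB_get_program_binary-style entry points and a
    // program object 'prog' that has already been compiled and linked once.
    // (GL headers / extension loader assumed to be included.)
    #include <cstdio>
    #include <vector>
     
    void cacheProgram(GLuint prog, const char* path)
    {
        GLint len = 0;
        glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
        if (len <= 0) return;
        std::vector<unsigned char> blob(len);
        GLenum fmt = 0;
        glGetProgramBinary(prog, len, NULL, &fmt, &blob[0]);
     
        // Keep the driver-specific format tag with the blob so a driver or
        // hardware change can be detected and trigger a recompile from source.
        if (FILE* f = fopen(path, "wb")) {
            fwrite(&fmt, sizeof fmt, 1, f);
            fwrite(&blob[0], 1, blob.size(), f);
            fclose(f);
        }
    }
     
    bool restoreProgram(GLuint prog, GLenum fmt, const void* blob, GLsizei size)
    {
        glProgramBinary(prog, fmt, blob, size);
        GLint linked = GL_FALSE;
        glGetProgramiv(prog, GL_LINK_STATUS, &linked);
        return linked == GL_TRUE;   // false => fall back to compiling the GLSL
    }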

  4. #184
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    As ektor said, the main advantage is that you don't get unexpected syntax errors when the customer recompiles it with a different driver.
    Unless the bytecode interpreter has a bug in it. Granted, this is less likely than a parser bug, but it can still happen.

    4/ Those of us who prefer languages that are not 'C' can write our own front-end in our language of choice.
    Technically, there's nothing stopping you from doing that now. You'd just be writing glslang code rather than assembly.

    8/ The source code is not distributed so those who like trade secrets can make it harder for others to see what they did.
    Since the bytecode format would have to be very public, it would only make it slightly harder.

  5. #185
    Senior Member OpenGL Pro Ilian Dinev's Avatar
    Join Date
    Jan 2008
    Location
    Watford, UK
    Posts
    1,290

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    To add some facts about Intel's GPUs:
    - they do not support textures larger than 512x512
    - they do not properly support anything in DX or OpenGL
    - they crash randomly, even on the simplest, cleanest tutorial code
    - they cull triangles randomly, switch to wireframe mode randomly, and generate bogus triangles randomly
    - not one of the hundreds of DX and OpenGL caps queries returns valid results. E.g. it reports texture sizes up to 2048; as you continue verifying query results, things become even more horrifying.
    Overall, Intel's cards are absolutely useless for any accelerated 3D or even 2D. Only GDI works, and that is definitely via a software fallback from MS.
    The millions of Intel IGPs sold do not show whether that miserable, ancient silicon stays enabled. The price difference when buying a mobo is $4; people who tried to run games or CAD on it definitely saw they need a real GPU for $20+.

    I tried to add support for Intel cards to my commercial software [just ortho 2D stuff, using SM1 shaders or FF in DX9 or OGL] - but it proved impossible.
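    For what it's worth, the usual defensive move when the reported caps can't be trusted is to probe the limit with a proxy texture rather than take glGetIntegerv at its word. A minimal sketch (the halving loop is illustrative, and on genuinely broken drivers even this is no guarantee):

    Code :
    // Don't trust GL_MAX_TEXTURE_SIZE on its own; ask the proxy target
    // whether the driver claims it can actually allocate a texture that big.
    GLint reported = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &reported);
     
    GLint usable = reported;
    while (usable >= 64) {
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, usable, usable, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        GLint gotWidth = 0;
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                 GL_TEXTURE_WIDTH, &gotWidth);
        if (gotWidth != 0)
            break;          // this size should work
        usable /= 2;        // reported cap was a lie; try smaller
    }
    // 'usable' is the largest size the proxy check accepts.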




    ATi are quite silent about GL3, are missing from the GL3 credits that I saw, and have no extensions in the spec. So it's safe to think they'll completely ignore it. Also, it's funny how the "extensions, promoted to core" seem to be largely vendor-specific and thus could be guaranteed to be missing in most cases. Apple's VARs and the nv half-float vertex attribs, for instance. It kind of looks like GL3 is bound only to nVidia+Apple. So much for a cross-platform API.


    OpenGL 2.1's driver model (FIFOs) on WinXP is great, imho. So, how about providing two GL2 renderers for WinXP users (one targeting SM3, another SM4), and a DX10 one for Vista? Cg will be invaluable in such a model. If the game/GUI features in your software can be wrapped like that, you'll be giving users a lot of freedom in using their favorite OS and GPU.
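    A rough sketch of the kind of wrapping meant here, using a hypothetical Renderer interface (all names are illustrative, not taken from any real engine):

    Code :
    struct Scene;   // the application's scene representation (assumed)
     
    // Hypothetical abstraction: game/GUI code talks only to this interface,
    // and the concrete backend is chosen once at startup.
    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual bool init(void* /*windowHandle*/) { return true; }
        virtual void drawFrame(const Scene&)      {}
    };
     
    // One subclass per backend; real implementations would override the above.
    class GL2Sm3Renderer : public Renderer {};  // GL2.x path, SM3-level shaders
    class GL2Sm4Renderer : public Renderer {};  // GL2.x plus SM4-level extensions
    class D3D10Renderer  : public Renderer {};  // Vista-only D3D10 path
     
    // Illustrative capability checks, stubbed out here.
    static bool osIsVista()      { return false; }
    static bool gpuSupportsSm4() { return false; }
     
    Renderer* createRenderer()
    {
        if (osIsVista())      return new D3D10Renderer();
        if (gpuSupportsSm4()) return new GL2Sm4Renderer();
        return new GL2Sm3Renderer();
    }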

  6. #186
    Junior Member Regular Contributor
    Join Date
    Aug 2007
    Location
    Adelaide, South Australia
    Posts
    206

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    Quote Originally Posted by Timothy Farrar
    "Tesselator" - Lack of current cross platform hardware support, no reason to think about this now.
    If it's going to be in DX11 then we should get an extension for it when the DX11 driver is released, if not before.

    Quote Originally Posted by Timothy Farrar
    "Post-rasterisation instancing (for multipass deferred shaders)" - What? I think you need to describe what you are looking for here, any why you cannot do this type of thing with current GL3 functionality
    At the moment I do my first pass normally and then do several passes using a screen-aligned quad for post-processing effects like motion blur.
    This extension would allow me to specify several fragment shaders that are to be run as separate passes without needing to set up screen-aligned quads or run the vertex processor every time.
    The 2nd and following shaders would simply be run for each pixel of the framebuffer (with the framebuffer/G-buffer data being prefetched into 'in' varyings instead of requiring a texture lookup).
    This is purely a way to make deferred shading more efficient and more intuitive.
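    For reference, a minimal sketch of the screen-aligned-quad pass being described, i.e. the GL2-style setup this proposed extension would make unnecessary (the program and texture handles are assumed to already exist, and the uniform name is illustrative):

    Code :
    // One post-processing pass over the whole framebuffer: bind the previous
    // pass's colour attachment as a texture, select the post-process program,
    // and rasterise a screen-aligned quad so the fragment shader runs per pixel.
    glUseProgram(postProcessProgram);               // e.g. a motion-blur shader
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneColorTexture);
    glUniform1i(glGetUniformLocation(postProcessProgram, "sceneTex"), 0);
     
    // Quad in normalized device coordinates; the vertex shader just passes
    // the positions through, so the vertex stage does almost no work anyway.
    const GLfloat quad[] = { -1.f, -1.f,   1.f, -1.f,
                              1.f,  1.f,  -1.f,  1.f };
    glEnableClientState(GL_VERTEX_ARRAY);           // GL2-style client array
    glVertexPointer(2, GL_FLOAT, 0, quad);
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);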

    Quote Originally Posted by Timothy Farrar
    "Updatable resolution LOD textures" - What do you mean here?
    I have a 9-level 256x256 mipmap texture loaded for a background object.
    It comes twice as close to the camera, so I now need a 10-level mipmap.
    I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new mipmap chain that then replaces the old one.
    And also to remove a mipmap level when objects move away again.
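    The closest approximation available today is to upload the extra level yourself and re-clamp GL_TEXTURE_BASE_LEVEL as detail streams in and out. A rough sketch, assuming the texture was created with levels 1..9 (256x256 down to 1x1) already defined and BASE_LEVEL clamped to 1; the data pointer is illustrative:

    Code :
    glBindTexture(GL_TEXTURE_2D, tex);
     
    // Object moved closer: upload the new 512x512 image as level 0 ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels512);
    // ... and widen the usable range so sampling can now reach it.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
     
    // Object moved away again: stop sampling level 0. Note the memory is not
    // actually reclaimed -- which is exactly the gap being complained about.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);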

  7. #187
    Junior Member Newbie
    Join Date
    Aug 2008
    Location
    In the woods
    Posts
    4

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    I agree Intel's GPUs suck and I think always will, but the Intel GPU in my Mac mini has been working fairly well so far. OS X uses OpenGL for its desktop rendering, which works fine, and I can play Quake 3 on it and it works perfectly. Quake 3's graphics engine isn't some really basic code ripped from a tutorial either. But this may be due to Apple going into the drivers and fixing Intel's incompetence with the graphics themselves; I'm not sure.

    I also agree that AMD/ATI needs to pull their heads out of their asses and make a GL driver worth a damn.
    TroutButter...come get some.

  8. #188
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    ATi are quite silent about GL3, are missing from the GL3 credits that I saw, and have no extensions in the spec. So it's safe to think they'll completely ignore it.
    Maybe. Or, maybe not.

    Blizzard, being a MacOS developer, uses OpenGL. They may have a D3D rendering mode for Windows, but they will be using OpenGL. If ATi/AMD is pledging their support, then at least that means that they'll be taking GL "3.0" seriously, to some degree.

    Apple's VARs and the nv-half-float vtx-attrib, for instance.
    Um, what? VAO (not VAR) can be entirely server-side (aka, a lie); there isn't and never will be a hardware-equivalent. What it does is allow the implementation to do the necessary vertex format checks once, instead of every time you draw with that vertex format.
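    For illustration, the GL3-style VAO usage being described; the buffer objects, attribute location and index count are assumed to exist:

    Code :
    // Record the vertex format once in a VAO ...
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
     
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE,
                          3 * sizeof(GLfloat), (const void*)0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
     
    // ... so each draw is just a bind plus a draw call, and the implementation
    // can validate the vertex format once instead of on every glDraw*.
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);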

    And the half-float stuff is probably supportable in ATi hardware too.

    I would point out that, while ATi may not have written any of the specs, they still voted for it.

    This extension would allow me to specify several fragment shaders that are to be run as separate passes without needing to set up screen-aligned quads or run the vertex processor every time.
    You really think that the 4 vertices you use for your screen-aligned quad take up any real time? I mean, seriously now.

    I want to be able to stream the new 512x512 texture level onto the card and tell the card firmware to combine it with the existing levels to create a new MipMap that then replaces the old one.
    Yeah, you can forget that.

  9. #189
    Advanced Member Frequent Contributor Mars_999's Avatar
    Join Date
    Mar 2001
    Location
    Sioux Falls, SD, USA
    Posts
    519

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    Quote Originally Posted by knackered
    I don't understand why you're having a hard time accepting this, mars. Your shader might not even use matrices.
    I'm not following you here. My shader doesn't use matrices as of now, but that isn't what I am referring to; what I am referring to is this:

    Code :
    glTranslatef();
    glRotatef();
     
    //now new way with GL3.0?
    Matrix4x4 translate;
    Matrix4x4 rotate;
    Matrix4x4 result;
    result = translate * rotate;
    glMultMatrixf(result.matrix);

    That is what I am getting at... This is how DX does things: in DX9 you had some GL-type functions for moving objects around and such, but the main idea was to do the latter in the above code. If I am understanding correctly, this will be the new way in GL3.0... I don't care if it is, I just wanted to clear it up...

  10. #190
    Senior Member OpenGL Guru
    Join Date
    Mar 2001
    Posts
    3,576

    Re: The ARB announced OpenGL 3.0 and GLSL 1.30 tod

    No, that is not how you do things. This is:

    Code :
    //Full GL "3.0":
    glTranslatef();
    glRotatef();
     
    //GL "3.0" with deprecated features removed.
    //glUniform4fv takes (location, count, value); one call per matrix column.
    Matrix4x4 myMat = //Get some modelviewprojection matrix.
    glUniform4fv(<Insert your matrix uniform here>,     1, &myMat[0]);
    glUniform4fv(<Insert your matrix uniform here> + 1, 1, &myMat[1]);
    glUniform4fv(<Insert your matrix uniform here> + 2, 1, &myMat[2]);
    glUniform4fv(<Insert your matrix uniform here> + 3, 1, &myMat[3]);

    That is, you have to do everything yourself. You must create a uniform in your glslang shader to represent the matrix in the form you want it in. You must load that uniform yourself for each program that uses it. You must change that uniform in each appropriate program if its value changes. And so on.

    There are no built-in uniforms anymore at all.
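    Concretely, that per-program bookkeeping looks something like the following sketch, assuming the GLSL 1.30 shader declares "uniform mat4 mvp;" (the matrix-building helper is hypothetical):

    Code :
    // The application owns the matrix and pushes it to every program that
    // needs it, re-sending it whenever the value changes.
    GLint mvpLoc = glGetUniformLocation(program, "mvp");
     
    GLfloat mvp[16];                    // column-major modelviewprojection
    buildModelViewProjection(mvp);      // hypothetical helper, not part of GL
     
    glUseProgram(program);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);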

    And there's still the "3.0" full context if you don't want to get rid of the cruft.
