Part of the Khronos Group
OpenGL.org


Thread: OpenGL version/pipeline

  1. #1
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33

    OpenGL version/pipeline

    Hi everyone,

    I'm a little stuck on the specifics of OpenGL versioning. A bit of background...

    I'm writing an application (using old-fashioned WinAPI) and I'm working on converting the graphics system to OpenGL. Originally, I did all the math on the CPU and used GDI+ for rendering, which worked fine until the polygon count of my tests started increasing. It proved to be far too slow when dealing with ~20,000 triangles.

    Since I'm new to OpenGL, I started with the NeHe tutorials, which, as I later found, are quite outdated. So as of now, it's written in pretty basic OpenGL 1.1 fixed function pipeline. Not too much has been done, so conversion is not a big problem.

    As I understand it, OpenGL up until version 2.1 (or perhaps 3.0, not counting the deprecations) is built in "layers"; you would only need the latest version that supported all your needs. However, 3.1 and after completely discard FFP, so I'm not sure where they fit in. Also, I've heard that some Intel integrated GPUs only support 1.5, but modern GPUs only emulate FFP... eh, I just don't know.

    Now, what I'm asking is: Given that I only need the functionality of OpenGL 1.1 (it's nothing fancy, just a lot of brute force polys), would it be better for me to stay with the FFP, convert to OpenGL 2.1 shaders, or take a step further and discard all deprecated functionality in favor of OpenGL 3.1+ (in the interest of future-proofing)?

    The program is a data visualization aid intended for academic/scientific research purposes, so ideally I'd like it to work on as many computers as possible, not just those that have the latest hardware.

    Thanks for your time.

  2. #2
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264
    If you want to support old Intel GPUs, use GL 1.1. Some of them support GL 1.4, but the drivers are buggy enough that relying on the newer functionality is a problem.
    Use display lists to render your objects (if they are static). That should work fine, even on old Intel chips.
    You might also want to check which GL version is supported on your target computers.

    Yes, NeHe tutorials are really old.
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  3. #3
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33
    Thanks for your response, V-man.

    About the display list approach though... I am generating my objects mathematically (with z as the output of a function that takes x and y as parameters) and displaying them like one might display terrain in a game. If I wanted to "animate" this while allowing simultaneous user-controlled orbit, would I use many pregenerated display lists or a VBO?

  4. #4
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264
    If by animate you mean changing the vertices, then you should not use display lists, because you would have to recompile the display list each time, and that is slow.
    VBOs were introduced in GL 1.5. If your drivers support that, use a VBO.

  5. #5
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33
    Okay, if that is the case, I probably will just ignore the old Intel GPUs and step up to using OpenGL 2.1; custom shaders will provide some flexibility in my display options, I think. It's probably unlikely that I would hit upon an old Intel GPU anyway, since research budgets usually allow for computer upgrades (and laptops are less common).

    Thanks again for the pointers.

  6. #6
    Senior Member OpenGL Pro Aleksandar's Avatar
    Join Date
    Jul 2009
    Posts
    1,078
    If you rely only on desktops, or on Intel CPUs based on Sandy Bridge or Ivy Bridge (the last two generations of Core i-processors, with HD 2000 graphics or better), then stepping further up to GL 3.3 is a much better solution. Not because of the deprecation, but because of the advantages.

  7. #7
    Intern Newbie
    Join Date
    Nov 2012
    Posts
    33
    If I am limited to OpenGL 3.2 at the highest (due to a binding restriction on one of my intended platforms), would it still be a good idea to step up to that point?

    EDIT: I work primarily on an older i7 laptop (Nehalem microarchitecture) with a dedicated GPU, but my target computers will likely have Sandy Bridge or newer. I'm assuming that the Intel integrated GPUs will be irrelevant if they have a relatively up-to-date dedicated GPU -- is that right?
    Last edited by Aestivae; 12-05-2012 at 06:50 AM.
