Part of the Khronos Group
OpenGL.org


Thread: Creating an OpenGL context for use with > 2.1 core

  1. #11
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    14
    Quote Originally Posted by GClements View Post
    glBegin/glEnd requires more effort from the CPU, but only when you actually call those functions. If you're putting the calls inside a display list, then that's only happening when you create the list, not when you execute it. The implementation may be optimising the GPU-side data based upon the state at the point the list is executed, in which case it will need to store the raw data so that it can re-build the GPU-side state where necessary.

    The main problem is that glBegin/glEnd isn't a good fit for modern GPUs. They're designed around vertex arrays, and anything else requires some kind of translation.

    Display lists are simply a recorded sequence of commands. Their original purpose was to avoid the need to repeatedly send the same commands from the client application to the X server each frame (possibly over a network connection). They aren't limited to vertex data, so there isn't much that the implementation can do to optimise the general case.
    Ok, so what I will be doing is to place my data into a VBO, if the GPU will give me one. I have about 1.7 million triangles; of course, these triangles make up multiple objects. As shaders and VBOs/VAOs will all be new to me, I am going to need a bit of direction here, so I will start a new message thread for that with a good, noticeable subject. One last question before I switch threads: if I am able to put most of this into a VBO, can I expect to see my CPU-side memory usage drop in a big way? All of my objects are static and will only need to be run through a transform matrix, which can be done on the GPU side; then it comes down to basically a few glDraw* commands. Most of my state is maintained through my application. What happens if the GPU can't allocate enough buffer space? Do I fall back to client-side vertex arrays?

    Thanks

  2. #12
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,957
    You aren't going to get close to exhausting video memory.

    Even if you have no shared vertices (so 3 vertices per triangle), and you use 9 floats per vertex (which is excessive; colours only need to use bytes, normals don't need a full 32-bit float per component), that still only works out at 1.7 * 3 * 9 * 4 = 183.6 MB for the vertices plus 1.7 * 3 * 4 = 20.4 MB for the indices, so ~200 MB in total.

    Realistically, you only need 3 bytes for colour and 4 bytes (GL_INT_2_10_10_10_REV) for normals (2 bytes is usually sufficient, but requires a bit more work). With 12 bytes for position, that's 19 bytes per vertex, which gets you down to 1.7 * 3 * 19 = 96.9 MB for the vertex data (or 1.7 * 3 * 20 = 102 MB if you want 4-byte alignment).

    Sharing vertices will reduce the memory consumption further, and also reduce the vertex shader workload.

  3. #13
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    14
    Quote Originally Posted by GClements View Post
    You aren't going to get close to exhausting video memory.

    Even if you have no shared vertices (so 3 vertices per triangle), and you use 9 floats per vertex (which is excessive; colours only need to use bytes, normals don't need a full 32-bit float per component), that still only works out at 1.7 * 3 * 9 * 4 = 183.6 MB for the vertices plus 1.7 * 3 * 4 = 20.4 MB for the indices, so ~200 MB in total.

    Realistically, you only need 3 bytes for colour and 4 bytes (GL_INT_2_10_10_10_REV) for normals (2 bytes is usually sufficient, but requires a bit more work), which gets you down to 1.7 * 3 * 19 = 96.9 MB for the vertex data (or 1.7 * 3 * 20 = 102 MB if you want 4-byte alignment).

    Sharing vertices will reduce the memory consumption further, and also reduce the vertex shader workload.

    That is about right per my own calculations as well. However, from what I read, whether I use 1, 2, 3, or 4 elements per vertex, the GPU always allocates 4 (X, Y, Z, W), and the same is true for the colours. I was not aware that the normals could be reduced, but really you only need a third as many normals as there are vertices. Currently my scene has about 1.7 million triangles, but it will grow to be much more once I finish. My overall goal is to keep the CPU-side memory footprint as small as possible. In any event, I am going to give it a go and see what happens; I will just need help from the group to get one object working under my current setup.

    Thanks for your information in this thread.

  4. #14
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    14
    Quote Originally Posted by GClements View Post
    You aren't going to get close to exhausting video memory.

    Even if you have no shared vertices (so 3 vertices per triangle), and you use 9 floats per vertex (which is excessive; colours only need to use bytes, normals don't need a full 32-bit float per component), that still only works out at 1.7 * 3 * 9 * 4 = 183.6 MB for the vertices plus 1.7 * 3 * 4 = 20.4 MB for the indices, so ~200 MB in total.

    Realistically, you only need 3 bytes for colour and 4 bytes (GL_INT_2_10_10_10_REV) for normals (2 bytes is usually sufficient, but requires a bit more work), which gets you down to 1.7 * 3 * 19 = 96.9 MB for the vertex data (or 1.7 * 3 * 20 = 102 MB if you want 4-byte alignment).

    Sharing vertices will reduce the memory consumption further, and also reduce the vertex shader workload.
    I haven't read all the info yet on normal vectors, but does the GL automatically convert from float[3] to just four bytes for the entire normal (X, Y, Z) when the array is sent to the VBO?

  5. #15
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    6,048
    Quote Originally Posted by williajl View Post
    I haven't read all the info yet on normal vectors, but does the GL automatically convert from float[3] to just four bytes for the entire normal (X, Y, Z) when the array is sent to the VBO?
    Buffer objects contain exactly and only what you put into them. If you only put 3 floats into them, then they contain 3 floats. OpenGL doesn't know that any particular buffer is supposed to contain vertices of a particular format until it comes time to render from it, so there is no way for it to automatically convert the data to some other format within the buffer.

    OpenGL can automatically convert the data when it reads it during vertex rendering. But that would be converting a 4-byte signed-normalized value into a 3-float normal, not the other way around; the packing into 4 bytes is something you have to do yourself before uploading.
