Part of the Khronos Group


Thread: Converting from glBegin/glVertex*/glEnd to using Buffer Objects and shaders

  1. #1
    Junior Member Newbie
    Join Date
    Jun 2018

    Converting from glBegin/glVertex*/glEnd to using Buffer Objects and shaders

    Hello All,

    I am just beginning to make the large leap from the classical OpenGL 1.2 way of doing things to the more modern approach using VAOs/VBOs and shaders. I am going to need a bit of help with some of the understanding, as the books I have read don't go into enough detail: they rely heavily on GLUT and GLEW, which I will not be using.

    So, let me start by explaining my data. I am reading in several VRML files, each of which is nothing more than a definition of an object by way of materials, coordinates, normal vectors, and indices for the latter. Generally these files always have a materials section and a coordinates section, and may or may not include normals. The materials are always defined per primitive, i.e. per triangle. Alongside the data, there is always a coordinate index table and a material index table. In the coordinate index table there is always a restart index of -1 after each triangle primitive, so a sequence in the index table describing two triangles looks like [0,3,10,-1][3,10,15,-1]....

    In my existing code, I hold all the material data (ambient, diffuse, specular, shininess, transparency) in separate arrays, and all of the coordinate vertices in another array. For the normal vectors, if they are not included in the file, they are computed from the triangles and stored in an array; yes, my normal vectors are per triangle/primitive. Along with the actual data I store two index tables, one for vertices and one for materials. Generally speaking, the number of elements in the material index array is either the vertex index count / 3 or exactly one. This lets me assign a material either per object or per primitive/triangle. My rendering loop goes like this:

    Code cpp:
            LMatIndex = 0;
            if (FAmbientCount == 1)  // Only one entry in the material data arrays: set the material once per object
            {
                LMatIndirect = FMaterialIndexList[0];             // Grab index into the material tables
                LAmbient  = FAmbientList[LMatIndirect];           // Get ambient colour
                LDiffuse  = FDiffuseList[LMatIndirect];           // Get diffuse colour
                LSpecular = FSpecularList[LMatIndirect];          // Get specular colour
                glMaterialfv(GL_FRONT, GL_AMBIENT,  LAmbient.ColorArray);   // Set the ambient
                glMaterialfv(GL_FRONT, GL_DIFFUSE,  LDiffuse.ColorArray);   // Set the diffuse
                glMaterialfv(GL_FRONT, GL_SPECULAR, LSpecular.ColorArray);  // Set the specular
                glMaterialf (GL_FRONT, GL_SHININESS, FShininessList[LMatIndirect]*128.0); // Set the shininess
            }
            for (LIndex = 0; LIndex < FPointIndexCount - 3; LIndex += 4)  // += 4 skips the -1 restart index
            {
                if (FAmbientCount > 1)  // One material per triangle: look it up and set it
                {
                    LMatIndirect = FMaterialIndexList[LMatIndex++];
                    LAmbient  = FAmbientList[LMatIndirect];
                    LDiffuse  = FDiffuseList[LMatIndirect];
                    LSpecular = FSpecularList[LMatIndirect];
                    glMaterialfv(GL_FRONT, GL_AMBIENT,  LAmbient.ColorArray);
                    glMaterialfv(GL_FRONT, GL_DIFFUSE,  LDiffuse.ColorArray);
                    glMaterialfv(GL_FRONT, GL_SPECULAR, LSpecular.ColorArray);
                    glMaterialf (GL_FRONT, GL_SHININESS, FShininessList[LMatIndirect]*128.0);
                }
                // ... glBegin(GL_TRIANGLES) / glNormal3* / glVertex3* calls for this triangle ...
            }

    OK, so you might be getting the idea: the material is set once per triangle, and there is also one normal per triangle. Also, notice that the for loop increments my counter by LIndex += 4 even though there are only three vertices per triangle; this is because the fourth index is the -1 restart marker for the primitive, which is best suited for vertex arrays. I just skip over it, but have kept it so that I can move forward with OpenGL.
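    For illustration, skipping those restart markers can be done in one pass with a small helper like this (the function and variable names here are my own, not from my actual loader):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Packs a VRML-style coordIndex stream, where every triangle is followed
    // by a -1 restart marker, into a tight index array that is suitable for
    // glDrawElements with GL_TRIANGLES.
    std::vector<uint32_t> PackTriangleIndices(const std::vector<int32_t>& coordIndex)
    {
        std::vector<uint32_t> packed;
        packed.reserve(coordIndex.size() / 4 * 3);  // 3 real indices per group of 4
        for (int32_t idx : coordIndex)
            if (idx >= 0)                           // drop the -1 restart markers
                packed.push_back(static_cast<uint32_t>(idx));
        return packed;
    }
    ```

    So a stream like [0,3,10,-1][3,10,15,-1] comes out as the flat list 0,3,10,3,10,15.
    
    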

    Now, with that explained, how do I move to using shaders with VAOs/VBOs, given that I only need to set the normal and material once per triangle/primitive? Some of my objects will use a transformation matrix to change their location in the scene, but none of my objects will ever be modified: the data and colours are set in stone the moment they are loaded.

    I am just starting to understand how to use a VBO and a very, very basic vertex and fragment shader, but none of the books I have read really deal with the mismatch between the number of vertices and the number of normals and materials. My application has some 1.7 million triangles and will grow to three times that before I am finished.

    I have all of the foundation work done to give me the ability to create a rendering context up to the latest version of OpenGL (4.6), as well as access to all of the extensions that go along with it. But I am absolutely not using GLUT or GLEW; I am only using native OpenGL plus the extensions provided by the Khronos Group.

    OK, now that that is out of the way: how do I begin to describe this data so that, in the end, I end up with a single glDraw* call?

    What I do know is that I need to allocate a buffer object, one per object, but how do I deal with the materials and normal vectors? I have read that there is a GL mechanism that lets me do something per primitive in the geometry shader, and that there is a way to do something in the fragment shader, but nothing I read was clear on these topics; everything focused on per-vertex colours/materials and normal vectors.

    Thanks for any guidance on this.

    Last edited by Dark Photon; 06-08-2018 at 04:58 PM.

  2. #2
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Quote Originally Posted by williajl View Post
    but how do I deal with the material and normal vectors. I read that there is a gl command that lets me do something per primitive in the geometry shader, and that there is a way to do something in the fragment shader, but nothing I read was clear on these topics.
    For a first attempt, give every triangle 3 unique vertices and just set the same material and normal for each of them. Once you have that working, you can think about sharing vertices.
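    For example, a sketch of that expansion (the struct and all names here are illustrative, not a fixed recipe):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Interleaved vertex for the "3 unique vertices per triangle" approach:
    // the per-triangle normal and material index are simply repeated for all
    // 3 corners of each triangle.
    struct Vertex {
        float    position[3];
        float    normal[3];    // same value for all 3 corners of a triangle
        uint32_t material;     // ditto: one material index per triangle
    };

    std::vector<Vertex> ExpandTriangles(const std::vector<float>&    positions,    // xyz per point
                                        const std::vector<uint32_t>& triIndices,   // 3 per triangle
                                        const std::vector<float>&    triNormals,   // xyz per triangle
                                        const std::vector<uint32_t>& triMaterials) // 1 per triangle
    {
        std::vector<Vertex> out;
        out.reserve(triIndices.size());
        for (size_t t = 0; t < triIndices.size() / 3; ++t) {
            for (int c = 0; c < 3; ++c) {
                Vertex v{};
                uint32_t p = triIndices[3 * t + c];
                for (int k = 0; k < 3; ++k) {
                    v.position[k] = positions[3 * p + k];
                    v.normal[k]   = triNormals[3 * t + k];
                }
                v.material = triMaterials[t];
                out.push_back(v);
            }
        }
        return out;  // upload with glBufferData and draw with glDrawArrays
    }
    ```

    This trades memory for simplicity: no index buffer is needed at all for the first attempt, since every vertex is unique.
    
    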

    If you need to specify several parameters for a material, but the number of materials is small, you should have a uniform array of materials and just supply a material index per vertex. Alternatively, if contiguous sections of the mesh tend to share a common material, you could just split the mesh by material, so each draw call uses a single material (if same-material faces are largely non-contiguous, then that would reduce vertex sharing, which is undesirable).
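    A fragment-shader sketch of the uniform-array idea (the struct layout, array size, and names are illustrative, not a fixed API):

    ```glsl
    #version 330 core

    struct Material {
        vec4  ambient;
        vec4  diffuse;
        vec4  specular;
        float shininess;
    };

    uniform Material uMaterials[16]; // fine when the material count is small

    flat in int  vMaterialIndex;     // integer attribute forwarded by the vertex shader
    in      vec3 vNormal;            // eye-space normal
    in      vec3 vLightDir;          // eye-space direction to the light

    out vec4 fragColor;

    void main()
    {
        Material m   = uMaterials[vMaterialIndex];
        float    ndl = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
        fragColor    = m.ambient + m.diffuse * ndl;  // minimal ambient + diffuse term
    }
    ```

    The per-vertex index would be uploaded as an integer attribute (glVertexAttribIPointer) and passed through the vertex shader unchanged.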

    Forget about geometry shaders; the performance penalty is too high for them to be useful here. If you want to implement per-face properties directly, you can use gl_PrimitiveID in a fragment shader to index into a table of per-face data (given the number of triangles, this would have to be stored in either a texture or a SSBO). Note that gl_PrimitiveID isn't available to a geometry shader, although you could achieve the same ends with an atomic counter.
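    A sketch of the gl_PrimitiveID + SSBO variant (buffer layout and names are illustrative):

    ```glsl
    #version 430 core   // SSBOs require GL 4.3+

    struct FaceData {
        vec4 diffuse;   // one entry per triangle, in draw order
        vec4 normal;    // xyz used; w is padding for the std430 layout
    };

    layout(std430, binding = 0) buffer FaceBuffer {
        FaceData faces[];
    };

    out vec4 fragColor;

    void main()
    {
        // gl_PrimitiveID counts primitives within the current draw call,
        // so faces[] must be stored in the same order as the index buffer.
        FaceData f = faces[gl_PrimitiveID];
        fragColor  = f.diffuse;
    }
    ```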

    In terms of sharing vertices, note that if a vertex shader output has the "flat" qualifier, the value from the last vertex of a triangle is used for the entire triangle, and the values from the other two are ignored. So when you have 3 triangles sharing the same vertex position, you can construct the index array so that the vertex is only the last vertex for one of the 3 triangles. The vertex can then be used to hold per-triangle data for that particular triangle; the other two triangles can get their data from a different vertex. So you may only need one vertex per triangle rather than 3 (in practice, the ratio tends to come out to 2.5-2.8 triangles per vertex, depending upon the mesh and how much effort you put into optimising it).
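    The "flat" plumbing itself is just a matching output/input pair (attribute location is illustrative):

    ```glsl
    // Vertex shader side:
    layout(location = 2) in int aMatIndex;  // set up with glVertexAttribIPointer
    flat out int vMatIndex;
    // ... in main(): vMatIndex = aMatIndex; plus the usual position transform ...

    // Fragment shader side:
    flat in int vMatIndex;  // whole triangle gets the provoking (by default, last) vertex's value
    ```

    The clever part is entirely in how the index buffer is built on the CPU, so that each triangle's per-face data lands on its provoking vertex.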

    And for flat-shaded faces, you don't need to store face normals at all; you can calculate them in the fragment shader. If the vertex shader outputs the eye-space position as an attribute, the fragment shader can apply the dFdx() and dFdy() functions to that position to obtain the eye-space tangents; the cross product of the tangents is the face normal. This is more computationally expensive than storing face normals as vertex attributes, but it can help to reduce memory consumption if you have many vertices with few attributes each.
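    A minimal fragment-shader sketch of that derivative trick (variable names are my own):

    ```glsl
    #version 330 core

    in  vec3 vEyePos;   // eye-space position, interpolated from the vertex shader
    out vec4 fragColor;

    void main()
    {
        // The screen-space derivatives of the position are two tangents of the
        // face; their cross product is the (unnormalised) face normal. The sign
        // depends on winding conventions, so negate it if lighting comes out inverted.
        vec3 faceNormal = normalize(cross(dFdx(vEyePos), dFdy(vEyePos)));
        fragColor = vec4(faceNormal * 0.5 + 0.5, 1.0);  // visualise the normal
    }
    ```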
