Weird glVertexAttribPointer bug?

Hey, I am new to OpenGL programming, and if this turns out not to be a bug, I will delete the thread. But I have run into some weird behavior while trying to debug my code.

First my settings:
OpenGL Version 4.1
OS X
OpenGL Driver: INTEL-10.14.73

Here is my code that works, and everything renders fine.
My data gets loaded into the VBO like [pos, pos, …, pos, col, col, …, col]. But since my first triangle only has position vertices, mAttribCount is only 1 and the for-loop runs only once, so the VBO will look like [pos, pos, pos], with each position vertex consisting of 3 coordinates.
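
For a concrete picture (the coordinate values here are made up), the data that mData points to for that triangle is just nine floats back to back:

float triangleData[] = {
    -0.5f, -0.5f, 0.0f,   // vertex 1 position
     0.5f, -0.5f, 0.0f,   // vertex 2 position
     0.0f,  0.5f, 0.0f    // vertex 3 position
};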

void Mesh::mLoadDataToVAO() {
    glGenVertexArrays(1, &mVAO);
    glBindVertexArray(mVAO);
    glGenBuffers(1, &mVBO);
    glBindBuffer(GL_ARRAY_BUFFER, mVBO);
    
    GLuint offset = 0;
    Offset* ptr1 = mOffsets;
    for(int i = 0; i < mAttribCount; i++) {
        // Getting whole data Size in bytes
        GLuint dataSize = 0;
        Offset* ptr2 = mOffsets;
        for(int i; i < mAttribCount; i++) {
            dataSize = dataSize + (*ptr2).vertexCount * (*ptr2).vertexSize;
            ptr2++;
        }
        dataSize = dataSize*sizeof(float);
        
        glBufferData(GL_ARRAY_BUFFER, dataSize, mData, GL_STATIC_DRAW);
        
        // Specifying which location index the data gets in the vbo and telling oGL how to interpret the data
        std::cout<<"VERTEX SIZE: "<<(*ptr1).vertexSize<<std::endl;
        std::cout<<"STRIDE: "<<(*ptr1).vertexSize*sizeof(GLfloat)<<std::endl;
        std::cout<<"OFFSET: "<<offset<<std::endl;
        glVertexAttribPointer(i, (*ptr1).vertexSize, GL_FLOAT, GL_FALSE, (*ptr1).vertexSize*sizeof(GLfloat), (GLvoid*) offset);
        glEnableVertexAttribArray(i);
        
        // updating the offset for the next attrib in the vbo
        offset = offset + (*ptr1).vertexCount*(*ptr1).vertexSize*sizeof(GLfloat);
        ptr1++;
    }
    
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}

Now, if I run the same code but delete all the std::couts, it no longer runs: I get EXC_BAD_ACCESS (code=1, address=0x0) when glDrawArrays is called.

The same happens if I put the numeric values that get printed out directly into the glVertexAttribPointer call. So I also get the crash if I change the offset pointer to the literal 0:

glVertexAttribPointer(i, (*ptr1).vertexSize, GL_FLOAT, GL_FALSE, (*ptr1).vertexSize*sizeof(GLfloat), (GLvoid*) 0);

or the stride to the literal 12:

glVertexAttribPointer(i, (*ptr1).vertexSize, GL_FLOAT, GL_FALSE, 12, (GLvoid*) offset);

I hope that you guys can help me understand this issue.

I apologize if this is not a bug and just some C++ memory behavior that I don’t understand.

Some more context for my code:
Here is my Offset struct, though I don’t think it has anything to do with this:

struct Offset {
    GLuint vertexCount, vertexSize;
};

And mOffsets is just a pointer to an array of Offsets for each attribute. In my simple triangle example, it is


Offset offset1;
offset1.vertexCount = 3;
offset1.vertexSize = 3;

Offset offsets[] = {offset1};
Offset* mOffsets = offsets;
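
If the mesh also had colors, there would simply be a second entry (the 4 floats per color below are just an assumption for illustration):

Offset posOffset;
posOffset.vertexCount = 3;   // 3 vertices
posOffset.vertexSize  = 3;   // 3 floats per position

Offset colOffset;
colOffset.vertexCount = 3;   // 3 vertices
colOffset.vertexSize  = 4;   // assuming 4 floats per color

Offset offsets[] = {posOffset, colOffset};
Offset* mOffsets = offsets;  // and mAttribCount would be 2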

The inner for-loop just calculates how big the data I send in is, in bytes. For this single-attribute triangle that comes out to 3 * 3 * sizeof(float) = 36 bytes.

first of all: each mesh shouldn’t have its own buffer object + vertex array object
it’s sufficient to have just 1 buffer + vertex array for ALL meshes (as long as they share their vertex layout, position & color)
you just have to preload all meshes and then put everything into the buffer
the only thing a “mesh” then needs is the arguments for its draw call:

glDrawArrays(GLenum mode, GLint first, GLsizei count);


struct Mesh
{
    unsigned int Primitive{GL_TRIANGLES};
    unsigned int Offset{0};
    unsigned int Count{0};
};

another thing that would make your code easier to understand is using a “Vertex” type
it simplifies almost everything (understanding the code, determining offsets, determining buffer size, etc.)


struct Vertex
{
    float Position[3];
    float Color[4];
};

using that Vertex struct, you know how many attributes you have, so setting up your vertex array object will also be simpler:


glBindVertexArray(vertexarray);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(0, 3, GL_FLOAT, false, sizeof(Vertex), (void*)(sizeof(float) * 0));
glVertexAttribPointer(1, 4, GL_FLOAT, false, sizeof(Vertex), (void*)(sizeof(float) * 3));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glBindVertexArray(0);
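
a small variation on the same setup: offsetof() reads the attribute offsets straight out of the struct, so you don’t have to count floats by hand (just a sketch, assuming the Vertex struct from above):

#include <cstddef>   // offsetof

glVertexAttribPointer(0, 3, GL_FLOAT, false, sizeof(Vertex), (void*)offsetof(Vertex, Position));
glVertexAttribPointer(1, 4, GL_FLOAT, false, sizeof(Vertex), (void*)offsetof(Vertex, Color));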

allocating memory for your buffer can be done later when all meshes are loaded:


std::vector<Vertex> vertices;

// load all mesh data and set the offset / count / primitive pair, store these anywhere (e.g. std::list<Mesh> meshes;)

// finally put all vertices into 1 buffer
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * vertices.size(), vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
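
how the offset / count pair gets filled in isn’t shown above, so here is a minimal sketch (loadMesh and meshVertices are just made-up names for this example):

// append one mesh's vertices to the shared vector and remember its draw range
Mesh loadMesh(std::vector<Vertex>& vertices, const std::vector<Vertex>& meshVertices)
{
    Mesh m;
    m.Primitive = GL_TRIANGLES;
    m.Offset = (unsigned int)vertices.size();     // index of this mesh's first vertex in the big buffer
    m.Count = (unsigned int)meshVertices.size();  // number of vertices to draw
    vertices.insert(vertices.end(), meshVertices.begin(), meshVertices.end());
    return m;
}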

later when you render stuff, you only need to bind the vertex array object once, and call:


glBindVertexArray(vertexarray);

for (auto& mesh : meshes)
{
    // set model transformation matrix etc.

    // render each mesh
    glDrawArrays(mesh.Primitive, mesh.Offset, mesh.Count);
}

Thanks for your reply. I will keep this in mind!