Can't get model to render correctly

I’m trying to render this model: https://postimg.org/image/4t4bn87pz/

Here is my result: https://postimg.org/image/etpfloya9/

I can’t figure out why it is connecting shapes that don’t belong together. I extracted all the data from the OBJ file and put it into arrays like this:

static const GLfloat g_vertex_buffer_data[] = {

-1.557376f, 0.094970f, 0.171995f, 
-1.565967f, 0.098142f, 0.171995f, 
-1.557376f, 0.094970f, -0.048469f, 
-1.565967f, 0.098142f, -0.048469f, ..... };

int indicies[] = {

1, 2, 3, 
2, 4, 3, 
3, 4, 5, 
4, 6, 5, 
5, 6, 7, 
6, 8, 7,  ..... };

static const GLfloat g_color_buffer_data[] = {

0.204205f, 0.204205f, 0.204205f, 
0.204205f, 0.204205f, 0.204205f, 
0.204205f, 0.204205f, 0.204205f, 
0.204205f, 0.204205f, 0.204205f, ... };

static const GLfloat g_normal_buffer_data[] = {

-0.341900f, -0.939700f, -0.000000f, 
0.000000f, 0.000000f, -1.000000f, 
0.222000f, 0.609800f, -0.760800f,  ... };

I checked the data and it matches the OBJ file, so why would it not draw correctly? I also tried it without the color buffer, and it still connects things that should not be connected.

buffers:

//Vertex array object												 
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

//Vertex buffer
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

//color buffer
GLuint colorbuffer;
glGenBuffers(1, &colorbuffer);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_color_buffer_data), g_color_buffer_data, GL_STATIC_DRAW);


//normal buffer
GLuint normalbuffer;
glGenBuffers(1, &normalbuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_normal_buffer_data), g_normal_buffer_data, GL_STATIC_DRAW);


//index buffer
GLuint elementbuffer;
glGenBuffers(1, &elementbuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indicies), indicies, GL_STATIC_DRAW);

rendering:

	//vertex buffer
	glEnableVertexAttribArray(0);
	glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
	glVertexAttribPointer(
		0,			//index	
		3,			//size
		GL_FLOAT,	//type
		GL_FALSE,	//normalized?
		0,			//stride
		0	        //array buffer offset
	);
	
	//normal buffer
	glEnableVertexAttribArray(1);
	glBindBuffer(GL_ARRAY_BUFFER, normalbuffer);
	glVertexAttribPointer(
		2,			//index
		3,			//size
		GL_FLOAT,	//type
		GL_FALSE,	//normalized?
		0,			//stride
		0			//array buffer offset
	);

	//color buffer
	glEnableVertexAttribArray(2);
	glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
	glVertexAttribPointer(
		1,			//index	
		3,			//size
		GL_FLOAT,	//type
		GL_FALSE,	//normalized?
		0,			//stride
		0	        //array buffer offset
	);
		
	glDrawElements(
		GL_TRIANGLES,	 //mode
		(sizeof(indicies)/sizeof(int)),	 //count
		GL_UNSIGNED_INT, //type
		(void*)0		 //element array buffer offset
	);

Not sure where I am going wrong; I can post more code if needed. Thanks.

When setting the “stride” for each glVertexAttribPointer call, you should pass a reasonable value (> 0), something like sizeof(float) * X, where X is the number of floats between two consecutive attribute values.

Example: a color = 4 floats. If you have all colors tightly packed in one buffer, and one color’s size is 4 floats, then replace X with 4.
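As a concrete sketch of that arithmetic (the names here are illustrative, not from the original code):

```c
/* Stride for a tightly packed buffer of 4-float colors: the byte
 * distance from the start of one color to the start of the next. */
#define COLOR_FLOATS 4
#define COLOR_STRIDE (sizeof(float) * COLOR_FLOATS)

/* COLOR_STRIDE would then be passed as the stride argument, e.g.
 * glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, COLOR_STRIDE, 0); */
```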

Another thing: a vertex array object stores some of the settings that you are currently re-applying on every render (= every frame = about 60 times per second). You only need to do it once:


glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(0, ...);

Finished. When rendering, you don’t have to call these functions any more; your VAO (vertex array object) knows that attribute 0 uses buffer “vertexbuffer” with the layout provided, and is enabled (switched “on”).

When rendering, you only need to:


glUseProgram(myshader);
// load some uniforms if needed ...
glBindVertexArray(VAO);
glDraw*(...);
// finished.

Another point: glDrawElements(…) needs an index buffer, and the VAO also stores the buffer currently bound to GL_ELEMENT_ARRAY_BUFFER. But the draw command uses that one index buffer for ALL attributes. That means you have to bring all vertex positions together with their corresponding normals / texcoords in order, like:

  1. read one face corner’s indices (in the OBJ file: “v/vt/vn”)
  2. create a new vertex
    2.a) read its position from the positions array at index “v”
    2.b) read its texcoords from the texcoords array at index “vt”
    2.c) read its normal from the normals array at index “vn”
  3. append the new vertex to an array

When done, you have all vertex positions together with their normals & texcoords, which means you could even use glDrawArrays(…) without any index buffer. Note that OBJ indices are 1-based, while OpenGL expects 0-based indices, so subtract 1 from each index when converting.
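The steps above can be sketched like this (names, data layout, and the “v//vn” two-index case are assumptions for illustration):

```c
/* OBJ faces use separate 1-based indices for positions and normals, but
 * glDrawElements can use only one index per vertex, so each face corner
 * is expanded into one complete vertex. */

typedef struct { float px, py, pz, nx, ny, nz; } Vertex;

/* Expand cornerCount face corners into complete vertices.
 * posIdx[i] / nrmIdx[i] are the OBJ "v//vn" indices of corner i (1-based). */
static void expand_faces(const float *positions, const float *normals,
                         const int *posIdx, const int *nrmIdx,
                         int cornerCount, Vertex *out)
{
    for (int i = 0; i < cornerCount; ++i) {
        const float *p = &positions[(posIdx[i] - 1) * 3]; /* OBJ is 1-based */
        const float *n = &normals[(nrmIdx[i] - 1) * 3];
        out[i] = (Vertex){ p[0], p[1], p[2], n[0], n[1], n[2] };
    }
}
```

The resulting array can be uploaded to one GL_ARRAY_BUFFER and drawn with glDrawArrays, with no element buffer at all.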

Another point: using a struct “Vertex” would make sense:


struct Vertex {
    vec3 position;
    vec3 normal;
    vec2 texcoords;
    // etc.
};

Then put all vertices into one buffer and use three glVertexAttribPointer calls, one per attribute. The stride is simply sizeof(Vertex), and the offset parameter (the last argument) is:
for attrib 0 (position): 0
for attrib 1 (normal): (void*)(sizeof(vec3))
for attrib 2 (texcoords): (void*)(sizeof(vec3) + sizeof(vec3))
for a further attrib 3, if any: (void*)(sizeof(vec3) + sizeof(vec3) + sizeof(vec2))
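A sketch of that layout, assuming vec3/vec2 are plain 3- and 2-float types; offsetof() spells out each attribute’s byte offset instead of summing sizes by hand:

```c
#include <stddef.h>  /* offsetof */

/* Interleaved vertex layout; the glVertexAttribPointer calls would
 * look like this (not executed here, no GL context in this sketch):
 *
 *   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void*)offsetof(Vertex, position));
 *   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void*)offsetof(Vertex, normal));
 *   glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void*)offsetof(Vertex, texcoords));
 */
typedef struct Vertex {
    float position[3];   /* byte offset 0                 */
    float normal[3];     /* byte offset sizeof(float) * 3 */
    float texcoords[2];  /* byte offset sizeof(float) * 6 */
} Vertex;
```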

[QUOTE=john_connor;1283324]…[/QUOTE]

Thank you, this is very helpful.

A stride of 0 is actually valid, by the way: it tells OpenGL that the attribute data is tightly packed. Please see: https://www.opengl.org/sdk/docs/man/html/glVertexAttribPointer.xhtml