Issues setting up OpenGL (segmentation fault)

Hi. I’ve just started learning OpenGL and I’ve been following along with the OpenGL Red Book, 4th edition. I’ve tried to set up my code the same way they did to avoid confusion and make it easier for myself to understand. The program compiles with no errors or warnings, but when I try to run it I get a “segmentation fault”. I’m not exactly sure what this error means; I’m guessing I might have stored my data in the buffers improperly, or perhaps it’s because I haven’t set up GLUT to open a window? Could someone please help me with this? I want to get past the initialisation part and on to the actual 3D graphics. The code is quite short, so it shouldn’t be too hard to spot any mistakes.
Cheers.

#include "main.h" //g++ main.cpp -lGLEW -lGL -lglfw -lGLU -lglut -o triangle
#define BUFFER_OFFSET(offset) ((void *)(offset))
using namespace std;

enum VAO_IDs {Triangles, NumVAOs};
enum Buffer_IDs {ArrayBuffer, NumBuffers};
enum Attrib_IDs {vPosition = 0};
GLuint VAOs[NumVAOs];
GLuint Buffers[NumBuffers];
const GLuint NumVertices = 6;

//Assign vertex data to buffer objects and prepare to send it to the vertex shaders.
void init()
{
	glGenVertexArrays(NumVAOs, VAOs);
	glBindVertexArray(VAOs[Triangles]);

	GLfloat vertices[NumVertices][2] =
	{
		{ -0.90, -0.90 },	//Triangle 1
		{  0.85, -0.90 },
		{ -0.90,  0.85 },
		{  0.90, -0.85 },	//Triangle 2
		{  0.90,  0.90 },
		{ -0.85,  0.90 }
	};

	glGenBuffers(NumBuffers, Buffers);
	glBindBuffer(GL_ARRAY_BUFFER, Buffers[ArrayBuffer]);
	glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

	ShaderInfo shaders[] =
	{
		{ GL_VERTEX_SHADER, "triangles.vert" },
		{ GL_FRAGMENT_SHADER, "triangle.frag" },
		{ GL_NONE, NULL }
	};

	glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
	glEnableVertexAttribArray(vPosition);
}

void display()
{
	glClearColor(1, 0, 1, 0);
	glClear(GL_COLOR_BUFFER_BIT);
	glBindVertexArray(VAOs[Triangles]); //?
	glDrawArrays(GL_TRIANGLES, 0, NumVertices);

	glFlush();

}

int main()
{
	init();
	display();
	return 0;
}

Also here are my shader programs…

triangle.frag:

#version 430 core

out vec4 fcolor;

void main()
{
	fcolor = vec4(0.0, 0.0, 1.0, 1.0);
}

triangle.vert:

#version 430 core

layout(location = 0) in vec4 vPosition;

void main()
{
	gl_Position = vPosition;
}
}

This is precisely because you haven’t yet created a window, or more specifically, you haven’t yet created an OpenGL context and made it current (GLUT - which I see you’re linking to - will do this automatically for you as part of creating a window). Note that GL context management is OS-specific, so it’s normally handled by a framework such as GLUT, GLFW or SDL, or else by platform-specific code.

Even after you do this, you’re probably still going to get a fault as you’ll most likely need to load function pointers for your OpenGL calls. I see that you’re also linking to GLEW, so a single call to glewInit is all that’s required here, and you should do that after you create the window via GLUT.
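
Something along these lines should get you started - this is only a rough sketch, the window size and title are arbitrary, and I’m assuming the usual GLEW/GLUT headers that your link flags suggest:

#include <GL/glew.h>	//GLEW must come before the GL/GLUT headers
#include <GL/glut.h>
#include <iostream>

int main(int argc, char** argv)
{
	glutInit(&argc, argv);				//initialise GLUT
	glutInitDisplayMode(GLUT_RGBA);			//single-buffered for now; see below
	glutInitWindowSize(512, 512);			//arbitrary
	glutCreateWindow("Triangles");			//creates the window AND the GL context, and makes it current

	glewExperimental = GL_TRUE;			//lets GLEW load entry points for newer GL versions
	if (glewInit() != GLEW_OK)			//load the GL function pointers - must happen after the context exists
	{
		std::cerr << "glewInit failed" << std::endl;
		return 1;
	}

	init();						//only now is it safe to make GL calls
	glutDisplayFunc(display);			//GLUT calls display() whenever the window needs redrawing
	glutMainLoop();					//hand control to GLUT; this never returns
}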

A third thing to watch out for is that your graphics hardware and driver must support the OpenGL version that you’re trying to use. With #version 430 in your GLSL code you’re effectively limiting yourself to NVIDIA hardware (no other vendor has a full 4.3+ driver yet) and I’m not sure if that’s what you want; for better compatibility you might want to drop that to 330.
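
If you’re not sure what your driver actually supports, you can query it once the context exists (e.g. right after glewInit in the sketch above):

std::cout << "GL version:   " << glGetString(GL_VERSION) << std::endl;
std::cout << "GLSL version: " << glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;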

Finally, your display function is set up for the classic old GLUT trap of using a single-buffered GL context. That’s a bad habit you’re better off unlearning as soon as possible, and now seems a good time. When creating your window, use GLUT_DOUBLE instead of GLUT_SINGLE in your glutInitDisplayMode call, and call glutSwapBuffers at the end of your display function instead of glFlush. That’s literally all that’s required, and double-buffered GL contexts are no more complex than single-buffered ones, so you needn’t worry that you’re taking on an advanced topic too soon.
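
Concretely, the two changes are just:

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);	//in main(), when setting up the window

glutSwapBuffers();				//at the end of display(), replacing glFlush()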

[QUOTE=mhagain;1256556]…[/QUOTE]

Thank you so much! This helps a lot. I would have thought the Red Book would point out the best way to approach these things, but apparently not… maybe it just takes a more basic view of it? Anyway, thanks for your help!