Tutorial: OpenGL 3.1 The First Triangle (C++/Win)



This is a short tutorial about drawing primitives in OpenGL 3.x without using deprecated functionality. The code is written for Visual Studio, and a link to a freeGLUT version is given below.

Adding GLEW Support

Dealing with OpenGL 3.1 is hard enough, so I'll skip the gymnastics with OpenGL extensions and use the OpenGL Extension Wrangler Library (GLEW). GLEW is a cross-platform open-source C/C++ extension loading library, and it can be freely downloaded from the following site: http://glew.sourceforge.net. The following snippet of code includes support for GLEW and should be placed somewhere in your code. If you are building a Visual Studio MFC application, which I recommend, the best place for it is at the end of the stdafx.h file.

A cross-platform version of this code (which uses freeGLUT for windowing) is available on github, and freeGLUT can be downloaded from http://freeglut.sourceforge.net.

//--- OpenGL ---
#include "glew.h"
#include "wglew.h"
#pragma comment(lib, "glew32.lib")

GLRenderer Class

We will start with the creation of the class CGLRenderer. This class gathers together all OpenGL-related code. My students will recognize the functions I insisted on during the lectures. The header file is the same as in good old OpenGL 2.1, but the implementation will change severely.

class CGLRenderer
{
public:
	CGLRenderer(void);
	virtual ~CGLRenderer(void);
	bool CreateGLContext(CDC* pDC);		// Creates OpenGL Rendering Context
	void PrepareScene(CDC* pDC);		// Scene preparation stuff
	void Reshape(CDC* pDC, int w, int h);	// Changing viewport
	void DrawScene(CDC* pDC);		// Draws the scene
	void DestroyScene(CDC* pDC);		// Cleanup

protected:
	void SetData();				// Creates VAO and VBOs and fills them with data

	HGLRC		m_hrc;			// OpenGL Rendering Context
	CGLProgram*	m_pProgram;		// Program
	CGLShader*	m_pVertSh;		// Vertex shader
	CGLShader*	m_pFragSh;		// Fragment shader
	GLuint		m_vaoID[2];		// two vertex array objects, one for each drawn object
	GLuint		m_vboID[3];		// three VBOs
};

Rendering Context Creation

First we have to create an OpenGL rendering context. This is the task of the CreateGLContext() function.

bool CGLRenderer::CreateGLContext(CDC* pDC)
{
	PIXELFORMATDESCRIPTOR pfd;
	memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
	pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
	pfd.nVersion   = 1;
	pfd.dwFlags    = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
	pfd.iPixelType = PFD_TYPE_RGBA;
	pfd.cColorBits = 32;
	pfd.cDepthBits = 32;
	pfd.iLayerType = PFD_MAIN_PLANE;

	int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
	if (nPixelFormat == 0) return false;

	BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
	if (!bResult) return false;

	HGLRC tempContext = wglCreateContext(pDC->m_hDC);
	wglMakeCurrent(pDC->m_hDC, tempContext);

	GLenum err = glewInit();
	if (GLEW_OK != err)
	{
		AfxMessageBox(_T("GLEW is not initialized!"));
	}

	int attribs[] =
	{
		WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
		WGL_CONTEXT_MINOR_VERSION_ARB, 1,
		WGL_CONTEXT_FLAGS_ARB, 0,
		0
	};

	if (wglewIsSupported("WGL_ARB_create_context") == 1)
	{
		m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
		wglMakeCurrent(NULL, NULL);
		wglDeleteContext(tempContext);
		wglMakeCurrent(pDC->m_hDC, m_hrc);
	}
	else
	{	// It's not possible to make a GL 3.x context. Use the old style context (GL 2.1 and before)
		m_hrc = tempContext;
	}

	// Checking GL version
	const GLubyte *GLVersionString = glGetString(GL_VERSION);

	// Or better yet, use the GL3 way to get the version number
	int OpenGLVersion[2];
	glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
	glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);

	if (!m_hrc) return false;
	return true;
}

Choosing and setting the pixel format are the same as in previous versions of OpenGL. The new tricks that have to be done are:

  • Create a standard OpenGL (2.1) rendering context, which will be used only temporarily (tempContext), and make it current
HGLRC tempContext = wglCreateContext(pDC->m_hDC);
wglMakeCurrent(pDC->m_hDC, tempContext);
  • Initialize GLEW
GLenum err = glewInit();
  • Set up attributes for a brand new OpenGL 3.1 rendering context
int attribs[] =
{
	WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
	WGL_CONTEXT_MINOR_VERSION_ARB, 1,
	WGL_CONTEXT_FLAGS_ARB, 0,
	0
};
  • Create the new rendering context
m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
  • Delete tempContext and make the new context current
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tempContext);
wglMakeCurrent(pDC->m_hDC, m_hrc);

Have you noticed something odd in this initialization? In order to create a new OpenGL rendering context you have to call the function wglCreateContextAttribsARB(), which is itself an OpenGL function and requires a current OpenGL context when it is called. How can we fulfill this when we are at the very beginning of rendering context creation? The only way is to create an old-style context, activate it, and, while it is current, create the new one. Very inconsistent, but we have to live with it!
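
Since the code silently falls back to the old-style context, it is worth checking afterwards which one you actually got. A minimal sketch, using the GL3-style version query shown above:

int major = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);	// stays 0 on pre-3.0 contexts (the query raises GL_INVALID_ENUM there)
if (major < 3)
{
	// We ended up with the old-style (2.1) context;
	// the GLSL 1.4 shaders used below will not compile on it.
}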

Scene Preparation

After we have created the rendering context, the next step is to prepare the scene. In the function PrepareScene() we do everything that has to be done just once, before the scene is drawn for the first time.

void CGLRenderer::PrepareScene(CDC *pDC)
{
	glClearColor(1.0, 1.0, 1.0, 0.0);
	m_pProgram = new CGLProgram();	// CGLProgram/CGLShader are this tutorial's wrapper classes
	m_pVertSh = new CGLShader(GL_VERTEX_SHADER);
	m_pFragSh = new CGLShader(GL_FRAGMENT_SHADER);
	if (m_pVertSh->Load(_T("minimal.vert"))) m_pVertSh->Compile();
	if (m_pFragSh->Load(_T("minimal.frag"))) m_pFragSh->Compile();
	m_pProgram->AttachShader(m_pVertSh);
	m_pProgram->AttachShader(m_pFragSh);
	m_pProgram->BindAttribLocation(0, "in_Position");
	m_pProgram->BindAttribLocation(1, "in_Color");
	m_pProgram->Link();
	m_pProgram->Use();
	SetData();
}

The vertex shader is very simple. It just passes its input values through to the output, converting vec3 to vec4. Constructors are the same as in previous versions of GLSL. The main difference with respect to GLSL 1.2 is that the attribute and varying qualifiers are gone from shaders. Attribute variables are now in(put) variables, and varying variables are out(put) variables of the vertex shader. Uniforms stay the same.

// Vertex Shader – file "minimal.vert"

#version 140

in  vec3 in_Position;
in  vec3 in_Color;
out vec3 ex_Color;

void main(void)
{
	gl_Position = vec4(in_Position, 1.0);
	ex_Color = in_Color;
}
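
For comparison, here is how the same shader would have been written in GLSL 1.2. This block is only an illustration of the renamed qualifiers, not part of the project:

// The same vertex shader in GLSL 1.2, for comparison only
#version 120

attribute vec3 in_Position;	// "in" used to be "attribute"
attribute vec3 in_Color;
varying   vec3 ex_Color;	// a vertex shader "out" used to be "varying"

void main(void)
{
	gl_Position = vec4(in_Position, 1.0);
	ex_Color = in_Color;
}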

The fragment shader is even simpler. Varying variables in fragment shaders are now declared as in variables. Take care that the name of the in(put) variable in the fragment shader must be the same as the name of the out(put) variable in the vertex shader.

// Fragment Shader – file "minimal.frag"

#version 140

precision highp float; // needed only for version 1.30

in  vec3 ex_Color;
out vec4 out_Color;

void main(void)
{
	out_Color = vec4(ex_Color, 1.0);
}

If you have problems compiling the shader code (because OpenGL 3.1 is not supported), just change the version number: instead of 140, put 130. These shaders are so simple that the code is the same in GLSL 1.3 and GLSL 1.4.

Setting Data

The function SetData() creates the VAOs and VBOs and fills them with data.

void CGLRenderer::SetData()
{
	// First simple object
	float* vert = new float[9];	// vertex array
	float* col  = new float[9];	// color array

	vert[0] =-0.3; vert[1] = 0.5; vert[2] =-1.0;
	vert[3] =-0.8; vert[4] =-0.5; vert[5] =-1.0;
	vert[6] = 0.2; vert[7] =-0.5; vert[8] =-1.0;

	col[0] = 1.0; col[1] = 0.0; col[2] = 0.0;
	col[3] = 0.0; col[4] = 1.0; col[5] = 0.0;
	col[6] = 0.0; col[7] = 0.0; col[8] = 1.0;

	// Second simple object
	float* vert2 = new float[9];	// vertex array

	vert2[0] =-0.2; vert2[1] = 0.5; vert2[2] =-1.0;
	vert2[3] = 0.3; vert2[4] =-0.5; vert2[5] =-1.0;
	vert2[6] = 0.8; vert2[7] = 0.5; vert2[8] =-1.0;

	// Two VAOs allocation
	glGenVertexArrays(2, &m_vaoID[0]);

	// First VAO setup
	glBindVertexArray(m_vaoID[0]);

	glGenBuffers(2, m_vboID);

	glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), col, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	// Second VAO setup
	glBindVertexArray(m_vaoID[1]);

	glGenBuffers(1, &m_vboID[2]);

	glBindBuffer(GL_ARRAY_BUFFER, m_vboID[2]);
	glBufferData(GL_ARRAY_BUFFER, 9*sizeof(GLfloat), vert2, GL_STATIC_DRAW);
	glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindVertexArray(0);

	delete [] vert;
	delete [] vert2;
	delete [] col;
}

Vertex buffer objects (VBOs) have been a familiar item since OpenGL 1.5, but vertex array objects require more explanation. Vertex array objects (VAOs) encapsulate vertex array state on the client side, and allow applications to rapidly switch between large sets of array state.

A VAO saves the state of all vertex attribute arrays. The maximum number of attributes supported by your video card can be obtained by calling glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &MaxVertexAttribs).
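
For example (every OpenGL 3.x implementation must report at least 16):

GLint MaxVertexAttribs = 0;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &MaxVertexAttribs);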

A VAO stores the state of each vertex attribute array: whether it is enabled, its size, stride, and type, whether its values are normalized, whether it contains unconverted integers, its attribute array buffer binding and pointer, plus the element array buffer binding. In order to test how this works, we will create two separate (simple) objects with different VAOs, as shown in the sketch below.
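
To make the benefit concrete, here is a sketch of what selecting the first object would look like without a VAO. Every call in it is state that the single glBindVertexArray() call in DrawScene() restores at once:

// Without a VAO, switching to the first object means re-specifying all array state:
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
// With a VAO, a single call restores the same state:
glBindVertexArray(m_vaoID[0]);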

Setting Viewport

The Reshape() function just sets the viewport.

void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
	glViewport(0, 0, w, h);
}


Drawing the Scene

DrawScene(), as its name implies, draws the scene.

void CGLRenderer::DrawScene(CDC *pDC)
{
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glBindVertexArray(m_vaoID[0]);		// select first VAO
	glDrawArrays(GL_TRIANGLES, 0, 3);	// draw first object

	glBindVertexArray(m_vaoID[1]);		// select second VAO
	glVertexAttrib3f((GLuint)1, 1.0, 0.0, 0.0); // set constant color attribute
	glDrawArrays(GL_TRIANGLES, 0, 3);	// draw second object

	glBindVertexArray(0);

	SwapBuffers(pDC->m_hDC);
}

As we can see, binding a VAO changes all vertex attribute array settings at once. But be very careful! If any vertex attribute array is disabled, the VAO loses its binding to the corresponding VBO. In that case we have to call glBindBuffer() and glVertexAttribPointer() again. The specification says nothing about this behavior, but it is what we have to do with the current version of NVIDIA's drivers.
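
If you run into that driver behavior, the workaround is to re-specify the pointer when you re-enable the array. A sketch, assuming the color attribute of the first object from SetData() has been disabled at some point:

glBindVertexArray(m_vaoID[0]);
glEnableVertexAttribArray(1);				// re-enable the color array
glBindBuffer(GL_ARRAY_BUFFER, m_vboID[1]);		// rebind its VBO...
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);	// ...and re-specify the pointer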

Cleaning up

And, at the end, we have to clean up the whole mess...

void CGLRenderer::DestroyScene(CDC *pDC)
{
	glBindBuffer(GL_ARRAY_BUFFER, 0);
	glDeleteBuffers(3, m_vboID);

	glBindVertexArray(0);
	glDeleteVertexArrays(2, m_vaoID);

	delete m_pProgram;
	m_pProgram = NULL;
	delete m_pVertSh;
	m_pVertSh = NULL;
	delete m_pFragSh;
	m_pFragSh = NULL;

	wglMakeCurrent(NULL, NULL);
	if (m_hrc)
	{
		wglDeleteContext(m_hrc);
		m_hrc = NULL;
	}
}

Final Result

[Image: the two triangles drawn by the program; the first with interpolated red, green, and blue vertex colors, the second in constant red.]

Checking For Extensions

This project did not use GL to check for extensions; however, if you want to check for extensions once your GL 3 context is created, you can choose one of these methods:

 const GLubyte *extensions_string=glGetString(GL_EXTENSIONS);


 int NumberOfExtensions;
 glGetIntegerv(GL_NUM_EXTENSIONS, &NumberOfExtensions);
 for (int i = 0; i < NumberOfExtensions; i++)
 {
   const GLubyte *one_string = glGetStringi(GL_EXTENSIONS, i);
 }

For GL 3.0, you can use either of these two methods. glGetString(GL_EXTENSIONS) has been around since GL 1.0. Of course, you need additional code to search the long string returned by glGetString(GL_EXTENSIONS); that is why some people use a library function such as glewIsSupported from GLEW.
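
If you search the string yourself, take care to match whole, space-delimited names; a naive substring test would report GL_EXT_texture as present whenever GL_EXT_texture3D is. A minimal sketch of such a search (IsExtensionSupported is a hypothetical helper, not part of GLEW):

#include <string.h>

int IsExtensionSupported(const char* name)
{
	const char* ext = (const char*)glGetString(GL_EXTENSIONS);
	size_t len = strlen(name);
	while (ext)
	{
		const char* hit = strstr(ext, name);
		if (!hit) return 0;
		// accept only whole names, delimited by spaces or the string ends
		if ((hit == ext || hit[-1] == ' ') && (hit[len] == ' ' || hit[len] == '\0'))
			return 1;
		ext = hit + len;
	}
	return 0;
}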

Starting with GL 3.1, you must use the second method.
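
With the second method the whole-name problem disappears, because each extension arrives as its own string, so a plain strcmp() is enough. A sketch, checking for the very extension this tutorial relies on:

bool found = false;
GLint n = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &n);
for (GLint i = 0; i < n && !found; i++)
	found = (strcmp((const char*)glGetStringi(GL_EXTENSIONS, i), "GL_ARB_vertex_array_object") == 0);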