My first OpenGL 3.0 program.

I’m trying to recreate the first NeHe tutorial without using deprecated OpenGL functions… I think OpenGL will become a lot more difficult for newbies to learn. >_<

I have used wxWidgets.
Basically it works, but I have a couple of questions for the experts.

I don’t post the whole code because it’s 5 files and a lot of wxWidgets setup; if you can create a glu program, then you can re-create this one.

Here is the context creation… I derived a class from wxGLContext. I can’t get the “wglCreateContextAttribsARB” pointer without a context, so I have to create two contexts.
The first one is created by the wxGLContext constructor.


ogl3Contex::ogl3Contex(wxGLCanvas *win, const wxGLContext* other)
: wxGLContext(win, other)
{
  const char *extensionsList = NULL;
  HGLRC sharedContext = NULL;

  wglMakeCurrent((HDC)win->GetHDC(), m_glContext);

   wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC) wglGetProcAddress("wglCreateContextAttribsARB");

  int attribs[] = {
	  WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
	  WGL_CONTEXT_MINOR_VERSION_ARB, 0,
	  // WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
	  // sorry, the nVidia beta drivers don't support a forward-compatible context
	  0}; // terminator "I'll be back" :o)

  if(other != NULL)
	  sharedContext = other->GetGLRC();
  if(wglCreateContextAttribsARB){
	  m_ogl3Supported = true;
	  m_newContext = wglCreateContextAttribsARB((HDC) win->GetHDC(), sharedContext, attribs);
	  glGetError();
  }else{
	  m_ogl3Supported = false;
  }
  init_extensions();
}

Here is the first question:
With GLIntercept I get this:

wglCreateContextAttribsARB( ??? )
----->wglCreateLayerContext(0xa7011c56,0)
----->----->wglGetPixelFormat(0xa7011c56)=7
----->----->wglDescribePixelFormat(0xa7011c56,7,40,0x12f62c)=138
----->----->wglGetPixelFormat(0xa7011c56)=7 glGetError() =GL_INVALID_OPERATION
----->----->wglDescribePixelFormat(0xa7011c56,1,0,0x0000)=138

Why does wglGetPixelFormat return an invalid operation? :confused:

The init_extensions function is just a long list of wglGetProcAddress calls to initialize all the function pointers.
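
For reference, a minimal sketch of what such a function can look like (the real one is just more of the same; the PFNGL… typedefs come from glext.h, and I assume the pointers are declared elsewhere as class members or globals):

void ogl3Contex::init_extensions()
{
  glGenVertexArrays = (PFNGLGENVERTEXARRAYSPROC) wglGetProcAddress("glGenVertexArrays");
  glBindVertexArray = (PFNGLBINDVERTEXARRAYPROC) wglGetProcAddress("glBindVertexArray");
  glGenBuffers      = (PFNGLGENBUFFERSPROC)      wglGetProcAddress("glGenBuffers");
  glBindBuffer      = (PFNGLBINDBUFFERPROC)      wglGetProcAddress("glBindBuffer");
  glBufferData      = (PFNGLBUFFERDATAPROC)      wglGetProcAddress("glBufferData");
  glCreateShader    = (PFNGLCREATESHADERPROC)    wglGetProcAddress("glCreateShader");
  // ...and so on for every GL 2.x/3.0 entry point the program uses
}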

Here is my init function; it’s called before rendering the first frame.

void TestGLCanvas::initGL()
{
  GLfloat* vertices = new GLfloat[12];  // vertex array
  GLfloat* colors = new GLfloat[12];	// color array
  GLint ret;
  
//Clear color and clear depth value
  glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
  glClearDepth(1.0f);
  glEnable(GL_DEPTH_TEST);
  glDepthFunc(GL_LEQUAL);
//Btw, depth test is useless in this application, it renders only a triangle

//Data
  vertices[0] = 0.0;
  vertices[1] = 1.0;
  vertices[2] =-1.0;
  vertices[3] = 1.0;

  vertices[4] =-1.0;
  vertices[5] =-0.5;
  vertices[6] =-1.0;
  vertices[7] = 1.0;

  vertices[8] = 1.0;
  vertices[9] =-0.5;
  vertices[10] =-1.0;
  vertices[11] = 1.0;

  colors[0] = 0.0;
  colors[1] = 0.0;
  colors[2] = 1.0;
  colors[3] = 1.0;

  colors[4] = 0.0;
  colors[5] = 1.0;
  colors[6] = 0.0;
  colors[7] = 1.0;

  colors[8] = 1.0;
  colors[9] = 0.0;
  colors[10] = 0.0;
  colors[11] = 1.0;

  //create and bind the vertex array object
  glGenVertexArrays(1, &m_vaoId);
  glBindVertexArray(m_vaoId);

  //generate two VBOs and get the associated IDs
  glGenBuffers(2, m_vboId);

  glBindBuffer(GL_ARRAY_BUFFER, m_vboId[0]);	// bind the first VBO
  glBufferData(GL_ARRAY_BUFFER, 12*sizeof(GLfloat), vertices, GL_STATIC_DRAW); // fill the VBO with data
  glVertexAttribPointer((GLuint)0, 4, GL_FLOAT, GL_FALSE, 0, 0); // this VBO feeds attribute 0

  glBindBuffer(GL_ARRAY_BUFFER, m_vboId[1]);	// Same as before, but with the second VBO
  glBufferData(GL_ARRAY_BUFFER, 12*sizeof(GLfloat), colors, GL_STATIC_DRAW);
  glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, 0, 0);
  
//I can't use glVertexPointer or glColorPointer, they are deprecated.
//So I need a shader to read my custom attributes
//	Shader definition
// Vertex shader
  m_vxShaderId = glCreateShader(GL_VERTEX_SHADER);
  glShaderSource(m_vxShaderId, 10, (const GLchar**)g_vertexShader, NULL);
  glCompileShader(m_vxShaderId);

//Check if something is wrong with the shader
  glGetShaderiv(m_vxShaderId, GL_COMPILE_STATUS, &ret);
  if(ret == false){
	  GLchar buffer[4096];	// You are lazy... >_<
	  GLsizei l;

	  wxLogError(wxT("unable to compile the vertex shader!"));
	  glGetShaderInfoLog(m_vxShaderId, 4096, &l, buffer);
	  wxLogError(buffer);
	  m_run = false;
  }

//Fragment shader
  m_fgShaderId = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(m_fgShaderId, 7, (const GLchar**)g_fragmentShader, NULL);
  glCompileShader(m_fgShaderId);
  glGetShaderiv(m_fgShaderId, GL_COMPILE_STATUS, &ret);
//Check if something is wrong with the shader
  if(ret == false){
	  GLchar buffer[4096];
	  GLsizei l;

	  wxLogError(wxT("unable to compile the fragment shader!"));
	  glGetShaderInfoLog(m_fgShaderId, 4096, &l, buffer);
	  wxLogError(buffer);
	  m_run = false;	// error, don't render
  }

//Create a program with my two shaders
  m_ProgramId = glCreateProgram();
  glAttachShader(m_ProgramId, m_vxShaderId);
  glAttachShader(m_ProgramId, m_fgShaderId);
  glBindAttribLocation(m_ProgramId, 0, "mPosition");
  glBindAttribLocation(m_ProgramId, 1, "mColor");
  glLinkProgram(m_ProgramId);

  glGetProgramiv(m_ProgramId, GL_LINK_STATUS, &ret);
  if(ret == false){
	  GLchar buffer[4096];
	  GLsizei l;

	  wxLogError(wxT("unable to link the program!"));
	  glGetProgramInfoLog(m_ProgramId, 4096, &l, buffer);
	  wxLogError(buffer);
	  m_run = false;	// error, don't render
  }else{
	  m_matId = glGetUniformLocation(m_ProgramId, "matMPV");
  }

// my data is in video memory now, I can delete these arrays
  delete [] vertices;
  delete [] colors;
  m_timer.Start();	// a timer to compute FPS
  m_glInitialized = true;	// don't call this function anymore
}

This part in the NeHe tutorial is no more than 5 lines of code (with four lines of comments).
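
For comparison, the deprecated immediate-mode version of this triangle (roughly what the NeHe tutorial does, written from memory, not a literal quote) would be:

  glBegin(GL_TRIANGLES);				// start drawing the triangle
	glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, -1.0f);
	glColor3f(0.0f, 1.0f, 0.0f); glVertex3f(-1.0f, -0.5f, -1.0f);
	glColor3f(1.0f, 0.0f, 0.0f); glVertex3f( 1.0f, -0.5f, -1.0f);
  glEnd();						// done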

While the setup is very long, the drawing part is very short.


void TestGLCanvas::Render()
{
//an identity matrix, just to try.
  GLfloat mpvMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0 };

  if(!m_glInitialized)
	  initGL();
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
  if(m_run){
	  glUseProgram(m_ProgramId); // select the shaders program
	  glUniformMatrix4fv(m_matId, 1, false, mpvMatrix);	// set the uniform matrix
	  glEnableVertexAttribArray(0);		// enable vertex attribute 0 (mPosition)
	  glEnableVertexAttribArray(1);		// enable vertex attribute 1 (mColor)
	  glBindVertexArray(m_vaoId);		// select the vertex array object
	  glDrawArrays(GL_TRIANGLES, 0, 3);	// draw the array (at the speed of light)
	  glDisableVertexAttribArray(0); 	// disable attribute array (just to keep a clean code)
	  glDisableVertexAttribArray(1);
// these two lines sometimes cause a crash... comment them out if you get a page fault in the driver (I think it's a driver bug)
  }
  SwapBuffers();	// send the buffer to the front page ;)
}

The good news is that the code to render a triangle with a simple shader and the code to render a very large mesh with a complex shader are basically the same. :slight_smile:
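
For example (just a sketch, not part of my program; meshProgramId, meshMatId, meshVaoId and indexCount are hypothetical, and I assume the VAO was built with an element buffer bound), rendering an indexed mesh follows exactly the same pattern:

  glUseProgram(meshProgramId);			// a more complex shader program
  glUniformMatrix4fv(meshMatId, 1, GL_FALSE, mpvMatrix);
  glEnableVertexAttribArray(0);
  glEnableVertexAttribArray(1);
  glBindVertexArray(meshVaoId);			// a VAO holding many more vertices
  glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);	// indexed draw instead of glDrawArrays
  glDisableVertexAttribArray(0);
  glDisableVertexAttribArray(1);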

The shaders are very simple

GLchar *g_vertexShader[] = {
  "#version 130\n",
  "uniform mat4 matMPV;\n",
  "in  vec4 mPosition;\n",
  "in  vec4 mColor;\n",
  "out vec4 fColor;\n",
  "void main(void)\n",
  "{\n",
  "	gl_Position = matMPV*mPosition;\n",
  "	fColor = mColor;\n",
  "}\n"
};

GLchar *g_fragmentShader[] = {
  "#version 130\n",
  "in  vec4 fColor;\n",
  "out vec4 outColor;\n",
  "void main(void)\n",
  "{\n",
  "  outColor = fColor;\n",
  "}\n"
};

Another question: gl_FragColor is deprecated and I have to use a custom output variable… but I didn’t understand why.

Here is the result of my “two days of documentation reading” work.

Another question: gl_FragColor is deprecated and I have to use a custom output variable… but I didn’t understand why.

It’s because gl_FragColor maps to gl_FragData[0], and there are MAX_COLOR_ATTACHMENTS entries in the gl_FragData array. Custom outputs let you name the color attachments with glBindFragDataLocation. Check out framebuffer objects as well.

texOut = vec4(1.0, 1.0, 1.0, 1.0);
texOutWithRedMissing = vec4(0.0, 1.0, 1.0, 1.0);

reads better than gl_FragData[0], gl_FragData[1].
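
On the application side (a sketch with a hypothetical programId, not code from the post above), you bind those names to color numbers before linking, and with an FBO bound you select which attachments receive them:

  glBindFragDataLocation(programId, 0, "texOut");
  glBindFragDataLocation(programId, 1, "texOutWithRedMissing");
  glLinkProgram(programId);

  GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
  glDrawBuffers(2, drawBuffers);	// route color numbers 0 and 1 to the FBO attachments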

wglCreateContextAttribsARB( ??? )
----->wglCreateLayerContext(0xa7011c56,0)
----->----->wglGetPixelFormat(0xa7011c56)=7
----->----->wglDescribePixelFormat(0xa7011c56,7,40,0x12f62c)=138
----->----->wglGetPixelFormat(0xa7011c56)=7 glGetError() =GL_INVALID_OPERATION
----->----->wglDescribePixelFormat(0xa7011c56,1,0,0x0000)=138

I’m not much of a Windows guy for OpenGL. Who called glGetError, GLIntercept?

Thanks a lot, I have read the “shader output” section of the spec. It’s clearer now.

GLIntercept doesn’t really call glGetError…
I’m not sure how it works, because if I put a glGetError after that command I still get the invalid operation status.
I know that GLIntercept replaces the OpenGL library, so I think it can read the error status without altering it. :-)
The arrows mean that those functions are called internally by the driver.

So wglCreateContextAttribsARB calls wglCreateLayerContext and so on.
Basically it works like this: wglGetPixelFormat asks “What is the pixel format of this device context?”
wglDescribePixelFormat asks “Is this pixel format supported? Can you give me a similar supported format?” But nobody calls SetPixelFormat to say “OK, this format works, use it.”
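
For context, the usual pixel-format setup that a framework like wxGLCanvas does for you before any GL context exists (a generic sketch of the standard Win32 sequence, not code from my program) looks like this:

  PIXELFORMATDESCRIPTOR pfd;
  memset(&pfd, 0, sizeof(pfd));
  pfd.nSize      = sizeof(pfd);
  pfd.nVersion   = 1;
  pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
  pfd.iPixelType = PFD_TYPE_RGBA;
  pfd.cColorBits = 32;
  pfd.cDepthBits = 24;

  int format = ChoosePixelFormat(hdc, &pfd);	// "give me a supported format close to this one"
  SetPixelFormat(hdc, format, &pfd);		// "OK, use this one" - allowed only once per window
  HGLRC rc = wglCreateContext(hdc);		// a legacy context, enough to call wglGetProcAddress
  wglMakeCurrent(hdc, rc);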

Till now I have always used a framework, so I never really cared about the internal driver stuff. I guess the nVidia programmers are smart enough to make it work. :slight_smile:
I was only wondering if I’m doing something wrong.

GLIntercept calls glGetError, but it also caches the result, so it can return the correct error value to your own glGetError call.
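
Roughly the idea (a conceptual sketch of the caching, not actual GLIntercept code):

  // the interceptor checks the error after every call it logs...
  static GLenum cachedError = GL_NO_ERROR;

  void check_error_after_call()
  {
	  GLenum e = glGetError();			// this consumes the driver's error flag
	  if(e != GL_NO_ERROR) cachedError = e;		// ...but the value is remembered
  }

  // ...so the glGetError handed to the application can still report it
  GLenum logged_glGetError()
  {
	  GLenum e = cachedError;
	  cachedError = GL_NO_ERROR;
	  return e;
  }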

Yes, this is what it does internally.

FYI - I have no plans to update GLIntercept to fully work with OpenGL 3.0 - but as long as you are only using it for logging/error reporting it should be OK.