Migration from Fixed pipeline to Programmable pipeline

We have decided to convert our legacy application’s OpenGL code from the old fixed-pipeline functions/immediate mode to the programmable pipeline.

I’ve searched around and found that although it is possible to mix the fixed and programmable pipelines, it isn’t really clear how one would actually do that. Should I start at the beginning, when my OpenGL context is first created? Or can I start by changing how specific shapes are drawn and move outwards from there?

I’ve started by replacing our cube-drawing code with an example from the OpenGL SuperBible, but my application crashes soon after I set the data for a GL_ELEMENT_ARRAY_BUFFER. My guess is that some kind of OpenGL initialization is missing.

We will eventually have to rewrite all our rendering logic, as right now it goes through each vertex that needs to be drawn and calls the OpenGL immediate-mode functions (lots of overhead). I’m hoping for some advice as to how we can best approach this.

You could be right about missing OpenGL initialization.
First on the checklist is gaining access to the OpenGL functions. You can load them manually, but it is typically strongly recommended to use GLEW: with GLEW you make a context current, then initialise GLEW (by calling glewInit()) to obtain access to the OpenGL functions.
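For example, here’s a minimal sketch of that order, assuming GLEW and a window/context already created and made current by whatever toolkit you use:

```cpp
#include <GL/glew.h>   // must come before other GL headers
#include <cstdio>

// Call this once, after the context exists and is current on this
// thread, and before making any other GL calls.
bool initGLFunctions()
{
    glewExperimental = GL_TRUE;   // safer on core-profile contexts
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        std::fprintf(stderr, "glewInit failed: %s\n",
                     (const char*)glewGetErrorString(err));
        return false;
    }
    return true;
}
```

Crashing the moment you first touch a buffer object is a classic symptom of calling a GL function pointer that was never loaded, so this is worth checking first.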

I do know the OpenGL SuperBible 5 has a good example of sorta emulating the fixed pipeline.

OK, there are two steps here, and I’d advise taking them one at a time: it will go a LOT easier for you than trying to do both at the same time.

First step is to move away from immediate mode to buffer objects.
Second step is to move from fixed vertex and fragment processing to shaders.

I recommend that you take the second step first, and the reason why is that having a shader infrastructure already in place can make some of the decisions you’ll be making about the move from immediate mode to buffer objects a little easier.

This isn’t as daunting as it may seem on the surface. You can, for example, port to a roughly GL 2.1 equivalent just by converting your vertex transform and lighting, and your fragment processing (i.e. glTexEnv calls), to shaders that do the same thing. A realistic objective at this stage is to have a functionally identical program that no longer uses the fixed pipeline at the end of the port. Trying to add extra features at this stage is, IMO, premature.

When doing this you should be able to disable all of your scene drawing, then bring it back in one step at a time. I’ve done a few similar ports before and I normally prefer to start with the 2D GUI code, as the drawing is relatively simple and there are only going to be a small handful of shader combinations used. So disable (i.e. comment out) all drawing except your 2D GUI, then write a simple vertex shader that transforms position by your ortho matrix, write a simple fragment shader that outputs a flat colour (white, or whatever is easily visible to confirm that it worked), link them into a program, glUseProgram it, and draw. Fix any problems, then add in textures. Then bring in colours. Then bring in any other GUI effects you might have. Nice and slow, one step at a time, and you’ll get there.
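To make that concrete, here’s a sketch of such a pair in GLSL 1.20 (the GL 2.1 level), where the built-in attributes and matrix stack are still available, so the vertex shader picks up your existing ortho matrix for free:

```glsl
// --- vertex shader: transform by the fixed-function matrix stack ---
#version 120
void main()
{
    // gl_ModelViewProjectionMatrix includes the ortho projection that
    // the fixed-function code already loads for the 2D GUI.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader: flat colour so it's obvious the shader path works ---
#version 120
void main()
{
    gl_FragColor = vec4(1.0);   // solid white
}
```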

Having completed the 2D GUI you should have a shader-loading infrastructure set up, you should have a decent understanding of how to activate and switch shaders, you’ll know about uniforms and maybe a little about attributes, you’ll know how to pass data between the shader stages. Then pick something else, un-comment it, and repeat. You’ll go a little faster this time as you’ll already have much of the groundwork done. Continue repeating until this first stage is completed.
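That infrastructure doesn’t need to be much at this stage. As a sketch (the function names here are illustrative, not from any particular codebase), compile-and-link with error logging covers most of it:

```cpp
#include <GL/glew.h>
#include <cstdio>

static GLuint compileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "shader compile failed:\n%s\n", log);
    }
    return shader;
}

GLuint buildProgram(const char* vsSource, const char* fsSource)
{
    GLuint vs = compileShader(GL_VERTEX_SHADER, vsSource);
    GLuint fs = compileShader(GL_FRAGMENT_SHADER, fsSource);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        std::fprintf(stderr, "program link failed:\n%s\n", log);
    }

    // The shader objects can be deleted once linked; the program
    // keeps everything it needs.
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}
```

From there it’s glUseProgram(program) before drawing, and glGetUniformLocation/glUniform* for the uniforms.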

Now you can begin porting to buffer objects. Again, it’s going to be easier if you don’t try to jump over too many GL_VERSIONs at the same time, so stick with the fixed vertex attribs (glVertexPointer, glTexCoordPointer, etc.) for now. Go through your drawing routines and divide your vertex data into two main groups: data that doesn’t change from frame to frame and data that does. Then look at the second group more closely: is it all really data that changes? E.g., you may have some keyframe animation, and that can be handled in a vertex shader instead of updating the data all the time; move it back to the first group. You may have a particle system, and again the particle movement may be handled by sending an unchanging position but applying a simple physics equation in a vertex shader; first group again. This is why I recommended doing shaders first: if you hadn’t, this line of approach wouldn’t have been available to you.
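As a sketch of what the first group ends up looking like at this intermediate stage (still the fixed attribute bindings; the interleaved Vertex layout is illustrative):

```cpp
#include <GL/glew.h>
#include <cstddef>   // offsetof

struct Vertex
{
    float pos[3];
    float tex[2];
};

GLuint createStaticVBO(const Vertex* verts, int count)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW: the "doesn't change from frame to frame" group.
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), verts, GL_STATIC_DRAW);
    return vbo;
}

void drawVBO(GLuint vbo, int count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    // With a buffer bound, the pointer arguments become byte offsets
    // into the buffer rather than client memory addresses.
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, pos));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, tex));
    glDrawArrays(GL_TRIANGLES, 0, count);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```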

For the data that absolutely does need to change you have a few options. You can just continue using immediate mode: it’ll still work with GL 2.1. You can convert it to client-side vertex arrays. You can look at getting some buffer object streaming set up.
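If you do go the streaming route, one common pattern (a sketch, assuming a buffer created up front) is to orphan the buffer’s storage each frame so the driver doesn’t have to stall on data the GPU may still be reading:

```cpp
#include <GL/glew.h>

// Re-specify ("orphan") the buffer's storage, then upload this frame's data.
void streamVertexData(GLuint vbo, const void* data, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW);  // orphan old storage
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, data);            // refill
}
```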

After having completed this you should have your functionally identical program that no longer uses the fixed pipeline and (possibly mostly) no longer uses immediate mode. Then it’s time to jump up to a higher GL_VERSION, if you wish (you may decide that you’re happy with the result and call it mission accomplished), or you may instead prefer to add some extra features (you’ve probably got some per-vertex work that you’ll get a better quality result from by moving it to per-fragment; lighting is a classic example), optimize a little, or whatever.
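To illustrate the lighting example, here’s a sketch of that per-vertex to per-fragment move for simple diffuse lighting, GLSL 1.20 again; the lightDir uniform is an assumed name for a normalised eye-space light direction:

```glsl
// --- vertex shader: pass the normal through instead of lighting here ---
#version 120
varying vec3 normal;
void main()
{
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader: evaluate the diffuse term per fragment ---
#version 120
varying vec3 normal;
uniform vec3 lightDir;   // assumed: normalised, eye space
void main()
{
    // Interpolated normals lose unit length, so renormalise per fragment.
    float diffuse = max(dot(normalize(normal), lightDir), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```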