Deprecated: Fixed Pipeline Vertex Processing???

Is this some kind of a joke?

The function glTranslatef was marked as deprecated in OpenGL version 3.0 and was removed from OpenGL at version 3.1.
It is recommended to avoid using deprecated functions, as their compatibility with future OpenGL versions is not assured.
Instead, implement a vertex shader that performs the required vertex processing.
Complete list of deprecated functions in this category:
Vertex arrays: glColorPointer, glEdgeFlagPointer, glFogCoordPointer, glIndexPointer, glNormalPointer, glSecondaryColorPointer, glTexCoordPointer, glVertexPointer, glEnableClientState, glDisableClientState, glInterleavedArrays, and glClientActiveTexture.
Matrix stack: glFrustum, glLoadIdentity, glLoadMatrix, glLoadTransposeMatrix, glMatrixMode, glMultMatrix, glMultTransposeMatrix, glOrtho*, glPopMatrix, glPushMatrix, glRotate*, glScale*, and glTranslate*.
Lighting and materials: glMaterial*, glLight*, glLightModel*, glColorMaterial, glShadeModel, and glClipPlane*.
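To make the matrix-stack deprecations concrete: glTranslatef never did anything exotic. It multiplied the current matrix by a 4×4 translation matrix, and the fixed pipeline then transformed every vertex by the result. A replacement vertex shader does the same multiply itself (gl_Position = modelViewProjection * vertex). Here is a minimal CPU-side sketch of that math; the types and helper names are illustrative, not from any real codebase:

```c
#include <assert.h>

/* What glTranslatef(tx, ty, tz) effectively did: multiply the current
 * matrix by a 4x4 translation matrix. A replacement vertex shader then
 * performs the same matrix * vertex multiply per vertex.
 * Matrices are column-major, as OpenGL stores them. */
typedef struct { float m[16]; } Mat4;
typedef struct { float x, y, z, w; } Vec4;

static Mat4 mat4_translate(float tx, float ty, float tz) {
    Mat4 t = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  tx,ty,tz,1}};
    return t;
}

/* The per-vertex transform the fixed pipeline applied for you. */
static Vec4 mat4_mul_vec4(const Mat4 *a, Vec4 v) {
    Vec4 r;
    r.x = a->m[0]*v.x + a->m[4]*v.y + a->m[8]*v.z  + a->m[12]*v.w;
    r.y = a->m[1]*v.x + a->m[5]*v.y + a->m[9]*v.z  + a->m[13]*v.w;
    r.z = a->m[2]*v.x + a->m[6]*v.y + a->m[10]*v.z + a->m[14]*v.w;
    r.w = a->m[3]*v.x + a->m[7]*v.y + a->m[11]*v.z + a->m[15]*v.w;
    return r;
}
```

Translating the point (1, 1, 1) by (2, 3, 4) gives (3, 4, 5), exactly what glTranslatef followed by a vertex submission used to produce.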

Does it mean people who use OpenGL for things like custom user interface will have to write vertex shaders? LOL.
But seriously, how much time do you think I have to remove those functions from my code? This hurts.

For many of these, switching to “the new way” doesn’t take nearly as much time as you might think, and many don’t involve shaders at all.

However, see if your vendor’s implementation offers a 3.1 context with the ARB_compatibility extension (link). If so, you don’t “have” to give up your old ways cold turkey. Just stew on them when you have time to see if you can eke out a bit of performance (or just simplify the code) by switching. The first eight, for instance, just collapse into a single new unified call (glVertexAttribPointer).
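To illustrate that last point: each of the old gl*Pointer calls described an array by a stride and a byte offset, and glVertexAttribPointer takes exactly the same description, just keyed by a generic attribute index. A common pattern is one interleaved struct per vertex; the struct and layout below are hypothetical, not from any particular codebase:

```c
#include <stddef.h>

/* The separate arrays that glVertexPointer, glNormalPointer, and
 * glTexCoordPointer once addressed can be interleaved in one struct;
 * each former call becomes one glVertexAttribPointer with
 * stride = sizeof(Vertex) and the member's byte offset. */
typedef struct {
    float position[3];
    float normal[3];
    float texcoord[2];
} Vertex;

/* The stride/offset pairs you would then pass, e.g.:
 *   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
 *                         (void *)offsetof(Vertex, position));      */
static size_t vertex_stride(void)   { return sizeof(Vertex); }
static size_t normal_offset(void)   { return offsetof(Vertex, normal); }
static size_t texcoord_offset(void) { return offsetof(Vertex, texcoord); }
```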

All new OpenGL drivers provide version 2.1 as the default; you have to specifically ask for an OpenGL 3.0 or 3.1 context to get that version of the API.
So to use the fixed-function pipeline you don’t need to change anything; your existing code will keep working.
Many of the OpenGL 3 enhancements have been added to the 2.1 context as extensions anyway.
You don’t ever need to worry about converting to version 3 unless you want to use integer shaders or write very high-performance animated graphics.
For things like a custom user interface, just keep using version 2.1.

“Does it mean people who use OpenGL for things like custom user interface will have to write vertex shaders? LOL.”

Fortunately for those up to their armpits in legacy code, there will be fallbacks for the foreseeable future.

Otherwise, if you want to give your customers the best possible experience today (i.e. the slickest possible UI), you’ll just have to find the time. I’m a fan of stick-and-rudder GL too, but as time and hardware march on it makes more sense to code/port against GL3; the less legacy code generated now, the easier future deprecations get.

I find this move rather odd. I thought the GL stood for Graphics Library or Graphics Language. As such, I have some minimum expectations, one being that it will handle certain details for me, while allowing me to handle them myself if I so choose.

If I must handle all vertex operations myself, there is little to nothing to keep me from switching to DirectX. Lighting and vertex operations are still handled by the DirectX library. I don’t find these functions to be “legacy code”. They perform an essential function in a graphics library and having the library handle them leaves me more time to focus on other issues.

When does it end? Will we be required to write our own texture mapping functions in GL version 5?

I can offer a partial answer to your question:
http://www.opengl.org/wiki/FAQ#glTranslate.2C_glRotate.2C_glScale

OpenGL is supposed to be a thin layer over the hardware, just like Direct3D. All those things listed aren’t supported by Direct3D 10 and above.

That does answer it and it makes a lot of sense. I just cannot be lazy anymore. :frowning:

I am a computer graphics professor. I teach a university computer graphics course and I think removing fixed pipeline functions from OpenGL was a really bad decision. It will hurt OpenGL in the future.

Yes, hardware has evolved. But – the fixed pipeline functions are very intuitive. They make it very easy for novices to learn and adopt OpenGL.

So, now, rather than teaching new students these simple concepts, we have to dive right into shaders. Shaders are advanced material – when many students are struggling just with the basic concepts of computer graphics.

As an example: in math, you have to learn fractions and arithmetic before you can do calculus. You don’t do it all at once. If you try, you will get poor educational results. In computer graphics, the end result will be that there will be fewer people learning and adopting OpenGL.

A simple solution would have been to keep the fixed functions in OpenGL, indefinitely. It is only a handful of functions. I don’t buy the argument that this would be hard for vendors to do, or that this would somehow violate the spirit of OpenGL. It would be a minimal amount of work for any serious OpenGL vendor.

Secondly, this move will break a lot of old OpenGL code. There is a huge OpenGL codebase out there. Simply asking people to use 2.1 is not a good solution – because that version may become unavailable in the future. At best, it is now risky to use OpenGL, because parts of the API may disappear in the future.

Would you program in a computer language for which you knew the basic functions may at some point disappear, breaking all your code? Of course not. You would invest your time and money into something else.

If you think people will happily rewrite their code using shaders – they won’t. They will instead look for alternatives to OpenGL.

I sincerely hope that the fixed pipeline functions will be re-introduced in a future version of OpenGL. I doubt it will happen, however.

I don’t believe the fixed functionality stuff is going to go from OpenGL at least within the foreseeable future. At the very least I’d expect NVIDIA - who have a proven history of great OpenGL support - to retain it as long as possible, and I don’t see Intel releasing an OpenGL 3+ driver any time soon either. Plus there’s the CAD/workstation market to consider, which has always been a critical part of the OpenGL userbase, and which always moves quite a bit slower than the consumer/gaming market.

I say read the text again, specifically this part (with my added emphasis): “It is recommended to avoid using deprecated functions, as their compatibility with future OpenGL versions is not assured.”

So you can still use them perfectly well if you write OpenGL code to the 2.1 or lower spec. Yes, the fixed-function hardware is gone on all modern GPUs, but the driver generates shaders to emulate the fixed pipeline, so you can still use them.

Consider this an early warning rather than an absolute “Thou Shalt Not”. Consider it a heads up that if you have legacy code that you need to be able to run on modern hardware for the foreseeable future, now is the time to start working on porting it or replacing it while you still have a time window measurable in years, rather than waiting till the last minute and then doing a rush job on it.

If you think people will happily rewrite their code using shaders – they won’t. They will instead look for alternatives to OpenGL.

I’m afraid the only viable alternative to OpenGL is Direct3D. That won’t work out in your situation for two reasons. Firstly it’s not portable, which I would expect would be an important factor for you. Secondly, modern versions of D3D (10 and 11) also have the fixed pipeline removed.

I’m on the fence when it comes to learning things the shader/VBO only way from the outset. On the one hand I can see that it makes sense to start with the way that’s going to be there for the longer run, and it also means that you don’t pick up any bad habits. On the other - and no matter what people say - I don’t believe that the new way is necessarily simpler. Sure, writing a shader is easy, and issuing a glDraw(Range)Elements call is easy, but there’s a whole heap of infrastructure - including planning your VBO strategy (what elements are static, what are dynamic, how big should my buffers be, what buffers can I reuse for multiple different types of object, how do I fail gracefully, etc) - that you need to put in place before you do anything. Besides, if you’re going to learn things this way then you’re probably better off learning D3D9 instead, where the API is cleaner and more intuitive (at least once you get past CreateDevice and handling D3DERR_DEVICELOST properly).
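As a small illustration of the “what elements are static, what are dynamic” planning mentioned above, one of those decisions is just picking a usage hint per buffer. The enum values below match the standard GL headers; the classification scheme itself is only a sketch:

```c
/* Sketch of one VBO-strategy decision: choosing a usage hint per buffer
 * based on how often the application rewrites its contents. The GL_*
 * values match the standard GL headers; everything else is illustrative. */
#define GL_STREAM_DRAW  0x88E0
#define GL_STATIC_DRAW  0x88E4
#define GL_DYNAMIC_DRAW 0x88E8

/* How often the application rewrites the buffer's contents. */
typedef enum { WRITE_ONCE, WRITE_OCCASIONALLY, WRITE_EVERY_FRAME } WriteFreq;

static unsigned usage_hint(WriteFreq f) {
    switch (f) {
    case WRITE_ONCE:         return GL_STATIC_DRAW;  /* e.g. level geometry */
    case WRITE_OCCASIONALLY: return GL_DYNAMIC_DRAW; /* e.g. a resized UI panel */
    default:                 return GL_STREAM_DRAW;  /* e.g. per-frame particles */
    }
}
```

The hint then goes into the corresponding glBufferData call; it is advice to the driver about placement, not a hard constraint.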

Would you program in a computer language for which you knew the basic functions may at some point disappear, breaking all your code?

Sorry to say this, but back when I was in college I used to be able to just grab an address and read directly from or write directly to a parallel port or video memory. A mere 5 years later and I couldn’t any more. And you know what? With hindsight this was a good thing, because those days were pretty damn hairy overall. Everybody wrote “the C program that completely hosed your PC”. Twice. During the first day we were learning about pointers. Talk about learning by getting your fingers burned!

That’s the nature of progress - old stuff that seemed good at the time turns out to be not so good when a few years pass. Despite my own personal fondness for the fixed pipeline as an easy introduction, I’ll happily watch the days when everybody writes the OpenGL program that draws only 2000 triangles yet runs at 10 FPS come to a close.

As far as the real world is concerned, every OpenGL implementation that supports 3.2 core and above also supports 3.2 compatibility and above. Which means that the fixed function pipeline (unfortunately) hasn’t gone anywhere. A user can opt out of it if they wish, but it’s still around if you want it.

Shaders are advanced material

I keep hearing people say this, and I’ve never understood it. What is so “advanced” about shaders? If you start off explaining where shaders are in the pipeline and how they work, they’re not particularly complicated.

Complex shaders can be complex, but simple shaders are simple. Indeed, shaders make understanding certain things much easier. Take lighting for example. The basic diffuse lighting equation is this:

float lightValue = clamp(dot(normal, lightDir), 0.0, 1.0);
finalColor = lightValue * lightIntensity * diffuseColor;

You can set this up in OpenGL in a few function calls. For per-vertex lighting, at any rate. But should per-fragment lighting be considered “advanced” material? Per-fragment lighting has been commonplace for years; calling it “advanced” is doing your students a disservice.
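For what it’s worth, the lighting snippet above really is just a dot product and a clamp; here is the same diffuse term written as plain CPU code (the helper names are mine, not GLSL built-ins):

```c
/* CPU version of the shader's diffuse term:
 * clamp(dot(normal, lightDir), 0.0, 1.0).
 * Both vectors are assumed to be unit length. */
static float clampf(float x, float lo, float hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

static float dot3(const float a[3], const float b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static float diffuse_term(const float normal[3], const float light_dir[3]) {
    return clampf(dot3(normal, light_dir), 0.0f, 1.0f);
}
```

A light shining straight along the normal gives a term of 1.0; a light behind the surface gives a negative dot product, which the clamp pins to 0.0.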

Furthermore, look at how the equation changes when you have a texture:

vec4 diffuseColor = texture(diffuseSpl, texCoord);
float lightValue = clamp(dot(normal, lightDir), 0.0, 1.0);
finalColor = lightValue * lightIntensity * diffuseColor;

It doesn’t change. The only thing that has changed is where the diffuse color comes from. This is standard programming practice, and anyone learning how to write graphics code should be able to follow that.

In terms of fixed-function, you have to set up a glTexEnv stage to do the last multiply, which is rather different from the setup for using the per-vertex interpolated color for the last multiply.

With shaders, it’s a very minor change to the shader. It’s very obvious what you’re doing (accessing a texture and using that value as a diffuse color in the lighting equation).

How does it change when a shadow map is added?

vec4 diffuseColor = texture(diffuseSpl, texCoord);
float lightValue = clamp(dot(normal, lightDir), 0.0, 1.0);
float lightMap = texture(shadowMapSpl, shadowCoord); // shadowMapSpl is a sampler2DShadow
finalColor = lightMap * lightValue * lightIntensity * diffuseColor;

It’s really simple and obvious what’s going on. Compare that to the fixed-function code necessary to emulate this.

Fixed function is fast for throwing something on the screen. But for teaching people how to actually assemble a desired effect, it isn’t very good. It’s obtuse, inflexible, and you’ll have to abandon it anyway to do anything real.

And the last part is really the sticking point for me. See, calculus does not invalidate fractions and arithmetic. You need basic arithmetic to be able to use calculus; calculus itself requires these things. Shaders invalidate the use of glLighti, glTexEnv, and the rest of the fixed-function pipeline.

Honestly, I’ve been using shaders for so long I don’t even remember how glTexEnv works. Nobody can possibly say, “I’ve been using calculus for so long I don’t even remember how addition works.”

If you think people will happily rewrite their code using shaders – they won’t. They will instead look for alternatives to OpenGL.

If by “alternatives to OpenGL” you mean Direct3D, sorry: D3D10 and above abandoned fixed functionality too. OpenGL ES 2.0 dropped it as well. Indeed, only desktop OpenGL has fixed function anymore; all that has changed is that you have to ask for it.

You are doing your students a disservice by not introducing them to a shader-based world as soon as possible. That’s the reality that they’re going to have to deal with, so it’s better for them to deal with it ASAP.

Sure, writing a shader is easy, and issuing a glDraw(Range)Elements call is easy, but there’s a whole heap of infrastructure - including planning your VBO strategy (what elements are static, what are dynamic, how big should my buffers be, what buffers can I reuse for multiple different types of object, how do I fail gracefully, etc) - that you need to put in place before you do anything.

But these are all performance optimizations. If you’re using immediate mode or client memory pointers, it’s obvious that performance is not one of your primary concerns. None of these “need” to be in place for simple teaching or demo purposes. They’re only useful for developing high-performance code.

And if you’re developing high-performance rendering code, you’re probably past the “learning” phase and fixed-function isn’t going to be of any value to you.