GLSL: The return to fixed functionality

This seems like an obvious question, but my Google chops seem to be lacking here…

So with OGL 3, after using a shader you can return to the fixed-function pipeline with glUseProgram( 0 ). With 4, however, the reference says: “If program is zero, then the current rendering state refers to an invalid program object and the results of shader execution are undefined.” So what is the proper way to return to fixed functionality when you are done with your shader program?

The core profile of OpenGL 3.2 and above does not allow a “return to fixed functionality.” There is no fixed functionality. It was deprecated in 3.0 and removed in 3.1. The compatibility profile of 3.2 brings it back.

Yeah, bottom line: you want the compatibility profile if you need the fixed-function pipeline. You get this by default.
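To make the bottom line concrete, here is a minimal sketch of the pattern the question describes. It is only valid in a compatibility profile context, and drawWithShader / drawFixedFunction are hypothetical stand-ins for your own rendering code:

```c
#include <GL/gl.h>   /* on Windows, include <windows.h> first */
/* glUseProgram is GL 2.0; on Windows the pointer has to come from a
 * loader (wglGetProcAddress or a library such as GLEW), not GL/gl.h. */

/* hypothetical stand-ins for your own draw calls */
extern void drawWithShader(void);
extern void drawFixedFunction(void);

void renderFrame(GLuint program)
{
    glUseProgram(program);   /* render with the shader */
    drawWithShader();

    glUseProgram(0);         /* compatibility profile only: program zero
                                means "use the fixed-function pipeline" */
    drawFixedFunction();
}
```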

Sorry for the “explain it like I’m 5.” This means that if I create my context without setting the FORWARD_COMPATIBLE and CONTEXT_CORE_PROFILE bits, glUseProgram( 0 ) will return me to the old fixed-function pipeline. Correct?

First, you shouldn’t be using the FORWARD_COMPATIBLE bit at all. Second, if you want a compatibility profile context, you set the CONTEXT_COMPATIBILITY_PROFILE_BIT.
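In attrib-list terms that is a single <name, value> pair passed to [wgl|glx]CreateContextAttribsARB. A minimal sketch using the WGL spellings from GL/wglext.h, combined with a 3.2 version request:

```c
/* sketch: request a 3.2 compatibility profile context */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0   /* the attrib list is zero-terminated */
};
```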

Ah. Got it. Ta.

This was something I was wondering about lately: the forward-compatible flag is said to actually REMOVE all features marked deprecated. Does this mean, in practice and in theory, that immediate-mode drawing and the like are masked out, e.g. that requesting pointers to those entry points will return 0?
Edit: And shouldn’t the forward-compatible feature have been a context profile rather than a flag? Or are there REALLY plans to remove (in the sense above) old features like glVertex in core contexts?

You shouldn’t be using the forward-compatibility flag at all. It became obsolete when the majority of the deprecated stuff was removed in 3.1.

If by “plans,” you mean “happened four years ago in OpenGL 3.1”, then yes there are “plans” to do that. Perhaps you should read more about profiles on the Wiki.
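For identification only, since the advice here is not to use it: this is roughly what the flag looks like in an attribute list (WGL spellings assumed from GL/wglext.h):

```c
#include <windows.h>
#include <GL/wglext.h>   /* WGL_CONTEXT_* tokens */

/* A 3.0 request with the forward-compatible flag: everything that
 * 3.0 marked deprecated, immediate mode included, is removed from
 * the resulting context. */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0   /* zero-terminated */
};
```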

Hmm - here is how I understood it:
The compat profile gives you all functionality PLUS the whole set of compatibility state variables in shaders, which necessarily get removed in the core profile.
So I tend to read the compat profile as “everything written for an earlier version still works.”
The core profile is required to remove the cluttering compat state variables and such, but is not required to remove entry points,
whereas the forward-compatible flag should make it impossible to do anything not marked as core.
This may sound odd, as one would need the compat state variables in shaders to retrieve data sent by glVertex, but I remember having read that attribute indices are used internally by GL implementations. I must say, though, I’m not sure that source stated they were actually usable without being bound to, e.g., gl_Vertex.
And reading this all again, my interpretation seems a lot less plausible to me than when the idea formed in my head. Right now I’m a little too tired for reading technically heavy stuff; I’ll have a look at the specs sometime else.

The core profile is required to remove the cluttering compat state variables and such, but is not required to remove entry points

Core means the OpenGL Specification Core Profile. If it’s not in that document, and it’s not provided by an extension, it is not available in the core profile. Did you think they were just kidding about that entire document labeled “Core Profile”? If they wanted it to only matter in forward compatibility mode, they would have called it the “forward compatibility profile.”

That’s why the forward compatibility bit is pointless.

The figure:
<--B--------------C---------------F-->
C is the moving point: the actual core profile at any point on the line.
B is the backward-compatible context profile.
F is the forward bit, toward which the core profile moves spec by spec.
What does this mean apart from technical details?
It means: if you know you aren’t up to date, set B, i.e. a compat context.
If you want a finger pointed at the things that aren’t up to date anymore, set F.
When you make your final release, just set C: it is the core.

So far for the ideal; don’t ask me how it should be realized. The best thing would be to impose the need to (somehow) include the information about which GL core version the application was built against.
That’s my bid: take CreateContext without a version number out of the core!
Except of course CreateContext already tags the application binary :wink:

None of what you’re talking about is how anything in OpenGL works. Please don’t confuse people who are struggling to understand how these things actually work with ideas about how you want them to work.

Yes and no (meaning I deserve that scolding for posting after drinking some beers).
But: if a context is created without giving any information about the version wanted, how is it decided what context to create? If something similar happens in the development from 4.x to some version Y as happened in the development from 1.0 to 3.2, one cannot expect a version-Y core context to provide the functionality that is core today. Or were those flags really designed for a one-time purpose, without the intention of giving them semantics that stay in place?
And: how is that related to the small incident of CreateContext returning a context for which neither the core nor the compat bit is set? It seems hard to decide that apps built against earlier GL versions using CreateContext should no longer work because the default flag is CORE. Returning 0 for the profile mask would mean it’s neither a core nor a compat context. That would result in 0 entry points, if you ask me.

how is it decided what context to create?

Have you ever heard of default values and/or default behavior?

For instance:

How is that related to the small incident of CreateContext returning a context for which neither the core nor the compat bit is set?

First of all, neither glXCreateContext nor wglCreateContext takes an attrib list. If it doesn’t take an attrib list, it cannot take a bit setting the compat or core profile. [wgl|glx]CreateContextAttribsARB is the appropriate API. Second, setting neither bit is indeed an error - but that means explicitly providing the CONTEXT_PROFILE_MASK name with a value that contains neither CONTEXT_CORE_PROFILE_BIT nor CONTEXT_COMPATIBILITY_PROFILE_BIT. If you don’t specify a <name, value> pair for CONTEXT_PROFILE_MASK at all, the default value, CONTEXT_CORE_PROFILE_BIT, is used.

Just read the corresponding extension specs …
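A minimal sketch of the Windows path, assuming the wglCreateContextAttribsARB pointer has already been fetched through a temporary legacy context (the GLX path is analogous):

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  /* tokens and PFNWGLCREATECONTEXTATTRIBSARBPROC */

HGLRC createContext(HDC hdc,
                    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB)
{
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        /* omit this <name, value> pair entirely and the default,
           WGL_CONTEXT_CORE_PROFILE_BIT_ARB, applies instead */
        WGL_CONTEXT_PROFILE_MASK_ARB,
        WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        0   /* the attrib list is zero-terminated */
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}
```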

I guess that’s because I only read it partially, or this really is a contradiction:
The default value for the version is 1.0, which requires compatibility.
The default for the profile is core.
wglCreateContext = wglCreateContextAttribsARB(…, 0, 0).
The sentence

The default value for WGL_CONTEXT_PROFILE_MASK_ARB is WGL_CONTEXT_CORE_PROFILE_BIT_ARB.

could have been extended with “unless required otherwise by the version number requested” or something like that. Of course this makes no sense…

The default value for the version is 1.0, which requires compatibility.
The default for the profile is core.

That’s where you’ve confused yourself.

There is no such thing as “compatibility 1.0”. The core/compatibility distinction did not exist until 3.2. That’s why the CONTEXT_COMPATIBILITY_PROFILE_BIT means nothing on any version request before 3.2; the extension specs say the profile mask is simply ignored for versions below 3.2.

The core and compatibility profiles refer to OpenGL specification documents. There is a 3.2 core specification and a 3.2 compatibility specification. There is a 1.1 core specification, but there is no 1.1 compatibility specification.

You keep trying to think of “compatibility” as “fixed-function” or whatever. It’s not.

Might that be the actual reason wglCreateContext returns a context with neither profile bit set (which is nonetheless a compatibility context)?

EDIT: My bad - the GLenum for the query is likely unknown to the context returned, and hence it gives 0…

EDIT2: However, glGetError() also returns 0 - just checked this.
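For reference, the query under discussion might look like the sketch below; the token values are taken from the 3.2 spec and guarded in case the header predates them:

```c
#include <GL/gl.h>   /* on Windows, include <windows.h> first */

#ifndef GL_CONTEXT_PROFILE_MASK
#define GL_CONTEXT_PROFILE_MASK              0x9126
#define GL_CONTEXT_CORE_PROFILE_BIT          0x00000001
#define GL_CONTEXT_COMPATIBILITY_PROFILE_BIT 0x00000002
#endif

/* On a context that predates 3.2 the pname is unknown, so the output
 * parameter is left untouched; initializing it to 0 is what makes the
 * "query returns 0" observation above come out. */
int isCompatibilityContext(void)
{
    GLint mask = 0;
    glGetIntegerv(GL_CONTEXT_PROFILE_MASK, &mask);
    return (mask & GL_CONTEXT_COMPATIBILITY_PROFILE_BIT) != 0;
}
```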
