GLSL Shader Possibilities/Limitations

Hello,

I’m working with shaders and have some questions. I would really appreciate it if someone could take the time to help me out, since I can’t seem to find any information on this online.

  • Does attaching any type of shader remove the fog from an object that uses it? If I have a scene using glFog and one object using a simple shader, that object seems to lose the fog effect and become totally visible even when other objects aren’t. Is there any solution for this? By the way, this shader would just be filling the object with a simple color, for example.

  • If I want multiple shaders for one object, how can I accomplish this without joining all the effects into one and adapting the code accordingly? Is it even possible?

  • If I’m writing a light-based shader, is there any way to access the light sources and their exact position/intensity/fading properties without having to pass them as parameters?

Regards,
Jamiro

I can’t seem to find any information on this online.

You cannot get answers to these questions online because the answers are kind of obvious once you understand what shaders do. I would advise you to focus less on trying to make shaders fit your idea of them and focus more on understanding what they do.

Does attaching any type of shader remove the fog from an object that uses it? If I have a scene using glFog and one object using a simple shader, that object seems to lose the fog effect and become totally visible even when other objects aren’t.

Broadly speaking, you should not be trying to get fixed-function operations like glFog to work with shaders. You can, but you really shouldn’t.

Fog is part of fixed-function per-fragment processing, so if a fragment shader is active, you lose all fixed-function per-fragment processing. However, the fixed-function fog computations are ultimately based on values computed during vertex processing, and if a vertex shader is active, it overrides all fixed-function vertex processing. So you also lose your fog coordinates.

You have two options, but both are different spellings of the same thing. You can read the OpenGL compatibility specification, see how it computes fog, and implement it yourself using the compatibility-profile built-ins. That is, the vertex shader gets a gl_FogCoord input (the per-vertex fog coordinate supplied by the application, or you can derive one from the eye-space position). You would use that together with the gl_Fog built-in uniform struct to do the same per-vertex fog computations that the fixed-function pipeline would have done, writing the result to gl_FogFragCoord. In your fragment shader, you take gl_FogFragCoord and the gl_Fog parameters and do the same per-fragment fog computations that the fixed-function pipeline would have done.
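For illustration, here is a minimal sketch of that approach (GLSL 1.20, compatibility profile). It assumes GL_LINEAR fog mode, since gl_Fog carries no mode information, and derives the fog coordinate from the eye-space depth rather than from gl_FogCoord:

```glsl
#version 120
// Vertex shader: replicate the fixed-function transform and colour, and
// compute a per-vertex fog coordinate from the eye-space depth.
void main()
{
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;

    vec4 eyePos     = gl_ModelViewMatrix * gl_Vertex;
    gl_FogFragCoord = abs(eyePos.z);   // or use gl_FogCoord if the app supplies it
}
```

and the matching fragment shader:

```glsl
#version 120
// Fragment shader: apply linear fog using the gl_Fog built-in uniforms.
// gl_Fog.scale is precomputed by GL as 1.0 / (gl_Fog.end - gl_Fog.start).
void main()
{
    float fogFactor = clamp((gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale,
                            0.0, 1.0);
    gl_FragColor = mix(gl_Fog.color, gl_Color, fogFactor);
}
```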

Alternatively, you can implement it entirely yourself. That is, work out a scheme for implementing fog, and then have your shader compute the fog value and incorporate that into your lighting model.
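For example, here is a minimal sketch of a self-contained exponential fog term. The uniform names (fogColor, fogDensity) are illustrative; the application would have to set them itself via glUniform*:

```glsl
#version 120
// Fragment shader: exponential fog driven entirely by user-defined uniforms,
// independent of glFog and the gl_Fog built-ins. Names are illustrative.
uniform vec4  fogColor;
uniform float fogDensity;
varying float eyeDistance;   // assumed to be written by the vertex shader
                             // (e.g. length of the eye-space position)

void main()
{
    float fogFactor = clamp(exp(-fogDensity * eyeDistance), 0.0, 1.0);
    gl_FragColor = mix(fogColor, gl_Color, fogFactor);
}
```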

Either way, you’re doing the work yourself. It’s simply a matter of whether you’re trying to work with the OpenGL model or freeing yourself from what OpenGL wants and fulfilling your needs.

If I want multiple shaders for one object, how can I accomplish this without joining all the effects into one and adapting the code accordingly?

Broadly speaking, you don’t. You might be able to build some kind of system based on SPIR-V or something, but shaders are not things you just paint on an object.

If I’m writing a light-based shader, is there any way to access the light sources and their exact position/intensity/fading properties without having to pass them as parameters?

… huh? A shader cannot access anything without passing it as a “parameter”. Shaders execute in a very limited environment; they only have access to the information you give them.
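In other words, the application has to upload the light data into uniforms that the shader declares (for example with glUniform*). A minimal sketch of a single point light, where all of the names (lightPosition, lightColor, lightAttenuation, eyePos, eyeNormal) are illustrative rather than built-ins:

```glsl
#version 120
// Fragment shader: simple point light whose parameters are supplied by the
// application as uniforms. Nothing here is filled in automatically by GL.
uniform vec3 lightPosition;     // light position in eye space
uniform vec3 lightColor;        // light intensity/colour
uniform vec3 lightAttenuation;  // constant, linear, quadratic factors

varying vec3 eyePos;            // eye-space position from the vertex shader
varying vec3 eyeNormal;         // eye-space normal from the vertex shader

void main()
{
    vec3  toLight = lightPosition - eyePos;
    float dist    = length(toLight);
    float atten   = 1.0 / (lightAttenuation.x +
                           lightAttenuation.y * dist +
                           lightAttenuation.z * dist * dist);
    float diffuse = max(dot(normalize(eyeNormal), toLight / dist), 0.0);
    gl_FragColor  = vec4(gl_Color.rgb * lightColor * diffuse * atten, gl_Color.a);
}
```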

As Alfonse says, a shader stage replaces the fixed-function processing for that stage. So if you have a vertex shader, you’d need to calculate gl_FogFragCoord yourself (e.g. based upon gl_FogCoord or the eye-space vertex position). If you have a fragment shader, you need to modify the colour yourself (e.g. based upon gl_FogFragCoord and gl_Fog).

You need to write a shader which combines the effects. There’s no mechanism for automatically combining multiple shaders for a single stage.
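To illustrate, two hypothetical effects (a desaturation pass and a colour tint) have to be folded by hand into a single main(); the uniform names here are purely illustrative:

```glsl
#version 120
// Fragment shader: two effects merged by hand into one shader.
// The application would set both uniforms itself.
uniform float desaturation;   // 0.0 = full colour, 1.0 = greyscale
uniform vec4  tint;

void main()
{
    vec4  c    = gl_Color;
    float grey = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    c.rgb = mix(c.rgb, vec3(grey), desaturation);   // effect 1: desaturate
    gl_FragColor = c * tint;                        // effect 2: tint
}
```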

In the compatibility profile, you can obtain the lighting parameters set by glLight() and glLightModel() via uniform variables. These are listed in chapter 7 of the GLSL specification.
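For example, a fragment shader compiled against the compatibility profile can read the first light’s parameters directly through the gl_LightSource and gl_FrontMaterial built-in uniforms. A minimal sketch of a single diffuse term, assuming an eye-space normal and position are passed in from the vertex shader:

```glsl
#version 120
// Fragment shader: diffuse term using the compatibility-profile lighting
// uniforms filled in by glLight()/glMaterial(). Assumes a positional light
// (position.w == 1) and varyings written by the vertex shader.
varying vec3 eyeNormal;
varying vec3 eyePos;

void main()
{
    vec3  toLight = normalize(gl_LightSource[0].position.xyz - eyePos);
    float diffuse = max(dot(normalize(eyeNormal), toLight), 0.0);
    gl_FragColor  = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * diffuse;
}
```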

Note that it isn’t possible for a shader to emulate fixed-function processing exactly, as not all of the relevant settings are available via compatibility uniforms. E.g. most of the glFog() parameters are available via gl_Fog, but GL_FOG_MODE isn’t.

For a broad definition of “any shader”, the old ARB assembly shaders allowed fixed pipeline fog by using one of the “option ARB_fog_*” options.

Thanks everyone,

Some of those answers were what I expected; I suspected that I would need to write a fog shader myself.

Alternatively, you can implement it entirely yourself. That is, work out a scheme for implementing fog, and then have your shader compute the fog value and incorporate that into your lighting model.

That’s where I’m going with it; if I need to implement it, I’d better do it from scratch with a full range of possibilities.

Thanks, I’ll give it a look.

Thanks for the help; it was pretty much what I expected, and more or less the same as what I ended up learning today while reading a bit more.

Regards,
Jamiro
