Creating a shader with fog

Hello guys!

I’m quite new to OpenGL and I’m trying to visualize a mobile made of spheres, squares, triangles, and so on. I’ve also added some light sources and movement animations, and the camera can be rotated.
But now I’m stuck on adding some fog.

I’ve figured out that the fixed-function GL_FOG doesn’t work when you use your own vertex and fragment shaders.

So now I want to ask whether you can help me create such shaders.

BTW here are my shaders:

Fragment shader:

#version 330 core

in vec4 vColor;
in vec2 UVcoords;
out vec4 FragColor;

uniform sampler2D myTexture1Sampler;

void main()
{
    vec4 Color = vColor; // was left uninitialized before

    // GL_EXP2-style fog: exp(-(density * z)^2), computed via exp2.
    const float LOG2E = 1.442695; // log2(e)

    // gl_Fog and gl_FogFragCoord are fixed-function built-ins that no
    // longer exist in #version 330 core, so this is where it breaks:
    float fog = exp2( -gl_Fog.density * gl_Fog.density *
                      gl_FogFragCoord * gl_FogFragCoord * LOG2E );
    fog = clamp( fog, 0.0, 1.0 );
    //fog = 0.0; // Uncomment this to prove that fog _can_ work

    vec3 color = mix( vec3( gl_Fog.color ), Color.rgb, fog );

    // texture() replaces the deprecated texture2D(), and the declared
    // FragColor output replaces the removed gl_FragColor built-in.
    FragColor = texture( myTexture1Sampler, UVcoords ) * vec4( color, Color.a );
}

Vertex shader:

#version 330 core

uniform mat4 ProjectionMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ModelMatrix;
uniform vec3 LightPosition1;
uniform vec3 LightPosition2;
uniform vec3 LightColor1;
uniform vec3 LightColor2;
uniform float DiffuseFactor;
uniform float SpecularFactor;
uniform float AmbientFactor;

layout (location = 0) in vec3 Position; // location must match the index passed to glVertexAttribPointer
layout (location = 1) in vec3 Color;
layout (location = 2) in vec3 Normal;

//texturing
layout (location = 3) in vec2 UV;

out vec2 UVcoords;

out vec4 vColor;

void main()
{
    // Use the upper 3x3 of the model-view matrix: a normal is a direction,
    // so it must not pick up the translation part (which a w of 1.0 would add).
    mat3 normalMatrix = transpose(inverse(mat3(ViewMatrix * ModelMatrix)));
    vec3 normal = normalize(normalMatrix * normalize(Normal));

    vec3 lightPosition1 = (ViewMatrix * vec4(LightPosition1, 1.0)).xyz;
    vec3 lightPosition2 = (ViewMatrix * vec4(LightPosition2, 1.0)).xyz;

    // Vertex position in view (eye) space, not model space.
    vec4 vertexPositionViewSpace = ViewMatrix * ModelMatrix * vec4(Position, 1.0);
    // In eye space the camera sits at the origin, so the normalized
    // negative position is the direction toward the viewer.
    vec3 viewVector = normalize(-vertexPositionViewSpace.xyz);

    vec3 lightVector1 = normalize(lightPosition1 - vertexPositionViewSpace.xyz);
    vec3 lightVector2 = normalize(lightPosition2 - vertexPositionViewSpace.xyz);
    vec3 halfVector1 = normalize(lightVector1 + viewVector);
    vec3 halfVector2 = normalize(lightVector2 + viewVector);

    vec3 diffusePart = clamp(dot(normal, lightVector1), 0.0, 1.0) * LightColor1
                     + clamp(dot(normal, lightVector2), 0.0, 1.0) * LightColor2;
    vec3 specularPart = pow(clamp(dot(normal, halfVector1), 0.0, 1.0), 127.0) * LightColor1
                      + pow(clamp(dot(normal, halfVector2), 0.0, 1.0), 127.0) * LightColor2;
    vec3 ambientPart = Color * AmbientFactor;
    diffusePart *= DiffuseFactor;
    specularPart *= SpecularFactor;

    vColor = vec4(Color * diffusePart + specularPart + ambientPart, 1.0);
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(Position, 1.0);
    UVcoords = UV;
}

Thank you in advance!

I don’t remember the exact algorithm off the top of my head, but it’s pretty straightforward. You need the distance between the camera and the pixel. Anything that is not drawn, like a blank sky background, will not receive fog; you would normally draw your skybox without fog too.

If I recall correctly there is a begin distance and an end distance. Anything closer than the begin distance receives 0% fog, anything further than the end distance receives 100% fog, and anything in between gets a percentage based on where its distance falls between the begin and end distances, found by linear interpolation (LERP).
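In GLSL that interpolation boils down to one clamped expression. A minimal sketch, assuming hypothetical fogStart and fogEnd uniforms for the begin and end distances:

uniform float fogStart; // assumed uniform: closer than this gets 0% fog
uniform float fogEnd;   // assumed uniform: further than this gets 100% fog

// Blend weight of the scene color: 1.0 means 0% fog, 0.0 means 100% fog,
// linearly interpolated (LERP) in between. Same convention as the
// mix(fogColor, sceneColor, fog) call in the shader above.
float fogFactor(float dist)
{
    return clamp((fogEnd - dist) / (fogEnd - fogStart), 0.0, 1.0);
}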

The fog itself is just a color assigned to the pixel, so that part belongs in the fragment shader, and it needs the pixel position in world space. You can get that by saving the world-space vertex position (the model transform only, before the view and projection matrices are applied) and passing it straight on to the fragment shader. You still compute the usual clip-space position for gl_Position; you just also pass the world position along as a separate output variable.
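A minimal vertex-shader sketch of that idea (untested; it reuses the matrix and attribute names from your shader, and worldPosition is just a placeholder name):

#version 330 core

uniform mat4 ProjectionMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ModelMatrix;

layout (location = 0) in vec3 Position;

out vec3 worldPosition; // world-space position, handed on to the fragment shader

void main()
{
    // Save the world-space position (model transform only)...
    worldPosition = (ModelMatrix * vec4(Position, 1.0)).xyz;

    // ...while still outputting the fully transformed clip-space position.
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(Position, 1.0);
}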

The fragment shader doesn’t receive the vertices directly: the rasterizer interpolates each output variable across the three vertices of the triangle and hands every fragment an interpolated value. So it will automatically give you the world position of each pixel that way.

You could probably extract the camera position from the view matrix you are passing in, although it may be easier to just take it from your C++ code and pass it in as a uniform. Then use vector subtraction to get the distance between the pixel and the camera: subtracting one position from the other gives a vector pointing from one to the other, and its length is the distance between them. From there it’s just a matter of clamping anything closer than the begin distance to 0% fog and blending in the fog color by the interpolated percentage up to the end distance, as in the sketch below.
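A fragment-shader sketch of those last steps, again with placeholder names (cameraPosition, fogColor, fogStart, and fogEnd are assumed uniforms you would set from the application; the texture and color inputs reuse the names from your shaders):

#version 330 core

in vec3 worldPosition; // interpolated world-space position from the vertex shader
in vec4 vColor;
in vec2 UVcoords;
out vec4 FragColor;

uniform sampler2D myTexture1Sampler;
uniform vec3 cameraPosition; // assumed uniform, set from the C++ side each frame
uniform vec3 fogColor;       // assumed fog uniforms replacing the old gl_Fog state
uniform float fogStart;
uniform float fogEnd;

void main()
{
    // Vector subtraction: the length of (pixel - camera) is their distance.
    float dist = length(worldPosition - cameraPosition);

    // Linear fog factor: 1.0 at fogStart and closer, 0.0 at fogEnd and beyond.
    float fogFactor = clamp((fogEnd - dist) / (fogEnd - fogStart), 0.0, 1.0);

    // Blend the lit, textured color toward the fog color.
    vec4 litColor = texture(myTexture1Sampler, UVcoords) * vColor;
    FragColor = vec4(mix(fogColor, litColor.rgb, fogFactor), litColor.a);
}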

Of course, make sure the rest of your shader is working as expected first, so that you know this is the only problem you are dealing with.