Getting Worldspace Coordinates from Stored Coordinates

Hello there. I’m trying to retrieve the world-space coordinates of fragments by storing clip-space coordinates packed into a color buffer.

I’m using this for a hybrid of deferred lighting and deferred shading which I thought of on the toilet.

The problem is, of course, loc_world in the pointlight fragment shader (bottom of post).

I’m getting obviously incorrect values displayed on the screen, which change depending on where I position the camera, meaning they are not world-space coordinates but rather some weird distortion of camera-space coordinates.

Noteworthy suspicions (Cannot confirm nor deny):

  • glm::inverse does not actually work properly on a mat4 projection * view matrix.
  • the SceneCamera->GetViewProjection (which returns projection * glm::lookAt(pos, pos + forward, up)) is not invertible, or is in some way not behaving as expected when inverted
  • the ScreenPosition x and y in my screenquad function are (somehow) not the right ones to be part of the clipspace coordinates
  • I’m unpacking something wrong
  • I’m misusing matrix multiplication
  • I’m being stupid and ignorant in some way

C++ Code of draw function (NOTE: Ignore everything about the skybox, it’s unimportant, you only care about the camera matrices, initial opaque pass, and PointLight stuff):


void GkScene::drawPipeline(){
	/*
	
	FUNCTION STRUCTURE:
	* Initial Opaque Pass
		* SkyboxShader pass
		* InitialOpaquePassShader pass
                * PointlightShader pass
	* Showtex to screen (one of the pointlightshader pass color buffers is shown)
	*/
	
	
	if (!SceneCamera || !InitialOpaqueShader || !ShowTextureShader || !InitialOpaqueUniforms || !PointLightShader){ //if one is not good
		return; //gtfo
	}
	
	
	
	
	
	
	//Setup Camera Stuff.
	InitialOpaqueCameraMatrix = SceneCamera->GetViewProjection();
	InitialOpaqueCameraViewMatrix = SceneCamera->GetViewMatrix();
	InitialOpaqueCameraProjectionMatrix = SceneCamera->GetProjection();
	InversedInitialOpaqueCameraMatrix = glm::inverse(SceneCamera->GetViewProjection());
	
	//Debug glm::inverse
	// std::cout << "\nCamera matrix Not Inverted:";
	// std::cout << "\n " << InitialOpaqueCameraMatrix[0][0] << " | " << InitialOpaqueCameraMatrix[0][1] << " | " << InitialOpaqueCameraMatrix[0][2] << " | " << InitialOpaqueCameraMatrix[0][3]
	// 	<< "\n " << InitialOpaqueCameraMatrix[1][0] << " | " << InitialOpaqueCameraMatrix[1][1] << " | " << InitialOpaqueCameraMatrix[1][2] << " | " << InitialOpaqueCameraMatrix[1][3]
	// 	<< "\n " << InitialOpaqueCameraMatrix[2][0] << " | " << InitialOpaqueCameraMatrix[2][1] << " | " << InitialOpaqueCameraMatrix[2][2] << " | " << InitialOpaqueCameraMatrix[2][3]
	// 	<< "\n " << InitialOpaqueCameraMatrix[3][0] << " | " << InitialOpaqueCameraMatrix[3][1] << " | " << InitialOpaqueCameraMatrix[3][2] << " | " << InitialOpaqueCameraMatrix[3][3];
	
	// std::cout << "\nCamera matrix Inverted:";
	// std::cout << "\n " << InversedInitialOpaqueCameraMatrix[0][0] << " | " << InversedInitialOpaqueCameraMatrix[0][1] << " | " << InversedInitialOpaqueCameraMatrix[0][2] << " | " << InversedInitialOpaqueCameraMatrix[0][3]
	// 	<< "\n " << InversedInitialOpaqueCameraMatrix[1][0] << " | " << InversedInitialOpaqueCameraMatrix[1][1] << " | " << InversedInitialOpaqueCameraMatrix[1][2] << " | " << InversedInitialOpaqueCameraMatrix[1][3]
	// 	<< "\n " << InversedInitialOpaqueCameraMatrix[2][0] << " | " << InversedInitialOpaqueCameraMatrix[2][1] << " | " << InversedInitialOpaqueCameraMatrix[2][2] << " | " << InversedInitialOpaqueCameraMatrix[2][3]
	// 	<< "\n " << InversedInitialOpaqueCameraMatrix[3][0] << " | " << InversedInitialOpaqueCameraMatrix[3][1] << " | " << InversedInitialOpaqueCameraMatrix[3][2] << " | " << InversedInitialOpaqueCameraMatrix[3][3];
	/*
	
	BEGIN INITIAL OPAQUE PASS
	
	*/
	FboArray[OPAQUE_INITIAL]->BindRenderTarget(); //Bind this FBO as the Render Target. We render the skybox here too...
	FBO::clearTexture(0.0,0.0,0.2,0.0);//Clears the Opaque Initial target. The screen isn't cleared for a long time.
	
	/*
	
	INITIAL OPAQUE PASS- SKYBOX
	
	
	*/
	//This draws the skybox in the background
	if(SkyBoxCubemap && SkyboxShader)
	{
		SkyboxShader->Bind();
		if (!haveInitializedSkyboxUniforms)
		{
			SkyboxUniforms[SKYBOX_WORLD2CAMERA] = SkyboxShader->GetUniformLocation("World2Camera");
			SkyboxUniforms[SKYBOX_MODEL2WORLD] = SkyboxShader->GetUniformLocation("Model2World");
			SkyboxUniforms[SKYBOX_VIEWMATRIX] = SkyboxShader->GetUniformLocation("viewMatrix");
			SkyboxUniforms[SKYBOX_PROJECTION] = SkyboxShader->GetUniformLocation("projection");
			SkyboxUniforms[SKYBOX_WORLDAROUNDME] = SkyboxShader->GetUniformLocation("worldaroundme");
			haveInitializedSkyboxUniforms = true; //We've done it boys.
		}
			//Have to bind before we set stuff!
		
		//Camera stuff.
		//We need to do an if(hasntrun) for this part of the code and get the locations and save them permanently instead of this bullshit. 
		glUniformMatrix4fv(SkyboxUniforms[SKYBOX_WORLD2CAMERA], 1, GL_FALSE, &InitialOpaqueCameraMatrix[0][0]);
		glUniformMatrix4fv(SkyboxUniforms[SKYBOX_VIEWMATRIX], 1, GL_FALSE, &InitialOpaqueCameraViewMatrix[0][0]);
		glUniformMatrix4fv(SkyboxUniforms[SKYBOX_PROJECTION], 1, GL_FALSE, &InitialOpaqueCameraProjectionMatrix[0][0]);
		skybox_transform.reTransform(glm::vec3(0,0,0),glm::vec3(0,0,0),glm::vec3(SceneCamera->jafar * 0.5,SceneCamera->jafar * 0.5,SceneCamera->jafar * 0.5));
	
		
	
	
		Texture::SetActiveUnit(1);
		SkyBoxCubemap->Bind(1); //Bind to 1
		glDepthMask(GL_FALSE); //We want it to appear infinitely far away
		glUniform1i(SkyboxUniforms[SKYBOX_WORLDAROUNDME], 1);
		glEnableVertexAttribArray(0); //Position
		//glEnableVertexAttribArray(1); //Texture
		glEnableVertexAttribArray(2); //Normal
			Skybox_Transitional_Transform = skybox_transform.GetModel();
			glUniformMatrix4fv(SkyboxUniforms[SKYBOX_MODEL2WORLD], 1, GL_FALSE, &Skybox_Transitional_Transform[0][0]);
			m_skybox_Mesh->DrawGeneric();
		glDepthMask(GL_TRUE);//We would like depth testing to be done again.
	
	}
	
	
	/*
	INITIAL OPAQUE PASS- INIT OPAQUE SHADER
	*/
	InitialOpaqueShader->Bind(); //Bind the shader!
	
	
	//Runs whenever the window is resized or a shader is reassigned, so that we only need to get uniform locations once. It's not optimized fully yet...
	if(HasntRunYet){
			screensize.x = width;
			screensize.y = height;

		HasntRunYet = false;
		InitialOpaqueUniforms[INITOPAQUE_DIFFUSE] = InitialOpaqueShader->GetUniformLocation("diffuse"); //Literal texture unit
		InitialOpaqueUniforms[INITOPAQUE_WORLD2CAMERA] = InitialOpaqueShader->GetUniformLocation("World2Camera"); //World --> NDC
		InitialOpaqueUniforms[INITOPAQUE_MODEL2WORLD] = InitialOpaqueShader->GetUniformLocation("Model2World"); //Model --> World
		InitialOpaqueUniforms[INITOPAQUE_AMBIENT] = InitialOpaqueShader->GetUniformLocation("ambient"); //Ambient component of the material
		InitialOpaqueUniforms[INITOPAQUE_SPECREFLECTIVITY] = InitialOpaqueShader->GetUniformLocation("specreflectivity"); //Specular reflectivity
		InitialOpaqueUniforms[INITOPAQUE_SPECDAMP] = InitialOpaqueShader->GetUniformLocation("specdamp"); //Specular dampening
		InitialOpaqueUniforms[INITOPAQUE_EMISSIVITY] = InitialOpaqueShader->GetUniformLocation("emissivity"); //emissivity... currently unused
		InitialOpaqueUniforms[INITOPAQUE_DIFFUSIVITY] = InitialOpaqueShader->GetUniformLocation("diffusivity"); //Diffusivity... currently unused
		InitialOpaqueUniforms[INITOPAQUE_RENDERFLAGS] = InitialOpaqueShader->GetUniformLocation("renderflags");
		InitialOpaqueUniforms[INITOPAQUE_WORLDAROUNDME] = InitialOpaqueShader->GetUniformLocation("worldaroundme");
		InitialOpaqueUniforms[INITOPAQUE_ENABLE_CUBEMAP_REFLECTIONS] = InitialOpaqueShader->GetUniformLocation("enableCubeMapReflections");
		InitialOpaqueUniforms[INITOPAQUE_CAMERAPOS] = InitialOpaqueShader->GetUniformLocation("CameraPos");
		InitialOpaqueUniforms[INITOPAQUE_JANEAR] = InitialOpaqueShader->GetUniformLocation("janear");
		InitialOpaqueUniforms[INITOPAQUE_JAFAR] = InitialOpaqueShader->GetUniformLocation("jafar");
		InitialOpaqueShader->setUniform1f("windowsize_x", (width
		* Initial_Opaque_Pass_Approximation_Factor
		));
		InitialOpaqueShader->setUniform1f("windowsize_y", (height
		* Initial_Opaque_Pass_Approximation_Factor
		));
	}
	glUniform3f(InitialOpaqueUniforms[INITOPAQUE_CAMERAPOS], SceneCamera->pos.x, SceneCamera->pos.y, SceneCamera->pos.z);
	glUniform1f(InitialOpaqueUniforms[INITOPAQUE_JAFAR], SceneCamera->jafar); //Needed for depth.
	glUniform1f(InitialOpaqueUniforms[INITOPAQUE_JANEAR], SceneCamera->janear); //Needed for depth.
	glUniform1f(InitialOpaqueUniforms[INITOPAQUE_ENABLE_CUBEMAP_REFLECTIONS], 1.0f);
	
	
	GLenum communism;
	Texture::SetActiveUnit(0);
	//InitialOpaqueShader->setUniform1i("diffuse", 0);//Texture unit 0 is reserved for the textures of objects.
	glUniform1i(InitialOpaqueUniforms[INITOPAQUE_DIFFUSE],0);
	
	if(SkyBoxCubemap)
	{
		Texture::SetActiveUnit(1);
		SkyBoxCubemap->Bind(1); //Bind to 1
		//std::cout << "\n Successfully bound our cubemap to 1";
		glUniform1i(InitialOpaqueUniforms[INITOPAQUE_WORLDAROUNDME], 1);//Cubemap unit 1 is reserved for the cubemap representing the world around the object, for reflections.
	}
	
	/* Error Check
	
	// communism = glGetError(); //Ensure there are no errors listed before we start.
	// if (communism != GL_NO_ERROR) //if communism has made an error (which is pretty typical)
	// {
		// std::cout << "\nOpenGL reports an ERROR!";
		// if (communism == GL_INVALID_ENUM)
			// std::cout << "\nInvalid enum.";
		// if (communism == GL_INVALID_OPERATION)
			// std::cout << "\nInvalid operation.";
		// if (communism == GL_INVALID_FRAMEBUFFER_OPERATION)
			// std::cout << "\nInvalid Framebuffer Operation.";
		// if (communism == GL_OUT_OF_MEMORY)
		// {
			// std::cout << "\nOut of memory. You've really done it now. I'm so angry, i'm going to close the program. ARE YOU HAPPY NOW, DAVE?!?!";
			// std::abort();
		// }
	// }
	*/
	glUniformMatrix4fv(InitialOpaqueUniforms[INITOPAQUE_WORLD2CAMERA], 1, GL_FALSE, &InitialOpaqueCameraMatrix[0][0]);
	InitialOpaqueShader->setUniformMatrix4fv("viewMatrix", 1, GL_FALSE, &InitialOpaqueCameraViewMatrix[0][0]);

	//Now that we have the shader stuff set up, let's get to rendering!
	glEnableVertexAttribArray(0); //Position
	glEnableVertexAttribArray(1); //Texture
	glEnableVertexAttribArray(2); //Normal
	glEnableVertexAttribArray(3); //Color
		if (Meshes.size() > 0) //if there are any
			for (size_t i = 0; i < Meshes.size(); i++) //for all of them
				if (Meshes[i]) //don't call methods on nullptrs
				{
					unsigned int flagerinos = Meshes[i]->getFlags(); //Set flags
					glUniform1ui(InitialOpaqueUniforms[INITOPAQUE_RENDERFLAGS], flagerinos); //Set flags on GPU
					Meshes[i]->DrawInstancesPhong(
						InitialOpaqueUniforms[INITOPAQUE_MODEL2WORLD], 		//Model->World transformation matrix
						InitialOpaqueUniforms[INITOPAQUE_AMBIENT], 		//Ambient material component
						InitialOpaqueUniforms[INITOPAQUE_SPECREFLECTIVITY], 	//Specular reflective material component
						InitialOpaqueUniforms[INITOPAQUE_SPECDAMP], 		//Specular dampening material component
						InitialOpaqueUniforms[INITOPAQUE_DIFFUSIVITY],   //Diffusivity. Reaction to diffuse light.
						InitialOpaqueUniforms[INITOPAQUE_EMISSIVITY], 		//Emissivity material component
						InitialOpaqueUniforms[INITOPAQUE_ENABLE_CUBEMAP_REFLECTIONS],
						false		//Yes, we're using textures.
					);
				}
				
		
	//glDisableVertexAttribArray(0); //Position. We use this for the screenquad to the screen, so we shouldn't disable it.
	glDisableVertexAttribArray(1); //Texture
	glDisableVertexAttribArray(2); //Normal
	glDisableVertexAttribArray(3); //Color

	/*
	* 
	* SHADOWLESS LIGHTS
	* 
	*/
	
	//Setup for lights
	glEnable(GL_BLEND);
	glBlendEquation(GL_FUNC_ADD);
	glBlendFunc(GL_ONE, GL_ONE);
	glDisable(GL_DEPTH_TEST); //Disable depth testing
	FboArray[LIGHT_ACCUMULATOR]->BindRenderTarget();
	FBO::clearTexture(0.0,0.0,0.0,0.0); //Clear to black
	//POINT LIGHTS
		//Prep
		PointLightShader->Bind();
		//Avoid glGetUniformLocation every single frame
		if (!haveInitializedPointlightUniforms){
			PointlightUniforms[POINTLIGHT_TEX1] = PointLightShader->GetUniformLocation("tex1");
			PointlightUniforms[POINTLIGHT_TEX2] = PointLightShader->GetUniformLocation("tex2");
			PointlightUniforms[POINTLIGHT_TEX3] = PointLightShader->GetUniformLocation("tex3");
			PointlightUniforms[POINTLIGHT_POS] = PointLightShader->GetUniformLocation("position");
			PointlightUniforms[POINTLIGHT_COLOR] = PointLightShader->GetUniformLocation("color");
			PointlightUniforms[POINTLIGHT_RANGE] = PointLightShader->GetUniformLocation("range");
			PointlightUniforms[POINTLIGHT_DROPOFF] = PointLightShader->GetUniformLocation("range_dropoff");
			PointlightUniforms[POINTLIGHT_INVERSE_MATRIX] = PointLightShader->GetUniformLocation("invCamMatrix");
			/*
				POINTLIGHT_JAFAR,
				POINTLIGHT_JANEAR,
				POINTLIGHT_NOTMYWORLD2CAMERA
			*/
			PointlightUniforms[POINTLIGHT_JAFAR] = PointLightShader->GetUniformLocation("jafar");
			PointlightUniforms[POINTLIGHT_JANEAR] = PointLightShader->GetUniformLocation("janear");
			PointlightUniforms[POINTLIGHT_NOTMYWORLD2CAMERA] = PointLightShader->GetUniformLocation("NotMyWorld2Camera"); //Stores the Initial Opaque world2camera matrix
			haveInitializedPointlightUniforms = true;
		}
		// PointLightShader->setUniform1i("tex1", 0);
		// PointLightShader->setUniform1i("tex2", 1);
		// PointLightShader->setUniform1i("tex3", 2);
		glUniform1i(PointlightUniforms[POINTLIGHT_TEX1],0);
		glUniform1i(PointlightUniforms[POINTLIGHT_TEX2],1);
		glUniform1i(PointlightUniforms[POINTLIGHT_TEX3],2);
		
		//Send in the initial opaque pass buffers!
		FboArray[OPAQUE_INITIAL]->BindasTexture(0,0);
		FboArray[OPAQUE_INITIAL]->BindasTexture(1,1);
		FboArray[OPAQUE_INITIAL]->BindasTexture(2,2);
		//We will need to convert back into world space from NDC
		glUniformMatrix4fv(PointlightUniforms[POINTLIGHT_INVERSE_MATRIX], 1, GL_FALSE, &InversedInitialOpaqueCameraMatrix[0][0]);
		//Needed to get depth back.
		glUniform1f(PointlightUniforms[POINTLIGHT_JAFAR], SceneCamera->jafar); 
		glUniform1f(PointlightUniforms[POINTLIGHT_JANEAR], SceneCamera->janear); 
		glUniformMatrix4fv(PointlightUniforms[POINTLIGHT_NOTMYWORLD2CAMERA], 1, GL_FALSE, &InitialOpaqueCameraMatrix[0][0]); //I think we need this.
		//Now do the point lights
		if (SimplePointLights.size() > 0)
		{ //for all point lights
			SimplePointLights[0]->bindToUniformLight(PointlightUniforms[POINTLIGHT_POS], PointlightUniforms[POINTLIGHT_COLOR], PointlightUniforms[POINTLIGHT_RANGE], PointlightUniforms[POINTLIGHT_DROPOFF]);
			ScreenquadtoFBO(PointLightShader);
		}
	
	//DONE WITH SHADOWLESS LIGHTS
	glEnable(GL_DEPTH_TEST); //We want it back!
	glDisable(GL_BLEND);
	
	FBO::unBindRenderTarget(width, height);
	FBO::clearTexture(0.0,0.0,0.0,0.0);
	//Screenquad the results
	ShowTextureShader->Bind();
	ShowTextureShader->setUniform1i("diffuse", 0);//NOTE TO SELF: Avoid glGetUniformLocation repeats
	FboArray[LIGHT_ACCUMULATOR]->BindasTexture(0,0); //See pointlight.fs
	ScreenquadtoScreen(ShowTextureShader);
	//Beautiful, isn't it?
} //eof drawPipeline

My Initial Opaque Vertex Shader:


#version 330

//INITIAL_OPAQUE.VS

//List of flags. Some of these are no longer implemented, they caused too much of a performance problem. I do not recommend you enable them.
const uint GK_RENDER = uint(1); // Do we render it? This is perhaps the most important flag.
const uint GK_TEXTURED = uint(2); // Do we texture it? If disabled, only the color will be used. If both this and GK_COLORED are disabled, the object will be black.
const uint GK_COLORED = uint(4);// Do we color it? If disabled, only the texture will be used. If both this and GK_TEXTURED are disabled, the object will be black.
const uint GK_FLAT_NORMAL = uint(8); // Do we use flat normals? If this is set, then the normals output to the fragment shader in the initial opaque pass will use the flat layout qualifier. 
const uint GK_FLAT_COLOR = uint(16); // Do we render flat colors? the final, provoking vertex will be used as the color for the entire triangle.
const uint GK_COLOR_IS_BASE = uint(32); //Use the color as the primary. Uses texture as primary if disabled.
const uint GK_TINT = uint(64); //Does secondary add to primary?
const uint GK_DARKEN = uint(128);//Does secondary subtract from primary?
const uint GK_AVERAGE = uint(256);//Do secondary and primary just get averaged?
const uint GK_COLOR_INVERSE = uint(512);//Do we use the inverse of the color?
const uint GK_TEXTURE_INVERSE = uint(1024);//Do we use the inverse of the texture color? DOES NOT invert alpha.
const uint GK_TEXTURE_ALPHA_MULTIPLY = uint(2048);//Do we multiply the color from the texture by the alpha before doing whatever it is we're doing? I do not recommend enabling this together with alpha culling, especially if you're trying to create a texture-on-a-flat-color-model effect (think Sega Saturn models)
const uint GK_ENABLE_ALPHA_CULLING = uint(4096); //Do we use the texture alpha to cull alpha fragments
const uint GK_TEXTURE_ALPHA_REPLACE_PRIMARY_COLOR = uint(8192); //if the alpha from the texture is <0.5 then the secondary color will replace the primary color.


layout( location = 0 ) in vec3 vPosition;
layout( location = 1 ) in vec2 intexcoord;
layout( location = 2 ) in vec3 Normal;
layout( location = 3 ) in vec3 VertexColor;

out vec2 texcoord;
out vec3 normout;
flat out vec3 flatnormout;
out vec3 Smooth_Vert_Color;
out vec3 ND_out;
out vec2 window_size;
flat out vec3 Flat_Vert_Color;
out vec3 vert_to_camera;
out vec2 accompanyinginfo; //the x and y we gave to gl_Position
out float ourdepth;
out float isFlatNormal;
out float isTextured;
out float isColored;
out float isFlatColor;
out float ColorisBase;
out float AlphaReplaces;
out float isTinted;
out float isDarkened;
out float isAveraged;
out float isNotAnyofThose;


vec3 worldpos; //Position of the fragment in the world!


uniform uint renderflags;
uniform float windowsize_x;
uniform float windowsize_y;
uniform mat4 World2Camera; //the world to camera transform. I figure this is faster than calculating VP separately per vertex.
uniform mat4 Model2World; //Model->World
uniform vec3 CameraPos; //Camera position in world space
void
main()
{
	window_size = vec2(windowsize_x, windowsize_y);
	//The position of this vertex in the world coordinate system.
	worldpos = (Model2World * vec4(vPosition,1.0)).xyz;
	vec4 big_gay = World2Camera * Model2World * vec4(vPosition,1.0);
	texcoord = intexcoord; //this is faster (than what I was doing before)
	gl_Position = big_gay;
	accompanyinginfo = big_gay.xy; //Accompanies depth info. I didn't want to touch the depth code.
	ourdepth = big_gay.z; //Depth
	normout = (Model2World * vec4(Normal, 0.0)).xyz;
	flatnormout = normout;
	ND_out = big_gay.xyz;
	Smooth_Vert_Color = VertexColor;
	Flat_Vert_Color = VertexColor;
	
	vert_to_camera = CameraPos  - worldpos;
}

My Initial Opaque Fragment Shader:


#version 330
// #extension GL_ARB_conservative_depth : enable
// out vec4 fColor[2];
// INITIAL_OPAQUE.FS
// layout (depth_greater) out float gl_FragDepth;
// ^ should probably re-enable that later

//List of flags. Some of these are no longer implemented, they caused too much of a performance problem. I do not recommend you enable them.
const uint GK_RENDER = uint(1); // Do we render it? This is perhaps the most important flag.
const uint GK_TEXTURED = uint(2); // Do we texture it? If disabled, only the color will be used. If both this and GK_COLORED are disabled, the object will be black.
const uint GK_COLORED = uint(4);// Do we color it? If disabled, only the texture will be used. If both this and GK_TEXTURED are disabled, the object will be black.
const uint GK_FLAT_NORMAL = uint(8); // Do we use flat normals? If this is set, then the normals output to the fragment shader in the initial opaque pass will use the flat layout qualifier. 
const uint GK_FLAT_COLOR = uint(16); // Do we render flat colors? the final, provoking vertex will be used as the color for the entire triangle.
const uint GK_COLOR_IS_BASE = uint(32); //Use the color as the primary. Uses texture as primary if disabled.
const uint GK_TINT = uint(64); //Does secondary add to primary?
const uint GK_DARKEN = uint(128);//Does secondary subtract from primary?
const uint GK_AVERAGE = uint(256);//Do secondary and primary just get averaged?
const uint GK_COLOR_INVERSE = uint(512);//Do we use the inverse of the color?
const uint GK_TEXTURE_INVERSE = uint(1024);//Do we use the inverse of the texture color? DOES NOT invert alpha.
const uint GK_TEXTURE_ALPHA_MULTIPLY = uint(2048);//Do we multiply the color from the texture by the alpha before doing whatever it is we're doing? I do not recommend enabling this together with alpha culling, especially if you're trying to create a texture-on-a-flat-color-model effect (think Sega Saturn models)
const uint GK_ENABLE_ALPHA_CULLING = uint(4096); //Do we use the texture alpha to cull alpha fragments
const uint GK_TEXTURE_ALPHA_REPLACE_PRIMARY_COLOR = uint(8192); //if the alpha from the texture is <0.5 then the secondary color will replace the primary color.


//Utility functions
// vec4 when_eq(vec4 x, vec4 y) {
  // return 1.0 - abs(sign(x - y));
// }

// vec4 when_neq(vec4 x, vec4 y) {
  // return abs(sign(x - y));
// }

// vec4 when_gt(vec4 x, vec4 y) {
  // return max(sign(x - y), 0.0);
// }

// vec4 when_lt(vec4 x, vec4 y) {
  // return max(sign(y - x), 0.0);
// }

// vec4 when_ge(vec4 x, vec4 y) {
  // return 1.0 - when_lt(x, y);
// }

// vec4 when_le(vec4 x, vec4 y) {
  // return 1.0 - when_gt(x, y);
// }




uniform sampler2D diffuse; //This is actually the texture unit. limit 32. This one happens to be for the literal object's texture.
uniform samplerCube worldaroundme; //This is the cubemap we use for reflections.

in vec2 texcoord;
in vec3 normout;
flat in vec3 flatnormout;
flat in vec3 Flat_Vert_Color;
in vec3 Smooth_Vert_Color;
in vec3 ND_out;
in vec2 window_size;
in vec3 vert_to_camera;
in vec2 accompanyinginfo;
in float ourdepth;
//Logic from the vertex level
in float isFlatNormal;
in float isTextured;
in float isColored;
in float isFlatColor;
in float ColorisBase;
in float AlphaReplaces;
in float isTinted;
in float isDarkened;
in float isAveraged;
in float isNotAnyofThose;



uniform float ambient;
uniform float specreflectivity;
uniform float specdamp;
uniform float emissivity;
uniform float jafar;
uniform float janear;
uniform float enableCubeMapReflections;
uniform float diffusivity;

vec2 bettertexcoord;
vec4 texture_value;
vec3 color_value;
vec3 primary_color;
vec3 secondary_color;
vec3 finalcolor = vec3(0,0,0); //default value. Does it work?
uniform uint renderflags;

void main()
{
	bettertexcoord = vec2(texcoord.x, -texcoord.y); //Looks like blender
	vec3 UnitNormal;
	vec3 usefulNormal;

	
	
	UnitNormal = ((normalize(flatnormout) + vec3(1.0,1.0,1.0)) * 0.5)* float((renderflags & GK_FLAT_NORMAL) > uint(0))+ ((normalize(normout) + vec3(1.0,1.0,1.0)) * 0.5) * (1-float((renderflags & GK_FLAT_NORMAL) > uint(0)));
	
	// if (UnitNormal.x > 1 || UnitNormal.y > 1 || UnitNormal.z > 1)
		// UnitNormal = vec3(1.0);
	
	usefulNormal = normalize(flatnormout) * float((renderflags & GK_FLAT_NORMAL) > uint(0)) + normalize(normout) * (1-float((renderflags & GK_FLAT_NORMAL) > uint(0)));
	

	
	texture_value = (texture2D(diffuse, bettertexcoord)) * float((renderflags & GK_TEXTURED) > uint(0)) + vec4(0.0,0.2,0.0,1.0) * (1-float((renderflags & GK_TEXTURED) > uint(0)));
	
	//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	//UNCOMMENT THIS LINE IF YOU WANT ALPHA CULLING! It will slow down your application, be wary!
	// if ((renderflags & GK_ENABLE_ALPHA_CULLING) > uint(0))
		// if (texture_value.w == 0)
			// discard;
	//~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	 
	
	//Color stuff
	// if ((renderflags & GK_FLAT_COLOR & GK_COLORED) > uint(0))
	// {
			// color_value = Flat_Vert_Color;
	// } else {
		// if ((renderflags & GK_COLORED) > uint(0))
		// {
			// color_value = Smooth_Vert_Color;
		// } else {
			// color_value = vec3(0.0,0.0,0.0);
		// }
	// }
	float flat_color = float((renderflags & GK_FLAT_COLOR) > uint(0));
	float colored_at_all = float((renderflags & GK_COLORED) > uint(0));
	
	color_value = flat_color * colored_at_all * Flat_Vert_Color + (1-flat_color) * colored_at_all * Smooth_Vert_Color + (1-flat_color) * (1-colored_at_all) * vec3(0,0,0);
	
	//primary_color and secondary_color stuff
	
	// if ((renderflags & GK_COLOR_IS_BASE) > uint(0))
	// {
		// primary_color = color_value;
		// secondary_color = texture_value.xyz;
	// } else {
		// primary_color = texture_value.xyz;
		// secondary_color = color_value;
	// }
	float colorbase = float((renderflags & GK_COLOR_IS_BASE) > uint(0));
	
	primary_color = colorbase * color_value + (1-colorbase) * texture_value.xyz;
	secondary_color = (1-colorbase) * color_value + colorbase * texture_value.xyz;
	
	 // if ((renderflags & GK_TEXTURE_ALPHA_REPLACE_PRIMARY_COLOR) > uint(0))
	 // {
		 // extremely clever programming results in perfect effect...
		// primary_color = (primary_color * (1-texture_value.w)) + (texture_value.xyz * texture_value.w);
	 // }
	
	
	float alphareplace = float((renderflags & GK_TEXTURE_ALPHA_REPLACE_PRIMARY_COLOR) > uint(0));
	primary_color = primary_color * (1-alphareplace) + (primary_color * (1-texture_value.w)) + (texture_value.xyz * texture_value.w) * alphareplace;
	//EQUATION TIME!
	
	//This will be hell to break up. Maybe the compiler will do it for me?
	// if ((renderflags & GK_TINT) > uint(0))
		// finalcolor = primary_color + secondary_color;
	// else if ((renderflags & GK_DARKEN) > uint(0))
		// finalcolor = primary_color - secondary_color;
	// else if ((renderflags & GK_AVERAGE) > uint(0))
		// finalcolor = vec3(
		// (primary_color.x + secondary_color.x)/2.0,
		// (primary_color.y + secondary_color.y)/2.0,
		// (primary_color.z + secondary_color.z)/2.0);
	// else
		// finalcolor = primary_color;
	
	//Floating point logic tables?!?! 
	float isTint = float((renderflags & GK_TINT) > uint(0)); // 1 if true, 0 if false
	float isNotTint = 1-isTint;//swaps with the other value
	float isDarken = float((renderflags & GK_DARKEN) > uint(0));
	float isNotDarken = 1-isDarken;
	float isAverage = float((renderflags & GK_AVERAGE) > uint(0));
	float isNotAverage = 1-isAverage;
	//it is none of those if:
	//* More than one of them is true
	//* All of them are false
	float isNoneofThose = isTint * isDarken * isAverage + isNotTint * isAverage * isDarken + isTint * isNotAverage * isDarken + isTint * isAverage * isNotDarken + isNotTint * isNotAverage * isNotDarken;
	float isNotNoneofThose = 1-isNoneofThose;
	
	//Calc finalcolor;
	finalcolor = (primary_color + secondary_color) * isTint * isNotNoneofThose + (primary_color - secondary_color) * isDarken * isNotNoneofThose + vec3((primary_color.x + secondary_color.x)/2.0,(primary_color.y + secondary_color.y)/2.0,(primary_color.z + secondary_color.z)/2.0) * isAverage * isNotNoneofThose + primary_color * isNoneofThose;
	
	
	
	// (diffuse component * texture) + specular
	// gl_FragData[0] = mix(vec4(finalcolor,specreflectivity),vec4(texture(worldaroundme,reflect(-vert_to_camera, usefulNormal)).xyz, specreflectivity),specreflectivity/2.0); //Lol
	vec4 cubemapData = texture(worldaroundme,reflect(-vert_to_camera, usefulNormal));
	gl_FragData[0] = vec4(mix(finalcolor, cubemapData.xyz, specreflectivity * enableCubeMapReflections),specreflectivity);
	

	gl_FragData[1] = vec4(UnitNormal,specdamp/128.0); //Normals. Specular dampening goes as high as 128 in OpenGL Immediate Mode, so it has to be allowed up there. Note that we are using 16 bit floating-point accuracy, so dividing will not seriously reduce our abilities with regards to specdamp

	//INTENSE DEBUGGING OF FRAG DEPTH
	// if ((ourdepth+janear)/(jafar+janear) >= 0 && (ourdepth+janear)/(jafar+janear)<= 1.0)
	// {
		gl_FragData[2] = vec4((ourdepth+janear)/(jafar+janear), diffusivity, 1.0, emissivity/2.0 + 0.5); //Masked with Emissivity
		//gl_FragData[3] = vec4(accompanyinginfo.x, accompanyinginfo.y, 0, 0);
	// }
	// else if ((ourdepth+janear)/(jafar+janear) < 0)
	// {
		// gl_FragData[2] = vec4(1.0,0.0,0.0,0.0);
	// } else if ((ourdepth+janear)/(jafar+janear) > 1.0){
		// gl_FragData[2] = vec4(0.0,1.0,0.0,0.0);
	// }
}

My Point Light vertex shader


#version 330

layout( location = 0 ) in vec3 vPosition;

out vec2 texcoord;
out vec2 ScreenPosition;
vec3 worldpos; //Position of the fragment in the world!

uniform mat4 World2Camera; //the world to camera transform. I figure this is faster than calculating MVP separately per vertex.
uniform mat4 NotMyWorld2Camera; //The world to camera transform we used in the Initial Opaque pass.

void
main()
{
	vec4 newpos = World2Camera * vec4(vPosition,1.0);
	gl_Position = newpos;
	texcoord.x = (newpos.x + 1.0)*0.5;
	texcoord.y = (newpos.y + 1.0)*0.5;
	ScreenPosition.x = newpos.x;
	ScreenPosition.y = newpos.y;
}

My Pointlight Fragment Shader:

#version 330
out vec4 fColor[2];

in vec2 texcoord;
in vec2 ScreenPosition;
// uniform sampler2D diffuse; //This is actually the texture unit. limit 32. This one happens to be for the literal object's texture.

uniform sampler2D tex1;
uniform sampler2D tex2;
uniform sampler2D tex3;
uniform vec3 position;
uniform vec3 color;
uniform float range;
uniform float range_dropoff;
uniform float jafar;
uniform float janear;
uniform mat4 invCamMatrix; //Passes in the camera matrix but inverted!
void main()
{
	//Save the values from the initial opaque pass.
	vec4 tex1_value = texture2D(tex1, texcoord);
	vec4 tex2_value = texture2D(tex2, texcoord);
	vec4 tex3_value = texture2D(tex3, texcoord);
	//Grab Emissivity
	float emissivity = (tex3_value.w - 0.5) * 2.0;
	float dist_cameraspace = (tex3_value.x * (jafar+janear))- janear; //Cameraspace distance from the point. I believe this works.
	vec4 loc_cameraspace = vec4(ScreenPosition.x,ScreenPosition.y,dist_cameraspace, 1.0); //Cameraspace location
	vec4 loc_world = invCamMatrix * loc_cameraspace; //World Location of the fragment. We are having issues here.
	float diffusivity = tex3_value.y; //PHONG related value
	float masked = float(tex3_value.w == 0); //We are masking in emissivity. THIS WORKS.
	vec3 surface_normal = (tex2_value.xyz * 2) - vec3(1.0,1.0,1.0);
	float specular_dampening = tex2_value.w * 128.0;
	float specular_reflectivity = tex1_value.w;
	
	//Now let's calculate the vector from the frag to the light
	vec3 frag_to_light = position - loc_world.xyz;
	float distance_from_light = length(frag_to_light.xyz);
	frag_to_light = normalize(frag_to_light);
	
	
	
	//Now that we have all the information from the Initial Opaque pass, let's get to PHONG!
	//Calculate Light direction
	vec3 lightDir = -frag_to_light;
	//Calculate nDotL
	float nDotL = max(dot(surface_normal, frag_to_light),0.0);
	//Find the diffuse value.
	vec3 betterdiffuse = nDotL * color * (1-masked);
	//Find the reflection vector.
	vec3 reflectedLightDir = reflect(lightDir, surface_normal);
	
	//Right now we're just debugging...
	//fColor[0] = tex1_value;
	fColor[0] = loc_world * (1-masked);
	fColor[1] = tex2_value;
}

IMPORTANT NOTES:

  • All my FBOs use 16 bit floats (RGBA16f)
  • I am targeting OpenGL 3.3, although I seem to get a 4.5 context even though I told GLFW to request 3.3
  • I just finished calculus in high school, so I haven’t taken linear algebra yet (my knowledge of matrix maths may be inferior to that of most board members)

VIDEO DESCRIBING AND SHOWING PROBLEM:

I don’t have the time to analyse the code in detail, but the first thing which I notice is that you’re ignoring loc_world.w. If the camera matrix includes a perspective projection, that isn’t going to work. You need to divide loc_world.xyz by loc_world.w to get Euclidean world coordinates; you can’t just discard the W component unless you know that it will be equal to 1 (which isn’t the case for a perspective projection, or any combination which includes a perspective projection, or the inverse of such).
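
In code, that suggestion amounts to something like this (a sketch, using the names from the point light fragment shader above; it also assumes loc_cameraspace really holds clip-space coordinates):

	vec4 loc_world = invCamMatrix * loc_cameraspace;
	vec3 world_position = loc_world.xyz / loc_world.w; //divide out the projective component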

I tried dividing loc_world.xyz by loc_world.w before displaying it to the screen
I still got nonsensical results.

I’m not exactly sure what you mean by that btw
I tried doing this since you made this comment:

loc_world = vec4(loc_world.xyz/loc_world.w, loc_world.w);

But it isn’t correct.

Are you talking about loc_cameraspace?

Why would the W component matter with mat4 transforms? Isn’t it always fine to be 1.0?

This:


	vec3 frag_to_light = position - loc_world.xyz;

should be:


	vec3 frag_to_light = position - loc_world.xyz/loc_world.w;

Likewise for anywhere else where you need it as a 3D position.

It isn’t. If you want to normalise it, use e.g.


loc_world /= loc_world.w;

This will result in a vector with w=1; thereafter, you can just use loc_world.xyz.

It won’t be 1.0 after transforming by a matrix which contains a projective component.

If you have:

P_clip = M_vp · P_world
P_NDC = P_clip / P_clip.w

then:

P_clip = P_clip.w · P_NDC
P_world = M_vp^-1 · P_clip = P_clip.w · (M_vp^-1 · P_NDC)

You don’t directly know what P_clip.w was, but if you know that P_world.w should be 1, you can use any value (e.g. 1) for P_clip.w and then just divide P_world by P_world.w.
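
Expressed as a minimal GLSL sketch (the names are illustrative: invViewProj stands for M_vp^-1, and ndc holds the normalized device coordinates, each component in [-1, 1]):

vec3 worldFromNDC(mat4 invViewProj, vec3 ndc)
{
	//Use w = 1 for the clip-space point; the unknown true clip-space w
	//only scales the result uniformly and is divided out below.
	vec4 world = invViewProj * vec4(ndc, 1.0);
	return world.xyz / world.w;
}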

I would also like to ask if my computation of depth is correct

float dist_cameraspace = (tex3_value.x * (jafar+janear))- janear; //Cameraspace distance from the point. I believe this works.

Where the value i’m trying to get was originally stored using this:

gl_FragData[2] = vec4((ourdepth+janear)/(jafar+janear), diffusivity, 1.0, emissivity/2.0 + 0.5); //Masked with Emissivity

I’m still not getting the right results…

You also may want to look at this wiki article on the subject.

UPDATE:
The problem is not solved, but I have discovered that this code always returns the camera’s position, because the screen is always grey when I subtract the camera’s position



#version 330
out vec4 fColor[2];

in vec2 texcoord;
in vec2 ScreenPosition;
// uniform sampler2D diffuse; //This is actually the texture unit. limit 32. This one happens to be for the literal object's texture.

uniform sampler2D tex1;
uniform sampler2D tex2;
uniform sampler2D tex3;
uniform mat4 inverse_view_projection_matrix;
uniform vec3 lightpos;
uniform vec3 lightcolor;
uniform vec3 camerapos;
uniform float range;
uniform float dropoff;
//far and near clip planes
uniform float jafar;
uniform float janear;
/* Should work!
vec3 calculate_world_position(vec2 texture_coordinate, float depth_from_depth_buffer)
{
    vec4 clip_space_position = vec4(texture_coordinate * 2.0 - vec2(1.0), 2.0 * depth_from_depth_buffer - 1.0, 1.0);

    //vec4 position = inverse_projection_matrix * clip_space_position; // Use this for view space
    vec4 position = inverse_view_projection_matrix * clip_space_position; // Use this for world space

    return(position.xyz / position.w);
}
*/


void main()
{
	// gl_FragData[0] = texture2D(tex1, texcoord);
	vec4 tex1_value = texture2D(tex1, texcoord);
	vec4 tex2_value = texture2D(tex2, texcoord);
	vec4 tex3_value = texture2D(tex3, texcoord);
	vec4 clipSpacePos = vec4(ScreenPosition.x,ScreenPosition.y, (tex3_value.x * (jafar + janear))-janear, 1.0);
	// vec4 clipSpacePos = vec4(ScreenPosition.x,ScreenPosition.y, 2.0 * tex3_value.x - 1.0, 1.0);
	vec4 world_pos_mymethod = inverse_view_projection_matrix * clipSpacePos;
	world_pos_mymethod /= world_pos_mymethod.w; //Extremely helpful man says this is extremely important on OpenGL Forums
	world_pos_mymethod -= vec4(camerapos,0.0); //I subtract the camera position just for fun
	float mask = float(tex3_value.x != 0);
	// if (world_pos_mymethod.w == 1)
		fColor[0] = (world_pos_mymethod * 1/500 + 0.5) * mask + vec4(1.0,1.0,0.0,0.0) * (1-mask); //This is always center-grey, indicating that camerapos = worldpos for all possible values.
	// else 
		// fColor[0] = vec4(tex3_value.x);
	fColor[1] = tex2_value;
}

I am so darned close to a solution, I can feel it…

I’ll give whoever manages to solve my problem 1,000,000,000,000,000,000,000 internet points

why does it take so long to get replies?!?!

Bad news

Turns out projection matrices are non-invertible

using glm::inverse gave me a bad matrix when I called it on the projection matrix and the viewprojection matrix

I need to find some other way of undoing the projection matrix transformation…

any suggestions?

Bad news

Turns out projection matrices are non-invertible

using glm::inverse gave me a bad matrix when I called it on the projection matrix and the viewprojection matrix

I’m curious as to how you know that. If you get a “bad matrix” from the inverse of a projection matrix, you should be able to easily write a short GLM-based application that proves this.

For example, when I do this:


void print_matrix(glm::mat4 mat)
{
	for (int row = 0; row < 4; ++row)
	{
		for (int col = 0; col < 4; ++col)
		{
			std::cout << mat[col][row] << "\t";
		}

		std::cout << "\n";
	}
}

int main()
{
	auto proj = glm::perspective<float>(90.0f, 1.0f, 1.0f, 1000.0f);
	auto inv_proj = glm::inverse(proj);
	auto inv_inv_proj = glm::inverse(inv_proj);

	print_matrix(proj);
	std::cout << "
";
	print_matrix(inv_proj);
	std::cout << "
";
	print_matrix(inv_inv_proj);
}

If perspective projection matrices were truly not invertible, then inv_inv_proj would not produce the same matrix as proj. But it very much does.

So odds are good that this isn’t a GLM or matrix problem; you’re doing something wrong.

Also, do read the wiki article I linked earlier. It gives you the math to do what you’re trying to do step-by-step.

why does it take so long to get replies?!?!

It had been one hour since your post. It is not reasonable to expect other people to be sitting behind their computers, refreshing the page with bated breath for your next update to this thread. This is a forum, after all, not a chat room.

Also, spamming the forum with multiple threads on the same topic isn’t going to get you an answer any faster.

UPDATE: I was wrong about the bad projection matrix… a little. X and Y are processed fine, but depth is totally arse’d. It’s always a single value (0) after coming out of the projection matrix.

The view matrix IS bad. It always comes out as all zeroes.

Right now, I have a fragment shader that demos what I believe to be correct values for the Eye space.

I have to calculate the Z value manually from depth since the matrix doesn’t like me

Code:


#version 330
out vec4 fColor[2];

in vec2 texcoord;
in vec2 ScreenPosition;

uniform sampler2D tex1;
uniform sampler2D tex2;
uniform sampler2D tex3;
uniform mat4 inverse_view_projection_matrix; //Does not work, always says every fragment is at the camerapos. In fact, it doesn't actually matter what vec4 you use with this matrix, it will always return the camera position, or at least something really close to it.
uniform mat4 inverse_projection_matrix; //Works, but not on Z. produces incorrect Z values. X and Y values look approximately correct, but I'm suspicious.
uniform mat4 inverse_view_matrix; //Does not work, always says every fragment is at the camerapos. In fact, it doesn't actually matter what vec4 you use with this matrix, it will always return the camera position, or at least something really close to it.
uniform vec3 lightpos;
uniform vec3 lightcolor;
uniform vec3 camerapos;
uniform float range;
uniform float dropoff;
//far and near clip planes
uniform float jafar;
uniform float janear;

//Another thing that should work
vec4 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(texcoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = inverse_projection_matrix * clipSpacePosition;


	//I don't actually know if this correct, and if the viewMatrix has any scaling, then this is incorrect. I have no idea how to solve this.
	float depthRange = jafar - janear;
	float farin = depth * depthRange + janear;
	viewSpacePosition.z = farin; // Because the STUPID inverse projection matrix won't handle Z correctly! Not sure if supposed to be negative, not sure if supposed to be scaled.


	//We now have the world relative to the camera. We have to move the world back to where it's supposed to be.
        //(means I haven't written it yet)
     
     vec4 worldSpacePosition = viewSpacePosition; //if I put in inverse_view_matrix * viewSpacePosition it always comes out as being at the camera position.

    return worldSpacePosition;
}

void main()
{
	//Grab the values from the initial opaque pass buffers
	vec4 tex1_value = texture2D(tex1, texcoord);
	vec4 tex2_value = texture2D(tex2, texcoord);
	vec4 tex3_value = texture2D(tex3, texcoord);
	
	//Get the Clip-Space Position (gl_Position of the fragment)
	// vec4 clipSpacePos = vec4(ScreenPosition.x,ScreenPosition.y, (tex3_value.x * (jafar + janear))-janear, 1.0);
	// vec4 clipSpacePos = vec4(ScreenPosition.x,ScreenPosition.y, 2.0 * tex3_value.x - 1.0, 1.0);
	
	//Find the world position. PROJECTION MATRICES ARE NON-INVERTIBLE!!!
	vec4 world_pos = WorldPosFromDepth(tex3_value.x);
	//world_pos -= camerapos;
	float mask = float(tex3_value.w != 0);
	// fColor[0] = tex1_value;
	fColor[0] = (world_pos ) * mask;
	fColor[1] = tex2_value;
}

IF YOU KNOW HOW I SHOULD PROCEED, PLEASE TELL ME. I’M BUMBLING ABOUT AT THIS POINT.

None of the matrix maths is working like how any tutorial shows it working.

if you know what changes I should make to use matrices instead of having to bodge, i’d be more than thankful

SO TURN ON YOUR BRAINS AND START THINKIN’

Also that wiki article you linked me (which I barely noticed because of poor highlighting and only noticed now because I accidentally hovered over it) doesn’t explain how to get WORLD COORDINATES from GL_POSITION SPACE

I have recreated (I believe successfully, but please read my code and see if i’m right) the gl_Position value of the fragment in the initial opaque (g-buffer… whatever) pass

I have the inverse view matrix and inverse projection matrices, as well as the inverse viewprojection matrix in the shader

I also have the near and far plane values to use

I NEED TO TAKE THAT INFORMATION AND RECREATE THE WORLD COORDINATES

As in

AFTER MODEL MATRIX

BEFORE VIEWMATRIX

That’s what I need for my lighting calculations

I don’t know what the heck “window coordinates” are and since I’ve begun coding this engine my understanding of the OpenGL coordinate systems has faded away as I’ve found buttloads of stuff in OpenGL that doesn’t actually work

Take gl_FragDepth for instance

Did you know it doesn’t actually work?

It will always return the same value for every pixel on the screen.

I verified this myself in a test program

in fact, NONE of the gl_FragCoord stuff works AT ALL in my testing, and in order to get ANY meaningful results, I have to manually pass in vec2’s and vec3’s containing clip space coordinates

Thank you for helping me so far, but i’m far from a solution

I really can’t believe that i’m the first one to solve this f***ing problem

this is, by far, the single hardest part of OpenGL programming… other than actually getting your compiler working

IF YOU KNOW HOW I SHOULD PROCEED, PLEASE TELL ME. I’M BUMBLING ABOUT AT THIS POINT.

Step 1: CALM DOWN! Shouting is not going to get people to help you. I get that you’re frustrated, but stop treating the people who are trying to help you like your own personal stress ball.

Step 2: Stop blaming your tools:

I don’t know what the heck “window coordinates” are and since I’ve begun coding this engine my understanding of the OpenGL coordinate systems has faded away as I’ve found buttloads of stuff in OpenGL that doesn’t actually work

OpenGL actually works, as evidenced by the numerous programs that work in OpenGL, many of which use deferred rendering. It’s a poor craftsman who blames his hammer for the way the nails get driven into the wood.

Step 3: Reduce your code to a minimal, complete, verifiable example.

It is extremely hard to follow your code as it currently stands. There’s just too much other stuff there: all of your complex lighting code with its manifold options, the numerous uniforms, and so forth. It’s impossible to separate out the stuff that’s important (what values you’re writing to your G-buffer, and how you’re using them in the lighting passes) from the stuff that’s not (literally everything else).

Try to reduce things down to the bare minimum.

Also, it would really help to have some kind of guide to your code: understanding what kind of texture is being written to gl_FragData[2], and what tex3 means, for example. It’s difficult to know what image formats you’re using for these textures, or whether they’re even the same texture at all.

That being said, I see several things that appear… dubious to me.

For example:


vec4 big_gay = World2Camera * Model2World * vec4(vPosition,1.0);
...
ourdepth = big_gay.z; //Depth
...
...
(ourdepth+janear)/(jafar+janear)

I do not know what you’re trying to do with ourdepth here, but it does not make sense. Then again, I’m not sure what lives in World2Camera and Model2World. However, the fact that you shove “big_gay” (???) into gl_Position tells me that it is meant to include the projection matrix.

If that’s the case, then ourdepth is the interpolated clip-space depth value. Well, why are you adjusting it by the camera-space znear/far? This makes no sense mathematically, and I don’t understand what you intend for this to do.

What you need to do is just use the depth buffer. Your “opaque” passes shouldn’t be writing depth at all. Don’t try to write gl_FragDepth or anything you compute for the depth. Just let the depth buffer do its job, then read from it in the lighting passes. That is where you should be getting your depth from.

That’s not going to fix your other code, of course.

Ah
I will fix that, but how do I read from the depth buffer in GLSL?

I apologize
I am beyond frustrated, I have nearly broken my keyboard over this, it got cracked when I tossed it against the wall

I thought this was going to be the easiest part of writing my graphics engine and it’s turned out to be the single hardest thing i’ve had to do

OK wise guy
How do I take
ourdepth

and normalize it for putting to an FBO

My target OpenGL version doesn’t allow depth sampling

(3.3)

In general, that’s incorrect. It’s possible to create a non-invertible projection matrix (e.g. if the near and far planes are equal), but any projection matrix you’re likely to want to use will be invertible.

Define “bad”. If it didn’t meet your expectations, that probably means that your expectations are wrong.

You can check that an inverse is correct by multiplying it by the original matrix (the order doesn’t matter) and checking that the result is an identity matrix (to within rounding error).
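
Since the point light shader above already has both the forward matrix (NotMyWorld2Camera) and its claimed inverse (invCamMatrix) bound as uniforms, that check can even be done in-shader. A debug sketch, not part of the original code:

	mat4 check = invCamMatrix * NotMyWorld2Camera; //should be ~identity
	float err = 0.0;
	for (int col = 0; col < 4; ++col)
		for (int row = 0; row < 4; ++row)
			err = max(err, abs(check[col][row] - float(col == row)));
	fColor[0] = vec4(err, err, err, 1.0); //a black screen means the inverse is good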

You don’t. Specifically, you can’t read from the “current” depth buffer, i.e. the one which will be updated by the current drawing operation.

You can read the depth from previous drawing operations by first rendering to a FBO which has a texture as the depth attachment, detaching the texture from the FBO (or unbinding the FBO itself), then reading from the texture. But you cannot read from a texture (more precisely, a level of a texture) while it is being used as a render target. In fact, even the possibility of the texture being read is enough to trigger undefined behaviour, so textures which are used as FBO attachments shouldn’t be accessible via sampler variables in the shader.

OK wise guy
How do I take
ourdepth

and normalize it for putting to an FBO

Don’t.

The operation you’re trying to do is to reconstruct the position of a fragment based on its depth. You already have its depth; you captured it in the depth buffer when you were rendering. You shouldn’t be writing or computing this “ourdepth” value; OpenGL handles this just fine.

You simply need to use that depth buffer as a texture when you do your lighting pass. That will give you exactly what gl_FragCoord.z had for that fragment.

Once you read that depth value, you can apply the math needed to reverse the transformation and get back the position of the fragment.
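
Put together, the depth-buffer route could look roughly like this in the light pass. This is a sketch under two assumptions: the opaque pass’s depth attachment is bound to a hypothetical depthTex sampler once that pass has finished, and the default glDepthRange of [0, 1] is in effect. It reuses the inverse_view_projection_matrix uniform already declared above:

uniform sampler2D depthTex; //hypothetical: the opaque pass's depth attachment

vec3 worldPosFromDepthBuffer(vec2 uv)
{
	float depth = texture(depthTex, uv).r; //exactly what gl_FragCoord.z was there
	vec3 ndc = vec3(uv, depth) * 2.0 - 1.0; //window-space [0,1] -> NDC [-1,1]
	vec4 world = inverse_view_projection_matrix * vec4(ndc, 1.0);
	return world.xyz / world.w; //perspective divide
}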

UPDATE:
I have found a workaround
I am writing the world position of the fragment minus the camera position, divided by zFar, and mapped into the range 0 to 1.
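
For reference, a sketch of that workaround (the extra G-buffer channel and the tex4 name are hypothetical, and it assumes worldpos is passed through from the vertex shader; it also only covers positions within zFar of the camera):

//In the initial opaque fragment shader: pack the camera-relative position into [0, 1].
gl_FragData[3] = vec4((worldpos - CameraPos) / jafar * 0.5 + 0.5, 1.0);

//In the point light fragment shader: unpack it back to world space.
vec3 loc_world = (texture(tex4, texcoord).xyz * 2.0 - 1.0) * jafar + camerapos;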