Please help! About BRDF

I am new to OpenGL. My boss asked me to write an OpenGL program that uses a BRDF inside the shader.
Is there any demo code or tutorial I can follow?
I am very frustrated about this. Please help.

I am trying to figure out how to do that myself.

Here is a good place to start.

If you have an extremely solid math background, you might pick up a copy of Real-Time Rendering. That’s the best book on the subject I’ve found, although I’m very comfortable with linear algebra (but not calculus) and I don’t understand a good portion of what he’s talking about, because he describes everything with calculus equations. Still, I’m able to glean a little from his descriptions of the algorithms. It’s probably a really well-written book if you understand calculus; unfortunately, I do not.

You can probably find a BRDF shader on GitHub or something and work on reverse engineering it. That’s probably what I’m going to do.

I’m learning to make PBR models right now, and I’m probably going to be importing my model back into Blender this weekend to render it after texturing it in Substance. That should give me some idea of how PBR and the metallic workflow are actually rendered, which should make it a little easier to understand how to render it myself in GLSL. Once I see how Blender uses the maps exported by Substance, it should be a little clearer what my GLSL renderer needs to do.

But if someone has a GLSL shader for the PBR metallic workflow, or even the smoothness/glossiness workflow, I’d love to see it with an explanation.

[QUOTE=BBeck1;1285446]I am trying to figure out how to do that myself. …[/QUOTE]

Thanks. Are you also working on BRDF rendering projects? Is there a small program implementing a BRDF that I could follow to learn how to do it? I’d appreciate it. XD

Hey !

This might not be the best resource out there, but I’m working on a rendering project myself and had to write BRDF code in a fragment shader.

Here’s the source if you want to take a look at it: https://github.com/rivten/gliewer/blob/master/src/shaders/shadow_mapping_f.glsl

I don’t claim this is perfect, or even correct, code, but it seems to work fine for me, and if it helps you in some way, that’d be great. There are also a lot of resources online about BRDFs if you google a little, but examples are unfortunately hard to come by.
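For anyone skimming the linked shader: the core of most direct-lighting BRDF shaders is just “evaluate the BRDF for each light, multiply by the incoming radiance and the cosine term.” Here’s a minimal hypothetical GLSL skeleton showing that structure; every name (`u_LightColor`, `EvaluateBRDF`, etc.) is my own illustration, not taken from rivten’s code:

```glsl
// Hypothetical skeleton of a one-light BRDF fragment shader.
// All names here are illustrative, not from the linked project.
#version 450 core

in vec3 WorldPos;
in vec3 WorldNormal;
out vec4 FragColor;

uniform vec3 u_CameraPos;
uniform vec3 u_LightPos;
uniform vec3 u_LightColor;   // radiance arriving from the light
uniform vec3 u_Albedo;

const float PI = 3.14159265359;

// Swap this function out for Blinn-Phong, Cook-Torrance, etc.
vec3 EvaluateBRDF(vec3 N, vec3 L, vec3 V)
{
    return u_Albedo / PI;    // simplest case: Lambertian diffuse
}

void main()
{
    vec3 N = normalize(WorldNormal);
    vec3 L = normalize(u_LightPos - WorldPos);
    vec3 V = normalize(u_CameraPos - WorldPos);

    float NdotL = max(dot(N, L), 0.0);
    vec3 color = EvaluateBRDF(N, L, V) * u_LightColor * NdotL;
    FragColor = vec4(color, 1.0);
}
```

The point of isolating `EvaluateBRDF` is that everything else stays the same no matter which BRDF you plug in.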

Not yet. I’ve been reading up on the subject. I’m learning how to model PBR models, and my Blinn-Phong shader isn’t going to be able to display the art I’m learning how to make. I’m going to have to build a PBR shader or two and get it working to use any of that art in my OGL programs. (Although at this point, I can just import the models back into Blender and use Cycles to render pictures of them, which should give me a little more insight into the rendering process by setting up the shader in Cycles. At some point, though, I want to get these types of models into my OGL programs. That will require building PBR shaders in GLSL, which will have BRDFs at their core.)

I’m more focused on making the art right now than on figuring out the GLSL. But hopefully in the next few months I’ll turn my attention back to code and try to write the GLSL. So thanks, rivten; a working code example looks helpful.

Of course, BRDFs are just one part of PBR, but it looks to me like all physically based rendering is built around BRDFs.

I thought this video was pretty good at explaining PBR.

[QUOTE=rivten;1285451]This might not be the best resource out there, but I’m working on a rendering project myself and had to write BRDF code in a fragment shader. …[/QUOTE]

Thanks. But I have no idea how to implement the BRDF in the program. Is there a good tutorial for that? I am scared my boss will blame me. I need to finish it ASAP.

You may just want to admit to your boss you don’t know how to do it. Sometimes honesty is the best policy.

Every day I learn more about this subject, and yet I still have a very long way to go. I finished my (well, not really mine, because I followed a tutorial) PBR shader in Blender Cycles and learned quite a bit from it. One thing that became clear is that there are a lot of BRDFs: Wikipedia lists 12 different BRDF algorithms.

From what you’ve said, your boss did not specify which one, which makes me question whether your boss has any idea what a BRDF is. Although I’m far from an expert on this subject, so I’m probably the last person who should be criticizing someone for not knowing what a BRDF is. But it seems to me that requesting “a BRDF” is almost as generic as asking for “a shader”. Pretty much the only non-BRDF shaders are toon shaders and artistic shaders that aren’t trying to be realistic. So the term “BRDF” would eliminate those, but otherwise says almost nothing about what is required.

But from what I’m reading, Blinn-Phong is a BRDF (which I did not know, having assumed all BRDFs are more complex than Blinn-Phong). So “technically” I think you could hand over a Blinn-Phong shader and honestly say it is a BRDF. And in that case, I can actually help you. Besides, Blinn-Phong is a really good place to start because, with my limited knowledge of the subject, most of the others seem to build on the same BRDF ideas as Blinn-Phong with more and more advanced algorithms, and then join those algorithms together in a single shader. Physically based rendering, for example, uses at least Cook-Torrance and Fresnel in a single mathematical equation. All of that is explained in Real-Time Rendering, which has a pretty in-depth look at the math.
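For reference, the Fresnel term that gets combined with Cook-Torrance in most PBR shaders is usually the Schlick approximation rather than the full Fresnel equations. A sketch in GLSL (assuming the common convention where F0 is the reflectance at normal incidence, roughly vec3(0.04) for dielectrics and the base color for metals):

```glsl
// Schlick's approximation of the Fresnel term.
// F0       = reflectance at normal incidence (~vec3(0.04) for dielectrics,
//            the base color for metals).
// cosTheta = dot(H, V), clamped to [0, 1].
vec3 FresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(clamp(1.0 - cosTheta, 0.0, 1.0), 5.0);
}
```

At grazing angles (cosTheta near 0) this tends toward 1.0, which is why edges of objects look more reflective; head-on it falls back to F0.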

If you can sell Blinn-Phong as a BRDF, you should probably check out my video series on Blinn-Phong, which is a series on HLSL. It walks you step by step through creating a Blinn-Phong shader with texturing in HLSL. That’s a good starting point for learning other shaders too. The next video in the series would have been normal mapping, because once you get that far you’re ready for other things like normal mapping. I did a prototype program but never got around to making that video.

Anyway, it’s HLSL and not GLSL, but the math is the main thing. It’s also XNA instead of OpenGL, but the basic principles are the same. On top of that, I have pretty much the exact same shader written in GLSL in my OpenGL basic-engine project on my website. That’s a complete Visual Studio C++, OpenGL 4.5 project with the GLSL shader and the calling program that uses Blinn-Phong with texturing. So you can see how the GLSL is called from code.

So, as long as you can pass Blinn-Phong off as a BRDF, there’s a working code example for you, complete with everything needed to make it run. (I do use several libraries, as stated, like GLFW, GLM, and FreeImage.) And from what I’m reading, Blinn-Phong is technically a BRDF. I think my confusion is that I immediately start thinking “Cook-Torrance” + “Fresnel”, where you use an incoming area of light rather than a single ray, and use a cubemap to determine the incoming light rather than a single directional light. But apparently you don’t have to get nearly that complicated for it to be a BRDF. I guess I was assuming that all BRDFs are what the Wikipedia article calls “physically based BRDFs”, which appears to be where the tie-in with physically based rendering comes in. And I think I may also have been confusing BRDFs with BSDFs.

If the Blinn-Phong shader can “get you there”, then the links above should help substantially. Short of that, though: I’ve been trying to figure this stuff out for a couple of months now (granted, a whole lot of higher-priority work keeps taking me away from it), and it’s still a pretty big, complex subject. There might be enough in the links above to learn Blinn-Phong in an eight-hour day (the videos assume you know almost nothing beyond trig and algebra, and start almost too far toward the beginning of the subject), but the broader subject of BRDFs is something I would expect to take months to learn unless you really have someone holding your hand through the process.

And as far as the videos being HLSL instead of GLSL: it’s good to know HLSL anyway, just so you can read example code from books and such. And if you compare it to the GLSL shader I linked above, it should be pretty obvious how they are alike and how they differ. HLSL constant buffers are basically GLSL uniforms.
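To illustrate that correspondence: an HLSL constant buffer maps fairly directly onto a GLSL uniform block. A sketch of the GLSL side (the block and member names here are just examples, not from either shader above):

```glsl
// GLSL uniform block -- the rough equivalent of an HLSL cbuffer.
// std140 fixes the memory layout, so the C++ side can fill a uniform
// buffer object whose bytes match the block member-for-member.
layout (std140) uniform PerFrame
{
    mat4 ViewMatrix;
    mat4 ProjectionMatrix;
    vec3 CameraPosition;
};
```

Loose uniforms like the ones in the shader below work too; uniform blocks just become convenient once you have many shaders sharing the same per-frame data.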

Just for good measure, I’ll throw in the GLSL Blinn-Phong shader here. (I’ve posted it on the Internet before. The code includes both a Blinn specular function and a Phong specular function, as in the videos. You only need one or the other; they are two different algorithms that do the same thing.)

BlinnPhong.vrt


#version 450 core
layout (location = 0) in vec3 Pos;
layout (location = 1) in vec2 UV;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec4 Color;

uniform mat4 WorldMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;



smooth out vec2 TextureCoordinates;
smooth out vec3 VertexNormal;
smooth out vec4 RGBAColor;
smooth out vec4 PositionRelativeToCamera;
out vec3 WorldSpacePosition;


void main()
{
	gl_Position = WorldMatrix * vec4(Pos, 1.0f);				//Apply object's world matrix.
	WorldSpacePosition = gl_Position.xyz;						//Save the position of the vertex in the 3D world just calculated. Convert to vec3 because it will be used with other vec3's.
	gl_Position = ViewMatrix * gl_Position;						//Apply the view matrix for the camera.
	PositionRelativeToCamera = gl_Position;
	gl_Position = ProjectionMatrix * gl_Position;				//Apply the Projection Matrix to project it on to a 2D plane.
	TextureCoordinates = UV;									//Pass through the texture coordinates to the fragment shader.
	VertexNormal = mat3(WorldMatrix) * Normal;					//Rotate the normal according to how the model is oriented in the 3D world.
	RGBAColor = Color;											//Pass through the color to the fragment shader.
}

BlinnPhong.frg


#version 450 core

in vec2 TextureCoordinates;
in vec3 VertexNormal;
in vec4 RGBAColor;
in vec4 PositionRelativeToCamera;
in vec3 WorldSpacePosition;

layout (location = 0) out vec4 OutputColor;


uniform vec4 AmbientLightColor;
uniform vec3 DiffuseLightDirection;
uniform vec4 DiffuseLightColor;
uniform vec3 CameraPosition;
uniform float SpecularPower;
uniform vec4 FogColor;
uniform float FogStartDistance;
uniform float FogMaxDistance;
uniform bool UseTexture;
uniform sampler2D Texture0;



vec4 BlinnSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
	vec3 HalfwayNormal;
	vec4 SpecularLight;
	float SpecularHighlightAmount;


	HalfwayNormal = normalize(LightDirection + CameraDirection);
	SpecularHighlightAmount = pow(clamp(dot(PixelNormal, HalfwayNormal), 0.0, 1.0), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor; 

	return SpecularLight;
}


vec4 PhongSpecular(in vec3 LightDirection, in vec4 LightColor, in vec3 PixelNormal, in vec3 CameraDirection, in float SpecularPower)
{
	vec3 ReflectedLightDirection;	
	vec4 SpecularLight;
	float SpecularHighlightAmount;


	ReflectedLightDirection = 2.0 * PixelNormal * clamp(dot(PixelNormal, LightDirection), 0.0, 1.0) - LightDirection;
	SpecularHighlightAmount = pow(clamp(dot(ReflectedLightDirection, CameraDirection), 0.0, 1.0), SpecularPower);
	SpecularLight = SpecularHighlightAmount * LightColor; 
	

	return SpecularLight;
}


void main()
{
	vec3 LightDirection;
	float DiffuseLightPercentage;
	vec4 SpecularColor;
	vec3 CameraDirection;	//Float3 because the w component really doesn't belong in a 3D vector normal.
	vec4 AmbientLight;
	vec4 DiffuseLight;
	vec4 InputColor;

	
	if (UseTexture) 
	{
		InputColor = texture(Texture0, TextureCoordinates);
	}
	else
	{
		InputColor = RGBAColor; // vec4(0.0, 0.0, 0.0, 1.0);
	}


	LightDirection = -normalize(DiffuseLightDirection);	//Normal must face into the light, rather than WITH the light to be lit up.
	DiffuseLightPercentage = max(dot(VertexNormal, LightDirection), 0.0);	//Percentage is based on angle between the direction of light and the vertex's normal. 
	DiffuseLight = clamp((DiffuseLightColor * InputColor) * DiffuseLightPercentage, 0.0, 1.0);	//Apply only the percentage of the diffuse color. Saturate clamps output between 0.0 and 1.0.

	CameraDirection = normalize(CameraPosition - WorldSpacePosition);	//Create a normal that points in the direction from the pixel to the camera.

	if (DiffuseLightPercentage == 0.0f) 
	{
		SpecularColor  = vec4(0.0f, 0.0f, 0.0f, 1.0f);
	}
	else
	{
		//SpecularColor = BlinnSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
		SpecularColor = PhongSpecular(LightDirection, DiffuseLightColor, normalize(VertexNormal), CameraDirection, SpecularPower);
	}

	float FogDensity = 0.01f;
	float LOG2 = 1.442695f;
	float FogFactor = exp2(-FogDensity * FogDensity * PositionRelativeToCamera.z * PositionRelativeToCamera.z * LOG2);
	FogFactor = 1 - FogFactor;
	//float FogFactor = clamp((FogMaxDistance - PositionRelativeToCamera.z)/(FogMaxDistance - FogStartDistance), 0.0, 1.0);
	
	OutputColor = RGBAColor * (AmbientLightColor * InputColor) + DiffuseLight + SpecularColor;
	OutputColor = mix (OutputColor, FogColor, FogFactor);
	//OutputColor = vec4(0.0f, 0.5f, 0.0f, 1.0f);
}

Thanks for your great help.
I have seen that there are some BRDF datasets, and people use them to render images. What is the difference between rendering with equations and rendering with a dataset?

Thank you for your help. May I have the source code of the whole project that uses the Blinn-Phong GLSL shader and OpenGL to construct the cubemap? I really need this program as a starting point to study BRDFs. Thanks.

I don’t know about BRDF datasets. All I can imagine is vertex buffers, which is what I have in my code. I haven’t gotten around to adding the model class that loads Blender models, which I wrote for my DirectX 11 engine. Assimp is a library that people often use to load model data. At this point, the models in my example code are hand-coded by filling vertex buffers. That’s basically what something like Assimp does, except it builds the vertex buffer for you so you don’t have to. The data always has to be turned into a vertex buffer before it goes to the graphics card, whether you do that or a library does it for you.

Possibly what you mean by datasets is something like terrains. I’ve written about that pretty extensively. Blinn-Phong works fine with terrains. For a terrain, you have an array of data that represents the height of every corner of every square in a grid. All the grid squares are square and equally spaced on the X/Z plane, but their Y heights are taken from the array, with one value for every corner of every grid square. You build a vertex buffer out of that data and display it. I have XNA code examples of that here.

Anyway, this should be a direct link to download all of the source code. It’s all the files for the engine that calls the GLSL shader, including the shader code. It’s probably a little too much code to just copy and paste here. Plus, it uses several libraries; I think GLFW, GLM, GLEW, and FreeImage. That’s pretty standard stuff you would most likely want to use anyway. It’s all on the web page where I posted it. There is even a link on the page where I’ve zipped up the exact versions of the libraries I used, although you might want to go to each library’s website and build it yourself to produce fresh binaries; there may be newer versions than what I used by now. I had to build a few of them myself.

I’ve never done cube mapping. I think I mostly understand the basic concept. I’ve used actual textured cubes for my skyboxes in the past. I haven’t really had a need for cubemaps in any of the projects I’ve done so far, and there were other things that were much higher priority for me to learn. A lot of the PBR stuff seems to use them, from what I’ve observed, so I may be forced to learn cubemaps pretty soon.

The cubemaps themselves should be just like texturing a cube. I have several here that I used for skyboxes. When using cubemaps for reflections, you use a special texture sampler that knows the texture is a cube: GL_TEXTURE_CUBE_MAP and samplerCube are different from the sampler I use. I treat the cubemap as a regular 2D texture in the stuff I’ve done, but the cube-map sampler works slightly differently. Here is what I assume is a pretty good discussion of it, never having done it myself.
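For what it’s worth, the difference from a 2D sampler is that you index a cube map with a 3D direction vector instead of a UV pair, typically the view direction reflected about the surface normal. A hedged GLSL sketch (names are mine; it assumes the texture was created with GL_TEXTURE_CUBE_MAP on the C++ side):

```glsl
// Sampling a cube map with a direction instead of a 2D UV.
uniform samplerCube EnvironmentMap;

vec3 SampleReflection(vec3 N, vec3 V)
{
    // reflect() expects the incident vector pointing toward the surface,
    // so negate the surface-to-camera direction V.
    vec3 R = reflect(-V, N);
    return texture(EnvironmentMap, R).rgb;
}
```

The hardware picks which of the six faces to read based on the largest component of R, so you never compute a face index yourself.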

Up until I started learning about PBR, the only uses for cube maps I knew of were A) an alternate method of making skyboxes and B) reflections. The PBR stuff seems to treat them as light sources, so I’m not sure exactly how that works. I could imagine taking the position of the object in the scene and rendering six different camera shots to build the six sides of the cube; I’ve considered doing this for other things. It seems awfully expensive to do that for every item in the scene. Still, PBR allows for lots of metallic stuff like chrome, which essentially requires it. Graphics cards are getting pretty fast, but I can’t imagine rendering six images for every object in a scene with thousands of objects just to draw one frame. I would imagine they cheat and render only six images for the entire scene, reusing them on every object whether they are accurate or not. Until the camera moves, they should be relatively accurate, and then you can build another cube. They may even completely cheat and pre-render a single cube. I haven’t gotten deep enough into the PBR stuff to see exactly how it’s done.

But PBR seems to work basically the same as using cubemaps for reflection, where the light rays are sampled with a cubemap index. The difference is that with reflection that’s all it does, whereas with PBR you don’t merely reflect the incoming light ray: you choose how much of the ray to reflect, which frequencies of light to reflect in what amounts, and how much sub-surface scattering to perform, as well as which colors are absorbed. So you’re combining reflection with a bunch of other calculations. The stuff I’ve seen uses Fresnel and Cook-Torrance.

But from what I’ve observed recently, cube-mapping similar to the way you use it for reflections is at the heart of a lot of the PBR stuff; they just build on that and make it more complex from there. Understanding cube-mapping is not necessary to understand or implement Blinn-Phong, though. Cube-mapping for reflection has traditionally been used instead of Blinn-Phong, I believe, not with it. You could modify the Blinn-Phong shader to use a cube-map instead of a directional light, I suppose, though I’m not sure that makes sense. I’m not sure Blinn-Phong can be modified to use cube-maps the way I’ve seen them used in PBR. Almost all the PBR stuff I’ve seen uses Fresnel and Cook-Torrance instead of Blinn-Phong. That’s why I don’t even tend to think of Blinn-Phong as a BRDF: I don’t think it’s a PBR BRDF.

With Blinn-Phong you actually have two types of shading: Gouraud and Blinn-Phong. Gouraud shading draws the model and Blinn-Phong adds the specular highlight to it. I suppose you could use the cube-map to determine the light color of the incoming rays of light rather than a single directional light. Then you would have something much more complex and it would no longer be Gouraud. Probably something more like Cook-Torrance at that point. And then Blinn-Phong probably would no longer make sense.

The PBR stuff I’ve seen handles specular entirely differently than Blinn-Phong. In PBR specular becomes micro-surface “roughness” in a calculation that determines on average how much light is reflected and how much is scattered.
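As an example of how roughness enters the math: the microfacet normal-distribution term that most PBR shaders use is GGX (Trowbridge-Reitz). This is my own sketch, not code from any project mentioned above:

```glsl
// GGX / Trowbridge-Reitz normal distribution function.
// roughness = the artist-facing [0,1] roughness value;
// NdotH     = dot(surface normal, halfway vector), clamped to [0,1].
const float PI = 3.14159265359;

float DistributionGGX(float NdotH, float roughness)
{
    float a  = roughness * roughness;   // common remapping of roughness
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}
```

Low roughness makes this function spike sharply around NdotH = 1 (a tight, mirror-like highlight); high roughness spreads it out, which is exactly the “how much light is reflected vs. scattered” behavior described above.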

So anyway, I’m not certain if you can combine cube-maps with Blinn-Phong without turning it into something much more like Cook-Torrance or some other algorithm.

May I ask for the whole project source code of Blinn-Phong or another BRDF model, with GLSL and OpenGL?
I want a runnable project to study, because much of the shader code given on the web won’t compile in my environment.
Thanks.

A BRDF is simply the ratio between the incoming radiance and the outgoing radiance. It’s hard to use in full generality, so most rendering models assume the surface is Lambertian, where the diffuse radiance is equal in all directions and the BRDF of a Lambertian surface is 1 / π (π = 3.14…). For mirror effects or refraction, they use simple tricks like cube maps. But you can program a full BRDF; specialists call this global illumination, a family of algorithms based on physics and the interaction of light with the scene, and it’s an ongoing research field. I recommend reading Physically Based Rendering: From Theory to Implementation; it’s worth it if you’re in for the long ride. But if you just need to get the job done, check out this website: learnopengl.com/#!PBR/Lighting. You can find a theory summary plus C++ OpenGL code and shaders (GLSL 4.x, I guess). Good luck.
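That 1/π Lambertian BRDF looks like this in a shader; dividing by π is what keeps the surface from reflecting more energy than it receives. A minimal sketch (function and parameter names are my own):

```glsl
// Lambertian diffuse: the BRDF is a constant, albedo / PI, in every direction.
// Multiplying by NdotL and the light's radiance gives the reflected radiance.
const float PI = 3.14159265359;

vec3 LambertDiffuse(vec3 albedo, vec3 N, vec3 L, vec3 lightRadiance)
{
    float NdotL = max(dot(N, L), 0.0);
    return (albedo / PI) * lightRadiance * NdotL;
}
```

Classic Blinn-Phong style code usually omits the 1/π and bakes it into the light intensity instead, which is one of the small differences you’ll notice when moving to physically based code.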

[QUOTE=thedarkknight1717;1285645]May I ask for a whole project source code of blinn phong or other brdf model with glsl and opengl? …[/QUOTE]

Since you want a whole project, here it is. You’ll have to create an account on GitHub, however. And this is big…

Unreal Engine 4

I’m not certain, but I’m almost sure it does some kind of PBR:

Open X-Ray