Part of the Khronos Group
OpenGL.org



Thread: Please help! About BRDF

  1. #11
     Intern Contributor
     Join Date: May 2016
     Posts: 75
    I don't know about BRDF datasets. All I can imagine is vertex buffers, which is what I have in my code. I haven't gotten around to adding a model class that loads Blender models, which I did have in my DirectX 11 engine. Assimp is a library that people often use to load model data, but at this point the models in my example code are hand coded by loading vertex buffers. That's basically what something like Assimp is going to do anyway, except it builds the vertex buffer for you so you don't have to. The data always has to be turned into a vertex buffer before it goes to the graphics card, whether you do that yourself or a library does it for you.
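    Just to make "hand coded vertex buffers" concrete, here's a minimal sketch. The interleaved position/normal/texcoord layout and the Vertex struct name are my own assumptions, not taken from my engine code; this array is the kind of thing you'd hand to glBufferData, and it's exactly what a loader like Assimp would otherwise build for you from a model file.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One vertex: position, normal, texture coordinates.
// The interleaved layout is an assumption; any consistent layout works
// as long as your glVertexAttribPointer strides/offsets match it.
struct Vertex {
    float x, y, z;    // position
    float nx, ny, nz; // normal, used by the lighting shader
    float u, v;       // texture coordinates
};

// A hand-coded triangle: the kind of data a model loader would
// otherwise generate from a file.
std::vector<Vertex> makeTriangle() {
    return {
        { -0.5f, -0.5f, 0.0f,  0.0f, 0.0f, 1.0f,  0.0f, 0.0f },
        {  0.5f, -0.5f, 0.0f,  0.0f, 0.0f, 1.0f,  1.0f, 0.0f },
        {  0.0f,  0.5f, 0.0f,  0.0f, 0.0f, 1.0f,  0.5f, 1.0f },
    };
}
```

    The stride would be sizeof(Vertex) and the attribute offsets come from offsetof, so the struct layout and the vertex attribute setup have to agree.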

    Possibly what you mean by datasets is something similar to terrains. I've written about that pretty extensively. Blinn-Phong works fine with terrains. For a terrain, you have an array of data that represents the height of every corner of every square in a grid. The grid squares are all the same size on the X/Z plane, but their Y heights are taken from the array, with one value for every corner of every grid square. You build a vertex buffer out of that data and display it. I have XNA code examples of that here.
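    To sketch the terrain idea in C++ (the function names and the cellSize parameter are hypothetical, and my XNA examples organize it differently): X and Z come from the grid, Y comes from the height array, and the index buffer makes two triangles per grid square.

```cpp
#include <cassert>
#include <vector>

// Build terrain vertex positions from a heightmap. width/depth count
// the corners per side; cellSize is the (assumed) grid square size.
std::vector<float> buildTerrainVertices(const std::vector<float>& heights,
                                        int width, int depth, float cellSize) {
    std::vector<float> verts; // x, y, z per corner
    verts.reserve(width * depth * 3);
    for (int z = 0; z < depth; ++z)
        for (int x = 0; x < width; ++x) {
            verts.push_back(x * cellSize);           // X on the grid
            verts.push_back(heights[z * width + x]); // Y from the array
            verts.push_back(z * cellSize);           // Z on the grid
        }
    return verts;
}

// Index buffer: two triangles per grid square.
std::vector<unsigned> buildTerrainIndices(int width, int depth) {
    std::vector<unsigned> idx;
    for (int z = 0; z < depth - 1; ++z)
        for (int x = 0; x < width - 1; ++x) {
            unsigned i = z * width + x;
            idx.insert(idx.end(), { i, i + (unsigned)width, i + 1,
                                    i + 1, i + (unsigned)width,
                                    i + (unsigned)width + 1 });
        }
    return idx;
}
```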

    Anyway, this should be a direct link to download all of the source code. It's all the files for the engine that calls the GLSL shader, including the shader code. It's probably just a little too much code to copy and paste here. Plus, it uses several libraries. I think it uses GLFW, GLM, GLEW, and FreeImage. That's pretty standard stuff that you would most likely want to use anyway. I think that's all of the libraries, but it's all on the web page where I posted it. There is even a link on the page where I've zipped up the exact versions of the libraries that I used, although you might want to go to the websites for each of those libraries and build them yourself to produce fresh binaries. There may be newer versions than what I used by now. I had to build a few of them myself.

    I've never done cube mapping. I think I mostly understand the basic concept. I used actual textured cubes for my skyboxes in the past. I haven't really had a need for cubemaps on any of the projects I've done so far, and there were other things that were much higher priority for me to learn. A lot of the PBR stuff seems to use them from what I've observed, so I may be forced to learn to use cubemaps pretty soon.

    The cubemaps themselves should be just like texturing a cube. I have several here that I used for skyboxes. I think that when you use cubemaps for reflections, you use a special texture sampler that knows the texture is a cube. GL_TEXTURE_CUBE_MAP and samplerCube are a different texture target and sampler type than I use. I treat the cubemap as a regular 2D texture in the stuff I've done, but I think the cube map sampler works slightly differently. Here is what I assume is a pretty good discussion on it, having never done it myself.
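    For what it's worth, here's a rough C++ sketch of what a cube-map sampler does conceptually with a 3D direction: pick the face matching the direction's largest axis, then project the other two components into that face's UVs. The orientation conventions below are my reading of the GL cube-map face ordering (GL_TEXTURE_CUBE_MAP_POSITIVE_X + face), and real hardware filtering across seams is more involved than this.

```cpp
#include <cassert>
#include <cmath>

// Map a direction vector to (face, u, v) the way a cube-map lookup
// conceptually works: largest axis picks the face, the remaining two
// components (divided by that axis) become the face's 2D coordinates.
void directionToCubeFace(float x, float y, float z,
                         int& face, float& u, float& v) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, uc, vc;
    if (ax >= ay && ax >= az) {        // +X or -X face
        face = x > 0 ? 0 : 1; ma = ax;
        uc = x > 0 ? -z : z; vc = -y;
    } else if (ay >= az) {             // +Y or -Y face
        face = y > 0 ? 2 : 3; ma = ay;
        uc = x; vc = y > 0 ? z : -z;
    } else {                           // +Z or -Z face
        face = z > 0 ? 4 : 5; ma = az;
        uc = z > 0 ? x : -x; vc = -y;
    }
    u = 0.5f * (uc / ma + 1.0f);       // remap [-1, 1] to [0, 1]
    v = 0.5f * (vc / ma + 1.0f);
}
```

    That's the main difference from a regular 2D sampler: you index with a direction, not with explicit UVs.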

    Up until I started learning about PBR, the only uses for cube maps I knew of were A) an alternate method of making skyboxes and B) reflections. The PBR stuff seems to treat them as light sources, so I'm not sure exactly how that works. I could imagine that you could take the position of the object in the scene and render 6 different camera shots to build the six sides of the cube. I've considered doing this for other things. It seems awfully expensive to do that for every item in the scene. Still, PBR allows for lots of metallic stuff like chrome, which essentially requires that. Graphics cards are getting pretty fast, but I can't imagine rendering 6 images for every object in a scene with thousands of objects just to draw 1 frame. I would imagine they are cheating and only rendering 6 images for the entire scene and reusing them on every object, whether they are accurate or not. Until the camera rotates they should be relatively accurate, and then you can build another cube. They may even completely cheat and pre-render the single cube. I haven't gotten deep enough into the PBR stuff to see exactly how it's done.
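    If you did render 6 camera shots to build a cube, the six view directions would line up with the cube-map faces. Here's a small table of forward/up vectors following the usual GL face ordering; the struct names are mine, and the up-vector convention is an assumption (engines differ on it).

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
struct CubeCam { Vec3 forward; Vec3 up; };

float dotCC(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One camera orientation per cube-map face, in the order
// GL_TEXTURE_CUBE_MAP_POSITIVE_X + i. You'd render the scene from the
// probe position with each of these to fill the six faces.
static const CubeCam kCubeCams[6] = {
    { {  1,  0,  0 }, { 0, -1,  0 } }, // +X
    { { -1,  0,  0 }, { 0, -1,  0 } }, // -X
    { {  0,  1,  0 }, { 0,  0,  1 } }, // +Y
    { {  0, -1,  0 }, { 0,  0, -1 } }, // -Y
    { {  0,  0,  1 }, { 0, -1,  0 } }, // +Z
    { {  0,  0, -1 }, { 0, -1,  0 } }, // -Z
};
```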

    But PBR seems to work basically the same as using cubemaps for reflection, where the light rays are sampled with a cubemap lookup. The difference is that with reflection that's all it does, while with PBR you don't merely reflect the incoming light ray but choose how much of the ray to reflect, which frequencies of light to reflect in what amount, and how much sub-surface scattering to perform, as well as which colors are absorbed. So, you're combining reflection with a bunch of other calculations. The stuff I've seen is Fresnel and Cook-Torrance.
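    For reference on the "how much of the ray to reflect" part: the Fresnel term is usually approximated with Schlick's formula, which says reflection climbs toward 100% at grazing angles. A minimal sketch; f0 is the reflectance looking straight at the surface (around 0.04 for most non-metals, much higher and colored for metals), and the function name is my own.

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation to the Fresnel reflectance.
// cosTheta is the cosine of the angle between the view direction and
// the surface normal (or half vector, in microfacet models).
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}
```

    Looking straight on you get f0 back; at a grazing angle the result approaches 1, which is why even dull materials get mirror-like at shallow angles.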

    But from what I've observed recently, cube-mapping similar to the way you use it for reflections is at the heart of a lot of the PBR stuff. They just build on that and make it more complex from there. But understanding cube-mapping is not necessary to understand Blinn-Phong or to implement it. Cube-mapping for reflection has traditionally been used instead of Blinn-Phong, I believe, not with it. You could modify the Blinn-Phong shader to use a cube-map instead of a directional light, I suppose; I'm not sure if that makes sense. I'm not sure if Blinn-Phong can be modified to use cube-maps the way I've seen them used in PBR. Almost all the PBR stuff I've seen uses Fresnel and Cook-Torrance instead of Blinn-Phong. That's why I don't even tend to think of Blinn-Phong as a BRDF: I don't think it's a PBR BRDF.

    With Blinn-Phong you actually have two types of shading: Gouraud and Blinn-Phong. Gouraud shading draws the model and Blinn-Phong adds the specular highlight to it. I suppose you could use the cube-map to determine the light color of the incoming rays of light rather than a single directional light. Then you would have something much more complex and it would no longer be Gouraud. Probably something more like Cook-Torrance at that point. And then Blinn-Phong probably would no longer make sense.
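    For comparison, the Blinn-Phong specular term itself is small: build a half vector between the light and view directions and raise its alignment with the normal to a shininess power. A self-contained C++ sketch of the same math you'd write in a GLSL fragment shader, with my own minimal vector helpers:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
float dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
V3 normalize3(V3 v) {
    float l = std::sqrt(dot3(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Blinn-Phong specular: peaks when the half vector between the light
// and view directions lines up with the surface normal. All direction
// arguments are assumed unit-length; shininess is the specular exponent.
float blinnPhongSpecular(V3 n, V3 lightDir, V3 viewDir, float shininess) {
    V3 h = normalize3({ lightDir.x + viewDir.x,
                        lightDir.y + viewDir.y,
                        lightDir.z + viewDir.z });
    float nDotH = dot3(n, h);
    return std::pow(nDotH > 0.0f ? nDotH : 0.0f, shininess);
}
```

    The diffuse (Gouraud-style) part is a separate N-dot-L term; this highlight just gets added on top of it.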

    The PBR stuff I've seen handles specular entirely differently than Blinn-Phong. In PBR specular becomes micro-surface "roughness" in a calculation that determines on average how much light is reflected and how much is scattered.
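    As one concrete example of how roughness replaces the specular exponent: a lot of PBR implementations use the GGX (Trowbridge-Reitz) distribution for that micro-surface term. It gives the statistical fraction of micro-facets aligned with the half vector, so low roughness concentrates reflection into a tight highlight and high roughness scatters it. A hedged sketch; the roughness-squared remapping is a common convention, not universal.

```cpp
#include <cassert>
#include <cmath>

// GGX normal distribution function.
// nDotH is the cosine between the surface normal and the half vector;
// roughness in [0, 1] plays the role shininess played in Blinn-Phong,
// but inverted: low roughness means a sharp highlight.
float distributionGGX(float nDotH, float roughness) {
    float a  = roughness * roughness; // common remapping (assumption)
    float a2 = a * a;
    float d  = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
    const float pi = 3.14159265358979f;
    return a2 / (pi * d * d);
}
```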

    So anyway, I'm not certain if you can combine cube-maps with Blinn-Phong without turning it into something much more like Cook-Torrance or some other algorithm.
    Last edited by BBeck1; 01-18-2017 at 04:55 AM.
