Depth/Normal data from Reflected Objects using Cubemaps

Hi,

I want to simulate the multipath propagation process via rasterization in shaders. Using dynamic cubemaps, how can I collect the depth and normal data of reflected objects instead of colors?

My current code follows below:

Vertex Shader:

#version 130

uniform mat4 osg_ViewMatrixInverse;
uniform vec3 cameraPos;

void main() {
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // model -> world transform: OSG's view inverse times the modelview
    mat4 ModelWorld4x4 = osg_ViewMatrixInverse * gl_ModelViewMatrix;
    mat3 ModelWorld3x3 = mat3( ModelWorld4x4 );
    vec4 WorldPos = ModelWorld4x4 * gl_Vertex;

    // reflect the eye-to-vertex ray about the world-space normal
    // to get the cubemap lookup direction
    vec3 N = normalize( ModelWorld3x3 * gl_Normal );
    vec3 E = normalize( WorldPos.xyz - cameraPos.xyz );
    gl_TexCoord[1].xyz = reflect( E, N );
}

Fragment Shader:

#version 130

uniform samplerCube cubeMap;
uniform sampler2D colorMap;
const float reflect_factor = 0.5;

void main (void)
{
    vec3 base_color = texture2D(colorMap, gl_TexCoord[0].xy).rgb;
    vec3 cube_color = textureCube(cubeMap, gl_TexCoord[1].xyz).rgb;

    // blend the cubemap reflection with the base texture
    gl_FragColor = vec4( mix(cube_color, base_color, reflect_factor), 1.0);
}

[QUOTE=romulogcerqueira;1289768]
I want to simulate the multipath propagation process via rasterization in shaders. Using dynamic cubemaps, how can I collect the depth and normal data of reflected objects instead of colors?[/QUOTE]

By using render-to-texture and multiple render targets (MRT).
I’ve never done MRT with a cubemap, but it should work.

Rendering depth into a cubemap can be tricky. I don’t have the link at hand, but if you need it we can find it.
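
The core of it is that each cube face can be attached individually as the FBO’s depth attachment. An untested sketch in plain GL (fbo, face and the 256x256 size are placeholders):

// create a depth cubemap: one depth image per face
GLuint depthCubemap;
glGenTextures(1, &depthCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                 GL_DEPTH_COMPONENT32F, 256, 256, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// later, per face: attach that face as the FBO's depth attachment
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                       depthCubemap, 0);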

Thanks for the tip, Silence.

I rendered the depth to a texture and now I can use it in a shader.

Do you know how I can render the normal data to a texture as well?

Yes. Sorry for the MRT link. It was indeed not that good…

So, in order to render the normal data along with other data, you can use MRT (multiple render targets). And this link (under the name of deferred rendering) covers it in a better way.

There are a few things to keep in mind:

- Create as many textures as you need buffers, i.e. one for the vertices, one for the normals… (optimizations exist, but let’s keep that for later).
- Assign each of them to a dedicated attachment of your FBO as a render target.
- Call glDrawBuffers to specify which render targets to draw to.
- In the fragment shader, use as many outputs as you have textures to write to (see the sketch just below).
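
With #version 130, a minimal MRT fragment shader could look like this (hypothetical sketch; it assumes two color attachments selected via glDrawBuffers and a vertex shader that provides the inputs):

#version 130

in vec3 worldPos;    // hypothetical inputs from the vertex shader
in vec3 worldNormal;

void main() {
    gl_FragData[0] = vec4(worldPos, 1.0);               // first render target
    gl_FragData[1] = vec4(normalize(worldNormal), 1.0); // second render target
}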

Let us know if some clarification is still needed.

Silence,

thanks for all the tips. I read your links and a lot of tutorials/documents on the internet, and I now understand RTT, MRT and FBOs quite well; however, I still have the normal problem.

Do you have any code sample to help me?

Thanks in advance.

[QUOTE=romulogcerqueira;1289774]I rendered the depth to a texture and now I can use it in a shader. Do you know how I can render the normal data to a texture as well?

I now understand RTT, MRT and FBOs quite well; however, I still have the normal problem.
[/QUOTE]

You would probably get a faster answer to your question if you specified what problem you’re having writing normals to a texture from a fragment shader, and/or reading them back in a subsequent pass.

Back the render target you’re going to write normals to with a texture, and write normals to that render target. It’s pretty much the same thing as writing any other color target. Just disable GL_BLEND (on that render target at least), GL_ALPHA_TEST, and any other state which could play games with your ability to write to all render targets without modification.
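
In plain GL that’s just (globally, or per target with the indexed variant):

glDisable(GL_BLEND);        // or glDisablei(GL_BLEND, bufferIndex) per render target
glDisable(GL_ALPHA_TEST);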

For starters, don’t get fancy with encoding the normal you write out; you can always do that later. Just use some 3-component texture format with sufficient dynamic range, like RGB16F. Once you get it working you can cut this back to 32 or even 16 bits per texel (depending on your needs).
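
For reference, once you do cut back to a fixed-point format, the usual encoding is just a scale/bias (normalMap and uv are placeholders here); with a float format like RGB16F you can skip it entirely:

// at write time: map the [-1,1] normal into [0,1]
gl_FragColor = vec4(N * 0.5 + 0.5, 1.0);

// at read time, in a later pass: undo the mapping
vec3 N = texture(normalMap, uv).rgb * 2.0 - 1.0;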

And since the OP seems to be building a simulation, he might not want to lose any precision. So RGB32F might be even better.

Yes, as Dark Photon said, please give further details. If this is not related to the encoding/decoding (see above), what kind of problem are you facing with normals? Also, as usual when writing to textures/render targets, be clear about which coordinate system is used for the writing, and ensure the same coordinate system is used when reading. Note that since you are rendering into cubemap faces, each face will have a different modelview matrix.

Hi Silence and Dark Photon,

I have tried to get the normals of reflected objects, but I only got the reflected colors (as seen in the figure below). I used RTT with an FBO. As I read in the technical documentation and some examples, the normal texture should be attached as a COLOR_BUFFER and handled in the shader.

[ATTACH=CONFIG]1652[/ATTACH]

My C++ code is written with OSG; however, I think it is understandable even for non-OSG developers.


// OSG includes
#include <osgViewer/Viewer>
#include <osg/Texture>
#include <osg/TexGen>
#include <osg/Geode>
#include <osg/ShapeDrawable>
#include <osg/TextureCubeMap>
#include <osg/TexMat>
#include <osg/MatrixTransform>
#include <osg/PositionAttitudeTransform>
#include <osg/Camera>
#include <osg/TexGenNode>
#include <osgDB/FileUtils>

// C++ includes
#include <iostream>

#define SHADER_PATH_FRAG "normal_depth_map/shaders/normalDepthMap1.frag"
#define SHADER_PATH_VERT "normal_depth_map/shaders/normalDepthMap1.vert"

using namespace osg;

static const int numTextures = 6;

enum TextureUnitTypes {
    TEXTURE_UNIT_DIFFUSE,
    TEXTURE_UNIT_NORMAL,
    TEXTURE_UNIT_CUBEMAP
};

osg::ref_ptr<osg::Group> _create_scene() {
    osg::ref_ptr<osg::Group> scene = new osg::Group;

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    scene->addChild(geode.get());

    const float radius = 0.8f;
    const float height = 1.0f;
    osg::ref_ptr<osg::ShapeDrawable> shape;

    // sphere
    shape = new osg::ShapeDrawable(new osg::Sphere(osg::Vec3(-3.0f, 0.0f, 0.0f), radius));
    shape->setColor(osg::Vec4(0.6f, 0.8f, 0.8f, 1.0f));
    geode->addDrawable(shape.get());

    // cone
    shape = new osg::ShapeDrawable(new osg::Cone(osg::Vec3(0.0f, 0.0f, -3.0f), radius, height));
    shape->setColor(osg::Vec4(0.4f, 0.9f, 0.3f, 1.0f));
    geode->addDrawable(shape.get());

    // cylinder
    shape = new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3(3.0f, 0.0f, 0.0f), radius, height));
    shape->setColor(osg::Vec4(1.0f, 0.3f, 0.3f, 1.0f));
    geode->addDrawable(shape.get());

    // box
    shape = new osg::ShapeDrawable(new osg::Box(osg::Vec3(0.0f, 0.0f, 3.0f), 2* radius));
    shape->setColor(osg::Vec4(0.8f, 0.8f, 0.4f, 1.0f));
    geode->addDrawable(shape.get());

    return scene;
}

osg::NodePath createReflector() {
    Geode* node = new Geode;
    const float radius = 0.8f;
    ref_ptr<TessellationHints> hints = new TessellationHints;
    hints->setDetailRatio(5.0f);
    ShapeDrawable* shape = new ShapeDrawable(new Sphere(Vec3(0.0f, 0.0f, 0.0f), radius * 1.5f), hints.get());
    shape->setColor(Vec4(0.8f, 0.8f, 0.8f, 1.0f));
    node->addDrawable(shape);

    osg::NodePath nodeList;
    nodeList.push_back(node);

    return nodeList;
}

class UpdateCameraAndTexGenCallback : public osg::NodeCallback
{
    public:

        typedef std::vector< osg::ref_ptr<osg::Camera> >  CameraList;

        UpdateCameraAndTexGenCallback(osg::NodePath& reflectorNodePath, CameraList& Cameras):
            _reflectorNodePath(reflectorNodePath),
            _Cameras(Cameras)
        {
        }

        virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
        {
            // first update subgraph to make sure objects are all moved into position
            traverse(node,nv);

            // compute the position of the center of the reflector subgraph
            osg::Matrixd worldToLocal = osg::computeWorldToLocal(_reflectorNodePath);
            osg::BoundingSphere bs = _reflectorNodePath.back()->getBound();
            osg::Vec3 position = bs.center();

            typedef std::pair<osg::Vec3, osg::Vec3> ImageData;
            const ImageData id[] =
            {
                ImageData( osg::Vec3( 1,  0,  0), osg::Vec3( 0, -1,  0) ), // +X
                ImageData( osg::Vec3(-1,  0,  0), osg::Vec3( 0, -1,  0) ), // -X
                ImageData( osg::Vec3( 0,  1,  0), osg::Vec3( 0,  0,  1) ), // +Y
                ImageData( osg::Vec3( 0, -1,  0), osg::Vec3( 0,  0, -1) ), // -Y
                ImageData( osg::Vec3( 0,  0,  1), osg::Vec3( 0, -1,  0) ), // +Z
                ImageData( osg::Vec3( 0,  0, -1), osg::Vec3( 0, -1,  0) )  // -Z
            };

            for(unsigned int i = 0; i < 6 && i < _Cameras.size(); ++i) {
                osg::Matrix localOffset;
                localOffset.makeLookAt(position,position+id[i].first,id[i].second);

                osg::Matrix viewMatrix = worldToLocal*localOffset;

                _Cameras[i]->setReferenceFrame(osg::Camera::ABSOLUTE_RF);
                _Cameras[i]->setProjectionMatrixAsFrustum(-1.0,1.0,-1.0,1.0,1.0,10000.0);
                _Cameras[i]->setViewMatrix(viewMatrix);
            }
        }

    protected:

        virtual ~UpdateCameraAndTexGenCallback() {}

        osg::NodePath               _reflectorNodePath;
        CameraList                  _Cameras;
};

class UpdateCameraPosUniformCallback : public osg::Uniform::Callback
{
public:
	UpdateCameraPosUniformCallback(osg::Camera* camera)
		: mCamera(camera)
	{
	}

	virtual void operator () (osg::Uniform* u, osg::NodeVisitor*)
	{
		osg::Vec3 eye;
		osg::Vec3 center;
		osg::Vec3 up;
		mCamera->getViewMatrixAsLookAt(eye,center,up);

		u->set(eye);
	}
protected:
	osg::Camera* mCamera;
};

osg::TextureCubeMap* createRenderTexture(int tex_width, int tex_height, bool isDepth = false) {
    // create a float cubemap texture to render into
    osg::ref_ptr<osg::TextureCubeMap> texture = new osg::TextureCubeMap;
    texture->setTextureSize(tex_width, tex_height);
    texture->setFilter(osg::TextureCubeMap::MIN_FILTER,osg::TextureCubeMap::LINEAR);
    texture->setFilter(osg::TextureCubeMap::MAG_FILTER,osg::TextureCubeMap::LINEAR);
    texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
    texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
    texture->setWrap(osg::Texture::WRAP_R, osg::Texture::CLAMP_TO_EDGE);
    texture->setSourceType(GL_FLOAT);

    if (isDepth) {
        texture->setInternalFormat(GL_DEPTH_COMPONENT32F);
        texture->setSourceFormat(GL_DEPTH_COMPONENT);
    } else {
        texture->setInternalFormat(GL_RGBA32F);
        texture->setSourceFormat(GL_RGBA);
    }

    return texture.release();
}

osg::Camera* createRTTCamera( osg::Camera::BufferComponent buffer, osg::TextureCubeMap* tex, unsigned int face = 0, unsigned int level = 0) {
    // set the viewport size and clean the background color and buffer
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setViewport(0, 0, tex->getTextureWidth(), tex->getTextureHeight());
    camera->setClearColor(osg::Vec4());
    camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // force the camera to be rendered before the main scene, and use the RTT technique with FBO
    camera->setRenderOrder(osg::Camera::PRE_RENDER);
    camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    camera->attach(buffer, tex, level, face);

    return camera.release();
}

osg::Group* createShadowedScene(osg::Node* reflectedSubgraph, osg::NodePath reflectorNodePath, unsigned int unit, unsigned tex_width, unsigned tex_height, osg::Camera::RenderTargetImplementation renderImplementation, osg::Camera* camera = 0) {
    osg::Group* group = new osg::Group;

    osg::TextureCubeMap* texture  = createRenderTexture(tex_width, tex_height, false);

    // set up the render to texture cameras.
    UpdateCameraAndTexGenCallback::CameraList Cameras;
    for(unsigned int i = 0; i < numTextures; ++i) {
        // create the RTT camera
        osg::Camera* camera = createRTTCamera(osg::Camera::COLOR_BUFFER, texture, i);

        // add subgraph to render
        camera->addChild(reflectedSubgraph);

        group->addChild(camera);

        Cameras.push_back(camera);
    }

    // create the texgen node to project the tex coords onto the subgraph
    osg::TexGenNode* texgenNode = new osg::TexGenNode;
    texgenNode->getTexGen()->setMode(osg::TexGen::REFLECTION_MAP);
    texgenNode->setTextureUnit(unit);
    group->addChild(texgenNode);

    // set the reflected subgraph so that it uses the texture and tex gen settings.
    osg::Node* reflectorNode = reflectorNodePath.front();

    group->addChild(reflectorNode);

    osg::StateSet* ss = reflectorNode->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(unit,texture,osg::StateAttribute::ON|osg::StateAttribute::OVERRIDE);

    osg::Program* program = new osg::Program;
    osg::ref_ptr<osg::Shader> shaderVertex = osg::Shader::readShaderFile(osg::Shader::VERTEX, osgDB::findDataFile(SHADER_PATH_VERT));
    osg::ref_ptr<osg::Shader> shaderFragment = osg::Shader::readShaderFile(osg::Shader::FRAGMENT, osgDB::findDataFile(SHADER_PATH_FRAG));
    program->addShader(shaderFragment);
    program->addShader(shaderVertex);
    ss->setAttributeAndModes( program, osg::StateAttribute::ON );

    ss->addUniform( new osg::Uniform("cubemapTexture", 0) );

    osg::Uniform* u = new osg::Uniform("cameraPos",osg::Vec3());
    u->setUpdateCallback( new UpdateCameraPosUniformCallback( camera ) );
    ss->addUniform( u );

    // add the reflector scene to draw just as normal
    group->addChild(reflectedSubgraph);

    // set an update callback to keep moving the camera and tex gen in the right direction.
    group->setUpdateCallback(new UpdateCameraAndTexGenCallback(reflectorNodePath, Cameras));

    return group;
}


int main() {
    // construct the viewer.
    osgViewer::Viewer viewer;

    unsigned tex_width = 256;
    unsigned tex_height = 256;

    osg::Camera::RenderTargetImplementation renderImplementation = osg::Camera::FRAME_BUFFER_OBJECT;

    osg::ref_ptr<osg::Group> scene = new osg::Group;
    osg::ref_ptr<osg::Group> reflectedSubgraph = _create_scene();
    if (!reflectedSubgraph.valid()) exit(0);

    osg::ref_ptr<osg::Group> reflectedScene = createShadowedScene(
            reflectedSubgraph.get(),
            createReflector(),
            TEXTURE_UNIT_CUBEMAP,
            tex_width,
            tex_height,
            renderImplementation,
            viewer.getCamera());

    scene->addChild(reflectedScene.get());
    viewer.setSceneData(scene.get());
    viewer.setUpViewInWindow(0,0,600,600);
    viewer.run();
}

This is my vertex code:


#version 130

uniform mat4 osg_ViewMatrixInverse;
uniform vec3 cameraPos;

void main() {
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
    mat4 ModelWorld4x4 = osg_ViewMatrixInverse * gl_ModelViewMatrix;
    mat3 ModelWorld3x3 = mat3( ModelWorld4x4 );
    vec4 WorldPos = ModelWorld4x4 *  gl_Vertex;

    vec3 N = normalize( ModelWorld3x3 * gl_Normal );
    vec3 E = normalize( WorldPos.xyz - cameraPos.xyz );
    gl_TexCoord[1].xyz = reflect( E, N );
}

This is my fragment code:


#version 130

uniform samplerCube cubemapTexture;

void main (void) {
    vec3 cubemapColor = texture(cubemapTexture, gl_TexCoord[1].xyz).rgb;
    gl_FragColor = vec4(cubemapColor, 1.0);
}

Since the texture type in the fragment shader is samplerCube and texture() returns the color value of this texture, I am confused about how to get the normal from this available information. I would also prefer the normal data of reflected objects to be stored in a single channel.

Thanks in advance.

Haven’t read all yet, but at first glance, your fragment shader looks suspicious. I was expecting to see it writing to several targets, not only a single one.
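
Also, if I read your OSG code right, the six RTT cameras draw reflectedSubgraph with the fixed pipeline, so the cubemap can only ever contain plain colors; texture() on the samplerCube just reads back whatever was written into it. To get normals out, the subgraph rendered by those cameras needs its own program that writes normals. An untested sketch of that pass:

Vertex:

#version 130

uniform mat4 osg_ViewMatrixInverse;

out vec3 worldNormal;

void main() {
    gl_Position = ftransform();
    mat3 modelWorld = mat3(osg_ViewMatrixInverse * gl_ModelViewMatrix);
    worldNormal = normalize(modelWorld * gl_Normal);
}

Fragment:

#version 130

in vec3 worldNormal;

void main() {
    // with the GL_RGBA32F cubemap you already create, the normal can be
    // stored unencoded; remap to [0,1] only for fixed-point formats
    gl_FragColor = vec4(normalize(worldNormal), 1.0);
}

Your reflector shader then samples the cubemap exactly as it does now, but the .rgb it gets back is the normal of the reflected surface rather than its color.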

Initially I tried to display one target at a time. In the end, I will combine all the targets in the fragment shader.

What you’ve posted is a scene graph and some shaders. What we need to see instead of the scene graph is the GL call trace that’s produced by running all this. In particular the parts setting up state for and rendering to your FBO with the depth texture.

Without that, your best bet is to search the OpenSceneGraph sample code for an example of how to render to a depth texture with OSG and/or posting a question about how to do render-to-depth-texture to the osg-users forum. The trick is knowing how to use OpenSceneGraph to get the GL state setup properly for rendering.

[QUOTE=Dark Photon;1290024]What you’ve posted is a scene graph and some shaders. What we need to see instead of the scene graph is the GL call trace that’s produced by running all this. In particular the parts setting up state for and rendering to your FBO with the depth texture.

Without that, your best bet is to search the OpenSceneGraph sample code for an example of how to render to a depth texture with OSG and/or posting a question about how to do render-to-depth-texture to the osg-users forum. The trick is knowing how to use OpenSceneGraph to get the GL state setup properly for rendering.[/QUOTE]

Hi Dark Photon,

can you suggest a debugging tool (e.g. GDB or Valgrind) to capture the GL call trace?

This page contains some: Debugging Tools (OpenGL Wiki).

I’ve personally used BuGLe on Linux, and apitrace on Windows for this. GLIntercept is another one for Windows.

I’ve also used gDEBugger (now CodeXL) on Linux for this. For the GL call trace though, it produces an HTML file containing the full call trace with inline links to resources such as textures. This can be pretty useful, but it requires another step to convert it into a plain-text dump. I can’t recall if there was an option to produce the plain-text GL call trace directly.
