Deferred shading // update

I want to switch over to deferred shading. I have set up my fat buffer (G-buffer), but I'm not sure what exactly I have to store and in which way. I guess I need at least diffuse color, normal and depth (to reconstruct the original vertex position).

  • How to store the normal? Should I simply pass the vertex normal as a varying to the fragment shader, or should I transform it into view space or tangent space first?
  • How to store depth? Should I use a depth texture, or is it sufficient to use the alpha channel of one of the other textures?
  • How to restore the original vertex position?

Thanks!

Hi!

I would use 16-bit floating point render targets for all of them.

You could simply store the vertex normal in world space and do the lighting in the deferred pass in world space as well.

For depth I would not use a depth texture (I don't even know if you can easily read from them). You can simply use the alpha channel of the texture where you store the normal, for instance.

You could store your vertex position as three floats, but that is not really necessary, as it is possible to recover the original vertex position from depth only.
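
For illustration, a G-buffer write pass along those lines might look roughly like this (only a sketch, not code from the thread; the uniform/varying names are made up and it assumes two RGBA16F color attachments bound as MRT outputs):

// Hypothetical G-buffer fragment shader (sketch)
varying vec3 v_WorldNormal;   // normal transformed to world space in the vertex shader
varying vec4 v_ViewPosition;  // gl_ModelViewMatrix * gl_Vertex from the vertex shader
varying vec2 v_TexCoord;
uniform sampler2D u_DiffuseMap;

void main()
{
    // RT0: diffuse color (alpha left free, e.g. for specular intensity)
    gl_FragData[0] = vec4( texture2D( u_DiffuseMap, v_TexCoord ).rgb, 1.0 );
    // RT1: world-space normal in RGB, linear eye-space depth in A
    gl_FragData[1] = vec4( normalize( v_WorldNormal ), -v_ViewPosition.z );
}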


“it is possible to recover the original vertex position from depth only.”
Yeah, but how? :slight_smile:

You could probably look it up here: http://forum.beyond3d.com/showthread.php?t=37614

He just reconstructs the world position using the inverse of the projection matrix and the fragment position on screen.

Thanks for the link, but I have to admit that I can't quite follow. What would vpos.xy be in GLSL? gl_FragCoord.xy?

I guess :slight_smile: You'll have to look it up in the spec though; I'm not sure how DX and OpenGL agree on device coordinates and depth representation (the w coordinate in particular).

It's simpler to calculate the distance from the Z value. The direction is already known from the pixel position (if the bounding volume of a light is drawn).

Could you be more specific, please?

You can always store the world coordinates in a buffer. This technique was used in S.T.A.L.K.E.R. (there's an excellent article about it in GPU Gems 2).

… and the story continues in GPU Gems 3.

P.S. Great book, guys :eek:

Yeah, I read that article, but they're passing the position as a whole. What I'd like to do is reconstruct it from depth only. At the moment I need three textures to store position, normal and diffuse color, and that's just too much. I can compute normal.z from normal.xy, so I could drop that… if I could drop position.xy as well, then it would all fit into two textures.
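
For what it's worth, a two-texture layout along those lines could be unpacked like this in the lighting pass (just a sketch with made-up names; reconstructing normal.z this way assumes the sign of z is known, e.g. view-space normals facing the camera):

// Hypothetical unpacking of a two-target G-buffer (sketch, assumed layout:
// RT0 = diffuse.rgb + linear depth in .a, RT1 = normal.xy in .rg)
uniform sampler2D u_GBuffer0;
uniform sampler2D u_GBuffer1;
varying vec2 v_Coordinates;

void main()
{
    vec4 g0      = texture2D( u_GBuffer0, v_Coordinates );
    vec3 diffuse = g0.rgb;
    float depth  = g0.a;                                     // linear eye-space distance
    vec2 nxy     = texture2D( u_GBuffer1, v_Coordinates ).rg;
    // assumes the stored normals point towards the camera (z >= 0)
    vec3 normal  = vec3( nxy, sqrt( max( 0.0, 1.0 - dot( nxy, nxy ) ) ) );
    // ... reconstruct the position from depth here if needed ...
    // placeholder lighting with a fixed view-space light direction
    gl_FragColor = vec4( diffuse * max( dot( normal, vec3( 0.0, 0.0, 1.0 ) ), 0.0 ), 1.0 );
}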

Take a look at gluUnProject. That function does exactly what you want to do; you only need to implement it in a shader.

For example here:
http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/glu/unproject.html

Jan.
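
For reference, a GLSL port of that idea might look roughly like this (a sketch under assumptions: the names are hypothetical, u_InverseViewProjection holds the inverse of projection * view, and u_DepthTexture contains the window-space depth):

// Hypothetical gluUnProject-style reconstruction in a full-screen pass (sketch)
uniform sampler2D u_DepthTexture;        // window-space depth of the scene
uniform mat4 u_InverseViewProjection;    // inverse of (projection * view)
uniform vec4 u_Viewport;                 // (x, y, width, height)
varying vec2 v_Coordinates;              // texture coordinates of the full-screen pass

void main()
{
    float depth = texture2D( u_DepthTexture, v_Coordinates ).r;
    // window coordinates -> normalized device coordinates in [-1, 1]
    vec4 ndc;
    ndc.xy = 2.0 * ( gl_FragCoord.xy - u_Viewport.xy ) / u_Viewport.zw - 1.0;
    ndc.z  = 2.0 * depth - 1.0;
    ndc.w  = 1.0;
    // unproject and divide by w (gluUnProject does the same internally)
    vec4 world = u_InverseViewProjection * ndc;
    vec3 worldPosition = world.xyz / world.w;
    // ... light using worldPosition ...
    gl_FragColor = vec4( worldPosition, 1.0 );
}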

// linear eye-space distance reconstructed from the stored depth value
float Z = DepthParameter.y / (DepthParameter.x - texture2DRect(g_depth, gl_FragCoord.xy).r);
// scale the interpolated ray (unpro) so that its depth matches Z
vec3 ModelView = vec3(unpro.xy / unpro.z * Z, Z);

Try that to recalculate the modelview position. unpro is the modelview position of a pixel of the light's bounding volume…

@oc2k1: What would DepthParameter.x/.y be? And how and why do I get a pixel of the light's bounding volume?

Thanks guys! Don't lose patience, please :stuck_out_tongue:

DepthParameter.x/.y are two uniform values that are required to calculate the distance from the depth buffer value; both depend on the near and far planes. For more information read this:
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

unpro.xy/unpro.z is the direction to the pixel. It's a varying that has to be filled with the view-space (modelview) position in the vertex shader.
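
In case it helps, values consistent with that snippet could be set up like this (purely an assumption based on the standard OpenGL depth mapping to [0,1]; the u_ZNear/u_ZFar names are not from the thread):

// Hypothetical DepthParameter setup (sketch)
uniform float u_ZNear;   // near plane distance of the perspective projection
uniform float u_ZFar;    // far plane distance

void main()
{
    // With these two values,
    //   Z = DepthParameter.y / ( DepthParameter.x - storedDepth )
    // yields the linear eye-space distance for a window-space depth in [0,1].
    vec2 DepthParameter = vec2( u_ZFar / ( u_ZFar - u_ZNear ),
                                u_ZFar * u_ZNear / ( u_ZFar - u_ZNear ) );
    // ...
}

// unpro itself would be a varying written in the vertex shader of the light
// volume, e.g. something like:  unpro = ( gl_ModelViewMatrix * gl_Vertex ).xyz;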

I still didn't get it to work :stuck_out_tongue:

vec4 viewport = vec4( 0.0, 0.0, 1024.0, 768.0 );
float z = texture2D( u_DepthTexture, v_Coordinates ).r;

// window coordinates -> normalized device coordinates in [-1, 1]
vec4 ndc;
ndc.x = 2.0 * ( gl_FragCoord.x - viewport.x ) / viewport.z - 1.0;
ndc.y = 2.0 * ( gl_FragCoord.y - viewport.y ) / viewport.w - 1.0;
ndc.z = 2.0 * z - 1.0;
ndc.w = 1.0;

// unproject, then divide by w
vec4 unprojected = u_Matrix * ndc;
vec3 Position = unprojected.xyz / unprojected.w;

u_Matrix is the inverse of view * projection.

I have two projection matrices: a perspective one (when rendering the geometry) and an orthographic one (when rendering the full-screen quads). I need to use the perspective one here, right?

Sure… but you should use my code snippet, because it's much faster. And rendering full-screen quads isn't optimal for most light sources…

I would, but I'd first have to get unpro.x/.y and DepthParameter.x/.y right, which means more potential sources of error. I don't care much about speed at the moment, as long as it works :slight_smile:

Does the code look OK to you? The position is still dependent on the camera's position/orientation.

OK, I gave another approach a try; a professor at university suggested the following:

initial pass, vs:

vViewPosition = gl_ModelViewMatrix * gl_Vertex;   // vViewPosition is a varying vec4

fs:

// varyings are read-only in the fragment shader, so work on a copy
vec4 viewPos = vViewPosition / vViewPosition.w;
float Distance = -viewPos.z;   // positive eye-space distance

// then I store Distance in the MRT

deferred lighting pass, fs:

// Depth = distance computed in initial pass, read from MRT

vec3 ray;
// reciprocal of tan(half the vertical FOV); 22.5 degrees = half of a 45 degree FOV
float invTanHalfFOV = 1.0 / tan( radians( 22.5 ) );
// view ray through this pixel, built from NDC coordinates in [-1, 1]
ray = vec3( ( ( gl_FragCoord.xy / vec2( 1024.0, 768.0 ) ) - 0.5 ) * 2.0, -invTanHalfFOV );
ray /= invTanHalfFOV;   // now ray.z == -1, so ray * Depth has view-space depth -Depth

Position = vec4( ray * Depth, 1.0 );

y and z are correct, but x is always a bit too small. It cannot be due to the depth stored in the MRT, as only one component is wrong, so there's probably something wrong in the way I calculate the ray. I checked the FOV and the resolution… Do you have any clue what might be wrong? Thanks :slight_smile:
