Reading the depth buffer into texture memory



dr4cula
07-24-2013, 10:44 AM
Hello,

I'm trying to get a simple shadowmapping demo up and running but I've run into a bit of a problem. I need to translate to the light's position, save the depth values into texture memory and finally generate texture coordinates based on the depth values. Now, my current code isn't working so I've been debugging it the whole day and I suspect the problem is with the transfer of depth buffer info into a texture.

Here's my code:

init:

glGenTextures(1, &shadowmap_);
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

render:


// position the light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos_);

// set up the projection parameters from the light's POV
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// translate to the light's position
gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

// render the scene to get the depth information
renderSceneElements();
glPopMatrix();

// end the projection modification
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

// copy over the depth information
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

// render a simple quad with the shadowmap for debugging
glPushMatrix();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTranslatef(3.0f, 2.0f, 5.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 0.0f);
glVertex3f(3.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 1.0f);
glVertex3f(3.0f, 3.0f, 0.0f);

glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 3.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();

The result is a white quad :/

renderSceneElements() contains a bunch of VAOs.

Also, I know there's a way to copy over the depth buffer using FBOs. I want to implement that afterwards but first I'm curious as to what on Earth I'm doing wrong here.

Thanks in advance!

GClements
07-24-2013, 03:45 PM
I'm trying to get a simple shadowmapping demo up and running but I've run into a bit of a problem. I need to translate to the light's position, save the depth values into texture memory and finally generate texture coordinates based on the depth values. Now, my current code isn't working so I've been debugging it the whole day and I suspect the problem is with the transfer of depth buffer info into a texture.


Why do you think that the problem is with the transfer? If it's because it's a white quad, have you analysed what the expected range of values should be? The mapping between Z and depth is highly non-linear, particularly if the near plane is too close.
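For a quick sanity check you can compute what a gluPerspective-style projection actually writes to the depth buffer (a minimal C sketch, assuming the default [0,1] depth range; n and f stand for your near/far planes, z for the eye-space distance to a surface):

float windowDepth(float n, float f, float z)
{
    /* window-space depth of a point z units in front of the camera,
       derived from the standard perspective projection matrix */
    float ndc = (f + n) / (f - n) - 2.0f * f * n / ((f - n) * z);
    return 0.5f * ndc + 0.5f;   /* NDC [-1,1] -> depth [0,1] */
}

With a small near plane, the result saturates towards 1.0 within a few units, so a raw depth map usually looks almost uniformly white.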

Also: try storing your own data in the depth texture, to make sure that your debug code is working.

dr4cula
07-25-2013, 12:20 PM
Thanks for your reply!

I decided to use FBOs: I can actually see the shadowmap when I map it onto a quad (it's faint, but it's visible at least). However, my problems don't end there: now the entire scene is black (except for the textured quad and the brownish background defined by glClearColor()). I'm guessing my texture coordinate generation is wrong, but I'm not sure. Any help would be greatly appreciated!

new init:

glGenTextures(1, &shadowmap_);
glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

//glGenRenderbuffers(1, &renderbuffer_);
//glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer_);
//glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, 512, 512);

glGenFramebuffers(1, &framebuffer_);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
//glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbuffer_);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowmap_, 0);

new render:

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
//glDrawBuffer(GL_NONE);
//glReadBuffer(GL_NONE);

glClearColor(0.5, 0.2, 0.1, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// position the light
glLightfv(GL_LIGHT0, GL_POSITION, lightPos_);

// set up the projection parameters from the light's POV
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
// translate to the light's position
gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

// render the scene to get the depth information
renderSceneElements();
glPopMatrix();

// end the projection modification
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

glBindFramebuffer(GL_FRAMEBUFFER, 0);

// copy over the depth information
//glBindTexture(GL_TEXTURE_2D, shadowmap_);
//glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

// matrix defining the planes for S, Q, R, T components for texture generation
float planeMatrix[16];
glPushMatrix();
glLoadIdentity();
// compensate for the eye-coordinate to texture coordinate conversion: [-1,1] to [0,1]
glTranslatef(0.5f, 0.5f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);

// do the perspective projection and translate to the light's position
gluPerspective(lightFOV_, lightAspect_, lightNear_, lightFar_);
gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2], -1.0f, 0.0f, 5.0f, 0.0f, 1.0f, 0.0f);

glGetFloatv(GL_MODELVIEW_MATRIX, planeMatrix);
glPopMatrix();

// go from OpenGL's column-major to row-major matrix form
transposeMatrix16(planeMatrix);

// set up the type for texture generation
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);

// data for texture generation
glTexGenfv(GL_S, GL_OBJECT_PLANE, &planeMatrix[0]);
glTexGenfv(GL_T, GL_OBJECT_PLANE, &planeMatrix[4]);
glTexGenfv(GL_R, GL_OBJECT_PLANE, &planeMatrix[8]);
glTexGenfv(GL_Q, GL_OBJECT_PLANE, &planeMatrix[12]);

glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);
glEnable(GL_TEXTURE_GEN_Q);


glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, shadowmap_);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);

renderSceneElements();

glDisable(GL_LIGHTING);
glDisable(GL_LIGHT0);

glDisable(GL_TEXTURE_2D);

glDisable(GL_TEXTURE_GEN_Q);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_S);

glPushMatrix();
glEnable(GL_TEXTURE_2D);
//glBindTexture(GL_TEXTURE_2D, shadowmap_);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTranslatef(3.0f, 2.0f, 5.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 0.0f);
glVertex3f(3.0f, 0.0f, 0.0f);

glTexCoord2f(1.0f, 1.0f);
glVertex3f(3.0f, 3.0f, 0.0f);

glTexCoord2f(0.0f, 1.0f);
glVertex3f(0.0f, 3.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();

Note that I had to explicitly bind the texture - I thought it would be automatically associated with the framebuffer, and hence that enabling texturing would cause that texture to be used? If I don't have that binding there, OpenGL selects my previously used texture.

Thanks in advance!

GClements
07-25-2013, 01:12 PM
// compensate for the eye-coordinate to texture coordinate conversion: [-1,1] to [0,1]
glTranslatef(0.5f, 0.5f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);


You need to perform the same signed-to-unsigned conversion for the Z coordinate, as the depth values in the texture are returned as 0..1.
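In other words, the bias part should look something like this (same spot as the glTranslatef()/glScalef() pair in your code):

// map clip-space [-1,1] to texture-space [0,1] in x, y AND z
glTranslatef(0.5f, 0.5f, 0.5f);
glScalef(0.5f, 0.5f, 0.5f);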

Also, you should bind a renderbuffer (or texture) to GL_COLOR_ATTACHMENT0 even if you're not using it.



Note that I had to explicitly bind the texture - I thought it would be automatically related to the framebuffer and hence enabling texturing would have caused that texture to be used?
No. glFramebufferTexture2D() causes rendered output to be directed to the texture (or rather, a specific mipmap level of it). It doesn't associate the texture with a texture unit. In fact, having a texture used as both a source and destination simultaneously is undefined.

Alfonse Reinheart
07-25-2013, 01:43 PM
In fact, having a texture used as both a source and destination simultaneously is undefined.

Only if you sample from the same image as you're writing to. Sampling from one mipmap level and writing to another is fine.

GClements
07-25-2013, 04:23 PM
Only if you sample from the same image as you're writing to. Sampling from one mipmap level and writing to another is fine.
That depends upon how you define "sampling". My understanding of the 4.3 specification (8.14.2.1) is that the behaviour is undefined if the attached level is within the range of levels available for reading, regardless of which levels are actually read. So if mipmapping is disabled and the attached level is GL_TEXTURE_BASE_LEVEL, or mipmapping is enabled and the attached level is within the range GL_TEXTURE_BASE_LEVEL to GL_TEXTURE_MAX_LEVEL, the behaviour is undefined.

Alfonse Reinheart
07-25-2013, 06:20 PM
That depends upon how you define "sampling". My understanding of the 4.3 specification (8.14.2.1) is that the behaviour is undefined if the attached level is within the range of levels available for reading, regardless of which levels are actually read. So if mipmapping is disabled and the attached level is GL_TEXTURE_BASE_LEVEL, or mipmapping is enabled and the attached level is within the range GL_TEXTURE_BASE_LEVEL to GL_TEXTURE_MAX_LEVEL, the behaviour is undefined.

You know, while I was looking at this part of the spec, something occurred to me. They never updated the feedback language to handle view textures. Just look at the way it keeps talking about "texture object T"; it never takes into account the possibility of "texture object T" having an image attached and reading from "texture object VT", which is a view of T.

I was behind on my bug quota, not having submitted one since, well, yesterday, so I fired that one off (http://www.khronos.org/bugzilla/show_bug.cgi?id=922).

But in any case, yes, you must actively prevent sampling being at all possible from any image attached to the framebuffer in order to not hit undefined behavior. That doesn't mean you can't sample from the same texture you're rendering to. You just need to know how to do it correctly.
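For instance, one pattern that stays defined (a sketch; tex is a placeholder name, and this relies on clamping the readable level range as discussed above):

// sample only from level 0 of tex...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
// ...while rendering into level 1 of the same texture
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 1);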

Though to be honest, it'd be great if the rules were a bit more reasonable. The way it's specified now, you can't even access a different array layer in the same mipmap. In fact, it's undefined behavior even if you can't render to the attached image (because it's not in the glDrawBuffers list).

Though I'll grant that the last may be a performance optimization. To allow that to work, changing the glDrawBuffers set would have to clear the framebuffer cache. And that would kill lots of optimization possibilities.

dr4cula
07-26-2013, 07:42 AM
You need to perform the same signed-to-unsigned conversion for the Z coordinate, as the depth values in the texture are returned as 0..1.

Also, you should bind a renderbuffer (or texture) to GL_COLOR_ATTACHMENT0 even if you're not using it.


The Red Book suggested translating only in the x and y directions, which I found a bit odd. I changed it to translate in the z direction as well and added the recommended renderbuffer. However, everything is still black in the scene :/



No. glFramebufferTexture2D() causes rendered output to be directed to the texture (or rather, a specific mipmap level of it). It doesn't associate the texture with a texture unit. In fact, having a texture used as both a source and destination simultaneously is undefined.

Ah, thanks for clarifying!

I've uploaded the code to pastebin since the forum editing kinda sucks: http://pastebin.com/G1jT0FfR

To be honest, I'm really confused as to how OpenGL will know how to map the shadowmap to the scene if, for example, I can't use it for texture mapping a quad. I suppose it's got something to do with the GL_COMPARE_R_TO_TEXTURE but I'm a bit confused :D Thoughts anyone?

Thanks in advance!

GClements
07-26-2013, 10:18 AM
However, everything is still black in the scene :/
Are you performing a glClear() for the physical framebuffer? I don't see it in the code, but that may just be because it's not part of the render function.

You need to call glViewport(0, 0, 512, 512) for the FBO (then set it back to cover the window for the second pass).
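I.e. something like (a sketch; windowWidth_/windowHeight_ stand for whatever your window size actually is):

glViewport(0, 0, 512, 512);                    // first pass: match the depth texture
// ... render the depth map into the FBO ...
glViewport(0, 0, windowWidth_, windowHeight_); // second pass: back to the window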

The depth is being offset by 0.5 but still scaled by 1.0. All six values (the three glTranslatef() components and the three glScalef() factors) should be 0.5.


To be honest, I'm really confused as to how OpenGL will know how to map the shadowmap to the scene if, for example, I can't use it for texture mapping a quad. I suppose it's got something to do with the GL_COMPARE_R_TO_TEXTURE but I'm a bit confused
When GL_TEXTURE_COMPARE_MODE is GL_COMPARE_R_TO_TEXTURE, the first two texture coordinates are used to sample the texture, and the third texture coordinate is compared to the sampled value using GL_TEXTURE_COMPARE_FUNC. If the test passes, the luminance (R,G,B), intensity (R,G,B,A) or alpha are one, otherwise they're zero.
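Conceptually, each lookup then boils down to something like this (a plain C sketch of the idea, not actual GL code; it ignores filtering and assumes the projective divide by q has already happened):

/* 'map' is the 512x512 depth image from the first pass; s, t, r are
   the generated texture coordinates after the divide by q */
float shadowCompare(const float *map, float s, float t, float r)
{
    int x = (int)(s * 511.0f);   /* nearest texel */
    int y = (int)(t * 511.0f);
    float stored = map[y * 512 + x];
    return (r <= stored) ? 1.0f : 0.0f;  /* GL_LEQUAL: 1 = lit, 0 = shadowed */
}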

dr4cula
07-27-2013, 05:55 AM
Are you performing a glClear() for the physical framebuffer? I don't see it in the code, but that may just be because it's not part of the render function.

You need to call glViewport(0, 0, 512, 512) for the FBO (then set it back to cover the window for the second pass).

The depth is being offset by 0.5 but still scaled by 1.0. All six values (the three glTranslatef() components and the three glScalef() factors) should be 0.5.

Thanks for your reply once again! My viewport and window size are both 512x512 so the calls to glViewport() should be redundant. I added them in (just in case) and nothing changed (as expected). Also changed the scale but nothing. glClear() is called on the physical buffer before entering the rendering state of this particular scene but just in case, I added another clear after switching to it for the second pass.

New render: http://pastebin.com/MHGDxsSf

Any other ideas? :D

Thanks in advance!

GClements
07-27-2013, 09:24 AM
Any other ideas?


You'll need to post more complete code. There's nothing inherently wrong with the code you've posted, but it's missing a few key pieces, e.g. the setup of the camera projection, and renderSceneElements().

Here is a working example based upon the parts which you posted:
http://pastebin.com/JQzGr1Rk

dr4cula
07-27-2013, 11:09 AM
You'll need to post more complete code. There's nothing inherently wrong with the code you've posted, but it's missing a few key pieces, e.g. the setup of the camera projection, and renderSceneElements().

Here is a working example based upon the parts which you posted:
http://pastebin.com/JQzGr1Rk

Hm... I changed my renderSceneElements() to the following for testing purposes and it almost seems to work (screenshot: http://tinypic.com/view.php?pic=14jw8xi&s=5)


glPushMatrix();
glBegin(GL_QUADS);
glNormal3f(0.0f, 1.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 10.0f);
glVertex3f(20.0f, 0.0f, 10.0f);
glVertex3f(20.0f, 0.0f, 0.0f);
glEnd();

glTranslatef(10.0f, 0.0f, 5.0f);
glBegin(GL_QUADS);
glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 2.0f);
glVertex3f(0.0f, 2.0f, 2.0f);
glVertex3f(0.0f, 2.0f, 0.0f);
glEnd();
glPopMatrix();

As for the camera's projection stuff, I just use gluLookAt() from a set of calculated vectors based on the camera's pitch, yaw and roll. I've got a collection of scenes that I can switch between and this is where the camera's projection is set up:


glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity(); // load Identity Matrix

// position the camera
Vector3 position = cam_.getPositionVec();
Vector3 lookAt = cam_.getLookAtVec();
Vector3 up = cam_.getUpVec();
gluLookAt(position.x_, position.y_, position.z_, lookAt.x_, lookAt.y_, lookAt.z_, up.x_, up.y_, up.z_); //Where we are, What we look at, and which way is up


// check for polygon mode
if(wireframe_) {
glPolygonMode(GL_FRONT, GL_LINE);
}
else {
glPolygonMode(GL_FRONT, GL_FILL);
}

// render the currently selected scene
p_currentScene_->render();

And that's it. From there it goes into the render code that I've posted.

Thank you so much for your help already! I'm just completely stumped as to why I'm getting these odd results...

EDIT: realized I forgot to post the overall projection stuff (this is set up only once and called only again if the window is resized):

void WindowHandler::ResizeGLWindow(int width, int height) {
if (height==0) { // Prevent A Divide By Zero error
height=1; // Make the Height Equal One
}

glViewport(0,0,width,height);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();

//calculate aspect ratio
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height, 0.1 ,1500.0f);

glMatrixMode(GL_MODELVIEW);// Select The Modelview Matrix
glLoadIdentity();// Reset The Modelview Matrix
}

EDIT2: Now I'm even more confused. Decided to make the "floor" a bit more detailed and then this happened instead (on the right): http://i42.tinypic.com/zxwbuu.png

I swapped the single big quad with this:


glPushMatrix();
for(int i = 0; i < 20; i++) {
glTranslatef(1.0f, 0.0f, 0.0f);
glPushMatrix();
for(int j = 0; j < 10; j++) {
glBegin(GL_QUADS);
glNormal3f(0.0f, 1.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.0f, 1.0f);
glVertex3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.0f, 0.0f, 0.0f);
glEnd();
glTranslatef(0.0f, 0.0f, 1.0f);
}
glPopMatrix();
}
glPopMatrix();

What on Earth is going on?

GClements
07-27-2013, 12:51 PM
Hm... I changed my renderSceneElements() to the following for testing purposes and it almost seems to work (screenshot: http://tinypic.com/view.php?pic=14jw8xi&s=5)
FWIW, I find that the Z component of the glTranslate() call needs to be a fraction below 0.5 to avoid depth-fighting. I used 0.499 in the example I posted, but the optimum value depends upon the near plane and other factors.
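I.e. something like:

glTranslatef(0.5f, 0.5f, 0.499f);  // a touch below 0.5 to avoid depth-fighting
glScalef(0.5f, 0.5f, 0.5f);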


EDIT2: Now I'm even more confused. Decided to make the "floor" a bit more detailed and then this happened instead (on the right): http://i42.tinypic.com/zxwbuu.png

I swapped the single big quad with this:


glPushMatrix();
for(int i = 0; i < 20; i++) {
glTranslatef(1.0f, 0.0f, 0.0f);

You can't change the model-view matrix when drawing the scene, because you're setting GL_TEXTURE_GEN_MODE to GL_OBJECT_LINEAR, so the texture coordinates are based upon the values passed to glVertex() without any model-view transformation applied, and planeMatrix only includes the "camera" transformations (i.e. those from the gluPerspective() and gluLookAt() calls).

You would need to either switch to GL_EYE_LINEAR (and omit the model-view matrix from the calculation of planeMatrix), or apply every transformation to both the model-view matrix and the texture matrix simultaneously, or update the tex-gen planes whenever you update the vertex transformation, or transform the vertices in the program before passing them to glVertex().

Or you could switch to using shaders, where you get to control the transformations directly.

dr4cula
07-28-2013, 07:54 AM
FWIW, I find that the Z component of the glTranslate() call needs to be a fraction below 0.5 to avoid depth-fighting. I used 0.499 in the example I posted, but the optimum value depends upon the near plane and other factors.


You can't change the model-view matrix when drawing the scene, because you're setting GL_TEXTURE_GEN_MODE to GL_OBJECT_LINEAR, so the texture coordinates are based upon the values passed to glVertex() without any model-view transformation applied, and planeMatrix only includes the "camera" transformations (i.e. those from the gluPerspective() and gluLookAt() calls).

You would need to either switch to GL_EYE_LINEAR (and omit the model-view matrix from the calculation of planeMatrix), or apply every transformation to both the model-view matrix and the texture matrix simultaneously, or update the tex-gen planes whenever you update the vertex transformation, or transform the vertices in the program before passing them to glVertex().

Or you could switch to using shaders, where you get to control the transformations directly.

I tried switching to GL_EYE_LINEAR, however if I omit the gluLookAt() (which is the model-view part of the planeMatrix) then I'm not getting the results I'm looking for. If I keep that there then it sorta looks OK (the shadow is translated away from the object for whatever reason). I tried this version with my full scene as well and the shadowmap was all over the place there. If I can get the shadowmap working for these 2 panels then I can start looking into the construction of the renderSceneElements() more critically, but as it stands now, I'm still not happy with the two-planes result: http://tinypic.com/view.php?pic=16ive6e&s=5

Did I even understand your idea with the GL_EYE_LINEAR correctly? The reason I'm going with this solution is that it seems the easiest out of the other options in the fixed-function pipeline OpenGL.

Thank you so much for your help in advance!

EDIT: So I tried adding 2 cubes to the scene (GL_EYE_LINEAR with gluLookAt() in planeMatrix) and if I had only 1 cube, it looked OK. Once I added another, one of the following happened: 1) the 2nd cube didn't cast a shadow, 2) the 2nd cube cast a massive shadow. Here's what I mean: http://i40.tinypic.com/smghtf.png

Thanks in advance!

GClements
07-28-2013, 10:55 AM
I tried switching to GL_EYE_LINEAR, however if I omit the gluLookAt() (which is the model-view part of the planeMatrix) then I'm not getting the results I'm looking for.
My mistake. planeMatrix needs to contain the part of the model-view transformation which is specific to the light, but not subsequent transformations which are applied to the object.

Did I even understand your idea with the GL_EYE_LINEAR correctly? The reason I'm going with this solution is that it seems the easiest out of the other options in the fixed-function pipeline OpenGL.
I've posted an updated version (http://pastebin.com/zZ2EDe1S) which uses eye-linear coordinates. The object can be moved/rotated using shift/control and the arrow/page keys.

The eye planes are transformed by the inverse-transpose of the model-view matrix at the point that glTexGen() is called, so the model-view matrix needs to be set to the identity matrix at that point (at least, it matters what it's set to; see below).

The bottom line is that the texture coordinates actually used for the lookup in the second pass need to exactly match (other than the -1..+1 -> 0..1 conversion) the clip coordinates from the first pass. So any transformations which are applied to the vertex coordinates in the first pass must also be applied to the texture coordinates in the second pass.

"Constant" transformations (i.e. the perspective and look-at transformations which define the view) are dealt with by planeMatrix, but dynamic transformations (transforming objects within the scene) also need to be included, and using eye-linear texture generation does that.

I think that if you want to apply a gluLookAt() for the camera, you will need to have that transformation in place for the glTexGen() calls, so that using eye-linear coordinates doesn't result in it being applied twice.
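Concretely, the order that should work with your setup is roughly this (a sketch using the names from your earlier posts; planeMatrix here is assumed to hold only bias * light projection * light look-at, transposed as before):

// the camera transform (your gluLookAt() for the camera) is already
// on GL_MODELVIEW at this point. GL stores each eye plane multiplied
// by the inverse of the model-view matrix current at glTexGen() time,
// so the camera transform cancels out and isn't applied twice.
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, &planeMatrix[0]);
glTexGenfv(GL_T, GL_EYE_PLANE, &planeMatrix[4]);
glTexGenfv(GL_R, GL_EYE_PLANE, &planeMatrix[8]);
glTexGenfv(GL_Q, GL_EYE_PLANE, &planeMatrix[12]);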

dr4cula
07-29-2013, 06:00 AM
Ok, so the only difference I could find between our codes was the light FOV. As soon as I changed it to 90.0, the massively long shadows disappeared. This is why I think it was happening: due to the shadowmapping method, everything behind an object from the light's limited POV is shadowed, hence the long shadows from different angles than the light's original angle. Kinda hard to explain what I mean :P But yeh, once I added the call to glLoadIdentity(), the shadow map got stuck in the camera and floated around with it. But like you said, the gluLookAt() for the camera needs to be in place before glTexGen() calls (which it was anyways due to the program's setup) so all I had to do was fix the angle (besides GL_EYE_LINEAR mapping).

Now, I thought I was done with the problems but ran into 2 odd artifacts:

1) shadows seem to be translated a bit from the object that casts them: changing the light's near plane changes this. Going from 0.1 to 1.0 gives perfect results distance wise but produces another problem: incorrect texture mapping on some objects. Here's what I mean: http://i40.tinypic.com/an1un9.png
Also, this is independent of the distance to the light source as there's another cube in the scene further back that has the same problem with the top face texture.

2) there are weird mappings behind the light: http://i44.tinypic.com/2qnnpjm.png
One way I can think of to remove those mappings is to disable texturing before rendering the back wall but that doesn't seem like the best idea.

I can't thank you enough for your invaluable insight! Hope you can help me cross the finish line! :)

GClements
07-29-2013, 08:20 AM
Ok, so the only difference I could find between our codes was the light FOV. As soon as I changed it to 90.0, the massively long shadows disappeared. This is why I think it was happening: due to the shadowmapping method, everything behind an object from the light's limited POV is shadowed, hence the long shadows from different angles than the light's original angle.
The FoV angle only affects how much of the scene gets rendered. So long as both the shadow caster and shadow target fit within the frustum used in the first pass, the FoV angle won't have any effect.
However, if any part of the scene lies outside of the frustum, then you'll be getting depth values based upon the texture's wrap mode, which will invariably produce the wrong results.
Essentially, the frustum used for rendering the depth map needs to encompass all objects which can cast or receive a shadow which is within the camera's frustum. For a simple scene, you can just set the light's frustum so that it bounds the scene. For more complex scenes, it's common to use multiple depth maps, with one covering the entire region of interest and another covering only the areas closer to the viewpoint. The former is used as a fall-back if the texture coordinates for the latter are out of range.
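For the simple case, fitting the light's frustum around a bounding sphere of the scene can be done roughly like this (a sketch; sceneCenter/sceneRadius are assumed to be known, it needs <math.h>, and it breaks down if the light is inside the sphere):

float dx = lightPos_[0] - sceneCenter[0];
float dy = lightPos_[1] - sceneCenter[1];
float dz = lightPos_[2] - sceneCenter[2];
float dist = sqrtf(dx*dx + dy*dy + dz*dz);
// half-angle that makes the frustum tangent to the bounding sphere
float fovDeg = 2.0f * asinf(sceneRadius / dist) * 180.0f / 3.14159265f;
// near plane must stay positive; clamp it if the light is very close
gluPerspective(fovDeg, 1.0f, dist - sceneRadius, dist + sceneRadius);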



Now, I thought I was done with the problems but ran into 2 odd artifacts:

1) shadows seem to be translated a bit from the object that casts them: changing the light's near plane changes this.
Offsetting the Z translation to avoid depth fighting can cause this.


Going from 0.1 to 1.0 gives perfect results distance wise
The ratio of the far plane to the near plane determines the degree of non-linearity in the depth buffer. Too high a ratio will result in nearly all of the depth range being used for points close to the near plane, resulting in a loss of depth precision for the rest of the scene. (With the 0.1/1500 near/far planes used earlier in this thread, for example, a point just 10 units away already maps to a depth of roughly 0.99.)

The problem can be avoided by using an orthographic projection for the light (i.e. a directional light rather than a point light), or using a linear depth buffer (which requires shaders).


but produces another problem: incorrect texture mapping on some objects. Here's what I mean: http://i40.tinypic.com/an1un9.png
Also, this is independent of the distance to the light source as there's another cube in the scene further back that has the same problem with the top face texture.
This looks like depth fighting. When using a reciprocal depth buffer, the Z offset has to be tuned based upon the various parameters (light distance, near/far plane distance, scene dimensions, etc).


2) there are weird mappings behind the light: http://i44.tinypic.com/2qnnpjm.png
Anything which is outside of the frame rendered in the first pass will be wrong. If you're using point lights which are "inside" the scene, things get more complex. Using a cube map should be viable, but it requires rendering 6 views for each light, and I don't know whether it can be done without using shaders.
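The depth passes for the six views would look roughly like this (a sketch; faceFbo_ is a hypothetical array of one depth FBO per face, and the lookup side is omitted entirely):

static const float dir[6][3] = { { 1, 0, 0}, {-1, 0, 0}, { 0, 1, 0},
                                 { 0,-1, 0}, { 0, 0, 1}, { 0, 0,-1} };
static const float up[6][3]  = { { 0,-1, 0}, { 0,-1, 0}, { 0, 0, 1},
                                 { 0, 0,-1}, { 0,-1, 0}, { 0,-1, 0} };
for (int face = 0; face < 6; ++face) {
    glBindFramebuffer(GL_FRAMEBUFFER, faceFbo_[face]);
    glClear(GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90.0, 1.0, lightNear_, lightFar_); // 90 deg covers one face
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(lightPos_[0], lightPos_[1], lightPos_[2],
              lightPos_[0] + dir[face][0],
              lightPos_[1] + dir[face][1],
              lightPos_[2] + dir[face][2],
              up[face][0], up[face][1], up[face][2]);
    renderSceneElements();
}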

dr4cula
07-29-2013, 09:05 AM
Thanks for explaining everything in such detail! Really appreciate it.



This looks like depth fighting. When using a reciprocal depth buffer, the Z offset has to be tuned based upon the various parameters (light distance, near/far plane distance, scene dimensions, etc).

Yep, I thought that at first as well, but then I enabled multitexturing and mapped the shadowmap onto texture unit 1 and I'm still getting the same odd pattern: http://i39.tinypic.com/opcjkn.png
The image on the right is the top face of the cube (kinda hard to see but it's there).

EDIT: or actually wait, I was getting z-fighting beforehand with just TU0 as well (0.499 modification)... So what, am I casting shadows on top of each other or?

EDIT2: Nevermind, I moved the light in the y-direction and it works fine now: http://i44.tinypic.com/33ug6rq.png

There aren't enough words to describe how grateful I am for your help GClements: seriously, thank you so much. The internet needs more people like you :D

hlewin
07-29-2013, 09:22 AM
Sorry for interrupting.
Not having done shadow mapping, I've got a question: in the image linked above, the shadow cast from the box seems to jump out of the plane it is projected on. Is the image just tricking my eye?
In nature, would the density of the shadow vary with the distance between the shadowing surface and the light source because of light diffusion?

GClements
07-29-2013, 11:05 AM
In the image linked above
Which one?

the shadow cast from the box seems to jump out of the plane it is projected on. Is the image just tricking my eye?
Possibly, or it might be caused by the depth offset required to avoid depth fighting.


In nature, would the density of the shadow vary with the distance between the shadowing surface and the light source because of light diffusion?
In its simplest form (used here), shadow mapping results in hard shadows, although there are various techniques which can be used to soften them.

hlewin
07-29-2013, 02:03 PM
I mean the first image linked in the last post before mine.
Is this effect generally caused by hard shadows, since there is no softening?
I've read some articles about soft shadows but if I remember right, those did some arbitrary blurring of the shadow edges that didn't take into account the distance between the light and the shadow caster, or the distance between the shadow caster and the shadowed surface.
The thought is that the blurring would have to be based on 1. the light's area/volume, 2. the distance between the light and the shadow caster (those two will allegedly influence the blurring the most) and 3. the distance between the shadow caster and the shadowed surface, as light will diffuse further when crossing a medium.

EDIT: By "jumping out of the plane" I mean the shadow cast onto the larger box. The shadow seems to have a depth on its right side.

GClements
07-29-2013, 03:25 PM
I mean the first image linked in the last post before mine.
I think it's just the shape of the shadow, combined with the fact that the shadow is completely black.


I've read some articles about soft shadows but if I remember right, those did some arbitrary blurring of the shadow edges that didn't take into account the distance between the light and the shadow caster, or the distance between the shadow caster and the shadowed surface.
There are several different types of softening. One is to anti-alias edges by comparing adjacent texels in order to avoid the pixellation (the projected shadow pixels may be much larger than a screen pixel); recent hardware may do this automatically for shadow samplers. Another is to vary the shadow intensity based upon the distance between the caster and the receiver and the distance between the caster and the light (this still gives hard edges but the shadow fades with distance). Yet another is to cast multiple shadows to simulate umbra and penumbra regions.

There are probably others. Shadows can't be done both efficiently and correctly, so there's a lot of research into getting better approximations with reasonable performance.

hlewin
07-29-2013, 03:38 PM
The low resolution of shadow maps may prevent this, but: wouldn't it work to detect the edges of the shadow by fetching a - for simplicity - 3x3 (or better, more) depth-value matrix and determining how far away from the shadow's edge one currently is? This would require knowing the depth value of the current fragment with respect to the light. If no value in the depth matrix is near the fragment's depth, the fragment is far away from the shadow's edge and hence the shadow is at maximal intensity. If at least one value matches, the fragment is near the edge and the blurring parameters have to be taken into account.
Or something like that. I guess I'll have a look at this more thoroughly.

Alfonse Reinheart
07-29-2013, 04:37 PM
Yes, that would make the edges of the shadow fuzzy. But it wouldn't be correct soft shadows.

Soft shadows happen because lights aren't point lights. They have area, and thus an occluding surface can partially block a light. However, the size of the softness of the shadow depends on a number of factors, such as the distance between the surface being potentially shadowed and the potential occluder(s). The farther from the occluder(s) the point on the surface is, the "softer" the shadowing will be. Your method doesn't take that into account. It just samples in a 3x3 matrix of pixels. This will cause shadows to be soft even when the distance to the occluder is small.

There's also the relative size of the light source, as seen from the surface point. Your method comes close to approximating that, but it's doing it from the wrong end. What you want is to have different occlusions, partially offset from one another, for various points within the light source.
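In sketch form (hypothetical names throughout; each sample is a full shadow test against a depth map rendered from a slightly different point on the light's surface):

float visibility = 0.0f;
for (int i = 0; i < NUM_LIGHT_SAMPLES; ++i) {
    /* shadowTestFrom() stands for the depth-map comparison done with
       the map rendered from lightSamplePos[i] */
    visibility += shadowTestFrom(lightSamplePos[i], fragmentPos);
}
visibility /= (float)NUM_LIGHT_SAMPLES;  /* 0 = umbra, 1 = fully lit */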

hlewin
07-29-2013, 05:01 PM
Right. Correctly, there is some kind of blurring whose area is defined by extrapolating the lines between the edge of the shadow caster and the light's outer points onto the plane being shadowed.
But this is the area of varying degrees of light occlusion only in the ideal case, meaning for light traveling through a vacuum.
Or isn't the effect of softer shadows farther away from the occluder due to the diffusion of light traveling through a medium?

Alfonse Reinheart
07-29-2013, 05:11 PM
I didn't mean to imply that those were the only factors in soft shadows. They're just the two biggest contributors. But things like the diffusion of light through the medium deal with things that impact a lot more than just shadows. That's starting to solve global illumination.

Which in more real-time terms, means that this should be covered via the ambient term, or you fake the ambient with a lot of small, weak, non-shadowing lights that you put everywhere. Or some hack of that kind.

GClements
07-29-2013, 07:30 PM
wouldn't it work to detect the edges of the shadow by fetching a - for simplicity - 3x3 (or better, more) depth-value matrix and determining how far away from the shadow's edge one currently is?
Yes, and recent hardware may do this automatically. 8.22.1:


If the value of TEXTURE_MAG_FILTER is not NEAREST, or the value of TEXTURE_MIN_FILTER is not NEAREST or NEAREST_MIPMAP_NEAREST, then r may be computed by comparing more than one depth texture value to the texture reference value. The details of this are implementation-dependent, but r should be a value in the range [0; 1] which is proportional to the number of comparison passes or failures.

Although I suspect that 2x2 might be more likely, as that can re-use the 2x2 gather for bilinear filtering.
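On the CPU, the multi-tap version of the comparison amounts to something like this (a sketch of the idea only; edge clamping is omitted, so x and y must stay at least one texel away from the border):

/* fraction of the 3x3 neighbouring depth-map texels that pass the
   comparison against the fragment's light-space depth r */
float pcf3x3(const float *map, int w, int x, int y, float r)
{
    float passed = 0.0f;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i)
            if (r <= map[(y + j) * w + (x + i)])
                passed += 1.0f;
    return passed / 9.0f;  /* proportional to passes, as in the spec text */
}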

GClements
07-29-2013, 07:42 PM
Or isn't the effect of softer shadows farther away from the occluder due to the diffusion of light traveling through a medium?
The primary reason for "soft" shadows is that real lights aren't points, they have finite radius, resulting in umbra (regions of the receiving surface where the entire light source is occluded) and penumbra (regions where only part of the light source is occluded).

An extension of this principle is radiosity. In the presence of light, most surfaces which aren't either matt black or perfect mirrors can be treated as diffuse light sources. Any light which is approximately omnidirectional will tend to illuminate the surface(s) in the immediate vicinity of the light quite brightly. The effect is to enlarge the light source, which will enlarge the penumbra and shrink the umbra (i.e. make the shadows softer).

Atmospheric diffusion no doubt plays some part, but unless the atmosphere is extremely hazy, it isn't likely to be particularly significant compared to the above.

hlewin
07-30-2013, 10:35 AM
I tried to plot the idea here (http://qtos.de/area_light_occulder.png).
Is it actually worth a try to implement real interpolation towards the shadow's edge based on the parameters?
This would involve - as far as I can tell - fetching an area of depth values at each fragment to see if one is in the yellow-bordered area. That is what I meant by fetching a matrix. Sorry if I don't properly understand what has been said previously, but how would the implementation, i.e. the hardware, be able to provide functionality to support this directly? One would need to know the size of one depth texel in the scene to know the distance to the discontinuity of the depth values in world space.

This is not meant to be the only aspect that accounts for calculating the contribution of the light to the shadow's (soft) edge.