Calculating where the shadow falls on a plane

Hi all,

I am new to OpenGL, and I have been programming with JOGL and JCUDA (the Java bindings for OpenGL and CUDA) for some time.

I need to calculate the shadow of a 3D model on a plane. Models can have up to 5 million triangles, while the plane is basically a square grid that ranges from 480 to 4800 tiles.

I already managed to draw the 3D model with OpenGL and to calculate the projection of each triangle on the plane with CUDA.

The big problem for me arises when I want to know which tiles are covered by the shadow and which ones aren’t. If I draw the array of projected triangles, the result is printed on the screen, but I somehow need to process it further (an idea would be to retrieve the result as a boolean matrix).

For example, a projected triangle on a 4x4 matrix could appear like the following:

0100
0110
0111
0000

1 means I have shadow on that tile, 0 means no shadow

I have read a lot about rasterization, ray tracing, FBOs, the stencil buffer, etc…

But since I am a newbie in this field, I am quite confused, so I would like to ask you what the best approach to reach this goal would be.

Thanks in advance :)

If I draw the array of projected triangles, the result is printed on the screen, but I somehow need to process it further

This part is not clear to me. What do you actually want to do? Render your model such that it casts a shadow onto the plane or do you need e.g. the size of the area of the projection?

Rendering shadows is a huge topic, with a large number of techniques. Most are variations of either “shadow volumes” (basically extrudes silhouette edges in light direction and renders that into the stencil buffer, then uses the stencil to determine where light is reflected) or “shadow maps” (basically render the scene from the light’s point of view, and compare distance of a camera fragment with distance of closest object from light to decide if the fragment is lit).

Hello Carsten, thanks for replying.

Sorry if my English is not good :o. However, I need to cast the shadow onto the plane and retrieve a kind of matrix that represents the plane itself, where each tile is a boolean (that is, shadow or not).

At the moment I have calculated the shadow by simply projecting each triangle onto the plane. At this point I’d need to rasterize it… but that looks like a hard job compared to OpenGL, where it’s pretty easy because triangles are rasterized automatically. The problem is that the OpenGL output is only displayed on the screen, while I need it in a boolean matrix, for example.

One way to do this is to create an FBO (frame buffer object) that has the dimensions of your plane. You’d then set up rendering to go into that FBO instead of the usual application buffers (that are displayed on the screen) and have your modelview/projection matrices perform the projection of triangles onto the plane. Clear the FBO to white, enable GL_MIN blending (glBlendEquation(GL_MIN)) and render the triangles in black. Then use glReadPixels() to read back the data to main memory or another buffer on the GPU (that buffer could use CUDA/GL interop to be shared with CUDA) depending on where you need to process the information. Black pixels are in shadow, white ones are not.

Information on how to use FBOs can be found on the OpenGL wiki: see the FBO page, for example.
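A minimal sketch of that flow could look roughly like this (floorWidth/floorHeight being your plane dimensions in tiles, drawProjectedTriangles() standing in for your own drawing code; the GL_MIN blending step, matrix setup, error checking and cleanup are left out, and it needs java.nio.ByteBuffer and com.jogamp.opengl.util.GLBuffers):

private boolean[][] computeShadowMatrix(GL2 gl) {
    // create an FBO with a color renderbuffer of the plane's size
    int[] fbo = new int[1];
    int[] rbo = new int[1];
    gl.glGenFramebuffers(1, fbo, 0);
    gl.glGenRenderbuffers(1, rbo, 0);

    gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, rbo[0]);
    gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGBA, floorWidth, floorHeight);

    gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, fbo[0]);
    gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_COLOR_ATTACHMENT0,
                                 GL2.GL_RENDERBUFFER, rbo[0]);

    // one pixel per tile; white = no shadow
    gl.glViewport(0, 0, floorWidth, floorHeight);
    gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT);

    // set up modelview/projection so the plane fills the viewport (not shown),
    // then draw the projected triangles in black
    gl.glColor3f(0.0f, 0.0f, 0.0f);
    drawProjectedTriangles(gl);

    // read back the pixels and turn them into the boolean matrix
    ByteBuffer pixels = GLBuffers.newDirectByteBuffer(floorWidth * floorHeight * 4);
    gl.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1);
    gl.glReadPixels(0, 0, floorWidth, floorHeight, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, pixels);
    gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);

    boolean[][] shadow = new boolean[floorHeight][floorWidth];
    for (int y = 0; y < floorHeight; y++)
        for (int x = 0; x < floorWidth; x++)
            shadow[y][x] = (pixels.get((y * floorWidth + x) * 4) & 0xFF) < 128; // dark pixel = shadow
    return shadow;
}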

After days spent looking for an answer, this looks like the right solution! Thanks Carsten :) Tomorrow I will try to apply it.

I am a little confused

Do I need to render to texture anyway?
Which attachment should I use? The color?
What should I attach to the framebuffer, a texture image or a renderbuffer image?

Unfortunately I didn’t find any example that clearly shows how to render off-screen and save the result as an image :P

(This user seems to have had the same problem.)

Here, at the end, it says it doesn’t matter whether I attach a renderbuffer or a texture.

Do you have any suggestion on which way I should use?

Ok, I decided to opt for a renderBuffer

Here is my code so far:

private void renderShadows(GL2 gl) {
        // create the FBO (framebuffer object)
        int[] frameBufferID = new int[1];
        gl.glGenFramebuffers(1, frameBufferID, 0);

        // bind the FBO
        gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);

        // create the RBO (renderbuffer object)
        int[] renderBufferID = new int[1];
        gl.glGenRenderbuffers(1, renderBufferID, 0);

        // bind the RBO
        gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[0]);

        // allocate storage for the RBO
        gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGB, floorWidth, floorHeight);

        // attach the RBO to the FBO's color attachment 0
        gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_COLOR_ATTACHMENT0,
                                     GL2.GL_RENDERBUFFER, renderBufferID[0]);

        if (gl.glCheckFramebufferStatus(GL2.GL_FRAMEBUFFER) == GL2.GL_FRAMEBUFFER_COMPLETE)
            System.out.println("GL_FRAMEBUFFER_COMPLETE!!");
        else
            System.out.println("..[censored] ^^");
    }

And so far it works :D, I get the frameBuffer complete

But how can I go further now? How do I clear the FBO to white? And what about the pixel format?

// save the current viewport and set the new
        gl.glPushAttrib(GL2.GL_VIEWPORT_BIT);
        gl.glViewport(0, 0, floorWidth, floorHeight);
        
        // bind the FBO
        gl.glBindFramebuffer(GL2.GL_DRAW_FRAMEBUFFER, frameBufferID[0]);
        
        int[] attachmentID = new int[1];
        attachmentID[0] = GL2.GL_COLOR_ATTACHMENT0;
        gl.glDrawBuffers(1, attachmentID, 0);
        
        // clear
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
        
        gl.glBlendEquation(GL2.GL_MIN);
        
        gl.glColor3f(0.0f, 0.0f, 0.0f);
        
        // render
        gl.glBegin(GL2.GL_TRIANGLES);
            gl.glVertex3f(0.0f, 1.0f, 0.0f);
            gl.glVertex3f(1.0f, 0.0f, 0.0f);
            gl.glVertex3f(0.0f, 0.0f, 0.0f);
        gl.glEnd();
        
        gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
        gl.glReadBuffer(GL2.GL_BACK);
        gl.glDrawBuffer(GL2.GL_BACK);
        
        //  restore viewport
        gl.glPopAttrib();

How can I render the FBO content on the screen to check it?

This doesn’t work:

gl.glBindFramebuffer( GL2.GL_FRAMEBUFFER, frameBufferID[0] );
        gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT0);
        gl.glViewport( 0, 0, floorWidth, floorHeight );
        //gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        gl.glClear( GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT );

:(

If you attach a texture instead of a render buffer you can then use that texture on a quad rendered to the application frame buffer to look at it. Or you use glReadPixels() to transfer the pixels to main memory and write an image file.

Is that last snippet meant to render to the screen? It binds the FBO and sets GL_COLOR_ATTACHMENT0 as the draw buffer, so it draws into the FBO. To render to an application frame buffer, unbind the FBO (glBindFramebuffer(GL_FRAMEBUFFER, 0)) and set the draw buffer to GL_BACK.
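For the glReadPixels() route, a small debug helper along these lines would do (a sketch only: it assumes the FBO is still bound, that width/height match the renderbuffer size, and it needs java.awt.image.BufferedImage, javax.imageio.ImageIO, java.io.File and java.io.IOException in addition to GLBuffers):

private void dumpFboToPng(GL2 gl, int width, int height, String fileName) throws IOException {
    // read the color attachment back to main memory
    ByteBuffer pixels = GLBuffers.newDirectByteBuffer(width * height * 4);
    gl.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1);
    gl.glReadBuffer(GL2.GL_COLOR_ATTACHMENT0);
    gl.glReadPixels(0, 0, width, height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, pixels);

    // copy into a BufferedImage, flipping vertically (GL rows start at the bottom)
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i = ((height - 1 - y) * width + x) * 4;
            int r = pixels.get(i) & 0xFF;
            int g = pixels.get(i + 1) & 0xFF;
            int b = pixels.get(i + 2) & 0xFF;
            image.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    ImageIO.write(image, "png", new File(fileName));
}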

I know, but I would need to use renderbuffers because they are faster and I do some heavy operations…

I just want to check the content of the renderbuffer, and to do this I thought of rendering/displaying it on the screen… but it doesn’t work.

I know, but I would need to use renderbuffers because they are faster and I do some heavy operations…

Sorry, I mentioned the two methods I can think of for looking at what is rendered into an FBO, so I don’t really know what else to tell you. Maybe someone else here has another idea?
FWIW, I don’t quite follow your reasoning: you reject using a texture because of performance (personally I even doubt there is a big difference between render buffers and textures, but I have not measured it), but we are talking about a temporary change to aid debugging. Taking your point of view to the extreme, it seems to become “better a fast program that computes something nonsensical than a slow and correct one” ;)

I just want to check the content of the renderbuffer, and to do this I thought of rendering/displaying it on the screen… but it doesn’t work.

AFAIK there is no way to directly (without copying or other transformation) use the contents of a render buffer for drawing. The code sequence you had shown previously only sets the FBO’s color attachment 0 as target for future drawing operations.
About the only use of render buffers I can think of is for input to other computations on the GPU, through OpenCL/CUDA, because as far as OpenGL is concerned there is not a whole lot you can do with them - at least that is my understanding.

You are totally right :), but I was just about to give up and use a texture when I got it working :D

Yep, you are right again. I did the following to read and draw the content of the RBO:

gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
        // 4 bytes per pixel for GL_BGRA / GL_UNSIGNED_BYTE
        ByteBuffer pixels = GLBuffers.newDirectByteBuffer(250 * 250 * 4);
        gl.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1);    // affects glReadPixels
        gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);  // affects glDrawPixels
        gl.glReadBuffer(GL2.GL_COLOR_ATTACHMENT0);
        gl.glReadPixels(0, 0, 250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);
        
        gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
        gl.glRasterPos2d(0, 0);
        gl.glDrawPixels(250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);
        
        gl.glFlush();

Damn, you are right for the third time! :D
I am going to use CUDA for some calculations on the results. I have different light sources, and each of them will produce a shadow. But when I have two or more shadows on one pixel, then that pixel should have a different shadow (let’s say “core shadow”)…

Now I’d need to know what you really meant by “clearing the FBO to white” (I guess setting the clear color to white, right?) and why you mentioned “enable GL_MIN blending (glBlendEquation(GL_MIN))” (what is it useful for?).

Moreover, how can I tell, from a mathematical point of view, whether a pixel is white or black?

It is not so clear to me, since when I allocate the RBO I use RGBA:

        gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGBA, floorWidth, floorHeight);

while when I read I use BGRA:

gl.glReadPixels(0, 0, 250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);

But when I have two or more shadows on one pixel, then that pixel should have a different shadow (let’s say “core shadow”)…

Yes, that is why shadows are usually not calculated explicitly, but fall out implicitly from the lighting calculation. What I mean is that when calculating the color of a fragment, one tests from which light sources (if any) it receives light and calculates the reflected light based on that. This naturally puts fragments that don’t receive any light into shadow.

It seems to me that you want a more explicit representation of shadows by keeping track of which fragments are in shadow with respect to a light source.
You could assign each light source a “color” based on 1/numLights and draw that. So for two light sources a black pixel means light from all sources, a 50% grey pixel means light from one source and a white pixel means fully in shadow. This requires that you use additive blending (glBlendEquation(GL_FUNC_ADD)) to add up the contributions from the different lights. However the big problem is overdraw: if two triangles from one object are drawn to the same fragment (e.g. because the object has a front and back side) you incorrectly add twice to that fragment.
You could create a render buffer per light source and afterwards combine them to get around that.
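Roughly, the single-render-buffer variant of that idea would look like this (numLights and drawShadowTrianglesForLight() are placeholders for your own code; note that this variant still has the overdraw problem described above):

    // clear to black: 0.0 = lit by every source (no shadow at all)
    gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT);

    // additive blending so the per-light contributions sum up
    gl.glEnable(GL2.GL_BLEND);
    gl.glBlendEquation(GL2.GL_FUNC_ADD);
    gl.glBlendFunc(GL2.GL_ONE, GL2.GL_ONE);

    float contribution = 1.0f / numLights;            // e.g. 0.5 for two lights
    for (int light = 0; light < numLights; light++) {
        gl.glColor3f(contribution, contribution, contribution);
        drawShadowTrianglesForLight(gl, light);       // triangles projected from this light
    }
    gl.glDisable(GL2.GL_BLEND);
    // afterwards: 0.0 = lit by all sources, 1.0 = in shadow from all sources ("core shadow")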

meant by “clearing the FBO to white” (I guess setting the clear color to white, right?)

Yes, I meant setting the clear color (glClearColor()) to white.
The GL_MIN blending was a mistake, it is not needed. The idea was to make sure that a pixel can only ever go from white to black, never the other way, but that won’t happen anyway.

Moreover, how can I tell, from a mathematical point of view, whether a pixel is white or black?

Its RGB value is (1.0, 1.0, 1.0) or (0.0, 0.0, 0.0), respectively.
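Since you read back with GL_UNSIGNED_BYTE, 1.0 arrives as 255 and 0.0 as 0, and because the pixels are pure black or pure white it does not matter whether you ask for BGRA or RGBA ordering: checking a single channel is enough. Something along these lines (pixels being the 4-bytes-per-pixel buffer filled by glReadPixels at the plane’s resolution):

    boolean[] inShadow = new boolean[floorWidth * floorHeight];
    for (int i = 0; i < inShadow.length; i++) {
        int channel = pixels.get(i * 4) & 0xFF;   // first byte: B in BGRA, R in RGBA - same for b/w
        inShadow[i] = channel < 128;              // dark pixel -> tile is in shadow
    }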

Another question:

Since I am going to have n shadow matrices (which I will merge into a final one), shall I declare n RBOs or n FBOs?

I guess n RBOs, since the FBO switching mechanism makes things complicated if I want to read values from two different FBOs and merge them into one (and this means having additional temporary content somewhere)… right?

LOL, we posted at the same time, and actually you answered me in your post :D

You could create a render buffer per light source and afterwards combine them to get around that.

Thanks Carsten! ;)

I tried to create an RBO for each light (I started with 2).

I start by binding FBO[0] and RBO[0] for the first shadow set:

// bind the FBO
        gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
        gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[0]);
        
        // clear
        gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
                
        gl.glViewport(0, 0, floorWidth, floorHeight);
        gl.glLoadIdentity();
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT0);


        // render
        gl.glColor3f(0.5f, 0.5f, 0.5f);
        gl.glBegin(GL2.GL_TRIANGLES);
            gl.glVertex3f(0.0f, 0.5f, 0.0f);
            gl.glVertex3f(0.5f, 0.0f, 0.0f);
            gl.glVertex3f(0.0f, 0.0f, 0.0f);
        gl.glEnd();

Then I bind FBO[0] and RBO[1] for the second shadow set:

gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
        gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[1]);
        // Allocate the RBO
        gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGBA, floorWidth, floorHeight);
        // Attaching the RB image (RBO) to the FBO
        gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_COLOR_ATTACHMENT1,
                                                GL2.GL_RENDERBUFFER, renderBufferID[1]);
        // clear
        gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
                
        gl.glViewport(0, 0, floorWidth, floorHeight);
        gl.glLoadIdentity();
        gl.glMatrixMode(GL2.GL_PROJECTION);
        gl.glLoadIdentity();
        
        gl.glMatrixMode(GL2.GL_MODELVIEW);
        gl.glLoadIdentity();
        gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT1);
                
        // render
        gl.glColor3f(0.5f, 0.5f, 0.5f);
        gl.glBegin(GL2.GL_TRIANGLES);
            gl.glVertex3f(-0.5f, 0.0f, 0.0f);
            gl.glVertex3f( 0.0f,-0.5f, 0.0f);
            gl.glVertex3f(-0.5f,-0.5f, 0.0f);
        gl.glEnd();

The problem is that the first shadow set gets overwritten… why?

I am binding the proper FBO and RBO every time, and drawing to the right color attachment.

Because I am an idiot! I was clearing the color BEFORE switching the color attachment :P

I am just wondering if what I am doing makes sense:

  • render each shadow set to an RBO, merge them together with CUDA, read back the final resulting shadow set, send it to the CPU, create a texture and apply it to the floor inside my renderer.

The other option would be:

  • render directly to textures, merge them into a unique final texture with CUDA and apply it directly.

I chose the first way because I read (or at least that is how it appeared to me) that working with RBOs is faster and easier. Take into account that I need to calculate core and partial shadows for hundreds/thousands of “shadow sets”.
In the final result, a white tile will be a tile that has never been shadowed by any shadow set. A core-shadowed tile will be a tile that has always been shadowed by all the shadow sets. All the other tiles are then partially shadowed, that is, there is at least one shadow set without shadow on that tile and at least one shadow set with shadow on it.
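For the merging step I imagine something like the following (sketched on the CPU in plain Java just to show the logic, the counting itself would be done with CUDA; each boolean[][] is one read-back shadow set, true meaning the tile is shadowed in that set):

    enum Shade { WHITE, PARTIAL, CORE }

    static Shade[][] classify(boolean[][][] shadowSets, int width, int height) {
        Shade[][] result = new Shade[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // count in how many shadow sets this tile is shadowed
                int count = 0;
                for (boolean[][] set : shadowSets)
                    if (set[y][x]) count++;
                if (count == 0)                      result[y][x] = Shade.WHITE;   // never shadowed
                else if (count == shadowSets.length) result[y][x] = Shade.CORE;    // shadowed in every set
                else                                 result[y][x] = Shade.PARTIAL; // shadowed in some sets
            }
        }
        return result;
    }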

[quote=“elect”]
I am just wondering if what I am doing makes sense… In the final result,

  • a white tile will be a tile that has never been shadowed by any shadow set.
  • A core-shadowed tile will be a tile that has always been shadowed by all the shadow sets.
  • All the other tiles are then partially shadowed, that is, there is at least one shadow set without shadow on that tile and at least one shadow set with shadow on it.
[/quote]

Sounds like you’re trying to compute soft shadows by accumulating the results of rendering hard shadows from a bunch of different points on the surface of an area light source.

I probably don’t understand some of what your requirements are, but…

I have to confess I don’t understand:

  1. why you think you need CUDA for this,
  2. why you’re convinced you need a renderbuffer over a texture.

You may want to check out Casting Shadows in Real-time for this and other soft shadow algorithms, to determine whether there’s another, more efficient solution that you’d be happier with (unless you’re just trying to generate an approximate ground-truth image). This content may have been juiced up and released as a book under the name Real-time Shadows (I say “may” because I haven’t actually had the latter in my hands yet, but the table of contents looks suspiciously similar).