
View Full Version : Calculating where the shadow falls on a plane



elect
01-03-2012, 08:03 AM
Hi all,

I am new to OpenGL, and I have been programming for some time with JOGL and JCUDA (Java ports of OpenGL and CUDA).

I need to calculate the shadow of a 3D model on a plane. Models can have up to 5 million triangles, while the plane is basically a square that ranges from 480 to 4800 tiles.

I already managed to draw the 3d model with OpenGL and calculate the projection of each triangle on the plane with CUDA.

The big problem for me arises when I want to know which tiles are covered by the shadow and which ones aren't. If I draw the array of projected triangles, the result is printed on the screen, but I somehow need to manage it (an idea would be to retrieve the result as a boolean matrix).

For example, a projected triangle on a 4x4 matrix could appear like the following:

0100
0110
0111
0000

1 means I have shadow on that tile, 0 means no shadow
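For illustration, deciding which tiles one projected triangle covers can be done on the CPU with edge functions, sampling each tile at its center. This is only a hypothetical sketch (the class and method names are mine, not from JOGL or any library), assuming the triangle has already been projected into tile coordinates:

```java
// Hypothetical CPU-side rasterizer: marks a tile as shadowed when its center
// lies inside the projected triangle. tri = {x0,y0, x1,y1, x2,y2} in tile units.
public class TileRaster {

    // 2D cross product of (q - p) and (r - p); its sign tells which side of
    // edge p->q the point r lies on.
    static float edge(float px, float py, float qx, float qy, float rx, float ry) {
        return (qx - px) * (ry - py) - (qy - py) * (rx - px);
    }

    public static boolean[][] rasterize(float[] tri, int cols, int rows) {
        boolean[][] grid = new boolean[rows][cols];
        // Signed area fixes the triangle's winding so either orientation works.
        float area = edge(tri[0], tri[1], tri[2], tri[3], tri[4], tri[5]);
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                float cx = c + 0.5f, cy = r + 0.5f; // tile center
                float w0 = edge(tri[0], tri[1], tri[2], tri[3], cx, cy);
                float w1 = edge(tri[2], tri[3], tri[4], tri[5], cx, cy);
                float w2 = edge(tri[4], tri[5], tri[0], tri[1], cx, cy);
                // Inside when all edge functions agree with the winding sign.
                grid[r][c] = area >= 0 ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                       : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            }
        }
        return grid;
    }
}
```

This per-tile test is exactly what the GPU rasterizer does in hardware, which is why for millions of triangles an FBO-based approach is usually preferable to a hand-written CPU loop.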

I read a lot about rasterization, ray tracing, FBOs, the stencil buffer, etc...

But since I am a newbie in this field, I am very confused, so I would like to ask you what the best approach is to reach this goal.

Thanks in advance :)

carsten neumann
01-03-2012, 08:57 AM
If I draw the array of projected triangles, the result is printed on the screen, but I somehow need to manage it


This part is not clear to me. What do you actually want to do? Render your model such that it casts a shadow onto the plane or do you need e.g. the size of the area of the projection?

Rendering shadows is a huge topic, with a large number of techniques. Most are variations of either "shadow volumes" (basically extrudes silhouette edges in light direction and renders that into the stencil buffer, then uses the stencil to determine where light is reflected) or "shadow maps" (basically render the scene from the light's point of view, and compare distance of a camera fragment with distance of closest object from light to decide if the fragment is lit).

elect
01-03-2012, 09:21 AM
If I draw the array of projected triangles, the result is printed on the screen, but I somehow need to manage it


This part is not clear to me. What do you actually want to do? Render your model such that it casts a shadow onto the plane or do you need e.g. the size of the area of the projection?


Hello Carsten, thanks for replying,

sorry if my English is not good :o. However, I need to cast the shadow on the plane and retrieve a kind of matrix that represents the plane itself, where each tile is a boolean (that is, shadow or not).

At the moment I have calculated the shadow by just projecting each triangle onto the plane. At this point I'd need to rasterize it... but that looks like a hard job compared to OpenGL, where it's pretty easy and triangles are rasterized automatically. The problem is that the OpenGL output is only displayed on the screen, while I'd need it in a boolean matrix, for example.

carsten neumann
01-03-2012, 10:12 AM
One way to do this is to create an FBO (framebuffer object) that has the dimensions of your plane. You'd then set up rendering to go into that FBO instead of the usual application buffers (that are displayed on the screen) and have your modelview/projection matrices perform the projection of triangles onto the plane. Clear the FBO to white, enable GL_MIN blending (glBlendEquation(GL_MIN)) and render the triangles in black. Then use glReadPixels() to read back the data to main memory or another buffer on the GPU (that buffer could use CUDA/GL interop to be shared with CUDA) depending on where you need to process the information. Black pixels are in shadow, white ones are not.

Information on how to use FBOs can be found on the OpenGL wiki: FBO (http://www.opengl.org/wiki/Framebuffer_Objects) for example.
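To sketch the read-back step: once the FBO has been cleared to white and the triangles drawn in black, the bytes returned by glReadPixels() can be turned into the boolean matrix the original post asked for. The class below is a hypothetical CPU-side helper (not part of JOGL), assuming tightly packed 8-bit RGB data with GL_PACK_ALIGNMENT set to 1:

```java
// Hypothetical helper: convert a tightly packed 8-bit RGB read-back buffer
// (white = lit, black = in shadow) into a boolean shadow matrix.
public class ShadowMask {
    public static boolean[][] fromRgbBytes(byte[] rgb, int width, int height) {
        boolean[][] shadow = new boolean[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int red = rgb[(y * width + x) * 3] & 0xFF; // unsigned 0..255
                // The threshold guards against rounding leaving "almost black" pixels.
                shadow[y][x] = red < 128;
            }
        }
        return shadow;
    }
}
```

Since the image is pure black/white here, testing only the red channel is enough; a BGRA read-back would use a stride of 4 bytes per pixel instead of 3.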

elect
01-03-2012, 10:30 AM
One way to do this is to create an FBO (framebuffer object) that has the dimensions of your plane. You'd then set up rendering to go into that FBO instead of the usual application buffers (that are displayed on the screen) and have your modelview/projection matrices perform the projection of triangles onto the plane. Clear the FBO to white, enable GL_MIN blending (glBlendEquation(GL_MIN)) and render the triangles in black. Then use glReadPixels() to read back the data to main memory or another buffer on the GPU (that buffer could use CUDA/GL interop to be shared with CUDA) depending on where you need to process the information. Black pixels are in shadow, white ones are not.

Information on how to use FBOs can be found on the OpenGL wiki: FBO (http://www.opengl.org/wiki/Framebuffer_Objects) for example.

After days spent looking for an answer, this looks like the right solution! Thanks Carsten :) Tomorrow I will try to apply it.

elect
01-04-2012, 02:01 AM
I am a little confused.

Do I need to render to texture anyway?
Which attachment should I use? The color attachment?
What should I attach to the framebuffer, a texture image or a renderbuffer image?

Unfortunately I didn't find any example that clearly shows how to render off-screen and save the result as an image :p

(This user (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=304721) seems to have had the same problem)

elect
01-04-2012, 02:04 AM
Here (http://www.opengl.org/wiki/Framebuffer_Object_Examples#Quick_example.2C_render_to_buffer_.28p-buffer_replacement.29) at the end it says it doesn't matter whether I attach a renderbuffer or a texture.

Do you have any suggestion on which way I should use?

elect
01-04-2012, 02:54 AM
Ok, I decided to opt for a renderbuffer.

Here's my code so far:


private void renderShadows(GL2 gl) {
// create the FBO (glGenFramebuffers, not glGenBuffers: the latter creates buffer objects, not framebuffers)
int[] frameBufferID = new int[1];
gl.glGenFramebuffers(1, frameBufferID, 0);

// bind the FBO
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);

// create the RenderBuffer Object
int[] renderBufferID = new int[1];
gl.glGenRenderbuffers(1, renderBufferID, 0);

// bind the RBO
gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[0]);

// allocate storage for the RBO
gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGB, floorWidth, floorHeight);

// attach the RB image (RBO) to the FBO
gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_COLOR_ATTACHMENT0,
GL2.GL_RENDERBUFFER, renderBufferID[0]);

if (gl.glCheckFramebufferStatus(GL2.GL_FRAMEBUFFER) == GL2.GL_FRAMEBUFFER_COMPLETE)
System.out.println("GL_FRAMEBUFFER_COMPLETE!!");
else
System.out.println("..[censored] ^^");
}

And so far it works :D, I get GL_FRAMEBUFFER_COMPLETE.

But how can I go further now? How do I clear the FBO to white? And what about the pixel format?

elect
01-04-2012, 05:02 AM
// save the current viewport and set the new
gl.glPushAttrib(GL2.GL_VIEWPORT_BIT);
gl.glViewport(0, 0, floorWidth, floorHeight);

// bind the FBO
gl.glBindFramebuffer(GL2.GL_DRAW_FRAMEBUFFER, frameBufferID[0]);

int[] attachmentID = new int[1];
attachmentID[0] = GL2.GL_COLOR_ATTACHMENT0;
gl.glDrawBuffers(1, attachmentID, 0);

// clear
gl.glClear(GL2.GL_COLOR_BUFFER_BIT);

gl.glBlendEquation(GL2.GL_MIN);

gl.glColor3f(0.0f, 0.0f, 0.0f);

// render
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(0.0f, 1.0f, 0.0f);
gl.glVertex3f(1.0f, 0.0f, 0.0f);
gl.glVertex3f(0.0f, 0.0f, 0.0f);
gl.glEnd();

gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
gl.glReadBuffer(GL2.GL_BACK);
gl.glDrawBuffer(GL2.GL_BACK);

// restore viewport
gl.glPopAttrib();

How can I render the FBO content to the screen to check it?

This doesn't work:

gl.glBindFramebuffer( GL2.GL_FRAMEBUFFER, frameBufferID[0] );
gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT0);
gl.glViewport( 0, 0, floorWidth, floorHeight );
//gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
gl.glClear( GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT );

:(

carsten neumann
01-04-2012, 08:17 AM
If you attach a texture instead of a render buffer you can then use that texture on a quad rendered to the application frame buffer to look at it. Or you use glReadPixels() to transfer the pixels to main memory and write an image file.

Is that last snippet meant to render to the screen? It binds the FBO and sets GL_COLOR_ATTACHMENT0 as draw buffer, so it draws to the FBO. To render to an application frame buffer, unbind the FBO (glBindFramebuffer(GL_FRAMEBUFFER, 0)) and set the draw buffer to GL_BACK.

elect
01-04-2012, 09:30 AM
If you attach a texture instead of a render buffer you can then use that texture on a quad rendered to the application frame buffer to look at it. Or you use glReadPixels() to transfer the pixels to main memory and write an image file.

I know, but I need to use a renderbuffer because they are faster and I do some heavy operations..



Is that last snippet meant to render to the screen? It binds the FBO and sets GL_COLOR_ATTACHMENT0 as draw buffer, so it draws to the FBO. To render to an application frame buffer, unbind the FBO (glBindFramebuffer(GL_FRAMEBUFFER, 0)) and set the draw buffer to GL_BACK.

I just want to check the content of the renderbuffer, and to do this I thought to render/display it on the screen.. But it doesn't work.

carsten neumann
01-04-2012, 10:42 AM
I know, but I need to use a renderbuffer because they are faster and I do some heavy operations..


Sorry, I mentioned the two methods I can think of to look at what is rendered into an FBO, so I don't really know what to tell you. Maybe someone else here has another idea?
FWIW I don't quite follow your reasoning: you reject using a texture because of performance (personally I doubt there is a big difference between renderbuffers and textures, but I've not measured it), but we are talking about a temporary change to aid debugging. Taking your point of view to the extreme, it seems to become "better a fast program that computes something nonsensical than a slow and correct one" ;) ;)



I just want to check the content of the renderbuffer, and to do this I thought to render/display it on the screen.. But it doesn't work.


AFAIK there is no way to directly (without copying or other transformation) use the contents of a renderbuffer for drawing. The code sequence you showed previously only sets the FBO's color attachment 0 as the target for future drawing operations.
About the only use of renderbuffers I can think of is as input to other computations on the GPU, through OpenCL/CUDA, because as far as OpenGL is concerned there is not a whole lot you can do with them - at least that is my understanding.

elect
01-05-2012, 02:41 AM
FWIW I don't quite follow your reasoning: you reject using a texture because of performance (personally I doubt there is a big difference between renderbuffers and textures, but I've not measured it), but we are talking about a temporary change to aid debugging. Taking your point of view to the extreme, it seems to become "better a fast program that computes something nonsensical than a slow and correct one" ;) ;)

You are totally right :), but I was almost giving up and using texture when I made it :D




AFAIK there is no way to directly (without copying or other transformation) use the contents of a renderbuffer for drawing. The code sequence you showed previously only sets the FBO's color attachment 0 as the target for future drawing operations.

Yep, you are right again. I did as follows to read and draw the content of the RBO:


gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
// GL_BGRA/GL_UNSIGNED_BYTE means 4 bytes per pixel, so use a byte buffer of width*height*4
ByteBuffer pixels = GLBuffers.newDirectByteBuffer(250 * 250 * 4);
gl.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1); // PACK (not UNPACK) affects glReadPixels
gl.glReadBuffer(GL2.GL_COLOR_ATTACHMENT0);
gl.glReadPixels(0, 0, 250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);

gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0);
gl.glRasterPos2d(0, 0);
gl.glDrawPixels(250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);

gl.glFlush();



About the only use of renderbuffers I can think of is as input to other computations on the GPU, through OpenCL/CUDA, because as far as OpenGL is concerned there is not a whole lot you can do with them - at least that is my understanding.

Damn, you are right for the third time! :D
I am going to use CUDA for some calculations on the results. I have different light sources, each of them producing a shadow. But when I have two or more shadows on one pixel, then that pixel should have a different shadow (let's say "core shadow")..

Now I'd need to know what you really meant by "clearing the FBO to white" (I guess set the clear color to white, right?) and why you mentioned "enable GL_MIN blending (glBlendEquation(GL_MIN))" (what is it useful for?)

Moreover, how can I recognize, from a mathematical point of view, when a pixel is white or black?

It is not so clear to me, since when I allocate the RBO I use RGBA


gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGBA, floorWidth, floorHeight);


while when I read I use BGRA


gl.glReadPixels(0, 0, 250, 250, GL2.GL_BGRA, GL2.GL_UNSIGNED_BYTE, pixels);

carsten neumann
01-05-2012, 08:27 AM
But when I have two or more shadows on one pixel, then that pixel should have a different shadow (let's say "core shadow")..


Yes, that is why usually shadows are not explicitly calculated, but are implicit when calculating lighting. What I mean is that when calculating the color of a fragment one tests from which (if any) light sources it receives light and calculates the reflected light based on that. This naturally puts fragments that don't receive any light into shadow.

It seems to me that you want a more explicit representation of shadows by keeping track of which fragments are in shadow with respect to a light source.
You could assign each light source a "color" based on 1/numLights and draw that. So for two light sources a black pixel means light from all sources, a 50% grey pixel means light from one source and a white pixel means fully in shadow. This requires that you use additive blending (glBlendEquation(GL_FUNC_ADD)) to add up the contributions from the different lights. However the big problem is overdraw: if two triangles from one object are drawn to the same fragment (e.g. because the object has a front and back side) you incorrectly add twice to that fragment.
You could create a render buffer per light source and afterwards combine them to get around that.
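The 1/numLights accumulation can be sketched as plain arithmetic. This hypothetical helper (names are mine) mimics what additive blending (glBlendEquation(GL_FUNC_ADD)) would accumulate for a single pixel, ignoring the overdraw problem just mentioned:

```java
// Illustrative arithmetic for the 1/numLights additive-blending idea:
// start at 0 (lit by all lights) and, for every light whose shadow covers
// the pixel, add 1/numLights. The accumulated value encodes how many
// lights the pixel is shadowed from.
public class ShadowAccum {
    public static float accumulate(boolean[] shadowedByLight) {
        int n = shadowedByLight.length;
        float value = 0f;
        for (boolean s : shadowedByLight)
            if (s) value += 1f / n; // what one blended draw pass contributes
        return value;               // 0 = fully lit, 1 = fully in shadow
    }
}
```

With two lights, 0.0 means light from both sources, 0.5 means shadow from one, and 1.0 means fully in shadow, matching the grey-level scheme described above.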



meant by "clearing the FBO to white" (I guess set color to white, right?)


Yes, I meant setting the clear color (glClearColor()) to white.
The GL_MIN blending was a mistake; it is not needed. The idea was to make sure that a pixel can only ever go from white to black - never the other way - but that won't happen anyway.



Moreover, how can I recognize by a mathematical point of view when a pixel is white or black?


its RGB value is (1.0, 1.0, 1.0) or (0.0, 0.0, 0.0), respectively.

elect
01-05-2012, 08:27 AM
Another question:


Since I am going to have n shadow matrices (that I will merge into a final one), shall I declare n RBOs or n FBOs?


I guess n RBOs, since the FBO switching mechanism makes things complicated if I want to read values from two different FBOs and merge them into one (and this means having additional temporary content somewhere).. right?

elect
01-05-2012, 08:34 AM
LOL, we posted at the same time, and actually you answered me in your post :D


You could create a render buffer per light source and afterwards combine them to get around that.


Thanks carsten! ;)

elect
01-06-2012, 12:34 AM
I tried to create an RBO for each light (I start with 2).

I start by binding FBO[0] and RBO[0] for the first shadow set:


// bind the FBO
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[0]);

// clear
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);

gl.glViewport(0, 0, floorWidth, floorHeight);
gl.glLoadIdentity();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();

gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT0);


// render
gl.glColor3f(0.5f, 0.5f, 0.5f);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(0.0f, 0.5f, 0.0f);
gl.glVertex3f(0.5f, 0.0f, 0.0f);
gl.glVertex3f(0.0f, 0.0f, 0.0f);
gl.glEnd();

Then I bind FBO[0] and RBO[1] for the second shadow set:


gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, frameBufferID[0]);
gl.glBindRenderbuffer(GL2.GL_RENDERBUFFER, renderBufferID[1]);
// Allocate the RBO
gl.glRenderbufferStorage(GL2.GL_RENDERBUFFER, GL2.GL_RGBA, floorWidth, floorHeight);
// Attaching the RB image (RBO) to the FBO
gl.glFramebufferRenderbuffer(GL2.GL_FRAMEBUFFER, GL2.GL_COLOR_ATTACHMENT1,
GL2.GL_RENDERBUFFER, renderBufferID[1]);
// clear
gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);

gl.glViewport(0, 0, floorWidth, floorHeight);
gl.glLoadIdentity();
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();

gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glDrawBuffer(GL2.GL_COLOR_ATTACHMENT1);

// render
gl.glColor3f(0.5f, 0.5f, 0.5f);
gl.glBegin(GL2.GL_TRIANGLES);
gl.glVertex3f(-0.5f, 0.0f, 0.0f);
gl.glVertex3f( 0.0f,-0.5f, 0.0f);
gl.glVertex3f(-0.5f,-0.5f, 0.0f);
gl.glEnd();

The problem is that the first shadow set gets overwritten... why?

Every time I bind the proper FBO and RBO, and I draw into the right color attachment.

elect
01-06-2012, 01:07 AM
The problem is that the first shadow set gets overwritten... why?


Because I am an idiot! I was clearing the color buffer BEFORE switching the color attachment :p

elect
01-06-2012, 03:00 AM
I am just wondering if what I am doing makes sense:

- render each shadow set to an RBO, merge them together with CUDA, read the final resulting shadow set, send it to the CPU, create a texture and apply it to the floor inside my render.

The other option would be:

- render directly to textures, merge them into a unique final texture with CUDA and apply it directly.

I chose the first way because I read (or at least that is how it appeared to me) that working with RBOs is faster and easier. Take into account that I need to calculate core and partial shadows for hundreds/thousands of shadow sets.
In the final result, a white tile is a tile that has never been shadowed by any shadow set. A core-shadowed tile is one that has been shadowed by all the shadow sets. All the other tiles are partially shadowed: at least one shadow set leaves the tile lit, and at least one shadow set shadows it.
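Given a per-tile count of how many of the N shadow sets cover it (e.g. after the CUDA merge), the white/partial/core classification above reduces to two comparisons. A hypothetical sketch, with names that are purely illustrative:

```java
// Classify a tile from its shadow count over numSets shadow sets:
// WHITE  = never shadowed by any set,
// CORE   = shadowed by every set,
// PARTIAL = shadowed by some sets but not all.
public class TileClass {
    public enum Kind { WHITE, PARTIAL, CORE }

    public static Kind classify(int shadowCount, int numSets) {
        if (shadowCount == 0) return Kind.WHITE;
        if (shadowCount == numSets) return Kind.CORE;
        return Kind.PARTIAL;
    }
}
```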

Dark Photon
01-09-2012, 06:17 PM
I am just wondering if what I am doing makes sense... In the final result,
a white tile is a tile that has never been shadowed by any shadow set. A core-shadowed tile is one that has been shadowed by all the shadow sets. All the other tiles are partially shadowed: at least one shadow set leaves the tile lit, and at least one shadow set shadows it.
Sounds like you're trying to compute soft shadows by accumulating the results of rendering hard shadows from a bunch of different points on the surface of an area light source.

I probably don't understand what some of your requirements are, but...

I have to confess, I don't understand:
- why you think you need CUDA for this,
- why you're convinced you need a renderbuffer over a texture.

From what I gather, it seems to me you could do this all in GL/GLSL, keep around just two images (or some small number), and use textures instead (so you could use GLSL to do the reduction). Rendering 5 million tris is child's play for GL on a GPU (if you batch your data properly), so even with 100 such frames rendered from different eyepoints (positions on the light source), this isn't super heavy lifting. Not necessarily real-time (I'm assuming that's not your goal), but still pretty darn fast.

You may want to check out Casting Shadows in Real-time (http://www.mpi-inf.mpg.de/resources/ShadowCourse/) for this and other soft shadow algorithms, to determine if there's another, more efficient solution that you'd be happier with (unless you're just trying to generate an approximate ground-truth image). This content may have been juiced up and released as a book under the name Real-time Shadows (http://www.amazon.com/Real-Time-Shadows-Elmar-Eisemann/dp/1568814380/ref=sr_1_1?ie=UTF8&qid=1326159231&sr=8-1) (I say "may" because I haven't actually had the latter in my hands yet, but the Table of Contents looks suspiciously similar).

elect
01-10-2012, 01:43 AM
Sounds like you're trying to compute soft shadows by accumulating the results of rendering hard shadows from a bunch of different points on the surface of an area light source.

I probably don't understand some of what you're requirements are, but...

This program lets one check whether 3D vehicles respect a certain standard or not. This is why I need to treat my floor like a matrix of tiles and know with precision which of them are hard- and soft-shadowed.



I have to confess, I don't understand:
- why you think you need CUDA for this,
- why you're convinced you need a renderbuffer over a texture.

- Because I thought it would be much faster to keep all data on the graphics card and merge all the shadow sets there, without reading the RBO/texture into a buffer and sending it back to the CPU every time.
- Because, reading around, they say RBOs are faster (like here (http://webcache.googleusercontent.com/search?q=cache:DUgU7ynNHj4J:www.gamedev.net/topic/465851-fbo-render-buffer-vs-texture-when-to-use-each/+opengl+render+renderbuffer+to+texture&cd=8&hl=de&ct=clnk&gl=de&client=firefox-a) )

However, I would like to underline that these are just my impressions, and since I am a newbie in OpenGL I am totally open to suggestions/ideas.



From what I gather, seems to me you could do this all in GL/GLSL, keep around just two images (or some small number), and use textures instead (so you could use GLSL to do the reduction).

Ok, but would this require OpenGL 3+? I was told to start with OpenGL 2 since it is easier for beginners. Do you think GL/GLSL and OpenGL 3 are too much? I am also asking because it looks to me like about 80% of the tutorials on the web are based on OpenGL 2 and 1.



Rendering 5 million tris is child's play for GL on a GPU (if you batch your data properly), so even with 100 such frames rendered from different eyepoints (positions on the light source), this isn't super heavy lifting. Not necessarily real-time (I'm assuming that's not your goal), but still pretty darn fast.

Actually I was trying to render 5 million triangles, but I had some problems with the VBO allocation (I could not create a single VBO with more than 2M), so in the end I just allocated 3 VBOs and rendered them sequentially. It was quite slow, but could that depend on the crappy 9400 GT that I am using?

Btw, this could depend on Java and the 32-bit system. I am going to check on 64-bit Windows in the near future.



You may want to check out Casting Shadows in Real-time (http://www.mpi-inf.mpg.de/resources/ShadowCourse/) for this and other soft shadow algorithms to determine if there's another more efficient solution that you'd be happier with (unless you're just trying to generate an approximate ground truth image). This content may have juiced up and released as a book under the name Real-time Shadows (http://www.amazon.com/Real-Time-Shadows-Elmar-Eisemann/dp/1568814380/ref=sr_1_1?ie=UTF8&qid=1326159231&sr=8-1) (I say may because I haven't actually had the latter in my hands yet, but the Table of Contents looks suspiciously similar).


Thanks for the link, I am going to see it later

Dark Photon
01-10-2012, 05:51 AM
This program lets one check whether 3D vehicles respect a certain standard or not.
Ok.



I have to confess, I don't understand:
- why you think you need CUDA for this,
- why you're convinced you need a renderbuffer over a texture.
- Because I thought it would be much faster to keep all data on the graphics card and merge all the shadow sets there, without reading the RBO/texture into a buffer and sending it back to the CPU every time.
Almost certainly! However, AFAICT you can do this all in OpenGL on the GPU, without any readback to the CPU, without any CUDA or OpenCL.


- Because, reading around, they say RBOs are faster (like here (http://webcache.googleusercontent.com/search?q=cache:DUgU7ynNHj4J:www.gamedev.net/topic/465851-fbo-render-buffer-vs-texture-when-to-use-each/+opengl+render+renderbuffer+to+texture&cd=8&hl=de&ct=clnk&gl=de&client=firefox-a) )
Perhaps a little in some circumstances, depending on vendor/format/driver version idiosyncrasies, but that's not a given.

However, it's trivial to switch rendering from renderbuffer to texture or vice versa. You just use the texture if you need to read the data back into GL later. Use renderbuffer if you don't. And even then, if you're hard set on rendering to renderbuffer, you can use glCopyTexImage2D to copy from RB into texture after rendering. You have options.




From what I gather, seems to me you could do this all in GL/GLSL, keep around just two images (or some small number), and use textures instead (so you could use GLSL to do the reduction).

Ok, but would this require OpenGL 3+? I was told to start with OpenGL 2 since it is easier for beginners. Do you think GL/GLSL and OpenGL 3 are too much? I am also asking because it looks to me like about 80% of the tutorials on the web are based on OpenGL 2 and 1.
Given what I think you're trying to do, it seems that using CUDA or OpenCL is probably overkill (definitely so if you've never coded in CUDA or OpenCL).

You could use OpenGL with GLSL shaders for this (which is OpenGL 2.x not 3.x), and that would give you the most flexibility and probably ultimately be simplest (it's not that hard). Though I think you might even be able to do what you want without shaders if you're determined, since you can build shadow maps and render with them without shaders, and possibly use additive blending to merge your shadow counts together.

elect
01-12-2012, 04:30 AM
However, it's trivial to switch rendering from renderbuffer to texture or vice versa. You just use the texture if you need to read the data back into GL later. Use renderbuffer if you don't. And even then, if you're hard set on rendering to renderbuffer, you can use glCopyTexImage2D to copy from RB into texture after rendering. You have options.

Yep, but the tricky part is finding the best implementation :D



Given what I think you're trying to do, it seems that using CUDA or OpenCL is probably overkill (definitely so if you've never coded in CUDA or OpenCL).

You could use OpenGL with GLSL shaders for this (which is OpenGL 2.x not 3.x), and that would give you the most flexibility and probably ultimately be simplest (it's not that hard). Though I think you might even be able to do what you want without shaders if you're determined, since you can build shadow maps and render with them without shaders, and possibly use additive blending to merge your shadow counts together.


Here comes the comfortable part :p. I spent over a year on CUDA, developing a custom program that speeds up Zero-Knowledge proofs requiring thousands of multiplications between large numbers. And from what I've seen, OpenCL is pretty similar.. so this should not be a problem.
Indeed, I actually already use CUDA to calculate the coordinates of the projected triangles on the floor (that is, the shadow) of my 3D model.
What scares me is OpenGL 3/4, shaders and GLSL.. material on these topics is lacking on the net :p