Raytracing acceleration

Hi,
I want to use OpenGL acceleration to build a raytracer. I have written the basic functions: I put my camera at the start point of the ray and it looks in the ray direction. What I don't know is how to find the nearest polygon when I shoot a ray.
Thanks

It's not really an OpenGL question, but a general computer graphics question. The answer can be found, for example, in:

Foley, van Dam, Feiner, Hughes,
"Computer Graphics: Principles and Practice", 2nd edition.
I think (not sure) that it's published by Addison-Wesley.
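
For reference, the core of what you're asking about is just a "nearest hit" loop over the scene's triangles. Below is a minimal, untested sketch (the Vec3 and Triangle types are only placeholders for whatever you already have) using the standard Moller-Trumbore ray-triangle test:

// Minimal sketch of a nearest-hit loop; types and layout are placeholders.
#include <vector>
#include <cmath>
#include <cfloat>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Returns true and the ray parameter t if the ray (orig + t*dir) hits tri.
bool intersect(const Vec3& orig, const Vec3& dir, const Triangle& tri, float& t)
{
    const float EPS = 1e-6f;
    Vec3 e1 = sub(tri.v1, tri.v0);
    Vec3 e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;      // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > EPS;                              // hit must lie in front of the ray origin
}

// The "nearest polygon" is simply the one with the smallest positive t.
int nearestTriangle(const Vec3& orig, const Vec3& dir,
                    const std::vector<Triangle>& scene, float& tNearest)
{
    int best = -1;
    tNearest = FLT_MAX;
    for (int i = 0; i < (int)scene.size(); ++i) {
        float t;
        if (intersect(orig, dir, scene[i], t) && t < tNearest) {
            tNearest = t;
            best = i;
        }
    }
    return best;   // -1 means the ray missed everything
}

The naive loop is O(n) per ray; for larger scenes the usual fix is a spatial data structure (grid, BSP tree, bounding volume hierarchy), which the book above also covers.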

Try these sites out.
They are all about raytracing.

http://www.acm.org/tog/resources/RTNews/demos/overview.htm
http://www.acm.org/pubs/tog/editors/erich/

http://www.cfxweb.net/article.php?sid=997&mode=&order=0

http://lycium.cfxweb.net/

Interesting.
But much more interesting:

Why the hell are there still no true raytracing accelerator cards? This could be THE GOLDMINE for a startup company… such a card really could f…k up ATI & NVIDIA.

Raytracing is easy to parallelize, and I think even interfaces like OpenGL or DX would only need to change a little…

Actually, there is some raytracing acceleration hardware out there… but not for consumers.
A year or so ago, I read a review of such an RT accelerator. It was very powerful and capable of doing things like caustics very fast ("fast" meaning "faster than mental ray", NOT realtime).

But I don't think an RT-only accelerator would be the best solution. I think it would be much better to have a "normal" 3D accelerator with some raytracing features for the fancy stuff (i.e. reflections/refractions).
But wait, we actually already have this (somehow) when we use cubemaps and some sophisticated vertex programs.
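
A rough sketch of the fixed-function cubemap reflection setup I mean, assuming OpenGL 1.3+ headers (older headers may need the ARB/EXT suffixed constants) and a cubeMapTex that has already been filled with its six face images; the function name is just for illustration:

#include <GL/gl.h>

// Enable reflection-vector texgen into a cube map for the following draw calls.
void enableCubeMapReflection(GLuint cubeMapTex)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeMapTex);

    // Generate texture coordinates from the per-vertex reflection vector.
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);

    glEnable(GL_TEXTURE_CUBE_MAP);

    // Draw the reflective object after this. The result only approximates
    // what a real ray tracer would give (no self-reflection, no true refraction).
}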

IMO it would be much nicer to get global illumination accelerated in realtime.

Actually, there are a couple of papers which discuss using current or upcoming GPUs to directly accelerate raytracing and related algorithms.
They are not running at 100 fps, but they are faster than CPUs and improving more quickly.

Useful links:
http://graphics.stanford.edu/papers/rtongfx/
http://ncstrl.cs.uiuc.edu/Dienst/UI/2.0/Describe/ncstrl.uiuc_cs/UIUCDCS-R-2002-2269

Great papers!

There are hardware ray tracing implementations, but they are not the goldmine you assume them to be. You can buy one right now, today.
http://www.art-render.com/

Your typical OpenGL acceleration of ray tracing for first hit involves color coding the polygon ID and drawing everything to the framebuffer. Then when you read back the image you know which object you need to ray test against.
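
Roughly like this (untested sketch; the Vec3/Triangle types are placeholders for whatever your application uses, and the color buffer is assumed to be cleared to white so the background decodes to "no hit"):

// Draw every triangle in a unique flat color that encodes its ID, then
// read the framebuffer back to learn which triangle each primary ray hits first.
#include <GL/gl.h>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

void drawItemBuffer(const std::vector<Triangle>& scene)
{
    glDisable(GL_LIGHTING);     // colors must reach the framebuffer unmodified
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DITHER);
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);    // the depth test resolves the nearest polygon

    glBegin(GL_TRIANGLES);
    for (unsigned i = 0; i < scene.size(); ++i) {
        // Pack the triangle index into the RGB channels (24 bits of IDs).
        glColor3ub((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF);
        glVertex3fv(&scene[i].v0.x);
        glVertex3fv(&scene[i].v1.x);
        glVertex3fv(&scene[i].v2.x);
    }
    glEnd();
}

// After drawing, read back a pixel to find which triangle the primary ray
// through that pixel hits first (-1 means background, assuming a white clear color).
int readHitId(int x, int y)
{
    unsigned char rgb[3];
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    int id = (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
    return (id == 0xFFFFFF) ? -1 : id;
}

After this pass you only need to ray test against the one polygon whose ID came back for that pixel, instead of the whole scene.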

The papers are much more sophisticated, but quite useless until better hardware is available. Even then their value is questionable. The Stanford paper requires MIMD floating-point pixel processors before it's remotely interesting. That's quite an extrapolation, but at least we know what NVIDIA is planning to build now. When you have that kind of power, you'll be able to do all sorts of things, and ray tracing won't be the most interesting of them.

One of the items on my list (but not on my boss's) is to generate a script from an OpenGL scene that could be rendered in a raytracing system for better lighting effects, and possibly even to swap the real-time OpenGL models for higher-resolution (higher triangle count) models for scene display. The primary difference would be a screen capture vs. a more sophisticated rendering of the scene.

However, as discussed here, the lack of hardware acceleration has kept my boss from backing the idea, even though they have always liked the raytracings I do better than screen captures of OpenGL scenes. BUT, as mentioned here, with the advent of hardware-accelerated shaders and the other advances in GPUs, I have been reworking my idea to offer a cross between raytracing and real-time images using the shader technology: generate a fully vertex-shaded scene in less than a second, then return to real-time mode (60 fps) as if nothing happened… Anyone ever tried something like this?
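
In case it helps anyone thinking along the same lines, the export step can be as simple as walking your own copy of the scene and writing a ray tracer script from it. A minimal sketch, assuming POV-Ray-style output, placeholder Vec3/Triangle types, a hard-coded light, and ignoring materials and POV-Ray's left-handed coordinate system (a real exporter would have to handle both):

// Write the application's triangle list out as a POV-Ray scene file,
// render that offline for the high-quality image, then return to the
// interactive OpenGL view.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

bool exportPov(const char* path, const std::vector<Triangle>& scene,
               Vec3 eye, Vec3 target)
{
    FILE* f = std::fopen(path, "w");
    if (!f) return false;

    // Camera and a single placeholder light source.
    std::fprintf(f, "camera { location <%g,%g,%g> look_at <%g,%g,%g> }\n",
                 eye.x, eye.y, eye.z, target.x, target.y, target.z);
    std::fprintf(f, "light_source { <10,10,-10> color rgb <1,1,1> }\n");

    // One triangle primitive per scene triangle, flat grey material.
    for (const Triangle& t : scene) {
        std::fprintf(f,
            "triangle { <%g,%g,%g>, <%g,%g,%g>, <%g,%g,%g> "
            "pigment { color rgb <0.8,0.8,0.8> } }\n",
            t.v0.x, t.v0.y, t.v0.z,
            t.v1.x, t.v1.y, t.v1.z,
            t.v2.x, t.v2.y, t.v2.z);
    }

    std::fclose(f);
    return true;
}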