Ray Tracing

Okay, so this lies in the area of OpenGL 3.0 (?!), but looking at the recent evolution of cutting-edge software renderers (mental ray, renderman and vex), ray tracing is a must for truly cutting-edge lighting. What do people think about the implementation of basic ray tracing capabilities?
Initially one merely needs the ability to cast a ray from any point in a given direction (or toward another location) and return lighting and point information. Comments?

What do you mean? This functionality is already exposed through GL_ARB_raytracing.

oops! my bad.

AAron… you're such a joker…

v3a: have you thought about the fact that for a gfx card to do that, it needs to know the whole scene (taking a lot of the memory on the gfx card)… either you have to upload it once, or you have to do something like
gl_query_raytrace(ray information)
render the relevant scene via GL calls
gl_get_raytrace_result();

that means you need to resend most of the scene for each query…

or does anyone have a better approach?

Oh, it's time for the third suggestion this month for implementing raytracing in hardware! The world is simply spinning around…

there is no way to perform ray tracing in opengl since the hardware doesn’t know all the geometry of the scene.

but opengl could offer some ray tracing capabilities:

  • ray / triangle intersection acceleration
    for example, you fill a vertex array, you give a ray, and the hardware returns the intersection (see the sketch after this list).

  • lighting capabilities
    point + normal + direction + lights => phong illumination.
    that is not very useful for only one point, but for several points … why not???
    maybe "feedback" can already do that, but I don't know it well enough.
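
To make those two bullets concrete, here is a rough CPU-side sketch of the work such capabilities would offload: a Möller-Trumbore ray/triangle test plus a basic Phong evaluation at a point. All names and structures are invented for illustration; none of this is an existing GL call.

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Ray { Vec3 origin, dir; };     // dir assumed normalized
    struct Hit { float t; Vec3 point; };  // distance along the ray + hit position

    // Möller-Trumbore ray/triangle test: the single query the first bullet would
    // like the hardware to run against a whole vertex array at once.
    std::optional<Hit> intersect(const Ray& r, Vec3 v0, Vec3 v1, Vec3 v2)
    {
        const float EPS = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p  = cross(r.dir, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < EPS) return std::nullopt;   // ray parallel to the triangle
        float inv = 1.0f / det;
        Vec3 s = sub(r.origin, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return std::nullopt;
        Vec3 q = cross(s, e1);
        float v = dot(r.dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return std::nullopt;
        float t = dot(e2, q) * inv;
        if (t <= EPS) return std::nullopt;               // intersection behind the origin
        return Hit{t, {r.origin.x + t*r.dir.x,
                       r.origin.y + t*r.dir.y,
                       r.origin.z + t*r.dir.z}};
    }

    // Second bullet: point + normal + light/eye directions => classic phong term.
    // All direction vectors are assumed normalized and pointing away from the surface.
    float phong(Vec3 n, Vec3 toLight, Vec3 toEye, float shininess)
    {
        float ndotl = dot(n, toLight);
        float diff  = std::fmax(0.0f, ndotl);
        Vec3  refl  = {2*ndotl*n.x - toLight.x, 2*ndotl*n.y - toLight.y, 2*ndotl*n.z - toLight.z};
        float spec  = std::pow(std::fmax(0.0f, dot(refl, toEye)), shininess);
        return diff + spec;
    }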

call me deaf, but could you please explain why raytracing algorithms have to know all of the scene?

Shadows can be done another way, although it would be nice to have shadows calculated just by placing light sources.

Reflections and refractions can be done by sample ray projection, or by keeping only a limited space behind the camera and the most significant areas in the scene. That would suffice for a disturbed reflection like from a rusty or bumpy sphere.

Further effects such as radiosity and photon mapping are, I think, far from realtime capable.

Before someone grunts and gives the standard answer for why Ray-Tracing needs an entire scene up front, let me suggest that it doesn’t. We’ve been in rasterland for so long that we may have missed some reasonably straightforward ways the OpenGL data streaming style of API could be applied to Ray-Tracing.

The key is to cache the rays instead of caching the scene. As each triangle comes down the pipe, test it against all or some rays and move on to the next.

I’d point out that a framebuffer is implicitly doing just that – one ray per pixel/sample, storing the closest intersection and color, where the ray’s origin and direction are a function of x,y.

To support ray-tracing, we would either reserve some bits in the framebuffer or use a corresponding texture lookup that stores origin/direction info. The total number of rays is then limited to available FB memory unless we page back and forth or get more clever.

One early problem with this approach is that each triangle sent to the HW generates new rays, which would get cached for the future triangles, but not intersected against previous triangles. So there’s a clear order of operations problem. However, this can be solved with a low order of multi-pass, basically N+1 passes, where N is the approximate recursion level of the ray-trace (actually, ray cast is probably better here).
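
To make the sketch a little more concrete, here is a minimal CPU-side version of the cache-the-rays scheme, assuming a fixed per-pixel ray buffer and N+1 streaming passes over the geometry. RayCell, streamPass and the pass count are invented for illustration and are not how any existing hardware or extension works.

    #include <vector>
    #include <limits>

    struct Vec3 { float x, y, z; };

    struct RayCell {                 // one entry per pixel/sample, like extra framebuffer planes
        Vec3  origin, dir;           // the cached ray
        float closestT = std::numeric_limits<float>::max();
        Vec3  color    = {0, 0, 0};
        bool  active   = false;      // inactive cells ignore incoming triangles
    };

    struct Triangle { Vec3 v0, v1, v2; };

    // Placeholder: a real version would do a ray/triangle test and shade the hit
    // (see the Möller-Trumbore sketch earlier in the thread).
    bool intersect(const RayCell&, const Triangle&, float* /*outT*/, Vec3* /*outColor*/)
    { return false; }

    // One streaming pass: the scene is sent triangle by triangle, exactly like a
    // normal draw-call stream, and every triangle is tested against every cached ray.
    void streamPass(std::vector<RayCell>& rayBuffer, const std::vector<Triangle>& scene)
    {
        for (const Triangle& tri : scene)          // "as each triangle comes down the pipe"
            for (RayCell& cell : rayBuffer)        // "...test it against all or some rays"
                if (cell.active) {
                    float t; Vec3 c;
                    if (intersect(cell, tri, &t, &c) && t < cell.closestT) {
                        cell.closestT = t;         // keep the nearest hit, like a z-buffer
                        cell.color    = c;
                    }
                }
    }

    // Rays spawned during pass k never saw the triangles streamed before they were
    // created, so the whole scene is streamed again: N+1 passes for a recursion depth
    // of roughly N.
    void rayCast(std::vector<RayCell>& rayBuffer, const std::vector<Triangle>& scene, int depth)
    {
        for (int pass = 0; pass <= depth; ++pass) {
            streamPass(rayBuffer, scene);
            // ...here new secondary rays would be written into free cells for the next pass...
        }
    }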

Anyway, that’s just a sketch to get you thinking. The idea is more involved than that.

One thing that would be very useful at the driver level is being able to dual-use texture and vertex buffer memory so that fragment shaders writing to texture targets could generate rays for another pass through the T&L logic. The Stanford GPU raytracing approach seems to rely on the fragment shaders only, representing the whole scene as textures. I’d prefer to have more flexibility if we’re going to try to use the GL API fully.
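
One way this kind of dual use can be approximated with pixel buffer objects is sketched below: the same buffer object receives a glReadPixels copy of a fragment-program result and is then rebound as a vertex array for the next pass. A rough sketch, assuming GLEW and a context that supports pixel buffer objects.

    #include <GL/glew.h>   // assumes an existing GL context with pixel buffer object support

    // A fragment program has just rendered ray origins/directions into the framebuffer;
    // copy that image into a buffer object and reuse the same memory as vertex data.
    GLuint makeRayVertexBuffer(int width, int height)
    {
        GLuint buf = 0;
        glGenBuffers(1, &buf);

        // Read the framebuffer straight into the buffer object (no CPU round trip).
        glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(GLfloat),
                     nullptr, GL_STREAM_COPY);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, nullptr); // offset 0 into the PBO
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        return buf;
    }

    void drawRaysAsPoints(GLuint buf, int width, int height)
    {
        // The very same buffer now feeds the vertex path: one point per cached ray.
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        glVertexPointer(4, GL_FLOAT, 0, nullptr);
        glEnableClientState(GL_VERTEX_ARRAY);
        glDrawArrays(GL_POINTS, 0, width * height);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }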

And, of course, full-speed framebuffer or texture readback never hurts.

Avi

[This message has been edited by Avi Bar-Zeev (edited 05-21-2003).]

yes, it could be nice for real time ray tracing. but this suggestion implies a lot of changes in the graphics pipeline, and these changes will not be useful for real time rendering (i mean opengl or direct3d rendering). i don't think it will be done.
but some "raytracing capabilities" would be useful for both "zbuffer" rendering (i don't know if there is a proper name for it) and raytracing rendering. and why not photon mapping. radiosity … i don't think so.

The question with radiosity is "how much energy (light) is transferred from patch (polygon) a to patch b?" That implies, for n patches, n*n transfer calculations for each pass.
You need at least two passes: one for "loading" the patches with energy and one for the first level of diffuse reflection.
That's a lot of work.
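
Just to show where the n*n cost comes from, here is a toy gathering pass in which every patch sums energy from every other patch. The Patch fields and the formFactor stub are invented for the example; a real form factor would account for area, orientation and visibility.

    #include <vector>

    struct Patch {
        float emitted     = 0.0f;   // energy injected by the light sources ("loading" pass)
        float received    = 0.0f;   // energy gathered this bounce
        float reflectance = 0.5f;   // diffuse reflectance of the patch
    };

    // Placeholder: a real form factor depends on patch areas, orientation and visibility.
    float formFactor(const Patch&, const Patch&) { return 0.001f; }

    // One bounce of diffuse reflection: n patches each gather from n-1 others,
    // hence the n*n transfer calculations per pass mentioned above.
    void gatherBounce(std::vector<Patch>& patches)
    {
        for (std::size_t i = 0; i < patches.size(); ++i) {
            float sum = 0.0f;
            for (std::size_t j = 0; j < patches.size(); ++j) {
                if (i == j) continue;
                sum += patches[j].emitted * formFactor(patches[i], patches[j]);
            }
            patches[i].received = patches[i].reflectance * sum;
        }
    }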

Perhaps there's a chance for low level photon maps, such as sunlight being reflected from windows onto the street, but how can you possibly speed up the effects in the shadow of a wineglass or on the bottom of a swimming pool?

Originally posted by ParAdocTus:
but how can you possibly speed up the effects in the shadow of a wineglass or on the bottom of a swimming pool?

By cheating, just like it’s been done for ages. It’s unrealistic to expect a general rendering solution for realtime graphics anytime soon.

As for radiosity requiring at least two passes, so does any decent dynamic lighting solution, at least every one I’m aware of.

Moreover, I don’t think simple indirect lighting needs to be done with raytracing or hemicubes. When you think about it, a simple shadowmap, preferably combined with a color channel, efficiently stores all direct light from one lightsource. With some smart blurring and shading I think you can get a quite good approximation of second order radiosity.
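
A rough sketch of how that shadow-map-plus-color idea could be used: store position, normal and reflected color per shadow map texel and treat each texel as a tiny secondary light when shading. The SMTexel structure and the gathering loop are made up for illustration; the actual experiment described here may have worked quite differently.

    #include <vector>
    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // One shadow map texel, extended with the color and geometry of the surface it saw.
    struct SMTexel {
        Vec3 position;   // world-space point the light hit
        Vec3 normal;     // surface normal at that point
        Vec3 color;      // direct light reflected there (the extra "color channel")
    };

    // Gather one bounce of indirect light at a shaded point by treating every
    // stored texel as a small secondary light source.
    Vec3 indirectLight(Vec3 p, Vec3 n, const std::vector<SMTexel>& texels)
    {
        Vec3 sum = {0, 0, 0};
        for (const SMTexel& t : texels) {
            Vec3 d = sub(t.position, p);
            float dist2 = std::max(dot(d, d), 1e-4f);
            float len = std::sqrt(dist2);
            Vec3 l = {d.x/len, d.y/len, d.z/len};
            // cosine at the receiver and at the sender, with distance falloff
            float w = std::max(0.0f, dot(n, l)) *
                      std::max(0.0f, -dot(t.normal, l)) / dist2;
            sum = {sum.x + t.color.x * w, sum.y + t.color.y * w, sum.z + t.color.z * w};
        }
        return sum;
    }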

I’ve done some research on this and I got some ok results showing ambient light and color bleeding. I gave it up, though, as I decided I’d rather have five non-radiosity lights than two radiosity ones.

-Ilkka

Zengar! You are here! Mail me at:
sch56@hotbox.ru
raytrace@front.ru

BTW, this topic is about the same as my topic "NVIDIA HOT TOPIC"!
v3a! They'll never understand what you mean! Only I understand your question! If engineers could build a co-processor for the CPU, they can build a co-GPU for the GPU to implement individual traces. It'll be very effective for true-realistic shadows and many more effects! They can't understand that you're talking about single traces! They think the data rate between RAM and GPU is too low! Ha-ha! They really think you're talking about REAL-TIME ray-tracing!!!

He wants to cast a ray and get back color and point information… in order to know what the ray reflected/refracted against, or whether a point is in shadow or in light, you need to know the whole scene and all material properties, and that is a raytracer (one ray at a time maybe, but still a raytracer). The only difference between that and a full realtime raytracer is the number of traces.

Hi all,

as a number of people quite rightly pointed out, ray tracing is intensive in memory and CPU cycles - memory because one would have to cache the whole scene (or just the rays - nice one Avi), and CPU for the escalating combinatorial problem of primary, secondary, tertiary etc. rays. Neither constraint is, however, a real problem, certainly not by the time OpenGL could implement ray tracing as core functionality. Current state-of-the-art cards have 512MB on board and perform hundreds of millions of floating point ops per second.
After implementation of pixel and vertex shaders, ray tracing is the next step.

I guess I suggested only the simple "shoot out a primary ray and return color/point info" because this one operation is the basis of ray tracing's usefulness, and one can do a bunch of stuff with it alone: ambient occlusion, ray-traced reflections and refractions, and a whole slew of internal lighting tricks that I often use to make graphics look tasty. I have come to this forum from a background of software rendering - mostly mental ray, renderman and vex - and the one irritating thing is the turnaround time for getting a result; it is often the limiting factor in producing real quality, as you only get to try a couple of different looks before you deliver. Hardware rendering will overtake software rendering in due course. Till then, can't wait…

v3a