!!! nVidia and effects (hot topic) !!! --== Woow!!! ==--

Hello everybody!
I want to ask you a very interesting question!
Do nVidia's 3D graphics processors have functions for RAY-TRACING implemented in hardware or not (since the GeForce2 MX)? If yes, how can I use them through OpenGL? Don't laugh and joke at me!
All modern effects use RAY-TRACING, and I think RAY-TRACING ought to be implemented in hardware!!! The time has come. It's a very important algorithm (ray tracing).
Thank you for your attention!

No, it hasn't. At least it has no “ray-tracing engine” of the kind you are asking for. However, if you implement a ray tracer you may be able to use some functions that current hardware provides, but I don't know much about this.

As far as I know it is impossible to implement hardware ray tracing. And “all the ray-tracing effects” are only used in films, where they are computed in software. Real-time applications like games don't use ray tracing.

Sorry, pal, but you have to do it the old way.
Jan.

Raytracing requires all the data to be available at once, so there would be no parallelism between the CPU and GPU anymore.

The GPU has very finite memory, and would likely not be able to store all the data required to raytrace a decent-looking scene.

For objects such as spheres, raytracing offers an infinite level of detail (to a certain extent), and with very finite memory limits (we're talking 64-128 MB on the latest gfx cards), you'd run out of memory real fast yet again.

If you had any reflectivity in your scene, you'd have to eliminate a lot of your geometry culling, as otherwise you could very likely get incorrect results.

That said, there's a fragment program for the NV30 that performs realtime raytracing; however, the scene is VERY simple.

see http://www.flipcode.com/cgi-bin/msg.cgi?showThread=00007722&forum=3dtheory&id=-1
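
For reference, the heart of a demo like that is just a per-pixel ray/sphere intersection. Here is a rough CPU-side sketch of that test (the standard geometric solution, not the demo's actual code):

/* Not the demo's code -- just the per-pixel math such a fragment program
   boils down to: intersect one ray with one sphere. */
#include <math.h>

/* Returns the distance t >= 0 to the nearest hit, or -1 if the ray misses.
   Ray: origin o, normalized direction d. Sphere: center c, radius r. */
float raySphere(const float o[3], const float d[3], const float c[3], float r)
{
    float L[3] = { c[0] - o[0], c[1] - o[1], c[2] - o[2] };
    float tca  = L[0]*d[0] + L[1]*d[1] + L[2]*d[2];      /* projection of L onto the ray */
    float d2   = L[0]*L[0] + L[1]*L[1] + L[2]*L[2] - tca*tca;
    if (d2 > r*r) return -1.0f;                          /* ray passes outside the sphere */
    float thc = sqrtf(r*r - d2);
    float t   = tca - thc;                               /* near intersection */
    if (t < 0.0f) t = tca + thc;                         /* origin inside the sphere */
    return (t < 0.0f) ? -1.0f : t;
}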


Hi, my friends!
No, no!!! I'm not talking about ray-traced rendering!
I had in mind only a single ray trace. I know all about ray tracing and that it requires a large amount of memory for secondary and higher-order rays. Creating dynamic shadows in games requires ray tracing, but it runs slowly on the CPU. So, my question: do nVidia's 3D graphics processors have functions for RAY-TRACING implemented in hardware? A GPU could process a single trace much faster than the CPU.

DopeFish, you said:
That said, there's a fragment program for the NV30 that performs realtime raytracing; however, the scene is VERY simple.

— Where can I get this “fragment program for the NV30 that performs realtime raytracing”? Even if the scene is very simple!

Thank you for your attention!

>>>No, no!!! I'm not talking about ray-traced rendering!
So, my question: do nVidia's 3D graphics processors have functions for RAY-TRACING implemented in hardware? A GPU could process a single trace much faster than the CPU.<<<

If you are not talking about rendering, then what are you talking about?

No, nVidia doesn't have raytracing hardware available to the public. I think that was pretty obvious, since if they did, they would be writing “YES, we do have raytracing built into our 3D hardware” in super jumbo-sized letters all over the moon.

DopeFish is talking about a demo at cgshader.org. All the source code and everything should be there.

There are a number of raytracing shaders you can find here: http://www.cgshaders.org/shaders/

You can find even more info by searching for “real time raytracing” on Google.

You mean realtime raytracing like it was done on a Pentium 200 without 3D acceleration? :wink:
http://www.pouet.net/prod.php?which=5

I think you're all bad programmers (I'm sure!).
OK, I'll repeat one more time for the especially bad:
Does any 3D graphics processor have hardware acceleration for a single ray trace, like the following:

GLbool glTraceRayEXT(GL_RAY_TRACE, &Triangle, &Ray, &IntersectionPoint);

(Not RAY-TRACE rendering!!! NOT!!! Only a single trace;
NO REALTIME RENDERING!!! NO REALTIME!!! I had in mind only improving the speed of ray tracing, for example for physical dynamics, by using the internal trace functions of my 3D accelerated card if they exist!!! Understand? So, my question: do they exist or not???
I don't know the English language too well, but I think I formulated my question pretty well.)

For example:
on a Pentium 4 at 2000 MHz -> 10,000,000 ray traces per second;
on a GeForce4 -> 150,000,000 ray traces per second.

Do you understand what my question means?
I hope that after this all of you will have understood my purpose!
Thank you for your attention!

I think you're someone who has some very frightening problems with reading. (I'm sure!)

The answer is NO, as was already mentioned in previous posts.

The solution is simple: if you wish to have a function working like

GLbool glTraceRayEXT(GL_RAY_TRACE, &Triangle, &Ray, &IntersectionPoint);

do it yourself on the CPU.
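
For example, here is a minimal CPU-side sketch of such a function, using the standard Moller-Trumbore ray/triangle test. The struct and function names are made up for illustration; this is not any real GL extension, just what “do it yourself on the CPU” can look like:

/* Minimal sketch of a CPU ray/triangle intersection (Moller-Trumbore).
   Names are illustrative only; not a real OpenGL extension. */
#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 v0, v1, v2; } Triangle;
typedef struct { Vec3 origin, dir; } Ray;

static Vec3  vsub  (Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3  vcross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; return r; }
static float vdot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns 1 and fills *hit with the intersection point if the ray hits the
   triangle, otherwise returns 0. */
int traceRay(const Triangle *tri, const Ray *ray, Vec3 *hit)
{
    const float EPS = 1e-6f;
    Vec3  e1 = vsub(tri->v1, tri->v0);
    Vec3  e2 = vsub(tri->v2, tri->v0);
    Vec3  p  = vcross(ray->dir, e2);
    float det = vdot(e1, p);
    if (fabsf(det) < EPS) return 0;            /* ray parallel to triangle plane */
    float inv = 1.0f / det;

    Vec3  s = vsub(ray->origin, tri->v0);
    float u = vdot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 0;

    Vec3  q = vcross(s, e1);
    float v = vdot(ray->dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;

    float t = vdot(e2, q) * inv;
    if (t < EPS) return 0;                     /* intersection behind the ray origin */

    hit->x = ray->origin.x + t * ray->dir.x;
    hit->y = ray->origin.y + t * ray->dir.y;
    hit->z = ray->origin.z + t * ray->dir.z;
    return 1;
}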

Your API suggestion has to be the worst I have ever seen, and it is obviously impossible for anyone to accelerate in hardware. The posters have been quite correct in the points they have made w.r.t. retained databases, and your posts in response have been spectacularly ill-advised.

Technical details on implementations:
http://graphics.stanford.edu/papers/rtongfx/
http://www.cs.uiuc.edu/Dienst/UI/2.0/Describe/ncstrl.uiuc_cs/UIUCDCS-R-2002-2269

General discussion of this vs. conventional rendering:
http://online.cs.nps.navy.mil/DistanceEd…ation/cdrom.pdf
http://online.cs.nps.navy.mil/DistanceEd…on/session.html

push crap button

You're Russian, aren't you? I figured that out even before looking at your profile. You people have never known how to hold a conversation…

Sorry for my Russian post. I didn't want to offend you guys, I simply had to say something in private.

Hi, friends!
Thank you all for your opinions and advices.
I already have my own software implementation of RAY-TRACING.
I asked this question just because I am not a skilled programmer (3 years) and knew nothing about the possibility of implementing ray-tracing algorithms in hardware!

OK, let's talk about performance.
My algorithm achieves 750,000 traces per second on a Celeron A 366 MHz.
Why does everyone tell me “you can reach millions even on your Celeron A 366 MHz machine” (V-man)? Is that possible?
I think not! What do you think???

OK, it looks clearer now.

750,000 traces per second on a 366 MHz processor? That's already quite fast in my opinion… but it depends on how complex your test scene is.

You could optimize your routine in assembler with 3DNow! or SSE, for example, to get a higher trace rate on your machine.

Here is an article that covers the implementation of a raytracer in assembler with SSE: http://www.digit-life.com/articles/rtraytracing/
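
To give a flavour of the approach, here is a tiny sketch (made-up struct name, no divide-by-zero handling, not the article's code) that intersects four rays with one plane at a time using SSE intrinsics, the usual structure-of-arrays trick:

/* Sketch only: four rays vs. one plane (n . p = dist) with SSE intrinsics. */
#include <xmmintrin.h>

typedef struct {
    __m128 ox, oy, oz;   /* origins of 4 rays, structure-of-arrays layout */
    __m128 dx, dy, dz;   /* directions of 4 rays */
} RayPacket4;

/* Returns the distance along each of the 4 rays to the plane n . p = dist
   (no guard against rays parallel to the plane; purely illustrative). */
static __m128 intersectPlane4(const RayPacket4 *r,
                              float nx, float ny, float nz, float dist)
{
    __m128 Nx = _mm_set1_ps(nx), Ny = _mm_set1_ps(ny), Nz = _mm_set1_ps(nz);
    __m128 nDotO = _mm_add_ps(_mm_mul_ps(Nx, r->ox),
                   _mm_add_ps(_mm_mul_ps(Ny, r->oy),
                              _mm_mul_ps(Nz, r->oz)));
    __m128 nDotD = _mm_add_ps(_mm_mul_ps(Nx, r->dx),
                   _mm_add_ps(_mm_mul_ps(Ny, r->dy),
                              _mm_mul_ps(Nz, r->dz)));
    return _mm_div_ps(_mm_sub_ps(_mm_set1_ps(dist), nDotO), nDotD);
}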

There are also other possibilities; this is just a suggestion.

regards,

So if you're not a skilled programmer, why ask dumb questions that Google can answer in a very short time? It's just the same as in the “OpenGL suggestions” forum: people that are too lazy or don't understand things come and ask for sound/networking/a Chinese restaurant to be integrated into OpenGL. I can understand that sometimes you have some ideas and the first thing you think is “hey, that's cool”. But then, before posting, you should do an extensive search on Google and groups.google. You know, the problem is that some people (I can only speak for myself, but I bet there are other people with a similar opinion) are getting tired of seeing the same questions every day. That's why there is a beginner and an advanced board (not only here, almost everywhere): people that have no problem with answering a “how to make a camera” question again and again are free to answer these questions, and people that don't like it are free not to see such questions popping up every hour.

There's no problem with being a beginner, and I used to ask dumb questions on this board too, but learn to do a little research before asking. In 99.9% of cases you'll find the answer without asking - and it's even faster for you. And please stop using such a ******* topic title - my personal rule is that if the title contains more than 3 “!” the question/suggestion is dumb - and an exception has yet to be found.

Although this posting may sound aggressive, it isn't meant to be; please don't take it personally. It's just advice, and following the rules I've presented here has even more positive effects, one of them being that you won't get any more replies like “push crap button” or like this one.

-Lev

However, ray-tracing hardware does exist: http://www.saarcor.de/

Ah, so that's one of the pieces of hardware John Carmack talked about at last QuakeCon. After he said there exists some specialized hardware that does ray tracing I was like, 'oh, that's kinda nifty, I need to check this out and see what they can do.' The screenshot on that page with the two trees and the billions (exaggerated, I know) of flowers looks pretty cool. Nice and detailed. The lighting in those images looks pretty neat too. I wouldn't mind having one of these cards to mess around with.

What would be cool is to have a graphics card with two chips on it, where one chip is a GPU like what we have today in the R300+ or NV30+ and the other chip is a ray tracer like the one on that page, and have it set up so you can render an image either with both chips at the same time or with just one or the other. I know this is pretty far-fetched and would cost thousands, but it's still a neat concept, I think.

-SirKnight

I think the conference room with the table/chairs was more impressive.

Anyway, I’m all for EXT_chinese_restaurant; that’ll save me lots of lunch time and allow me to write more OpenGL code faster.