suggestion

We could have built-in functions in GLSL for doing intersection testing, like ray-to-sphere, ray-to-cylinder, ray-to-triangle.
Perhaps IHVs could put it in silicon if that would be fast.
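For what it's worth, here is roughly what such a test looks like when written by hand today; a built-in (and possibly hardware-backed) version could replace this kind of user code. Just a sketch, the name and calling convention are made up:

```glsl
// Hand-written ray-sphere test of the kind a built-in could replace.
// ro = ray origin, rd = normalized ray direction,
// sphere = center in .xyz, radius in .w.
// Returns the parametric distance t to the nearest hit, or -1.0 on a miss.
float intersectRaySphere(vec3 ro, vec3 rd, vec4 sphere)
{
    vec3  oc = ro - sphere.xyz;
    float b  = dot(oc, rd);
    float c  = dot(oc, oc) - sphere.w * sphere.w;
    float h  = b * b - c;
    if (h < 0.0)
        return -1.0;            // discriminant negative: the ray misses
    return -b - sqrt(h);        // nearest root (negative if the origin is inside)
}
```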

I'm posting just to say this makes sense to me as well, considering there are some "important" shaders which do those things.

This seems extremely useful! I actually think it is strange that this has not been done yet, since this is something that often has to be done per fragment (= a lot of times!)…

Originally posted by thinks:
I actually think it is strange that this has not been done yet, since this is something that often has to be done per fragment.
Yeah, strange. Just as strange as the fact that there is no normalize function in ARB_fragment_program…

Jan.

These could be implemented as a set of functions which you use in your own shaders. As I recall, as long as only one shader object defines "main", you can compile several different shader objects and link them into one program.

So, let’s say you want that ray-sphere test. You would write one shader, called “ray_sphere_intersection”. You would also write your standard “main” fragment shader. Compile and link the two shaders in one program object, and voila! In fact, you could use that same intersection function in your vertex shader, as well.

At least, that’s the theory, as I understand it. I haven’t actually tried doing the above, so if anyone else here has practical experience that would be useful.
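Roughly like this, as far as I understand the multi-shader-object linking. This is an untested sketch; the uniform/varying names are made up:

```glsl
// --- Shader object 1: "ray_sphere_intersection" (no main) ---------------
bool raySphere(vec3 ro, vec3 rd, vec3 center, float radius, out float t)
{
    vec3  oc = ro - center;
    float b  = dot(oc, rd);
    float h  = b * b - (dot(oc, oc) - radius * radius);
    if (h < 0.0) return false;
    t = -b - sqrt(h);
    return t >= 0.0;
}

// --- Shader object 2: the ordinary fragment shader with main() ----------
// A prototype is enough here; the linker resolves the call against the
// other shader object attached to the same program object.
bool raySphere(vec3 ro, vec3 rd, vec3 center, float radius, out float t);

uniform vec3  sphereCenter;
uniform float sphereRadius;
varying vec3  rayDir;            // interpolated eye-space ray from the vertex shader

void main()
{
    float t;
    if (raySphere(vec3(0.0), normalize(rayDir), sphereCenter, sphereRadius, t))
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // hit
    else
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // miss
}
```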

And you’re right, releasing a library of such standard functions would be extremely useful, though they shouldn’t be built in (imo).

Originally posted by bChambers:
These could be implemented as a set of functions which you use in your own shaders. As I recall, as long as only one shader object defines "main", you can compile several different shader objects and link them into one program.

So, let’s say you want that ray-sphere test. You would write one shader, called “ray_sphere_intersection”. You would also write your standard “main” fragment shader. Compile and link the two shaders in one program object, and voila! In fact, you could use that same intersection function in your vertex shader, as well.

At least, that’s the theory, as I understand it. I haven’t actually tried doing the above, so if anyone else here has practical experience that would be useful.

And you're right, releasing a library of such standard functions would be extremely useful, though they shouldn't be built in (imo).
The point of the whole discussion is not to be able to do that stuff, but to get hardware acceleration if possible. If there are built-in functions, some future hardware may do those calculations pretty fast, and even today drivers could replace them with very optimized code. This is not possible if you write that code yourself instead of calling a built-in function, because drivers wouldn't be able to recognize that your code does in fact perform a ray-sphere test.

AFAIK current drivers do exactly that for ARB_fragment_program. They look for a combination of three instructions (or so), because those are usually used to "simulate" a normalization. Since current hardware can do a normalization with dedicated circuitry, this improves speed. Had the creators of the extension thought of the possibility that we might use a normalization here or there, and that hardware vendors might actually be crazy enough to put that "useless" function into silicon, they might have added it directly. But well, who could have known that?
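In GLSL terms (the real pattern the driver matches is fragment_program instructions, not GLSL, so take this only as an illustration), the equivalence being recognized is roughly:

```glsl
// A hand-rolled normalization and the built-in produce the same result;
// a driver that spots the first form can route it to dedicated
// normalization hardware, which is the kind of substitution described above.
vec3 normalizeByHand(vec3 v)  { return v * inversesqrt(dot(v, v)); }  // ~DP3 + RSQ + MUL
vec3 normalizeBuiltin(vec3 v) { return normalize(v); }
```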

Jan.

I don’t exactly see intersection testing being common enough or anywhere near important enough to motivate special built-in functions, and definitely not for special hardware.

Such things sound unlikely to happen, but eventually it has to. The future transistor budget will go from 200 million to 300 million, and what are you going to spend it on, cache? That's boring.

If they can build a physics chip, then they can do this. Well, first let's see how that turns out. :)

Originally posted by V-man:
Such things sound unlikely to happen, but eventually it has to. The future transistor budget will go from 200 million to 300 million, and what are you going to spend it on, cache? That's boring.

If they can build a physics chip, then they can do this. Well, first let's see how that turns out. :)
Cache might be boring, but 32 (or 48, or 64… take your pick) pipelines certainly aren’t. Nor are branch prediction, double-precision, or any of a myriad of other features that people want in their GPUs.

Yes, they can do ray-sphere intersection tests in hardware. But you have to ask: is it worth it to add this functionality for EVERY pixel/vertex pipe on the chip? If you have 8 vertex pipes and 16 fragment pipes, that means you now have transistors replicated 24 times for an extremely limited piece of functionality that will more often than not be sitting idle. Not to mention, shaders would have to be rewritten to take advantage of the new functionality (via GL extensions), as detection of the usage would be difficult at best (someone else already pointed this out).

Normalizing vectors is a completely different scenario; the functionality is commonly used, and extremely easy to recognize.

In the physics chip you mentioned, this functionality would be used in almost every iteration of the stepping process, so it would make sense to have it there. Or, if someone were to manufacture a ray-tracing chip, it would also make heavy use of such a function. For scanline rendering, however, I doubt we will ever see the ray-sphere intersection test implemented as a hardware-accelerated function.

I could use it… I'm working on a custom radial fog fragment shader for an 'alien' world which basically requires up to four 2D line/plane intersections. But then there are a lot of things I could use. For 2D intersections it would be nice to be able to do two at a time and store the result in a 4D vector. Actually, for my shader, returning a time (t) would be a more immediately useful result. Would the intersections return 'time' or vectors? I guess both versions would be desirable. Would you give the functions line segments, or rays? There would probably have to be a lot of different variants, and the functions would probably be better encapsulated in a 'library' rather than adding more built-ins, if this isn't already standard practice.
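To make the "two at a time" idea concrete, here is a rough sketch of what such a packed 2D intersection could look like; the name and parameter layout are invented:

```glsl
// Intersect one 2D ray (ro + t * rd) with two lines, each given as a
// point on the line and a unit normal, and pack both parametric results
// into a single vec2. A negative component means that line lies behind
// the ray; a component goes to +/-infinity when rd is parallel to that line.
vec2 rayTwoLines2D(vec2 ro, vec2 rd,
                   vec2 p0, vec2 n0,
                   vec2 p1, vec2 n1)
{
    // For a line through p with normal n: t = dot(p - ro, n) / dot(rd, n)
    vec2 num   = vec2(dot(p0 - ro, n0), dot(p1 - ro, n1));
    vec2 denom = vec2(dot(rd, n0),      dot(rd, n1));
    return num / denom;
}
```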

Someone said it wouldn't be worth building complex functions into hardware because they would have to be replicated across multiple units… maybe this is already done… but if the units were staggered across a pipeline, then fewer such functional units would be needed, because the functionality could be shared between the operating units and would be constantly active. With something like hyperthreading technology, the available functional units could be shared across parallel shader routines running different code; functional units would be bound to processes as the shaders are loaded. This sort of scheduling might seem more relevant once cooperation between GPUs becomes commonplace.

With a hardware pipeline that can handle per-pixel alpha sorting via a linked-list buffer (responsible for remembering alpha pixels and sorting them when the buffer is swapped), completely asynchronous rasterization could probably be achieved, meaning that parallel rendering would not even have to worry about critical sections due to alpha-blending passes, and only major predictable state changes would hold up the parallel rendering.

Originally posted by bChambers:
Cache might be boring, but 32 (or 48, or 64… take your pick) pipelines certainly aren’t. Nor are branch prediction, double-precision, or any of a myriad of other features that people want in their GPUs.

Yes, they can do ray-sphere intersection tests in hardware. But you have to ask: is it worth it to add this functionality for EVERY pixel/vertex pipe on the chip? If you have 8 vertex pipes and 16 fragment pipes, that means you now have transistors replicated 24 times for an extremely limited piece of functionality that will more often than not be sitting idle. Not to mention, shaders would have to be rewritten to take advantage of the new functionality (via GL extensions), as detection of the usage would be difficult at best (someone else already pointed this out).

Normalizing vectors is a completely different scenario; the functionality is commonly used, and extremely easy to recognize.

In the physics chip you mentioned, this functionality would be used in almost every iteration of the stepping process, so it would make sense to have it there. Or, if someone were to manufacture a ray-tracing chip, it would also make heavy use of such a function. For scanline rendering, however, I doubt we will ever see the ray-sphere intersection test implemented as a hardware-accelerated function.
I don't think there are any plans for doubles. That's even less likely to happen because games don't need it. Branch prediction is not a programmer's tool. Who is asking for this?
I'm sure ray-tracing rendering chips will not happen in the next several years.

I'm asking for this for the fragment units, where it would likely be used for ray tracing. The top companies have shown interest in this, and SIGGRAPH members have shown interest in this.

It's a matter that requires thought to determine which tests are worthwhile to do in hardware.

I’m not sure what michagl is talking about. What time (t)?

Time (t) is simple… a lot of the time when you do intersection tests, you would prefer to get a single scalar from which the actual intersection point can be derived by adding, to the origin of the ray, the normalized direction of the intersecting ray scaled by a time (t).

A lot of the time, 't' is a more desirable return value than the actual intersection point, and it generally requires less work to derive.

If the function in hardware used t to derive the intersection point and then returned the point, not only would it have to do more work to get the point, but if you actually wanted (t) you would then have to derive (t) from the intersection point, which would take even more work. So basically you would want support for both flavors.
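As a sketch of why the t-returning flavor is the cheaper primitive (the function names here are made up):

```glsl
// Going from t to the hit point is a single multiply-add...
vec3 pointFromT(vec3 ro, vec3 rd, float t)
{
    return ro + rd * t;
}

// ...while recovering t from a returned point costs a subtract, a dot
// product and, if rd is not unit length, a divide as well.
float tFromPoint(vec3 ro, vec3 rd, vec3 hitPoint)
{
    return dot(hitPoint - ro, rd) / dot(rd, rd);
}
```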

I don't know what the hell people call (t) in established channels; it stands for 'time' and is used in parameterized functions.

I figure this is obvious stuff.

michagl - Why is (t) time? I have seen it being used in many places in linear algebra literature as a unit-less parameter. Please explain! I can also agree with you that it would be useful to retrieve this parameter in certain situations.

Originally posted by thinks:
michagl - Why is (t) time? I have seen it being used in many places in linear algebra literature as a unit-less parameter. Please explain! I can also agree with you that it would be useful to retrieve this parameter in certain situations.
It's not necessarily 'time', but that is how it is often used, so the symbol 't' just sort of stuck. A lot of the time it is used as a 0.0–1.0 variable; in this case you can think of 0.0 as the beginning of time and 1.0 as the end of time. Traditionally, for instance, the third argument of a lerp (linear interpolation) function is (t). Generally, a function is said to be parameterized if it is driven by a scalar or multidimensional vector, usually written as 't' or 'time'. A lot of the time in raytracers, you can think of 't' as the time at which a ray travelling at the speed of light (or whatever the transmissive speed limit is) intersects some geometry; so if a ray hits two opaque geometries, you would use the hit with the smaller (t), or time. In this case the time is generally not 0.0–1.0 but 0.0–infinity.
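The lerp case in GLSL, just to pin the notation down:

```glsl
// GLSL's built-in mix() is exactly the lerp mentioned above:
// the third argument is the parameter t, running from 0.0 to 1.0.
vec3 lerp(vec3 a, vec3 b, float t)
{
    return mix(a, b, t);      // equivalent to a + (b - a) * t
}
```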

OK, you are thinking of the physics equation

x(t) = m + v * t

m is a start position and v is a velocity. t is time. The function x(t) is a position in space.

The parametric equation of a line looks identical.

Originally posted by michagl:
Someone said it wouldn't be worth building complex functions into hardware because they would have to be replicated across multiple units…
Which of course would be slower than having one unit per pipeline. How many instructions do you need to intersect a ray with a sphere, maybe four? So assuming a 16-pipe card and a ray-sphere test taking only one cycle, you'd need at least 4 shared units just to break even with the regular, more RISC-like instruction set.

Originally posted by V-man:
I don't think there are any plans for doubles. That's even less likely to happen because games don't need it. Branch prediction is not a programmer's tool. Who is asking for this?
I'm sure ray-tracing rendering chips will not happen in the next several years.

I'm asking for this for the fragment units, where it would likely be used for ray tracing. The top companies have shown interest in this, and SIGGRAPH members have shown interest in this.

It's a matter that requires thought to determine which tests are worthwhile to do in hardware.
Although you might not think that games need doubles (an inaccurate statement at best; what you really mean is that the benefits of doubles aren't worth the decreased speed), OpenGL is not used exclusively for games - it is conceivable that someone would need greater precision. And branch prediction, though not used directly by programmers, would make conditional code much more useful.

Anyway, the point was that there are many features we would like to see on our favorite graphics cards, and the card makers have to decide which features will make the most sense to include.

…Chambers

bChambers,

If you think games need doubles, then that's fine with me. Make your case to the IHVs.

My suggestion is for gaming cards, and I think it's appropriate. Every generation of GPU will have a feature the previous one didn't. IHVs want something that will cause a lot of drooling, and I think this is pretty good.
