Well, tessellation is a bad example. First, it has been around for about a decade now in various forms; it's just that Microsoft picked one particular way they imagined tessellation should work, and the IHVs and GL followed the same concept.
Also, tessellation is not a fundamental change: primitive assembly and rasterization are still the same.
I'm not saying your idea is bad, but rather than having tetrahedron rendering, which is kind of limited to the use cases you've presented, I would rather see generic functionality introduced that could, as a side effect, also be used to implement the technique you've presented.
Disclaimer: This is my personal profile. Whatever I write here is my personal opinion; none of my statements or speculations are in any way related to my employer, they should not be treated as accurate or valid, and in no case should they be considered to represent the opinions of my employer.
Technical Blog: http://www.rastergrid.com/blog/
There are only three possible configurations for a tetrahedron relative to the camera (1 front face / 3 back faces, 2 front / 2 back, or 3 front / 1 back), so you could technically split the tetrahedron into 3 or 4 regions where front and back depth are linearly interpolated. But I don't think tetrahedra would be practical, because you'd create a lot of overdraw when approximating a sphere with a set of tetrahedra.
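A quick way to convince yourself of the 1/3, 2/2, 3/1 claim is to classify each face of the tetrahedron directly. This is a minimal Python sketch of my own (the vector helpers, the outward-orientation trick, and the eye positions are illustrative assumptions, not part of any real renderer):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def front_face_count(verts, camera):
    """Count the camera-facing faces of a tetrahedron.

    verts: four 3D points; camera: the eye position, assumed outside
    the tetrahedron. Face i is the triangle opposite vertex i; we
    orient its normal outward, then the sign of
    dot(normal, camera - face_point) decides front vs. back.
    """
    count = 0
    for i in range(4):
        a, b, c = [verts[j] for j in range(4) if j != i]
        n = cross(sub(b, a), sub(c, a))
        if dot(n, sub(verts[i], a)) > 0:   # normal points inward: flip it
            n = tuple(-x for x in n)
        if dot(n, sub(camera, a)) > 0:     # face is visible from the eye
            count += 1
    return count

# A unit tetrahedron seen from three eye positions hits all three cases:
tet = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
print(front_face_count(tet, (5, 5, 5)))    # 1 front face
print(front_face_count(tet, (5, 5, -1)))   # 2 front faces
print(front_face_count(tet, (5, -1, -1)))  # 3 front faces
```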
However, I wonder whether it would be possible to get the previous depth-buffer value as a fragment shader input; it has to be read before the FS runs for early Z anyway. Then you could draw the front faces of your light volume, compute the distance to the back faces analytically, and compare it against the depth input, e.g. to test for intersection.
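To make "compute the distance to the back faces analytically" concrete, here is a hedged Python sketch of a ray vs. tetrahedron interval test (half-space clipping against the four face planes). The function name and orientation logic are my own assumptions for illustration, not a description of any actual fragment-shader feature:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_tet_interval(origin, direction, verts):
    """Clip a ray against a tetrahedron's four face planes.

    Returns (t_enter, t_exit) along the ray, or None on a miss.
    Front-face planes tighten t_enter; back-face planes tighten
    t_exit, which is the analytic back-face distance.
    """
    t0, t1 = 0.0, float('inf')
    for i in range(4):
        a, b, c = [verts[j] for j in range(4) if j != i]
        n = cross(sub(b, a), sub(c, a))
        if dot(n, sub(verts[i], a)) > 0:   # make the normal outward
            n = tuple(-x for x in n)
        denom = dot(n, direction)
        if abs(denom) < 1e-12:             # ray parallel to this face
            if dot(n, sub(origin, a)) > 0: # ...and outside its plane
                return None
            continue
        t = dot(n, sub(a, origin)) / denom # plane: dot(n, p - a) = 0
        if denom < 0:                      # entering the half-space
            t0 = max(t0, t)
        else:                              # leaving the half-space
            t1 = min(t1, t)
    return (t0, t1) if t0 <= t1 else None
```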
But there could be some optimizations when all 4 points are nearer or farther than the Z-buffer value and some hierarchical Z-buffer testing is used.
Originally Posted by mbentrup
I think this could also be a good thing if we want to cut an object by one or more planes.
For example, we could store the common center of the tetrahedra and use it to reconstruct the correct volume when we cut the object with one or more planes.
Without this volumetric information, we lose the triangles that are rejected by the cutting plane(s)
(cf. a closed 3D surface can become an open 3D surface when it is cut by a 2D plane).
But with this information, truncated tetrahedra can be reconstructed, so the surface always remains closed after cutting by the user's planes.
Think, for example, of a sphere made only of surface triangles:
=> we get hole(s) if we cut the sphere with one (or more) plane(s)
==> but with tetrahedra, since we work with a solid sphere [not just the bounding surface of a sphere], the output is always a solid shape, e.g. a portion of the sphere
For example, if we cut an apple with a knife, we automatically see the inside of the apple at the cutting plane
(this can only be computed if we use a 3D texture, of course).
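One way to make "truncated tetrahedra can be reconstructed" concrete: for each tetrahedron the plane passes through, the cap that closes the hole is exactly the tetrahedron/plane cross-section, and that cross-section is always a triangle or a quad. A rough Python sketch of my own (the function name and the plane convention dot(n, p) = d are illustrative assumptions):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def tet_plane_cross_section(verts, n, d):
    """Return the points where the plane dot(n, p) = d crosses the
    tetrahedron's six edges. The 3 or 4 resulting points form the cap
    polygon that keeps the cut solid closed (empty list on a miss).
    """
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        a, b = verts[i], verts[j]
        sa = dot(n, a) - d            # signed side of each endpoint
        sb = dot(n, b) - d
        if sa * sb < 0:               # endpoints straddle the plane
            t = sa / (sa - sb)
            pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return pts
```

For a unit tetrahedron, a plane that isolates one vertex yields a triangular cap (3 edge crossings), while a plane that separates two vertices from the other two yields a quad cap (4 edge crossings).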
On the other hand, why not directly use new primitives such as spheres, tetrahedra, cubes, and other 3D shapes instead of a very large number of triangles?
(OK, rasterization becomes harder to implement, but it could also exploit far more parallelism than the "simple" serial rasterization of a lot of triangles.)
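As a toy illustration of why an analytic sphere primitive parallelizes so well: under an orthographic camera looking down +z (my simplifying assumption, not how a real GPU would do it), every pixel can independently solve its own ray/sphere intersection with no shared state:

```python
import math

def sphere_front_depth(px, py, center, radius):
    """Analytic 'rasterization' of a sphere: each pixel independently
    intersects its +z ray with the sphere. Returns the front-surface
    depth, or None if the pixel lies outside the silhouette.
    Assumes an orthographic camera with view rays along +z.
    """
    dx, dy = px - center[0], py - center[1]
    rr = radius * radius - dx * dx - dy * dy
    if rr < 0:
        return None                    # pixel outside the silhouette
    return center[2] - math.sqrt(rr)   # nearest (front) surface depth
```

Since each pixel's computation touches only its own coordinates, the whole image could be evaluated in parallel, which is the point the question above is making.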
Last edited by The Little Body; 05-02-2012 at 02:50 PM.