glDrawElements - Tetrahedron

Hello!
We have functionality to draw points (1 vertex), lines (2 vertices), triangles (3 vertices).
Why don’t we have the possibility to draw a tetrahedron (4 vertices)?

The 4 vertices can be projected to the screen as usual. Then, in the screen-space region they cover, the device can check whether the existing values in the depth buffer lie inside the tetrahedron (in 3D). If they are inside -> the fragment shader runs, with values interpolated from the tetrahedron vertices (volume interpolation).
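
To make the “inside” check concrete, here is a rough CPU-side sketch of the signed-volume test and the volume interpolation I mean (it uses glm just for illustration; real hardware would obviously not do it this way):

[CODE]
#include <glm/glm.hpp>

// Signed volume of tetrahedron (a, b, c, d).
float signedVolume(const glm::vec3& a, const glm::vec3& b,
                   const glm::vec3& c, const glm::vec3& d)
{
    return glm::dot(glm::cross(b - a, c - a), d - a) / 6.0f;
}

// Barycentric weights of point p with respect to tetrahedron (v0..v3).
// p is inside the tetrahedron exactly when all four weights are >= 0.
glm::vec4 tetBarycentric(const glm::vec3 v[4], const glm::vec3& p)
{
    float vol = signedVolume(v[0], v[1], v[2], v[3]);
    return glm::vec4(
        signedVolume(p,    v[1], v[2], v[3]),
        signedVolume(v[0], p,    v[2], v[3]),
        signedVolume(v[0], v[1], p,    v[3]),
        signedVolume(v[0], v[1], v[2], p)) / vol;
}

// Proposed "INSIDE" test: run the fragment shader only when the position
// reconstructed from the depth buffer falls inside the tetrahedron, and
// feed it the volume-interpolated vertex attribute.
bool insideTest(const glm::vec3 v[4], const float attrib[4],
                const glm::vec3& depthBufferPos, float& interpolated)
{
    glm::vec4 w = tetBarycentric(v, depthBufferPos);
    if (w.x < 0.0f || w.y < 0.0f || w.z < 0.0f || w.w < 0.0f)
        return false;                                   // outside: skip the fragment
    interpolated = w.x * attrib[0] + w.y * attrib[1]
                 + w.z * attrib[2] + w.w * attrib[3];
    return true;
}
[/CODE]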

This would add the possibility to approximate 3D functions with vertices (not on a regular grid). Right now we have 3D textures for that.

For example, we can create volumetric light sources and dimming sources that interact with existing geometry via the depth buffer. We can take a mesh and extrude its faces along the normals; the resulting crust (between the base mesh and the extruded mesh) can be filled with tetrahedrons. Then some values can be interpolated based on the distance from the mesh surface (light intensity, for example).
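
A rough sketch of what I mean by the crust (again just illustrative C++ with glm; splitting each prism between the two layers into tetrahedrons and building the actual connectivity is omitted):

[CODE]
#include <vector>
#include <glm/glm.hpp>

struct VolumeVertex {
    glm::vec3 position;
    float     value;   // e.g. light intensity: 1 on the base surface, 0 on the shell
};

// Build the two layers of the "crust": the base mesh and a copy pushed out
// along the vertex normals. The space between a base triangle and its
// extruded copy can then be filled with tetrahedrons.
std::vector<VolumeVertex> buildCrust(const std::vector<glm::vec3>& positions,
                                     const std::vector<glm::vec3>& normals,
                                     float thickness)
{
    std::vector<VolumeVertex> out;
    out.reserve(positions.size() * 2);
    for (size_t i = 0; i < positions.size(); ++i)
        out.push_back({ positions[i], 1.0f });                          // base layer
    for (size_t i = 0; i < positions.size(); ++i)
        out.push_back({ positions[i] + normals[i] * thickness, 0.0f }); // extruded layer
    return out;
}
[/CODE]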

It sounds like you are asking for a special case to be implemented in hardware: if the depth value from the depth buffer is inside the tetrahedron, then run the fragment shader. If it is not inside, then don’t run the fragment shader.

Therefore, the GPU would need to evaluate 2 points on the tetrahedron and then check whether the value from the depth buffer lies between them.

Also, why is this suggestion limited to a tetrahedron? Light volumes aren’t always tetrahedrons: I have used cylinders.

Your suggestion doesn’t fit with the rest of the graphics pipeline and it is limited to a tetrahedron.

Yes, the depth test for a volumetric object could be INSIDE, not only LESS or GREATER. Or even OUTSIDE, if needed.

You can create any shape with tetrahedrons by putting them together, just like a surface is formed with triangles. A tetrahedron is a simplex, so it supports interpolation with barycentric coordinates just like a triangle does.
I’m just moving to the next dimension, not adding special shapes like cylinders.

Why don’t we have the possibility to draw a tetrahedron (4 vertices)?

We use triangles because they’re fast to rasterize. They are fast to rasterize because they’re flat objects: they’re guaranteed planar, and they require nothing more than 4D homogeneous math and well-known rasterization algorithms.

Rasterizing a tetrahedron requires a lot more work.

I don’t think it requires a lot of work, since a tetrahedron is a convex shape. It’s just rasterizing the back and front sides at the same time and getting both depths.

The idea seems fine :)

I see another problem with selecting the back and front sides, since a tetrahedron has 4 faces, not just one like triangles or planar quads :(

But perhaps this can be handled by using two temporary depth buffers, storing the back faces in the first depth buffer and the front faces in the second?
(And in a second pass, we can use the contents of the “back faces” and “front faces” depth buffers to see whether the current sample is inside or outside this volume.)
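
As an untested sketch of those two passes (the FBOs, textures and the drawTetrahedraMesh()/drawFullscreenQuad() helpers are placeholders, not a real implementation):

[CODE]
// Pass 1a: farthest back faces of the volume into depth texture A.
glBindFramebuffer(GL_FRAMEBUFFER, backFaceFBO);    // depth attachment = backDepthTex
glClearDepth(0.0);
glClear(GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_GREATER);                           // keep the farthest depth
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                              // rasterize only back faces
drawTetrahedraMesh();

// Pass 1b: nearest front faces of the volume into depth texture B.
glBindFramebuffer(GL_FRAMEBUFFER, frontFaceFBO);   // depth attachment = frontDepthTex
glClearDepth(1.0);
glClear(GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_LESS);                              // keep the nearest depth
glCullFace(GL_BACK);                               // rasterize only front faces
drawTetrahedraMesh();

// Pass 2: full-screen pass. The fragment shader samples the scene depth,
// frontDepthTex and backDepthTex, and treats a pixel as "inside" when
// frontDepth <= sceneDepth <= backDepth.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, sceneDepthTex);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, frontDepthTex);
glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, backDepthTex);
glUseProgram(insideTestProgram);
drawFullscreenQuad();
[/CODE]

This only handles a single convex volume per pixel, of course; overlapping volumes would need something closer to depth peeling.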

Note that this could also add other volumetric effects, such as clouds and smoke, or a fast emulation of semi-transparent glass windows between the eye and the drawn objects (but without reflection and refraction effects, of course).

In a way, this could be seen as something more or less like the inverse of dual paraboloid maps, no?

Why don’t you do that already?

You could use a geometry shader to generate the tetrahedron slices and output gl_Layer so that your fragment shader outputs go to the appropriate slice of the 3D texture.

This may not be exactly what you want to do as I didn’t 100% understand your original request but I’m pretty sure you can achieve what you want using the existing toolset. The only question may be performance.
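
The C++ side of that would be a layered framebuffer attachment, roughly like this (sizes and names are just placeholders):

[CODE]
// Attach a whole 3D texture as a layered color attachment; a geometry shader
// that writes gl_Layer then selects the destination slice for each primitive.
GLuint tex3d, fbo;
glGenTextures(1, &tex3d);
glBindTexture(GL_TEXTURE_3D, tex3d);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R16F, 128, 128, 128, 0, GL_RED, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex3d, 0); // layered attachment
glViewport(0, 0, 128, 128);
// ... then bind a program whose geometry shader slices each tetrahedron and
// sets gl_Layer, and draw the tetrahedra.
[/CODE]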

[QUOTE=aqnuep;1236808]Why don’t you do that already?

You could use a geometry shader to generate the tetrahedron slices and output gl_Layer so that your fragment shader outputs go to the appropriate slice of the 3D texture.

This may not be exactly what you want to do as I didn’t 100% understand your original request but I’m pretty sure you can achieve what you want using the existing toolset. The only question may be performance.[/QUOTE]

No, I’m not talking about rendering to a 3D texture.
My idea is more similar to rendering lights in deferred shading.
When the scene is drawn, we have a depth buffer to work with.
So for each pixel we have a 3D position. We can do a lookup in a 3D texture that stores some value distributed through a volume.
Or we can do the lookup in a mesh of tetrahedrons -> a far more efficient approach, without the regular grid of a 3D texture.
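
(By “3D position” I mean the usual deferred-shading reconstruction from the depth buffer, something like the following; this is plain glm math assuming the default [-1, 1] clip-space depth range, and in a real renderer the same few lines would live in the fragment shader.)

[CODE]
#include <glm/glm.hpp>

// Reconstruct the view-space position of a pixel from its depth buffer value.
// uv is the pixel position in [0,1]^2, depth the value sampled from the depth
// buffer in [0,1], invProj the inverse of the projection matrix.
glm::vec3 viewPosFromDepth(glm::vec2 uv, float depth, const glm::mat4& invProj)
{
    glm::vec4 clip(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f); // back to NDC
    glm::vec4 view = invProj * clip;
    return glm::vec3(view) / view.w;                             // perspective divide
}
[/CODE]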

For example, a point light can be done like this: we take a polygonal sphere and fill it with tetrahedrons, all sharing a vertex at the center of the sphere. The other 3 vertices of each tetrahedron are a face of the sphere. The center vertex can have the value 1 and the surface vertices the value 0. So when we render this mesh of tetrahedrons, each depth-buffer position inside it gets a value interpolated between the center and the surface.
But we can model any light source shape this way.
Of course the traditional way of doing a point light in DS (by passing the center position and drawing sphere triangles) is better.
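
Building such a tetrahedron “fan” could look roughly like this (the vertex layout is only an illustration); a hypothetical GL_TETRAHEDRONS primitive would then consume 4 indices per element, the same way GL_TRIANGLES consumes 3:

[CODE]
#include <vector>
#include <cstdint>
#include <glm/glm.hpp>

struct TetVertex {
    glm::vec3 position;
    float     intensity;   // 1 at the center, 0 on the sphere surface
};

// Build a tetrahedral "fan" from a triangulated sphere: every surface triangle
// plus the center vertex forms one tetrahedron (4 indices each).
void buildPointLightTets(const std::vector<glm::vec3>& spherePositions,
                         const std::vector<uint32_t>&  sphereTriangles, // 3 per face
                         const glm::vec3&              center,
                         std::vector<TetVertex>&       vertices,
                         std::vector<uint32_t>&        tetIndices)      // 4 per tetrahedron
{
    vertices.clear();
    for (const glm::vec3& p : spherePositions)
        vertices.push_back({ p, 0.0f });

    const uint32_t centerIndex = static_cast<uint32_t>(vertices.size());
    vertices.push_back({ center, 1.0f });

    tetIndices.clear();
    for (size_t i = 0; i + 2 < sphereTriangles.size(); i += 3) {
        tetIndices.push_back(sphereTriangles[i]);
        tetIndices.push_back(sphereTriangles[i + 1]);
        tetIndices.push_back(sphereTriangles[i + 2]);
        tetIndices.push_back(centerIndex);
    }
}
[/CODE]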

We can also render a transparent mesh that gets opaque in the middle (made of colored glass, liquid, etc.). We need the distance between the front side and the back side of the tetrahedrons, or between the front side and the depth buffer if the depth value is inside the tetrahedron. From this distance we get the density and determine the color.
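
For example, the density could follow a simple Beer-Lambert falloff (sigma here is just a made-up absorption coefficient):

[CODE]
#include <algorithm>
#include <cmath>

// Opacity of a homogeneous medium from the distance the view ray travels
// through it (Beer-Lambert law). thickness = distance between the front face
// and either the back face or the depth-buffer hit, whichever is nearer.
float mediumOpacity(float thickness, float sigma /* absorption coefficient */)
{
    return 1.0f - std::exp(-sigma * std::max(thickness, 0.0f));
}
[/CODE]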

There are lots of things we can do :)

Then shouldn’t your suggestion actually be “glDrawElements should support GL_SPHERE”?

Anyway, the depth test stage is designed to test the depth value of an incoming fragment against the depth value in the depth buffer.
In your case, you want 2 fragments from different faces of your tetrahedron to be the incoming fragments. How exactly is that supposed to happen?
Imagine this: you have a fragment from face 1 with position XYZ.
Which fragment is behind that face? Which face is behind that face? Is there even a face behind that face?
Is it face 2 or face 3 or face 4?
The GPU would have to go through a series of calculations to figure out what that second fragment is. It would actually need access to the primitive assembly stage information.

And the final question. Is that going to be faster than a 2 pass approach?

[QUOTE=V-man;1236921]
The GPU would have to go through a series of calculations to figure out what that second fragment is. It would actually need access to the primitive assembly stage information.

And the final question. Is that going to be faster than a 2 pass approach?[/QUOTE]

Well, GPUs should get smarter. Recently they figured out how to tessellate primitives. There is evolution here ;)

I’m not talking about 1 tetrahedron. I’m talking about a mesh made of thousands of tetrahedrons - millions of tetrahedrons per frame.
So a 2-pass approach is going to be inefficient.

My first usage of tetrahedrons is real-time ambient occlusion & global illumination.

[ATTACH=CONFIG]137[/ATTACH]

Well, tessellation is a bad example. First, it has been around for about a decade now, in various forms. It’s just that Microsoft selected the way they imagine tessellation should work, and thus the IHVs and GL followed the same concept.
Also, tessellation is not a fundamental change. Primitive assembly and rasterization are still the same.

I’m not saying your idea is bad, but rather than tetrahedron rendering, which is kind of limited to the use cases you’ve presented, I would rather see generic functionality introduced that could, as a side effect, also be used to implement the technique you’ve presented.

There are only three possible configurations for a tetrahedron relative to the camera (1 front face, 3 back faces / 2 front, 2 back / 3 front, 1 back), so you technically could split the tetrahedron into 3 or 4 screen-space regions in which the front and back depths are linearly interpolated. But I think using tetrahedrons wouldn’t be practical, because you’d create a lot of overdraw when you approximate a sphere with a set of tetrahedrons.

However, I wonder if it would be possible to get the previous depth buffer value as a fragment shader input; it has to be read before the FS runs for early Z anyway. Then you could draw the front faces of your light volume, compute the distance to the back faces analytically, and compare it with the depth input, e.g. to test for intersection.
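
For a spherical light volume, the analytic part could look roughly like this (CPU-side glm math standing in for what the fragment shader would do; the names are placeholders):

[CODE]
#include <glm/glm.hpp>
#include <cmath>

// Distance along a view ray (origin o, normalized direction d) to the point
// where it exits a sphere. Returns a negative value when the ray misses it.
float sphereExitDistance(glm::vec3 o, glm::vec3 d, glm::vec3 center, float radius)
{
    glm::vec3 oc = o - center;
    float b = glm::dot(oc, d);
    float c = glm::dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;   // ray misses the sphere
    return -b + std::sqrt(disc);     // far intersection = analytic "back face"
}

// With the light volume's front faces rasterized, a scene sample lies inside
// the volume when its distance along the view ray falls between the front-face
// fragment and the analytic exit point.
bool sceneInsideVolume(float frontDist, float exitDist, float sceneDist)
{
    return exitDist >= 0.0f && sceneDist >= frontDist && sceneDist <= exitDist;
}
[/CODE]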

But there could be some optimizations when all 4 points are nearer or farther than the Z-buffer values and some hierarchical Z-buffer testing is used.

I think this could also be a good thing if we want to cut an object by one or more planes.

For example, we can store the common center of the tetrahedrons and use it to reconstruct the correct volume if we want to cut the object with one or more planes.

Without this volumetric information, we lose the triangles that are rejected by the cutting plane(s)
(i.e. a closed 3D surface can become an open 3D surface when it is cut by a 2D plane).

But with this information, truncated tetrahedrons can be reconstructed, so the surface always remains a closed surface after the user’s cutting planes are applied.

Think for example about a sphere made only of surface triangles:
=> we get hole(s) if we cut the sphere with one (or more) plane(s)
==> but with tetrahedrons, since we work with a solid sphere [not only the boundary surface of a sphere], we always get a solid shape at the output, i.e. a portion of the sphere.

For example, if we cut an apple with a knife, we automatically see the inside of the apple at the cutting plane
(though the interior can only be colored correctly if we use a 3D texture, of course).
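
The per-tetrahedron classification against a cutting plane could be as simple as this sketch (the actual clipping of straddling tetrahedrons is omitted):

[CODE]
#include <glm/glm.hpp>

enum class CutResult { Keep, Discard, NeedsClipping };

// Classify a tetrahedron against a cutting plane given as (normal, d), where
// points x on the kept side satisfy dot(normal, x) + d >= 0.
CutResult classifyTetrahedron(const glm::vec3 v[4], const glm::vec3& normal, float d)
{
    int kept = 0;
    for (int i = 0; i < 4; ++i)
        if (glm::dot(normal, v[i]) + d >= 0.0f)
            ++kept;
    if (kept == 4) return CutResult::Keep;        // entirely on the kept side
    if (kept == 0) return CutResult::Discard;     // entirely cut away
    return CutResult::NeedsClipping;              // straddles the plane
}
[/CODE]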

On the other hand, why not directly use new primitives such as spheres, tetrahedrons, cubes and other 3D shapes instead of a very large number of triangles?
(OK, the rasterization becomes more difficult to implement, but it could also use a lot more parallelism than the “simple” serial rasterization of lots of triangles.)