Identify triangle's vertices in fragment shader



EmJayJay
05-13-2015, 09:15 AM
I've been looking around and reading about the in/out variables that the different shader stages provide by default, but I have not found a way to either A) gather all vertices of the triangle being shaded in the fragment shader, or B) gather all the vertex indices of that triangle.

I'd like to experiment at some point with decals and other texture projection methods by trying to apply them directly in the model's fragment shader. The model would have a buffer holding information about each decal and its orientation relative to the model.

When rendering triangles, each triangle's set of 3 vertices is unique compared to every other triangle's. Therefore this information could be used to form a key, or to look up into a sorted array, to identify the triangle's ID (or is the triangle's ID the same as the gl_PrimitiveID found in the fragment shader?).

I think you could also draw wireframes on top of the model during the same pass if you had access to the vertices.

So have I missed something or is there a better way?

Alfonse Reinheart
05-13-2015, 10:01 AM
A) gather all vertices of the triangle being shaded in the fragment shader, or B) gather all the vertex indices of that triangle.

That's because such information doesn't exist. Vertices are used by the rasterizer to generate fragments, but individual fragments and fragment shaders have no knowledge of which vertices were used to generate them. Fragment shaders do have a gl_PrimitiveID input, but they can't know anything more than that.

Unless you deliberately provide fragment shaders with such knowledge.


I'd like to experiment at some point with decals and other texture projection methods by trying to apply them directly in the model's fragment shader. The model would have a buffer holding information about each decal and its orientation relative to the model.

I don't see how performing texture projection requires access to the vertex positions in the fragment shader. You need the position of the fragment (which you are provided via gl_FragCoord (https://www.opengl.org/wiki/Fragment_Shader#System_inputs), which can be transformed into whatever space you need (https://www.opengl.org/wiki/Compute_eye_space_from_window_space)), but the particular vertices used to generate it are irrelevant.
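To make that concrete, here is a minimal sketch of projective texturing done entirely in the fragment shader, with no knowledge of the triangle's vertices. The names worldPos, decalTex, and decalMatrix are assumptions for illustration; any interpolated position (or one reconstructed from gl_FragCoord) works:

```glsl
#version 330 core

in vec3 worldPos;            // passed from the vertex shader (assumed)
uniform sampler2D decalTex;  // hypothetical decal texture
uniform mat4 decalMatrix;    // projects world space into the decal's clip space

out vec4 fragColor;

void main()
{
    // Project the fragment's position into the decal's frustum.
    vec4 decalClip = decalMatrix * vec4(worldPos, 1.0);
    vec3 decalNDC  = decalClip.xyz / decalClip.w;
    vec2 decalUV   = decalNDC.xy * 0.5 + 0.5;   // [-1,1] -> [0,1]

    // Only sample where the fragment falls inside the decal's volume.
    vec4 decal = vec4(0.0);
    if (all(lessThanEqual(abs(decalNDC), vec3(1.0))))
        decal = texture(decalTex, decalUV);

    fragColor = decal;  // in practice, blend this over the base surface color
}
```

The same structure works for a projected flashlight or shadow texture; only the contents of the texture change.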


I think you could also draw wireframes on top of the model during the same pass if you had access to the vertices.

You could also just render the mesh again in wireframe polygon mode.


So have I missed something or is there a better way?

If there were some legitimate need to know the positions of the vertices for a fragment, you'd need to use a Geometry Shader to provide them.
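For example, a geometry shader can forward all three clip-space vertex positions to every fragment of the triangle as flat outputs. A sketch (the output name triVerts is made up):

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Every emitted vertex carries all three positions, so every
// fragment of the triangle sees the same un-interpolated values.
flat out vec4 triVerts[3];

void main()
{
    for (int i = 0; i < 3; ++i) {
        triVerts[0] = gl_in[0].gl_Position;
        triVerts[1] = gl_in[1].gl_Position;
        triVerts[2] = gl_in[2].gl_Position;
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

A matching fragment shader would declare `flat in vec4 triVerts[3];` and read them directly.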

EmJayJay
05-13-2015, 12:49 PM
That's because such information doesn't exist. Vertices are used by the rasterizer to generate fragments, but individual fragments and fragment shaders have no knowledge of which vertices were used to generate them. Fragment shaders do have a gl_PrimitiveID input, but they can't know anything more than that.

Unless you deliberately provide fragment shaders with such knowledge.

What would be the best way to do that? The fragment shader would be written knowing that it handles triangles, and gl_PrimitiveID would be used to fetch the wanted vertices. Though those wouldn't have the screen-transformed coordinates. And is gl_PrimitiveID different when the original triangle is clipped?



I don't see how performing texture projection requires access to the vertex positions in the fragment shader. You need the position of the fragment (which you are provided via gl_FragCoord (https://www.opengl.org/wiki/Fragment_Shader#System_inputs), which can be transformed into whatever space you need (https://www.opengl.org/wiki/Compute_eye_space_from_window_space)), but the particular vertices used to generate it are irrelevant.


Well, I didn't mean for it to be used only for texture projection. You could also include lighting and give it a shape, like a flashlight's beam. Or even the shape of a shadow to be applied over the fragment. The vertices could have been used to identify the triangle being drawn and to access the triangle's "attributes", which in my case would have contained the orientation of simple computed patterns. Simple ray tracing could have been applied as well.

Like I said, I'd like to experiment with these things.
The way decals are currently done, as far as I have learned, is to use a simple box-shaped volume that clips through surfaces, generate a mesh from the clipped geometry, map the decal's texture onto it, and render it in a separate draw call. http://blog.wolfire.com/2009/06/how-to-project-decals/



You could also just render the mesh again in wireframe polygon mode.


If you can merge two render calls into one, why draw twice? I'm thinking of applying these techniques in games.

Alfonse Reinheart
05-13-2015, 02:06 PM
What would be the best way to do that? The fragment shader would be written knowing that it handles triangles, and gl_PrimitiveID would be used to fetch the wanted vertices. Though those wouldn't have the screen-transformed coordinates. And is gl_PrimitiveID different when the original triangle is clipped?

The "best" way to do it would be not to do it at all.

However, if you absolutely must do this for some reason, you'll have to do some testing to determine the performance difference between passing these as per-vertex outputs (each vertex in the same triangle gets the same 3 positions, so you have to duplicate data) and accessing global memory. The former requires a geometry shader, which is not known for its performance. The latter requires global memory access, which is also not known for its speed. But at least in the latter case, the data for each triangle will be quickly cached.
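The global-memory variant might look like the following sketch, which assumes the application has uploaded one record per triangle in the same order the draw call emits primitives (the struct and buffer names are hypothetical; shader storage buffers need GL 4.3):

```glsl
#version 430 core

// Hypothetical per-triangle data, uploaded by the application in draw order.
struct TriangleData {
    vec4 v0, v1, v2;   // object-space vertex positions, padded to vec4
};

layout(std430, binding = 0) readonly buffer TriangleBuffer {
    TriangleData triangles[];
};

out vec4 fragColor;

void main()
{
    // gl_PrimitiveID counts primitives within the current draw call,
    // so it can index a buffer laid out in the same order.
    TriangleData tri = triangles[gl_PrimitiveID];

    // Placeholder use of the fetched positions.
    fragColor = vec4(normalize(tri.v0.xyz + tri.v1.xyz + tri.v2.xyz), 1.0);
}
```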

And yes, gl_PrimitiveID will remain the same for each of the resulting triangles in clipping operations. To the degree that geometry ever gets clipped, of course.

However, as I keep pointing out, none of the things you're trying to do requires what you're asking for. Let's look at them:


Well, I didn't mean for it to be used only for texture projection. You could also include lighting and give it a shape, like a flashlight's beam.

That is texture projection, which as previously stated does not require fragments to access their individual triangle's vertices. Just because a texture represents light intensities instead of diffuse surface reflectance doesn't stop it from being a texture.


Or even the shape of a shadow to be applied over the fragment.

This is also texture projection, which as previously stated does not require fragments to access their individual triangle's vertices. Again, just because a texture represents whether light is occluded at a particular fragment doesn't stop it from being a texture.

Please stop thinking of textures as pictures. It's 2015, not 1995. And even in 1995, they had textures that represented illumination (light maps).


The vertices could have been used to identify the triangle being drawn and to access the triangle's "attributes", which in my case would have contained the orientation of simple computed patterns.

Any "attributes" of interest can be passed as per-vertex parameters, and thus interpolated across the surface (with flat where appropriate) and provided as per-fragment inputs. After all, an "orientation" value is probably something you want to be interpolated across a surface, not confined to each individual triangle. Not unless you want to create a very discontinuous effect (in which case, you can do that also purely with mesh data).
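As an illustration, the difference between smoothly interpolated and per-triangle delivery is just an interpolation qualifier on the output (the per-vertex orientation attribute here is hypothetical):

```glsl
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 orientation;  // hypothetical per-vertex attribute

out vec3 smoothOrient;         // interpolated across the triangle's surface
flat out vec3 provokingOrient; // one constant value per triangle

void main()
{
    smoothOrient    = orientation;
    provokingOrient = orientation;
    gl_Position     = vec4(position, 1.0);
}
```

With flat, the value delivered to every fragment comes from the triangle's provoking vertex, so each triangle gets a single constant value without any geometry shader.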


Simple ray tracing could have been applied as well.

Well, now you're talking about a completely different kind of rendering. The only way for a fragment shader to do any kind of meaningful raytracing is for it to access the (scene) mesh itself as a whole. At that point, the primitive you render has no relation to the object being raytraced; it's just a thing you have to do to get the rasterizer to execute your fragment shader (which nowadays could mostly be handled by a compute shader, unless you need the per-sample processing (https://www.opengl.org/wiki/Per-Sample_Processing)).

So there's no correlation between the specific primitive you rendered and any particular location on the mesh you're raytracing into. You're rendering an imposter, and the FS doesn't care what the imposter's vertices are. It's not like you're raytracing the imposter object; you're raytracing a scene.


The way decals are currently done, as far as I have learned, is to use a simple box-shaped volume that clips through surfaces, generate a mesh from the clipped geometry, map the decal's texture onto it, and render it in a separate draw call. http://blog.wolfire.com/2009/06/how-to-project-decals/

I see nothing in that algorithm that requires fragments to have the vertices that generated them. The reason those guys render them that way, in multiple rendering calls, is that they don't want to:

1) Have the shader used for the primary rendering of a surface be responsible for also rendering the decals. Which your suggested algorithm would have to do.

2) Suffer the massive performance hit of having a shader loop through a number of decals for fragments, even if there is no decal anywhere near the object, just to project that point and find out that none of the decals have an effect. Which again, your suggested algorithm would have to do.

It really has nothing to do with the fragment shader not having access to the geometry.
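A sketch of why point 2 hurts: merged into the surface shader, every fragment pays for the full decal loop even when no decal is anywhere near the object (the names and the array size of 16 are assumptions):

```glsl
#version 330 core

in vec3 worldPos;                // passed from the vertex shader (assumed)
uniform int decalCount;
uniform mat4 decalMatrices[16];  // hypothetical per-decal projectors
uniform sampler2D decalTex;      // hypothetical shared decal texture

out vec4 fragColor;

void main()
{
    vec4 color = vec4(0.0);  // base surface shading would go here
    // Every fragment of the surface runs this loop, decal or no decal.
    for (int i = 0; i < decalCount; ++i) {
        vec4 clip = decalMatrices[i] * vec4(worldPos, 1.0);
        vec3 ndc  = clip.xyz / clip.w;
        if (all(lessThanEqual(abs(ndc), vec3(1.0))))
            color += texture(decalTex, ndc.xy * 0.5 + 0.5);
    }
    fragColor = color;
}
```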


If you can merge two render calls into one, why draw twice? I'm thinking of applying these techniques in games.

If that's true, then performance is very relevant to you. As such, rendering twice with two relatively cheap shaders will likely be much better for performance than rendering once while accessing global memory or shoving 9 32-bit floats at the FS.

The main issue you might have is with depth buffering.