
Thread: Barycentric coordinates and more

  1. #1
    Advanced Member Frequent Contributor
    Join Date: Apr 2009 · Posts: 612

    Barycentric coordinates and more

    This suggestion is relatively straightforward: expose the barycentric coordinates of a fragment in a useful way. I propose the following details:
    1. a built-in vec3, defined in the fragment shader, giving those coordinates (s0, s1, s2)
    2. a mechanism to fetch the value of a fragment-shader input at each vertex of a triangle. One possible method is to define a function U(x, I), where x is the name of an input to the fragment shader and I is 0, 1 or 2, signifying which vertex of the triangle to read
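
    A fragment shader using the proposal might look like the sketch below. To be clear, gl_BaryCoord and U(...) are purely hypothetical names for the syntax described above; neither exists in GLSL today:

    ```glsl
    // HYPOTHETICAL syntax following the proposal above; not valid GLSL today.
    in vec4 color;                      // ordinary interpolated input

    out vec4 fragColor;

    void main()
    {
        // (1) built-in vec3 holding this fragment's barycentric coordinates
        vec3 b = gl_BaryCoord;          // hypothetical: (s0, s1, s2), s0 + s1 + s2 = 1

        // (2) U(x, I): fetch the value of input x at vertex I of the triangle
        vec4 c0 = U(color, 0);          // hypothetical per-vertex fetch
        vec4 c1 = U(color, 1);
        vec4 c2 = U(color, 2);

        // Re-deriving the ordinary interpolated value by hand:
        fragColor = b.x * c0 + b.y * c1 + b.z * c2;
    }
    ```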



    Before anyone jumps up and down and says one can get this via a geometry shader, I'd like to point out that such a system would be terribly inefficient: doing (2) that way induces a heck of a lot more inputs to the fragment shader, and doing (1) induces an extra input as well. On a related note, I also have this suggestion in mind for OpenGL ES [the GLES message boards are kind of barren really, the last message was posted last July].


    Naturally this jazz above needs some additional tweaks to handle point and line rasterization (I suggest that s2 is made 0 for line rasterization and that U(x, 2) is an implementation-dependent undefined value).

    The mentality of this suggestion is that those numbers are likely sitting around anyways (at the very least the values of (2) are around, though the barycentric coordinates of a primitive may or may not be explicitly calculated).

  2. #2
    Member Regular Contributor
    Join Date: Jan 2011 · Location: Paris, France · Posts: 281
    Why not a vec3 Barycentric(vec3 ref, vec3 v1, vec3 v2, vec3 v3) that returns, in a vec3, the three barycentric coordinates of ref relative to the triangle (v1, v2, v3)?
    (This could also return only a vec2, since the z coordinate returned is always 1 - x - y.)

    Or, simpler still, a vec2 Barycentric() with no arguments when we are in a fragment shader, because v1, v2, v3 and the ref coordinates are already implicitly known at that stage.
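
    For what it's worth, the first variant is just signed-area ratios. A sketch of such a helper (the name Barycentric is taken from the post above; 2D positions and the cross2 helper are my own assumptions, made for brevity):

    ```glsl
    // Twice the signed area spanned by u and v (2D cross product).
    float cross2(vec2 u, vec2 v) { return u.x * v.y - u.y * v.x; }

    // Sketch: barycentric coordinates of p relative to triangle (a, b, c),
    // computed as ratios of signed areas. Assumes a non-degenerate triangle.
    vec3 Barycentric(vec2 p, vec2 a, vec2 b, vec2 c)
    {
        float inv = 1.0 / cross2(b - a, c - a);   // reciprocal of twice the area
        float wb  = cross2(p - a, c - a) * inv;   // weight of vertex b
        float wc  = cross2(b - a, p - a) * inv;   // weight of vertex c
        return vec3(1.0 - wb - wc, wb, wc);       // the weights sum to 1
    }
    ```

    Returning vec2(wb, wc) instead would match the second observation above, since the first weight is always 1 - wb - wc.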
    Last edited by The Little Body; 04-03-2013 at 04:16 PM.
    @+
    Yannoo

  3. #3
    Senior Member OpenGL Lord
    Join Date: May 2009 · Posts: 6,055
    The mentality of this suggestion is that those numbers are likely sitting around anyways (at the very least the values of (2) are around, though the barycentric coordinates of a primitive may or may not be explicitly calculated).
    I'd be interested to see evidence that barycentric coordinates are "sitting around anyways" in fragment-shader-accessible memory, across multiple iterations of various hardware. Interpolation is handled by the rasterizer, not by the fragment shader. And I'm guessing that it doesn't even really use barycentric coordinates to do interpolation.

  4. #4
    Advanced Member Frequent Contributor
    Join Date: Apr 2009 · Posts: 612
    I'd be interested to see evidence that barycentric coordinates are "sitting around anyways" in fragment-shader-accessible memory, across multiple iterations of various hardware. Interpolation is handled by the rasterizer, not by the fragment shader. And I'm guessing that it doesn't even really use barycentric coordinates to do interpolation.

    I did clearly state that the barycentric coordinates may or may not be hanging around. However the original numbers, those from the vertices of a primitive, most certainly are. If the barycentric coordinates are not hanging around, then for an implementation to expose them, likely a hardware interpolator would get used for such a shader. In truth, I strongly suspect that most hardware implementations do NOT have the barycentric coordinates hanging around explicitly; instead the rasterizer computes the coefficients for interpolating each varying directly. At the very least, supporting this from GL rather than via geometry shaders is worthwhile: the geometry shader route forces a geometry shader to be present, and in GLES land there is no geometry shader. Even in GL land, geometry shaders, even those that only emit one triangle, are nasty mean things for many implementations because they have SO much input data.
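
    To make the "lot more inputs" point concrete, here is roughly what the geometry-shader workaround looks like: every attribute that should be fetchable per-vertex gets replicated into three flat outputs, plus one more output for the barycentrics (this is a sketch for a single vec4 attribute; it multiplies for every attribute you have):

    ```glsl
    // Geometry shader sketch of the workaround being argued against (GL 3.2+).
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec4 color[];                          // per-vertex input from the vertex shader

    flat out vec4 color0, color1, color2;     // the SAME attribute, three times over
    out vec3 bary;                            // interpolated barycentric coordinates

    void main()
    {
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            color0 = color[0];                // every fragment-visible attribute
            color1 = color[1];                // must be replicated like this ...
            color2 = color[2];
            bary = vec3(i == 0 ? 1.0 : 0.0,   // (1,0,0), (0,1,0), (0,0,1) at the
                        i == 1 ? 1.0 : 0.0,   // vertices interpolate to the
                        i == 2 ? 1.0 : 0.0);  // barycentrics across the triangle
            EmitVertex();
        }
        EndPrimitive();
    }
    ```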

  5. #5
    Advanced Member Frequent Contributor
    Join Date: Dec 2007 · Location: Hungary · Posts: 989
    Quote Originally Posted by Alfonse Reinheart View Post
    Interpolation is handled by the rasterizer, not by the fragment shader. And I'm guessing that it doesn't even really use barycentric coordinates to do interpolation.
    This is not exactly true. A lot of hardware today actually does so-called "pull-model interpolation", which means the interpolation actually happens in the fragment shader: the main inputs are the barycentric coordinates, which are then used to interpolate the per-vertex attributes. So what kRogue requested is actually quite possible, at least on some hardware.
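
    Conceptually, in the pull model the rasterizer hands the fragment shader little more than the barycentric triple, and the compiler emits the blend for each attribute explicitly. A sketch of what such generated code amounts to (this is a conceptual illustration, not literal hardware ISA):

    ```glsl
    // Pull-model interpolation, conceptually: the hardware supplies bary and
    // the three per-vertex values; the shader computes the blend itself.
    vec4 interpolate(vec3 bary, vec4 v0, vec4 v1, vec4 v2)
    {
        return bary.x * v0 + bary.y * v1 + bary.z * v2;
    }
    ```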

    Update: IIRC, Fermi and Evergreen, i.e. practically all DX11 GPUs, do support pull-model interpolation.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  6. #6
    Member Regular Contributor
    Join Date: Dec 2009 · Posts: 251
    Quote Originally Posted by Alfonse Reinheart View Post
    I'd be interested to see evidence that barycentric coordinates are "sitting around anyways" in fragment-shader-accessible memory, across multiple iterations of various hardware. Interpolation is handled by the rasterizer, not by the fragment shader. And I'm guessing that it doesn't even really use barycentric coordinates to do interpolation.
    AMD does it that way on the Southern Islands chips, according to their open ISA documents:

    Pixel shaders use LDS to read vertex parameter values; the pixel shader then interpolates them to find the per-pixel parameter values.
    The GPU has some dedicated interpolation instructions for the fragment shader.

  7. #7
    Senior Member OpenGL Lord
    Join Date: May 2009 · Posts: 6,055
    However the original numbers, those from the vertices of a primitive most certainly are.
    How do you know that? Why would it have those original numbers, when all shading languages just use the interpolated values? The rasterizer may have them, but that doesn't mean the fragment shader does. And it doesn't mean the rasterizer can be modified to pass this data along without direct hardware support.

    in GLES land, there is no geometry shader
    All of the information currently available suggests that such information is only available in DX11-style hardware. So I rather doubt that any (non-NVIDIA) GLES hardware has this stuff available.

    Update: IIRC, Fermi and Evergreen, i.e. practically all DX11 GPUs do support pull-model interpolation.
    Are we forgetting about Intel's DX11 GPUs?

  8. #8
    Advanced Member Frequent Contributor
    Join Date: Apr 2009 · Posts: 612
    How do you know that? Why would it have those original numbers, when all shading languages just use the interpolated values? The rasterizer may have them, but that doesn't mean the fragment shader does. And it doesn't mean the rasterizer can be modified to pass this data along without direct hardware support.
    Take a look at what mbentrup and aqnuep have said: AMD and NVIDIA hardware support "pull model interpolation", which means that since they can use barycentric coordinates to generate interpolator values, the values of the interpolators at the vertices of the triangle must be available to the fragment shader (or at the very least can be computed by passing (1,0,0), (0,1,0) and (0,0,1) as the barycentric coordinates to get the values).

    Are we forgetting about Intel's DX11 GPUs?
    I know at times I wish I could forget about Intel GPUs (especially on Linux; Intel's GL implementation on Windows is not only a completely different code base, it is also much better). However, if Intel HW cannot do this but the idea is pushed forward, then the following:
    1. For the GL4 generation, this is an ARB extension
    2. It gets promoted to core for GL5 if the next generation of Intel hardware has the capability



    The extension has neat, interesting possibilities with regard to controlling (via discarding) how a primitive is filled. Moreover, a logical extension of this idea is the ability to specify, within the pipeline, what range(s) of barycentric coordinates get rasterized. Though this is much hairier to specify intelligently and in a way that keeps the hardware happy.
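
    One concrete example of the fill-control idea: with a barycentric built-in available (gl_BaryCoord below is the hypothetical name from post #1, not real GLSL), a fragment shader can discard everything except a thin band along the triangle's edges, giving single-pass wireframe:

    ```glsl
    // HYPOTHETICAL: gl_BaryCoord is the proposed built-in from post #1.
    out vec4 fragColor;

    void main()
    {
        // A fragment is near an edge when its smallest barycentric is small.
        float edgeDist = min(min(gl_BaryCoord.x, gl_BaryCoord.y), gl_BaryCoord.z);
        if (edgeDist > 0.02)
            discard;             // keep only a thin band along each edge
        fragColor = vec4(1.0);
    }
    ```

    The 0.02 threshold here is screen-space-uniform only in barycentric terms; a production version would scale it by derivatives to get constant-width lines, but the sketch shows the principle.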
