Thread: ANY way to do proper fuzzy translucent overlapping lines in OpenGL?

  1. #1
    Intern Contributor
    Join Date
    Nov 2010
    Posts
    61

    ANY way to do proper fuzzy translucent overlapping lines in OpenGL?

    It seems like the OpenGL model of rendering is the way most computer graphics is going, with the triangles and shaders and whatnot. I've been having a lot of trouble duplicating some old-style line-drawing code this way, however. Specifically, I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect. Can this even be done as part of a regular render pass? For many thousands of lines, it will be far too slow to render each line into a separate buffer and then composite it onto the scene.

    For example, take the common technique of drawing a wide line as a triangle strip for the main line, with secondary strips along the outside edges which fade into transparency for the antialiasing/fuzzing. Even if these lines are not translucent in the middle, their fuzzy edges are. And where they overlap or intersect, you get artifacts where the translucency piles up.

    Just wondering if anyone knows a technique of getting around this problem. I thought that perhaps I could keep an offscreen alpha buffer where the line's alpha values are written, and then the fragment shader could look at that and determine whether to write new values...

    Thanks for any suggestions!

  2. #2
    Junior Member Newbie
    Join Date
    Oct 2017
    Location
    holographica
    Posts
    10
    Quote Originally Posted by bsabiston View Post
    ...I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect. Can this even be done as part of a regular render pass? ... Just wondering if anyone knows a technique of getting around this problem.
    If you don't want to go down the traditional OpenGL pipeline, I can think of doing this with vector graphics. Give the NanoVG vector graphics library (which renders using OpenGL) a try.

    https://github.com/memononen/nanovg

  3. #3
    Intern Contributor
    Join Date
    Nov 2010
    Posts
    61
    I don't want to use any libraries -- would like to understand how to do it myself.

  4. #4
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,156
    Quote Originally Posted by bsabiston View Post
    ...trouble duplicating some old-style line-drawing code
    ...I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect.
    Can this even be done as part of a regular render pass?
    Possibly. You have to define your requirements a bit more precisely though. For instance, are these lines 2D, 2.5D, or 3D? What (blending-wise) do you want to happen in areas where wide line segments intersect? Is there an explicit or implied ordering (e.g. layering)? Post a few pictures to make this clear.

    ...where they overlap or intersect, you get artifacts where the translucency piles up.
    What do you mean by that?
    Why does it happen, and what about the "old-style line-drawing code" prevents it from happening?

    ...I thought that perhaps I could keep an offscreen alpha buffer where the line's alpha values are written, and then the fragment shader could look at that and determine whether to write new values...
    That was one of the techniques that came to mind for me too, particularly since it's possible to render an A-buffer in a single pass nowadays (e.g. per-pixel linked lists). But we need to know more about your problem first.
    Last edited by Dark Photon; 10-10-2017 at 05:18 PM.

  5. #5
    Intern Contributor
    Join Date
    Nov 2010
    Posts
    61
    Thanks for replying. I'm not really porting old code -- by 'old-style line drawing code', I just meant that in the old days, you had a framebuffer with an alpha channel, and you could create a separate alpha buffer to track the drawing of lines. When you went to draw a translucent value, you could check the alpha buffer and decide whether or not to draw that pixel, add to the alpha buffer, etc. I just want to find out if the same result is possible using vertex/fragment shaders and 3D.

    So -- the lines I'm drawing are in 3D, or maybe you would call them 2.5D? They have no 3D thickness like cylinders, but rather are polylines with a 2D thickness rendered in vertex/fragment shaders using screen space calculations so that as you turn the camera the lines do not appear to change in width (ie they don't look like a paper-thin billboard from the side). As far as ordering, I am not quite sure how I will deal with that yet. Right now I'm drawing all opaque objects/lines first and then drawing all the transparent ones with depth write turned off.
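
    (As an aside, here is a minimal sketch of that screen-space expansion, written as GLSL embedded in a C string; all of the attribute/uniform names such as aPosition, aNext, aSide, uViewport and uWidthPx are made up for illustration, and a Metal vertex function would be analogous.)

    Code:
    /* Expand each 3D polyline vertex sideways in screen space so the ribbon
     * keeps a constant pixel width regardless of camera orientation.
     * Each vertex is duplicated with aSide = -1 or +1. */
    static const char* line_vertex_shader =
        "#version 330 core\n"
        "layout(location = 0) in vec3  aPosition;  // current polyline point (3D)\n"
        "layout(location = 1) in vec3  aNext;      // next polyline point (3D)\n"
        "layout(location = 2) in float aSide;      // -1 or +1: which side of the centerline\n"
        "uniform mat4  uMVP;\n"
        "uniform vec2  uViewport;                  // viewport size in pixels\n"
        "uniform float uWidthPx;                   // total line width in pixels\n"
        "out float vIntensity;                     // 1 at the core; feather strips would pass 0..1\n"
        "void main() {\n"
        "    vec4 cur  = uMVP * vec4(aPosition, 1.0);\n"
        "    vec4 next = uMVP * vec4(aNext, 1.0);\n"
        "    vec2 curNdc  = cur.xy  / cur.w;\n"
        "    vec2 nextNdc = next.xy / next.w;\n"
        "    vec2 dir    = normalize((nextNdc - curNdc) * uViewport); // pixel-space direction\n"
        "    vec2 normal = vec2(-dir.y, dir.x);                       // screen-space perpendicular\n"
        "    // half-width per side; the pixel-to-NDC factor of 2/viewport cancels the 0.5\n"
        "    vec2 offsetNdc = normal * (uWidthPx / uViewport) * aSide;\n"
        "    gl_Position = vec4((curNdc + offsetNdc) * cur.w, cur.zw);\n"
        "    vIntensity  = 1.0;\n"
        "}\n";
    /* Degenerate segments and points behind the camera are not handled here. */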

    What I mean by translucency piling up is that if I draw a half-transparent line, and it crosses over itself, then the intersection is twice as opaque as the rest. This is not a problem so much when the line just crosses itself, but rather when triangles in tight curves overlap and create unsightly spikes. (see first two images attached). I'd like to find a way to prevent that. If I am not antialiasing, then I can use a stencil buffer and then each brushstroke can use the stencil to make sure it doesn't overwrite pixels that have already been drawn into. But with antialiasing, it is a problem, because you DO want the feathered edges to overlap. Otherwise the areas that fade to nothing will still mark the stencil and you get black edges where there is any crossover (see image 3).

    So basically, I was thinking that if I could access an alpha buffer from the fragment shader, then as I draw lines I could write the current line alpha values to the buffer. And the shader could look at the existing values and decide whether to write or not. That way the fuzzy edges could be allowed to write over existing ones, increasing the alpha value until it hit the current line translucency, if that makes sense. Kind of an additive process. Basically mimicking the old framebuffer approach. But from what I've read, fragment shaders cannot access alpha channel values, can they? (Also I should mention I am using iOS/OSX Metal to do this, not strictly OpenGL, although so far they seem pretty similar.)

    If I can't use an alpha buffer, maybe there's some way to use blending modes? I know how to do it using a separate render pass and compositing, but I'm trying to figure out if there's any way to do it so an entire scene with many lines can be rendered in one pass, without having to render and composite each line one at a time.

    Does that illuminate the problem more? Thanks!

    [Attached thumbnails: line_examples.jpg, camera_turned.jpg, stencil_overlap.jpg]

    edit: I don't know why the images are so small, does it resize them automatically? I uploaded much larger ones.
    Last edited by bsabiston; 10-10-2017 at 06:02 PM.

  6. #6
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,156
    Quote Originally Posted by bsabiston View Post
    So -- the lines I'm drawing are in 3D, or maybe you would call them 2.5D? ...
    As far as ordering, I am not quite sure how I will deal with that yet.
    Right now I'm drawing all opaque objects/lines first and then drawing all the transparent ones with depth write turned off.
    Ok. Sounds like 2.5D. 2D objects, but they have an implicit ordering or layering to them.

    What I mean by translucency piling up is that if I draw a half-transparent line, and it crosses over itself, then the intersection is twice as opaque as the rest.
    Ok. That's true if you use additive blending and your incoming fragments aren't opaque. However, you have control over both the blend functions and your fragment alphas.
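
    (To make the pile-up concrete: with additive blending, two overlapping 50%-alpha fragments simply sum to 100% coverage; even with standard "over" blending, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the overlap ends up at 1 - (1 - 0.5)^2 = 0.75 coverage instead of the intended 0.5.)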

    This is not a problem so much when the line just crosses itself, but rather when triangles in tight curves overlap and create unsightly spikes. (see first two images attached). I'd like to find a way to prevent that.
    Even with that choice of blend function and fragment alphas, this is only going to happen when the triangles in the mesh used to rasterize your non-overlapping line feature overlap each other. One solution is to modify how you're generating the vertices of this strip of triangles such that the triangles don't overlap. That's not really too hard. First, compute your displaced vertex positions in a 2D plane. Then connect them with triangles such that no two triangles overlap. No need for a stencil buffer to fix that part of your problem.

    If I am not antialiasing, then I can use a stencil buffer and then each brushstroke can use the stencil to make sure it doesn't overwrite pixels that have already been drawn into. But with antialiasing, it is a problem, because you DO want the feathered edges to overlap. Otherwise the areas that fade to nothing will still mark the stencil and you get black edges where there is any crossover (see image 3).
    Right. I'm starting to see what you might want here. It sounds like what you really want is not additive blending (which is what it sounds like you're doing now) but more like a "max" blending (which is supported).

    Just as a thought exercise, imagine this: start with a black scratch framebuffer off to the side which is the same size as your system framebuffer. Before rendering a line feature, forget what color you plan to draw that line feature. Instead, just render the line (using your trimesh) into the scratch framebuffer with an intensity value of 0..1 using MAX blending (0 = transparent, 1 = opaque). Then when you're done, you've got 100% intensity in the core of the line (even where the line overlaps itself) and nice fade-in partial intensity along the edges of the line (even where the line overlaps itself, with the fade-in region intensities combining via the MAX operation, avoiding the effect you describe above).

    Now that you have this intensity buffer (the scratch framebuffer), you can go back and shade in all the pixels of your line into your system framebuffer. Example: for 0% intensity pixels you get 0% * line_color + 100% * background, and for 100% intensity pixels you get 100% * line_color + 0% * background.
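
    (A minimal GL sketch of that thought exercise, with made-up helper names such as intensityFBO, drawLineTrimesh() and drawLineCompositeQuad():)

    Code:
    /* Step 1: rasterize the line's trimesh into a scratch framebuffer
     * (same size as the screen, e.g. a GL_R8 color attachment) as a 0..1
     * intensity, combined with MAX blending so overlaps never stack up. */
    glBindFramebuffer(GL_FRAMEBUFFER, intensityFBO);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendEquation(GL_MAX);            /* dst = max(src, dst); blend factors are ignored */
    drawLineTrimesh();                  /* fragment shader outputs only the feathered intensity */

    /* Step 2: composite the line color into the system framebuffer, using
     * the intensity texture as the per-pixel alpha:
     *   result = intensity * line_color + (1 - intensity) * background */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawLineCompositeQuad();            /* samples the intensity texture, outputs vec4(line_color, intensity) */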

    Leaving aside for a second that this suggests multipass, does this sound like it'd provide the look you want?

    If so, then you can spend some cycles trying to think of how to make this single pass. For instance, one off-the-cuff idea is to do this "intensity buffer" generation in the alpha channel of the current framebuffer (with MAX blending). Then when blending on your line, you could potentially use destination alpha as the source of the alpha.
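
    (Sketch of that destination-alpha variant, assuming the target framebuffer actually has an alpha channel and that its alpha has been cleared to 0 over the affected region; helper names are illustrative:)

    Code:
    /* Pass A: accumulate the line's feathered intensity into destination
     * alpha only, with MAX blending, leaving the color channels untouched. */
    glEnable(GL_BLEND);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX);   /* MAX applies to alpha only */
    drawLineTrimesh();                              /* fragment alpha = 0..1 intensity */

    /* Pass B: lay down the line color weighted by the accumulated alpha.
     * Cover each affected pixel exactly once (e.g. a screen-space bounding
     * quad for the line), otherwise the overlap artifact comes back. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    drawLineBoundingQuad();                         /* outputs the flat line color */
    /* The alpha channel then needs to be reset before the next line/batch. */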

    That's just off-the-cuff. There are no doubt better options. I don't like this much because there are render state changes and full-screen clears of the alpha channel between each line (though if you think about it, you really don't need a full-screen clear). However, you could batch all of the lines that are rendered with the same color in the same layer in this same pass, which would minimize the number of state changes and clears to just the number of line color layers you have.

    Another option is to use stencil to cut out the difficult areas and handle those specially. Another is to use different depths for opaque vs. translucent parts so that opaque parts always overwrite translucent parts (though you may still need something special where translucent parts overlap, such as an increasing depth slope on the translucent edges so that the right edge fragments "win" the depth test in the right areas and give you a symmetric look where they meet).

    So basically, I was thinking that if I could access an alpha buffer from the fragment shader, then as I draw lines I could write the current line alpha values to the buffer. And the shader could look at the existing values and decide whether to write or not.
    Yeah, what you're getting at is programmable blending. I believe you can do a limited form of this with Texture barriers. Basically, within a single fragment shader, read the current value, do your blending, and then write out the result. However, this doesn't support multiple reads/writes to the same pixel/texel. For that kind of thing you need to get into writing shaders with side-effects (possibly using Image Load Store).
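
    (A sketch of that limited programmable blending on desktop GL, with the framebuffer's color attachment also bound as a texture; uniform names are made up. As noted, the barrier only makes a single read-then-write per pixel well-defined, so geometry that overlaps itself within one draw still isn't covered; on iOS/macOS Metal the closer analogues are framebuffer fetch and raster order groups.)

    Code:
    /* Blending is disabled; the shader reads the current pixel, decides
     * whether the new coverage exceeds what is already there, and writes
     * the re-composited result. Call glTextureBarrier() (GL 4.5 /
     * ARB_texture_barrier) between draws that feed back on each other. */
    static const char* line_fragment_shader =
        "#version 450 core\n"
        "uniform sampler2D uFramebuffer;    // texture backing the color attachment\n"
        "uniform vec3  uLineColor;\n"
        "uniform float uLineAlpha;          // target opacity of the current line\n"
        "in  float vIntensity;              // feathered 0..1 coverage\n"
        "out vec4  fragColor;\n"
        "void main() {\n"
        "    vec4  dst    = texelFetch(uFramebuffer, ivec2(gl_FragCoord.xy), 0);\n"
        "    float target = vIntensity * uLineAlpha;  // coverage this fragment wants in total\n"
        "    if (target <= dst.a) discard;             // already covered at least this much\n"
        "    // recover the background under the coverage laid down so far\n"
        "    // (assumes dst.a holds this line's accumulated coverage and the line is one color)\n"
        "    vec3 bg = (dst.a < 1.0) ? (dst.rgb - uLineColor * dst.a) / (1.0 - dst.a) : dst.rgb;\n"
        "    fragColor = vec4(mix(bg, uLineColor, target), target); // re-composite at the new coverage\n"
        "}\n";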

    edit: I don't know why the images are so small, does it resize them automatically? I uploaded much larger ones.
    When you upload them to forums, yes it seems to (probably to save space). However, if you post an image link to a picture on another site, IIRC it doesn't limit the size.
    Last edited by Dark Photon; 10-12-2017 at 07:13 AM.

  7. #7
    Intern Contributor
    Join Date
    Nov 2010
    Posts
    61
    Thanks for the reply again!


    Quote Originally Posted by Dark Photon View Post
    Ok. Sounds like 2.5D. 2D objects, but they have an implicit ordering or layering to them. ... One solution is to modify how you're generating the vertices of this strip of triangles such that the triangles don't overlap. ... No need for a stencil buffer to fix that part of your problem.


    The problem is that this is not really 2.5D, if I understand correctly. The lines and points are in full 3D, but the rendering of them is in 2D. So I can turn the camera without ever recalculating any vertices -- that means that intersections are going to happen, especially as you turn the camera side-on to the lines. Here is an example of this kind of technique: http://codeflow.org/entries/2012/aug...-solid-trails/

    In your suggestion, where would this fixing of the triangle intersections happen? In the vertex shader somehow, or before the render calls?

    Your description of the "max" blending is how I handled things in my last project. But it was strictly 2D and doing multipasses was no problem. So the "intensity" buffer, using the alpha of the frame buffer, sounds promising. I can't vary the depth for opaque/translucent because like I say these are in fact in 3D already, so I need them to appear with the correct perspective. I'll spend some time today playing around with the MAX blending, maybe that can work -- though I think it would take two draws of my translucent objects? One to form the intensity buffer and one to use it?

    Texture barriers sounds like what I was looking for. I wonder if that is possible with Metal? If I could get the current alpha/color value in the fragment shader, then I could not let it go above the current 'max' value. It would not be perfect but it would stop those ugly triangle intersections. Of these two approaches the Max/intensity thing would be the best because the lines would be perfect. I feel like I spent a lot of time trying to do that in my last project and couldn't get it to work, which is why I ultimately went with the multiple pass approach. But that was on Nintendo with more limited hardware capability so maybe I can find a way now...

    Thanks, this all helps a lot!

  8. #8
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,156
    Quote Originally Posted by bsabiston View Post
    In your suggestion, where would this fixing of the triangle intersections happen? In the vertex shader somehow, or before the render calls?
    Well, the first step is to understand where they're coming from. I don't know, but it sounds like you're extruding quads (or triangles) independently of how adjacent quads (or triangles) are being extruded. You can't do that.

    What you need is a watertight 2D mesh. That is, it fills all contained space but with no overlaps.

    So, first step toward that is to draw out on paper how this would work. As a starter proposal, draw the vertices making up the center polyline. Then take each vertex and clone it, moving one "left" and one "right", where those directions are away from the centerline, along a line bisecting each interior angle formed by the original polyline verts (see the URL you posted in your last post for a picture). Then connect up those displaced vertices with quads. Then split each quad into 2 tris. There you go: watertight 2D mesh with no triangle overlap.

    So that's on paper. Then you can implement that either on the CPU (uploading the triangle verts to the GPU that way). Or you can use the GPU to transform a line into this trimesh in a shader.
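
    (For the CPU route, a small sketch of that bisector extrusion for a 2D polyline; names and the half-width parameter are illustrative, and very sharp turns or segments shorter than the line width still need clamped miters or bevel joins to stay overlap-free.)

    Code:
    #include <math.h>

    typedef struct { float x, y; } Vec2;

    static Vec2 v2_add(Vec2 a, Vec2 b)    { return (Vec2){ a.x + b.x, a.y + b.y }; }
    static Vec2 v2_sub(Vec2 a, Vec2 b)    { return (Vec2){ a.x - b.x, a.y - b.y }; }
    static Vec2 v2_scale(Vec2 a, float s) { return (Vec2){ a.x * s, a.y * s }; }
    static Vec2 v2_perp(Vec2 a)           { return (Vec2){ -a.y, a.x }; }  /* "left" normal */
    static Vec2 v2_norm(Vec2 a) {
        float len = sqrtf(a.x * a.x + a.y * a.y);
        return (len > 0.0f) ? v2_scale(a, 1.0f / len) : a;
    }

    /* For each polyline vertex, emit a "left" and a "right" clone displaced
     * along the bisector of the interior angle. Consecutive pairs form quads
     * (two triangles each) that share edges, so the strip is watertight and
     * the triangles do not overlap. */
    void extrude_polyline(const Vec2* pts, int n, float halfWidth, Vec2* outPairs /* 2*n entries */)
    {
        for (int i = 0; i < n; ++i) {
            Vec2 dirIn  = (i > 0)     ? v2_norm(v2_sub(pts[i], pts[i - 1])) : (Vec2){ 0, 0 };
            Vec2 dirOut = (i < n - 1) ? v2_norm(v2_sub(pts[i + 1], pts[i])) : dirIn;
            if (i == 0) dirIn = dirOut;

            Vec2 nIn   = v2_perp(dirIn);
            Vec2 nOut  = v2_perp(dirOut);
            Vec2 miter = v2_norm(v2_add(nIn, nOut));          /* bisector direction */

            /* scale so the strip keeps a constant width across the joint,
             * clamped so near-180-degree turns don't produce huge spikes */
            float cosHalf = miter.x * nIn.x + miter.y * nIn.y;
            float len = halfWidth / fmaxf(cosHalf, 0.25f);

            outPairs[2 * i + 0] = v2_add(pts[i], v2_scale(miter, len));  /* left  */
            outPairs[2 * i + 1] = v2_sub(pts[i], v2_scale(miter, len));  /* right */
        }
        /* Index as a triangle strip: L0, R0, L1, R1, ... */
    }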

  9. #9
    Intern Contributor
    Join Date
    Nov 2010
    Posts
    61
    Are you certain it's possible to do that? I mean, imagine you draw a line, say a simple horizontal line across the screen. But then you turn the camera 90 degrees, so that you are looking at the line end-on. All the vertices would basically be on top of each other. I can't think of a way to adjust the vertices (in-shader) in cases like that. It's not just a matter of looking at the adjacent neighbors, every point in the line could overlap with almost every other point, do you see what I mean? That's why I think a geometry based approach won't work if I'm doing the rendering in shaders. It seems like it would have to be a blending solution where triangles can overlap but the blending/alpha-check keeps the colors correct.

  10. #10
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,156
    Quote Originally Posted by bsabiston View Post
    Are you certain it’s possible to do that? I mean, imagine you draw a line, say a simple horizontal line across the screen. But then you turn the camera 90 degrees, so that you are looking at the line end-on.
    Yes. The max blending will cover that.

    I thought you were talking about how to get rid of the overlap between adjacent triangles in your polyline mesh. This would do that. However, you can try just throwing the max blending at it and ignoring that property of your triangle mesh.
