ANY way to do proper fuzzy translucent overlapping lines in OpenGL?

It seems like the OpenGL model of rendering is the way most computer graphics is going, with the triangles and shaders and whatnot. I’ve been having a lot of trouble duplicating some old-style line-drawing code this way, however. Specifically, if I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect. Can this even be done as part of a regular render pass? For many thousands of lines, it will be far too slow to render each line into a separate buffer and then composite it onto the scene.

For example, take the common technique of drawing a wide line as a triangle strip for the main line, with secondary strips along the outside edges which fade into transparency for the antialiasing/fuzzing. Even if these lines are not translucent in the middle, their fuzzy edges are. And where they overlap or intersect, you get artifacts where the translucency piles up.

Just wondering if anyone knows a technique of getting around this problem. I thought that perhaps I could keep an offscreen alpha buffer where the line’s alpha values are written, and then the fragment shader could look at that and determine whether to write new values…

Thanks for any suggestions!

[QUOTE=bsabiston;1288875]It seems like the OpenGL model of rendering is the way most computer graphics is going, with the triangles and shaders and whatnot. I’ve been having a lot of trouble duplicating some old-style line-drawing code this way, however. Specifically, if I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect. Can this even be done as part of a regular render pass? For many thousands of lines, it will be far too slow to render each line into a separate buffer and then composite it onto the scene.

For example, take the common technique of drawing a wide line as a triangle strip for the main line, with secondary strips along the outside edges which fade into transparency for the antialiasing/fuzzing. Even if these lines are not translucent in the middle, their fuzzy edges are. And where they overlap or intersect, you get artifacts where the translucency piles up.

Just wondering if anyone knows a technique of getting around this problem. I thought that perhaps I could keep an offscreen alpha buffer where the line’s alpha values are written, and then the fragment shader could look at that and determine whether to write new values…

Thanks for any suggestions![/QUOTE]

If you don't want to go down the traditional OpenGL pipeline, you could try doing this with vector graphics. Give the NanoVG vector-graphics library (which renders using OpenGL) a try.

I don’t want to use any libraries – would like to understand how to do it myself.

[QUOTE=bsabiston;1288875]…trouble duplicating some old-style line-drawing code
…I want to have nice, wide antialiased lines that transparently overlap without causing glitchy artifacts wherever segments intersect.
Can this even be done as part of a regular render pass?[/QUOTE]

Possibly. You have to define your requirements a bit more precisely though. For instance, are these lines 2D, 2.5D, or 3D? What (blending-wise) do you want to happen in areas where wide line segments intersect? Is there an explicit or implied ordering (e.g. layering)? Post a few pictures to make this clear.

…where they overlap or intersect, you get artifacts where the translucency piles up.

What do you mean by that?
Why does it happen, and what about the “old-style line-drawing code” prevents it from happening?

…I thought that perhaps I could keep an offscreen alpha buffer where the line’s alpha values are written, and then the fragment shader could look at that and determine whether to write new values…

That was one of the techniques that came to my mind too, particularly since it’s possible to render an A-buffer in a single pass nowadays (e.g. per-pixel linked lists). But we need to know more about your problem first.

Thanks for replying. I’m not really porting old code – by ‘old-style line drawing code’, I just meant that in the old days, you had a framebuffer with an alpha channel, and you could create a separate alpha buffer to track the drawing of lines. When you went to draw a translucent value, you could check the alpha buffer and decide whether or not to draw that pixel, add to the alpha buffer, etc. I just want to find out if the same result is possible using vertex/fragment shaders and 3D.

So – the lines I’m drawing are in 3D, or maybe you would call them 2.5D? They have no 3D thickness like cylinders, but rather are polylines with a 2D thickness rendered in vertex/fragment shaders using screen-space calculations, so that as you turn the camera the lines do not appear to change in width (i.e. they don’t look like a paper-thin billboard from the side). As far as ordering, I am not quite sure how I will deal with that yet. Right now I’m drawing all opaque objects/lines first and then drawing all the transparent ones with depth write turned off.

What I mean by translucency piling up is that if I draw a half-transparent line, and it crosses over itself, then the intersection is twice as opaque as the rest. This is not a problem so much when the line just crosses itself, but rather when triangles in tight curves overlap and create unsightly spikes. (see first two images attached). I’d like to find a way to prevent that. If I am not antialiasing, then I can use a stencil buffer and then each brushstroke can use the stencil to make sure it doesn’t overwrite pixels that have already been drawn into. But with antialiasing, it is a problem, because you DO want the feathered edges to overlap. Otherwise the areas that fade to nothing will still mark the stencil and you get black edges where there is any crossover (see image 3).

So basically, I was thinking that if I could access an alpha buffer from the fragment shader, then as I draw lines I could write the current line alpha values to the buffer. And the shader could look at the existing values and decide whether to write or not. That way the fuzzy edges could be allowed to write over existing ones, increasing the alpha value until it hit the current line translucency, if that makes sense. Kind of an additive process. Basically mimicking the old framebuffer approach. But from what I’ve read, fragment shaders cannot access alpha channel values, can they? (Also I should mention I am using iOS/OSX Metal to do this, not strictly OpenGL, although so far they seem pretty similar.)

If I can’t use an alpha buffer, maybe there’s some way to use blending modes? I know how to do it using a separate render pass and compositing, but I’m trying to figure out if there’s any way to do it so an entire scene with many lines can be rendered in one pass, without having to render and composite each line one at a time.

Does that illuminate the problem more? Thanks!

[ATTACH=CONFIG]1588[/ATTACH][ATTACH=CONFIG]1589[/ATTACH][ATTACH=CONFIG]1590[/ATTACH]

edit: I don’t know why the images are so small, does it resize them automatically? I uploaded much larger ones.

[QUOTE=bsabiston;1288882]So – the lines I’m drawing are in 3D, or maybe you would call them 2.5D? …
As far as ordering, I am not quite sure how I will deal with that yet.
Right now I’m drawing all opaque objects/lines first and then drawing all the transparent ones with depth write turned off.[/QUOTE]
Ok. Sounds like 2.5D. 2D objects, but they have an implicit ordering or layering to them.

What I mean by translucency piling up is that if I draw a half-transparent line, and it crosses over itself, then the intersection is twice as opaque as the rest.

Ok. That’s true if you use additive blending and your incoming fragments aren’t opaque. However, you have control over both the blend functions and your fragment alphas.
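To make the pile-up concrete, here is a small sketch (plain Python doing just the per-pixel coverage math, not GPU code) comparing how three common blend equations behave when a 50%-alpha line covers the same pixel twice:

```python
# Per-pixel blend equations, applied twice with the same source alpha,
# to show why a half-transparent line darkens where it crosses itself.

def over(dst, src_a):
    # Classic "source over" coverage: a_out = a_src + (1 - a_src) * a_dst
    return src_a + (1.0 - src_a) * dst

def additive(dst, src_a):
    # Additive: coverage just accumulates (clamped to 1)
    return min(1.0, dst + src_a)

def max_blend(dst, src_a):
    # MAX: coverage never exceeds the strongest single write
    return max(dst, src_a)

a = 0.5  # the line is 50% opaque
for name, f in [("over", over), ("additive", additive), ("max", max_blend)]:
    once = f(0.0, a)
    twice = f(once, a)  # the same pixel hit again by the overlap
    print(f"{name:8s} once={once:.2f} twice={twice:.2f}")

# over     once=0.50 twice=0.75   <- visibly darker at the crossing
# additive once=0.50 twice=1.00   <- "twice as opaque", as described
# max      once=0.50 twice=0.50   <- the overlap is invisible
```

Only the MAX equation leaves the self-intersection indistinguishable from the rest of the line.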

This is not a problem so much when the line just crosses itself, but rather when triangles in tight curves overlap and create unsightly spikes. (see first two images attached). I’d like to find a way to prevent that.

Even with that choice of blend function and fragment alphas, this is only going to happen when the triangles in the mesh used to rasterize your non-overlapping line feature overlap each other. One solution is to modify how you’re generating the vertices of this strip of triangles such that the triangles don’t overlap. That’s not really too hard. First, compute your displaced vertex positions in a 2D plane. Then connect them with triangles such that no two triangles overlap. No need for a stencil buffer to fix that part of your problem.

If I am not antialiasing, then I can use a stencil buffer and then each brushstroke can use the stencil to make sure it doesn’t overwrite pixels that have already been drawn into. But with antialiasing, it is a problem, because you DO want the feathered edges to overlap. Otherwise the areas that fade to nothing will still mark the stencil and you get black edges where there is any crossover (see image 3).

Right. I’m starting to see what you might want here. It sounds like what you really want is not additive blending (what it sounds like you’re doing now) but more like a “max” blending (which is supported).

Just as a thought exercise, imagine this: start with a black scratch framebuffer off-to-the-side which is the same size as your system framebuffer. Before rendering a line feature, forget what color you plan to draw that line feature. Instead, just render the line (using your trimesh) into the scratch framebuffer with an intensity value of 0…1 using MAX blending (0 = transparent, 1 = opaque). When you’re done, you’ve got 100% intensity in the core of the line (even where the line overlaps itself) and nice fade-in partial intensity along the edges of the line (even where the line overlaps itself, with the fade-in region intensities combining using the MAX operation, avoiding the effect you describe above).

Now that you have this intensity buffer (the scratch framebuffer), you can go back and shade in all the pixels of your line into your system framebuffer. Example: for 0% intensity pixels, you have 0% * line_color + 100% * background, and for 100% intensity pixels you have 100% * line_color + 0% * background.
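The two steps above can be sketched in plain Python (per-pixel math only; the tiny 4x4 buffers, the fragment list, and the line color are all made up for illustration):

```python
# Sketch of the scratch-intensity-buffer idea: MAX-blend line coverage
# into an intensity buffer, then composite the line color once per pixel.

line_color = (0.2, 0.4, 0.9)  # arbitrary example color
background = [[(1.0, 1.0, 1.0)] * 4 for _ in range(4)]  # 4x4 white framebuffer
intensity = [[0.0] * 4 for _ in range(4)]               # scratch buffer, cleared to 0

def rasterize(fragments):
    """fragments: iterable of (x, y, coverage) produced by the line trimesh.
    MAX blending means overlapping fragments never add up."""
    for x, y, cov in fragments:
        intensity[y][x] = max(intensity[y][x], cov)

# A line whose feathered edge hits pixel (1, 1) twice (self-overlap):
rasterize([(0, 1, 0.3), (1, 1, 0.3), (1, 1, 0.3), (2, 1, 1.0)])

# Composite pass: out = intensity * line_color + (1 - intensity) * background
for y in range(4):
    for x in range(4):
        a = intensity[y][x]
        background[y][x] = tuple(
            a * c + (1.0 - a) * b for c, b in zip(line_color, background[y][x])
        )
```

After the rasterize step, the twice-covered pixel (1, 1) holds 0.3, not 0.6, so the composite pass shades it exactly like the rest of the feathered edge.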

Leaving aside for a second that this suggests multipass, does this sound like it’d provide the look you want?

If so, then you can spend some cycles trying to think of how to make this single pass. For instance, one off-the-cuff idea is to do this “intensity buffer” generation in the alpha channel of the current framebuffer (with MAX blending). Then when blending on your line, you could potentially use destination alpha as the source of the alpha.

That’s just off-the-cuff. There’s no doubt better options. I don’t like this much because there are render state changes and full-screen clears of the alpha channel between each line (though if you think about it, you really don’t need a full-screen clear). However, you could batch all of the lines that are rendered with the same color in the same layer in this same pass, which would minimize the number of state changes and clears to just the number of line color layers you have.

Another option is to use stencil to cut out the difficult areas and handle those specially. Another is to use different depths for opaque vs. translucent parts so that opaque parts always overwrite translucent parts (though you may still need something special for where translucent parts overlap, possibly some increasing depth slope on the translucent edges so that the right edge fragments “win” the depth test in the right areas to give you a symmetric look where they meet).

So basically, I was thinking that if I could access an alpha buffer from the fragment shader, then as I draw lines I could write the current line alpha values to the buffer. And the shader could look at the existing values and decide whether to write or not.

Yeah, what you’re getting at is programmable blending. I believe you can do a limited form of this with Texture barriers. Basically, within a single fragment shader, read the current value, do your blending, and then write out the result. However, this doesn’t support multiple reads/writes to the same pixel/texel. For that kind of thing you need to get into writing shaders with side-effects (possibly using Image Load Store).
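As a thought experiment, the read-then-decide blend described in the original post might look like this per pixel (plain Python, not actual shader code; `blend_fragment` and `line_alpha` are made-up names for illustration):

```python
# Read-modify-write "programmable blend": a fragment may raise the stored
# alpha toward the stroke's overall translucency, but never past it.

def blend_fragment(stored_alpha, frag_alpha, line_alpha):
    """stored_alpha: what the alpha buffer already holds for this pixel.
    frag_alpha: this fragment's feathered coverage (0..1).
    line_alpha: the whole stroke's translucency cap.
    Returns the new stored alpha."""
    if stored_alpha >= line_alpha:
        return stored_alpha  # pixel already at the cap; discard the fragment
    return min(line_alpha, max(stored_alpha, frag_alpha * line_alpha))

# Feathered edges overlapping the same pixel: coverage climbs toward the
# cap and stops there, instead of piling up additively.
a = 0.0
for cov in (0.25, 0.6, 1.0, 1.0):  # four overlapping fragments
    a = blend_fragment(a, cov, line_alpha=0.5)
print(a)  # 0.5 — capped at the stroke's translucency
```

This mirrors the old framebuffer trick: overlapping fuzzy edges can only increase the stored alpha up to the line's own translucency.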

edit: I don’t know why the images are so small, does it resize them automatically? I uploaded much larger ones.

When you upload them to forums, yes it seems to (probably to save space). However, if you post an image link to a picture on another site, IIRC it doesn’t limit the size.

Thanks for the reply again!

Ok. Sounds like 2.5D. 2D objects, but they have an implicit ordering or layering to them. Even with that choice of blend function and fragment alphas, this is only going to happen when the triangles in the mesh used to rasterize your non-overlapping line feature overlap each other. One solution is to modify how you’re generating the vertices of this strip of triangles such that the triangles don’t overlap. That’s not really too hard. First, compute your displaced vertex positions in a 2D plane. Then connect them with triangles such that no two triangles overlap. No need for a stencil buffer to fix that part of your problem.

The problem is that this is not really 2.5D. The lines and points are in full 3D, but the rendering of them is in 2D. So I can turn the camera without ever recalculating any vertices – that means that intersections are going to happen, especially as you turn the camera side-on to the lines. Here is an example of this kind of technique: http://codeflow.org/entries/2012/aug/05/webgl-rendering-of-solid-trails/

So I really need/want some solution that just can handle self-intersection without me having to recalculate any vertices.

Your description of the “max” blending is how I handled things in my last project. But it was strictly 2D and doing multipasses was no problem. So the “intensity” buffer, using the alpha of the frame buffer, sounds promising. I can’t vary the depth for opaque/translucent because like I say these are in fact in 3D already, so I need them to appear with the correct perspective. I’ll spend some time today playing around with the MAX blending, maybe that can work – though I think it would take two draws of my translucent objects? One to form the intensity buffer and one to use it?

Texture barriers sounds like what I was looking for. I wonder if that is possible with Metal? If I could get the current alpha/color value in the fragment shader, then I could not let it go above the current ‘max’ value. It would not be perfect but it would stop those ugly triangle intersections. Of these two approaches the Max/intensity thing would be the best because the lines would be perfect. I feel like I spent a lot of time trying to do that in my last project and couldn’t get it to work, which is why I ultimately went with the multiple pass approach. But that was on Nintendo with more limited hardware capability so maybe I can find a way now…


In your suggestion, where would this fixing of the triangle intersections happen? In the vertex shader somehow, or before the render calls?


Thanks, this all helps a lot!

Well, the first step is to understand where they’re coming from. I don’t know, but it sounds like you’re extruding quads (or triangles) independently of how adjacent quads (or triangles) are being extruded. You can’t do that.

What you need is a watertight 2D mesh. That is, it fills all contained space but with no overlaps.

So, first step toward that is to draw out on paper how this would work. As a starter proposal, draw the vertices making up the center polyline. Then take each vertex and clone it, moving one “left” and one “right”, where those directions are away from the centerline, along a line bisecting each interior angle formed by the original polyline verts (see the URL you posted in your last post for a picture). Then connect up those displaced vertices with quads. Then split each quad into 2 tris. There you go: watertight 2D mesh with no triangle overlap.

So that’s on paper. Then you can implement that either on the CPU (uploading the triangle verts to the GPU that way). Or you can use the GPU to transform a line into this trimesh in a shader.

Are you certain it’s possible to do that? I mean, imagine you draw a line, say a simple horizontal line across the screen. But then you turn the camera 90 degrees, so that you are looking at the line end-on. All the vertices would basically be on top of each other. I can’t think of a way to adjust the vertices (in-shader) in cases like that. It’s not just a matter of looking at the adjacent neighbors; every point in the line could overlap with almost every other point, do you see what I mean? That’s why I think a geometry-based approach won’t work if I’m doing the rendering in shaders. It seems like it would have to be a blending solution where triangles can overlap but the blending/alpha-check keeps the colors correct.

Yes. The max blending will cover that.

I thought you were talking about how to get rid of the overlap between adjacent triangles in your polyline mesh. This would do that. However, you can try just throwing the max blending at it and ignoring that property of your triangle mesh.

Yes, that is what I was asking – are you certain it’s possible to fix the triangle mesh when the vertices can all pile up when seen from end-on? I’m not sure about that.

I haven’t had time to try the blending, been working on other stuff. I hope to try it this week, though I don’t have my hopes up for it working. After all, as you draw fragments into the framebuffer, they are blending with the background color, so further fragments would be tainted by that. I’m thinking of the areas around the edges where the alpha is not a constant but falls off to 0…

No, I was only talking about avoiding overlap of adjacent triangles in the polyline mesh when laying them out on a 2D plane, not about how to avoid overlap when you treat that polyline as 3D and rotate it so it’s seen edge-on, where the geometry projects to the same pixels.

I haven’t had time to try the blending, been working on other stuff. I hope to try it this week, though I don’t have my hopes up for it working. After all, as you draw fragments into the framebuffer, they are blending with the background color, so further fragments would be tainted by that. I’m thinking of the areas around the edges where the alpha is not a constant but falls off to 0…

I understand your concern. However, I think you’re missing a subtle detail about what I was suggesting that perhaps I wasn’t clear about. First, we use max blending in the scratch buffer just to compute the per-pixel alpha values. Then we blend the line color onto the screen using those alpha values, ensuring that we never update a pixel more than once.

There are a number of ways to do that. One is to just use a full-screen quad of the right color using those alpha values, but that has a lot of potential fill cost. Another is to just draw the polyline to the screen and use stencil test/set to ensure that no pixel is updated more than once.

I finally got a chance to try this today – been sidetracked on some other things – and it works great! I draw the lines twice – the first time sets the max alpha in the framebuffer’s alpha channel. The second pass blends the Source RGB with the destAlpha, and it also stores 0 in the dest alpha to erase the buffer for the next pass. I haven’t tried using a stencil buffer yet – but I don’t really need to, because when the alpha is set to zero by the second-pass write, any further writes to that pixel are multiplied by the zero alpha.
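For reference, the per-pixel math of that two-pass scheme can be sketched like this (plain Python; `dst` holds RGB plus the alpha channel being used as the intensity mask, and the coverage values are made up):

```python
# Two-pass scheme: pass 1 MAX-blends line coverage into the framebuffer's
# alpha channel; pass 2 blends src RGB by dest alpha and zeroes the alpha,
# so any further write to that pixel is multiplied by zero.

def pass1_write(dst, coverage):
    r, g, b, a = dst
    return (r, g, b, max(a, coverage))  # MAX blend into alpha only

def pass2_write(dst, src_rgb):
    r, g, b, a = dst
    out = tuple(a * s + (1.0 - a) * d for s, d in zip(src_rgb, (r, g, b)))
    return out + (0.0,)  # zero alpha: later writes become no-ops

pixel = (1.0, 1.0, 1.0, 0.0)                 # white background, cleared alpha
for cov in (0.4, 0.9, 0.4):                  # overlapping feathered fragments
    pixel = pass1_write(pixel, cov)          # alpha ends at 0.9, not 1.7

pixel = pass2_write(pixel, (0.0, 0.0, 0.0))  # draw a black line
pixel = pass2_write(pixel, (0.0, 0.0, 0.0))  # second hit: alpha is 0, no change
```

The second pass-2 write leaves the pixel untouched, which is exactly why the stencil buffer turned out to be unnecessary.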
https://imgur.com/PV0fIQ8

The only problem is that there’s a slight artifact at the joins where lines cross. For some reason it looks like the alpha is lighter than it should be at these creases. I don’t see why it would do that. Do you know?

https://imgur.com/wxQaFc3
https://imgur.com/OrVMpkF

That closeup is a picture of the alpha mask after the first write – so the problem is happening there, before the second stage. I would think the ‘max’ would make it darker than it should be, if anything. But it looks lighter?

Anyway thanks for the help, it is so much better than it was – is actually usable now.

It definitely looks like that. Not sure why. You may just want to trace it through and see what’s going on for those overlap texels. Are you sure there’s no testing/setting going on (e.g. depth, stencil, etc.) besides the alpha values with max blending? No MSAA rasterization and downsampling?