View Full Version : Wireframe in GL3



Jan
12-15-2008, 11:34 AM
Hi there

Just thinking: in GL3 there will be no GL_POLYGON (and not even GL_QUADS) anymore (well, in the 3.1-without-deprecated-stuff that everyone is waiting for).

I am currently changing an editor to use VBOs instead of immediate mode. The thing is, I now render all my polygons as triangles, which in wireframe mode gives me all those unwanted edges (quite annoying in orthographic view). Right now I COULD use GL_POLYGON together with NV_primitive_restart to render actual polygons from VBOs, at the price of ATI compatibility (which I don't want to give up). However, since I want to keep the porting effort for GL3 to a minimum, I thought about how I could do wireframe rendering without the extra edges in GL3.

The only solution I have come up with so far is to generate two VBOs for each mesh: one that holds the actual triangles, for the filled/textured view, and one that holds only GL_LINES, so that I would not render it as triangles at all.

Obviously that has twice the memory consumption, plus some other overhead.

A variation of that idea is to store the vertices only once, but have two different index buffers: one that indexes the vertices to form triangles, the other to form lines.


I do know for each triangle which edge is unwanted when I create the VBO, so I could store some additional data per vertex. My hope was that maybe I could remove the additional edge in a shader, but I honestly don't know of any good way to do that. I don't really care whether it takes geometry shaders, as long as it is a relatively simple plug&play solution.

Any ideas?
Jan.


PS: Please spare me the "but you can do everything with GL3, that 2.1 can"-speech.

arekkusu
12-15-2008, 11:57 AM
GL3 deprecates edge flags, and doesn't give you any replacement. Internally, the implementations still have to track edge flags for clipping purposes, so this decision doesn't really make sense to me.

I think in GL3 you are stuck with maintaining two separate index lists.

CatDog
12-15-2008, 05:13 PM
Second that. I'm doing it like this (one vertex buffer, two index lists). Once you get over that memory consumption issue, you'll see that this is a very convenient way to go.

CatDog

scratt
12-15-2008, 10:52 PM
I have to say I am also a little confused and irritated by some of the GL3 decisions as I try to stay compliant with the new spec...

I understand why the decisions were made, but I am constantly coming up against solutions which eat more memory and more client-server bandwidth, or GPU memory.
Overall it's a little frustrating.

V-man
12-16-2008, 01:19 AM
I once implemented this for D3D by adding one extra attribute per vertex.
I was rendering triangles. I think what I did was give two of the vertices a value of 1.0.
In the fragment shader I killed the fragments whose interpolated value was 1.0, or above 0.99 or something like that, and it worked well.
I ended up with a triangle with one of its edges invisible.

Jackis
12-16-2008, 03:31 AM
I think that if removing wireframe mode helps improve drivers and so on (and it should), then this decision is totally correct.
You know, when you are refactoring a very large piece of code like a driver and you see some rarely used or deprecated functionality that sinks its teeth into the whole library, you kill it with no regrets, provided that implementing the same functionality later wouldn't cost much.
So I hope it's a wise decision.

Xmas
12-16-2008, 04:03 AM
The overhead of generating an index buffer once in your application may actually be significantly lower than the driver overhead for every draw call that uses wireframe rendering.


GL3 deprecates edge flags, and doesn't give you any replacement. Internally, the implementations still have to track edge flags for clipping purposes, so this decision doesn't really make sense to me.
If all edges are boundary edges why would you have to track edge flags?

CatDog
12-16-2008, 04:26 AM
I am constantly coming up against solutions which eat more memory and more client-server bandwidth, or GPU memory.
Same same but different. Using GL_LINES and a (cache-optimized) index buffer seems to be the most performant way to draw lines these days. Nothing is free; you always have to pay for performance, and the currency is memory.
And people always complained about several ways to do things in GL...

CatDog

Groovounet
12-16-2008, 05:05 AM
I used this double-buffer idea on a project and it works pretty well. Drawing GL_LINES is a lot more efficient than using GL_TRIANGLES with glPolygonMode, which is about twice as slow as drawing filled triangles. The only issue is that if you don't want to see the back faces, you still get the back lines.

If your data structure allows you to create GL_POLYGONs that have only the wanted edges, I don't see why it would be any different with your lines.

One optimisation issue is that drawing lines can involve a larger amount of memory than drawing triangles: to draw a triangle you need 3 indices, but to draw a triangle's edges you need 6 indices...

This number of indices can be divided by two if you manage shared edges between triangles... but that involves a really different data structure and would be expensive to develop for an imperfect result... you would probably end up with a data structure that takes more memory on the CPU side and is slower.

Often, wireframe is used in an editor just to say "this object is selected". I'm not sure that is still relevant nowadays. Blending in a "selection color" is very efficient and fast to develop, with a really convincing result; depending on the scenario, working in HSV color space in the vertex or fragment shader to shift towards your "selection color", "locked color" or whatever gives really good results.

Jan
12-16-2008, 05:26 AM
"Often, wireframe is used in editor just to say "this object is selected". "

I don't agree with that. In a 3D preview, yes, but usually you work mostly in orthographic windows (maybe in CAD that's different; I only know a bit of Catia, and I think there a Gooch-shaded 3D view was the main working area, but I am not an expert on this).

In orthographic views you usually WANT everything to be wireframe.

I implemented it with two index buffers, which is quite easy to do and works well. But of course now I can see the lines of the back faces. I am not sure whether I want that; on complex objects it is a bit annoying. I could probably write a vertex shader that projects the vertices of back-facing lines to infinity or so.

I think the additional memory overhead for the second index buffer is nothing to worry about. Personally, I do agree with the decision to drop every rarely used feature.

Jan.

Groovounet
12-16-2008, 06:21 AM
I didn't mean to say we don't need wireframe in general, just that sometimes there are other solutions.

To remove the back-face lines you might also use a fragment shader to discard them, by checking the dot product between the view vector and the normal, which I suppose is still part of your vertex data...

Probably slower, but at least with this solution I would not worry about weird rendering at the edges... I never actually tried it. The infinite projection to "discard" vertices sounds good, especially with an orthographic projection.

arekkusu
12-16-2008, 12:00 PM
If all edges are boundary edges why would you have to track edge flags?

Using LINE PolygonMode, draw one triangle such that an edge is clipped outside the frustum. That clipping operation must insert a non-boundary edge.

Of course, you can play tricks here with guard band clipping etc, but the fundamental operation still requires edge flags. Only the API to set them has been removed.

Jan
12-16-2008, 04:26 PM
"Using LINE PolygonMode, draw one triangle such that an edge is clipped outside the frustum. That clipping operation must insert a non-boundary edge."

To be honest, I don't understand why the hardware would need to insert ANY edge in this situation.

Xmas
12-17-2008, 04:16 AM
Using LINE PolygonMode, draw one triangle such that an edge is clipped outside the frustum. That clipping operation must insert a non-boundary edge.
That's incorrect. To quote from the spec:

Edge flags are associated with these vertices so that edges introduced by clipping are flagged as boundary (edge flag TRUE), and so that original edges of the polygon that become cut off at these vertices retain their original flags.

arekkusu
12-17-2008, 10:37 PM
Hm, it's interesting that the spec says that (and has, since at least 1.2.)

I believe this is an error in the spec, because it would make glPolygonMode broken with respect to clipping if you implement the spec faithfully. Luckily, nobody does that.

Xmas
12-19-2008, 06:14 AM
I believe this is an error in the spec, because it would make glPolygonMode broken with respect to clipping if you implement the spec faithfully. Luckily, nobody does that.
I don't think it's truly an error in the spec, as PolygonMode is the only use of edge flags. It's likely a case of the spec following some early pipeline concept that did things in a certain way, and people later realising that this behaviour is rather undesirable. A very similar case is the clipping behaviour for wide points and lines.

Fundamentally, the spec should probably drop any language on X and Y clipping. But this also illustrates that even to achieve the sensible behaviour tracking edge flags is not required: simply don't generate a new edge/line at all.

arekkusu
12-19-2008, 11:17 AM
That wouldn't handle Z or user clip planes. Run something like glutmech, where the scene clips through the far plane. Now imagine adding line segments for every clipped vertex-- that's clearly wrong.

I think that as long as PolygonMode is in the spec, edge flags have to stay. The spec should not be written in a way that relies on implementation tricks like guard band or fragment clipping.

Xmas
12-19-2008, 04:42 PM
I think that as long as PolygonMode is in the spec, edge flags have to stay. The spec should not be written in a way that relies on implementation tricks like guard band or fragment clipping.
They're no more "implementation tricks" than geometric clipping. It's simply a different choice. In the case of wide points and lines clipping based on the rasterised primitive is arguably more desirable than clipping based on the primitive vertices.

And I still don't understand why you'd need edge flags for wireframe mode. Most implementations today generate lines for every visible edge and simply clip them as lines. Where do edge flags come in here?