Open GL should support konkave polygons directly



LuisAK
08-21-2013, 05:48 AM
OpenGL should support concave polygons directly.
I think it's an easy algorithm which should be no problem.

mhagain
08-21-2013, 05:57 AM
This would be an easy enough algorithm if OpenGL was a software library.

OpenGL is not a software library.

If graphics hardware does not support concave polygons, then no matter how much you may want OpenGL to support concave polygons, you can't have it.

The only way to support concave polygons is therefore to run them through a preprocessing stage that converts an arbitrary concave polygon back to multiple convex polygons. And solutions already exist for that.

This request, coupled with your previous, demonstrates that you don't actually understand what OpenGL is. It may also be the case that you have some very specific problems, you've chosen OpenGL as a solution, and now you're finding that OpenGL is a bad fit for those problems. If that's so, then rather than wishing a bad fit was made good, you should instead be looking for a more appropriate solution.

Alfonse Reinheart
08-21-2013, 06:36 AM
OpenGL does already support "konkave" polygons. You can make your polygons concave all you want.

So long as they're triangles.

Quads and general GL_POLYGON primitives were removed in OpenGL 3.1. Yes, compatibility mode still has them around. But the ARB is not going to make any changes to OpenGL just for some compatibility-only feature.

Nowhere-01
08-21-2013, 08:09 AM
The only way to support concave polygons is therefore to run them through a preprocessing stage that converts an arbitrary concave polygon back to multiple convex polygons. And solutions already exist for that.

Well, OpenGL's tessellation is a hardware facility that may be used for exactly such preprocessing. Or you can use the stencil buffer. But yes, there's no native implementation of such "am easy algotythm" (I couldn't resist opening the OP's hidden post, and it was totally worth it) in OpenGL.

For the OP:
Because almost no one actually needs it in realtime rendering, and it requires either significant changes to the rasterization pipeline or tessellation (which you can utilize yourself, both in software and in hardware).


It may also be the case that you have some very specific problems, you've chosen OpenGL as a solution, and now you're finding that OpenGL is a bad fit for those problems. If that's so, then rather than wishing a bad fit was made good, you should instead be looking for a more appropriate solution.

The OP is not likely to be an actual OpenGL programmer. Otherwise he would be busy actually doing something instead of searching for flaws in the API, which he fails miserably at because of an absolute lack of knowledge of any graphics API and of rendering in general.

kRogue
08-21-2013, 12:47 PM
I think it's an easy algorithm which should be no problem.


Actually, it is not; triangulation of a closed, non-intersecting line loop can still be tricky. I invite you to do a Google search to see how deep the rabbit hole goes for triangulation (admittedly, triangulating the fill of one closed line loop is much simpler than triangulating multiple, potentially intersecting line loops). But to give you an idea: for triangulating a simple polygon, the simplest algorithm, "ear cutting", runs in O(n^2); using monotone polygons runs in O(n log n); and getting faster than O(n log n) leads to pretty damn complicated algorithms.
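Just to make concrete what even the "simple" case involves, here is a rough ear-clipping sketch (my own illustration in C++, assuming a simple, counter-clockwise polygon with no holes; not production code):

#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// z-component of (a - o) x (b - o); > 0 means a left turn (a convex corner for CCW input).
static float cross(const Vec2 &o, const Vec2 &a, const Vec2 &b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// p lies inside (or on the border of) the CCW triangle abc if it is left of all three edges.
static bool pointInTriangle(const Vec2 &p, const Vec2 &a, const Vec2 &b, const Vec2 &c) {
    return cross(a, b, p) >= 0.0f && cross(b, c, p) >= 0.0f && cross(c, a, p) >= 0.0f;
}

// Returns indices into 'poly', three per triangle; empty on degenerate input.
std::vector<unsigned> earClip(const std::vector<Vec2> &poly) {
    std::vector<unsigned> idx(poly.size());
    for (unsigned i = 0; i < poly.size(); ++i) idx[i] = i;

    std::vector<unsigned> tris;
    while (idx.size() > 3) {
        bool clipped = false;
        for (std::size_t i = 0; i < idx.size(); ++i) {
            unsigned prev = idx[(i + idx.size() - 1) % idx.size()];
            unsigned curr = idx[i];
            unsigned next = idx[(i + 1) % idx.size()];

            // The corner must be convex ...
            if (cross(poly[prev], poly[curr], poly[next]) <= 0.0f) continue;

            // ... and no other remaining vertex may lie inside the candidate ear.
            bool ear = true;
            for (unsigned j : idx) {
                if (j == prev || j == curr || j == next) continue;
                if (pointInTriangle(poly[j], poly[prev], poly[curr], poly[next])) { ear = false; break; }
            }
            if (!ear) continue;

            tris.push_back(prev); tris.push_back(curr); tris.push_back(next);
            idx.erase(idx.begin() + i);   // "cut" the ear
            clipped = true;
            break;
        }
        if (!clipped) return {};          // self-intersecting or degenerate input
    }
    tris.push_back(idx[0]); tris.push_back(idx[1]); tris.push_back(idx[2]);
    return tris;                          // render these as GL_TRIANGLES
}

And that is only the naive O(n^2) version; making it robust against collinear, duplicate or nearly degenerate vertices is where the real pain starts.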

I think the OP's issue is the following: the OP does not realize that OpenGL is not a library but rather a standardized interface to GPUs. I suspect that once the OP learns that, the OP's posts won't be quite so bad.

LuisAK
08-21-2013, 01:35 PM
I think it's an easy algorithm which should be no problem.
Actually it is!

LuisAK
08-21-2013, 01:59 PM
Triangulation of polygons
http://narfation.org/alggeo/triangulation.pdf
is too over-elaborate.
-------------------------------
A simple algorithm can render concave polygons directly.

mhagain
08-21-2013, 03:17 PM
It still doesn't change the fact that OpenGL is NOT a software library.

You don't "implement algorithms" in OpenGL. All that OpenGL does is tell your graphics hardware what to do. Your graphics hardware does what you tell it.

I said it before and I'll repeat - no matter how much you want it, no matter how much you think it's a "simple algorithm", if hardware cannot do it, then it's not a suitable feature.

What part of that is difficult for you to understand?

GClements
08-21-2013, 05:29 PM
It may also be the case that you have some very specific problems, you've chosen OpenGL as a solution, and ...
".... now you have two problems" ;)

Alfonse Reinheart
08-21-2013, 06:40 PM
I said it before and I'll repeat - no matter how much you want it, no matter how much you think it's a "simple algorithm", if hardware cannot do it, then it's not a suitable feature.

To be perfectly fair (even though this idea doesn't deserve it), it is entirely doable on 4.x-class hardware. The OpenGL implementation could use a compute shader to implement the division, write out the triangles to a buffer, and then feed that buffer into the rendering pipeline.

Now that's way too high level for OpenGL to do, and it's really stupid besides (since you can do it just once on the CPU). But it could be done.
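For the curious, the host side of that idea would look roughly like this on GL 4.3+ (the compute shader, the buffer layouts and all the names here are hypothetical; the point is only that the triangles never leave the GPU):

#include <GL/glew.h>

// Hypothetical GPU-side triangulation: 'triangulateProgram' is a compute shader (not shown)
// that reads packed polygons from binding 0, appends triangle vertices to binding 1 and
// fills a DrawArraysIndirectCommand at binding 2.
void drawTriangulatedOnGpu(GLuint triangulateProgram, GLuint polygonSsbo,
                           GLuint triangleVbo, GLuint indirectBuffer,
                           GLuint drawProgram, GLuint vao, GLuint polygonCount)
{
    // 1. Triangulate on the GPU: polygons in, triangles plus a draw count out.
    glUseProgram(triangulateProgram);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, polygonSsbo);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, triangleVbo);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, indirectBuffer);
    glDispatchCompute(polygonCount, 1, 1);

    // 2. Make the compute writes visible to vertex fetching and to the indirect command.
    glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT | GL_COMMAND_BARRIER_BIT);

    // 3. Feed the generated triangles straight back into the rendering pipeline.
    glUseProgram(drawProgram);
    glBindVertexArray(vao);                          // VAO set up to source attributes from triangleVbo
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glDrawArraysIndirect(GL_TRIANGLES, nullptr);     // offset 0 into indirectBuffer
}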

LuisAK
08-22-2013, 10:51 AM
OK,
OpenGL should support concave polygons directly in the graphics hardware.
Rendering them is as easy as rendering triangles.

LuisAK
08-22-2013, 11:28 AM
OpenGL only supports triangles in the hardware because it is thought that they
are easier to render than concave polygons, but in fact concave polygons can be rendered as easily as triangles.

thokra
08-22-2013, 11:44 AM
OpenGL should support concave polygons directly in the graphics hardware.

The hardware itself has to support stuff for OpenGL to be able to expose it in the first place. You could expose all you want in OpenGL, but if the hardware doesn't have a way to do it directly, you'd need to either emulate it or do it on the CPU. As Alfonse said, this could probably be done - the fact that it's not there kind of speaks for itself ...


concave polygons can be rendered as easily as triangles

From where did you get this? Do you guess or do you know? What are your sources?

I can't believe I'm getting sucked in again ...

LuisAK
08-22-2013, 12:39 PM
From where did you get this?
From me and myself.


Do you guess or do you know?
I know it.


What are your sources?
I developed the source code.

LuisAK
08-22-2013, 12:47 PM
As Alfonse said, this could probably be done - the fact that it's not there kind of speaks for itself ...

I think we are talking about the next release/releases.

thokra
08-22-2013, 12:53 PM
I think we are talking about the next release/releases.

We are. What I'm saying is that if a primitive type representing concave polygons were considered feasible by the ARB, it would probably already be in the spec by now. If anything, the ARB has removed a number of primitive types in the past.

mhagain
08-22-2013, 02:48 PM
OK,
OpenGL should support concave polygons directly in the graphics hardware.
Rendering them is as easy as rendering triangles.

This is utterly meaningless.

OpenGL cannot "support" anything "in the hardware". That's not the way things work.

The way things work is roughly like this:

Hardware has certain capabilities. These capabilities are decided by the hardware manufacturers with occasional input from others.

OpenGL is a "software interface to graphics hardware". In other words, it provides a way to access those capabilities from software.

Hardware manufacturers implement the OpenGL specification in their drivers. You make OpenGL calls. The driver converts those calls to something that the hardware understands, then passes them on to the hardware. The hardware executes the calls.

But the fundamental requirement - that the hardware must support a feature - has absolutely nothing to do with OpenGL. It's entirely up to the hardware manufacturers.

Do you see where this is going?

There's no point whatsoever lobbying the OpenGL forums for this.
There's no point whatsoever lobbying the ARB for this.

The people you need to be talking to are the hardware manufacturers. You need to talk to AMD. You need to talk to NVIDIA. You need to talk to Intel. This has nothing to do with OpenGL; you need to ask the hardware manufacturers to support concave polygons in their hardware. And you need to persuade all of them to implement concave polygons in a consistent manner. Because otherwise you're just creating noise and wasting your own time. You want concave polygons? I've told you what to do. Go do it.

LuisAK
08-22-2013, 11:34 PM
OK,
the graphics hardware should support the rendering of concave polygons.
The graphics hardware isn't only hardware; there's a CPU, memory and software.
And it should be easy to implement this in that graphics software.

Aren't there any guys from the ARB or from companies like AMD, NVIDIA or Intel reading these threads?

Alfonse Reinheart
08-23-2013, 12:04 AM
Aren't there any guys from the ARB or from companies like AMD, NVIDIA or Intel reading these threads?

Are there? I highly doubt that. Why?

Because of poorly thought out suggestions like yours.

Who exactly is this going to help? The transformation of concave polygons into a series of triangles is simple enough that everyone who needs it already has the code for it. Everyone who needs it does it as a simple pre-process on their input data, executing it before handing the data off to OpenGL.

Therefore, the only legitimate benefit of this (FYI: "someone is too lazy to do it as a pre-process" is not a legitimate reason to ask for something) is if you're dynamically generating concave polygons via some process, and you are doing this on the GPU. In that case, you want the GPU to be able to accept these polygons directly, without having to do a GPU-to-CPU copy to process them and then a CPU-to-GPU copy back.

And if that's the case, there are two perfectly viable solutions:

1: Have your GPU process that generates the data generate triangles instead of concave polygons. That is, put whatever triangulation algorithm into the GPU process.

2: Do what I said earlier. Write a compute shader that takes individual polygons and converts them into a sequence of triangles. It's not the easiest thing in the world to write, but it's doable.

In short, this is a problem you can solve on your own; there is no need for a specific hardware feature or solution, besides the fact that you want it. This isn't even like your suggestion for a font system; that at least has some clear benefits to it. It's not going to happen for practical reasons, but there is at least an argument to be made for it. There was obvious utility to having the feature, even if it's not appropriate for OpenGL. This concave polygon thing is just personal wishlist crap that would help nobody but yourself.

Why should the ARB, AMD, NVIDIA, or Intel waste their precious time reading suggestions that are this useless? People suggesting pet features like this is exactly why they don't read these threads. Too much noise and not enough signal.

mhagain
08-23-2013, 03:24 AM
OK,
the graphics hardware should support the rendering of concave polygons.
The graphics hardware isn't only hardware; there's a CPU, memory and software.
And it should be easy to implement this in that graphics software.

Aren't there any guys from the ARB or from companies like AMD, NVIDIA or Intel reading these threads?

You're still not getting it, are you?

You claim that concave polygons are just as easy to rasterize. Got news for you - they're not.

Triangles have certain properties that make them ideally suitable. Some of these are common to all convex polygons, some are particular to triangles. You can't say any of this about concave polygons.



Interpolating texcoords or colours across a triangle is just a matter of taking a weighted average from 3 points.
All points on a triangle are guaranteed to lie on the same plane.
Filling a triangle can be as simple as starting at one edge and drawing a line until you meet another edge.
A triangle can be arbitrarily split in two and the result is two more triangles, the very same primitive type.


This can all be accelerated in hardware, so you need to drop your obsession with "in software" right now. An "in software" approach already exists for concave polygons - it's called GLU tessellation and all OpenGL implementations support it. But all that it does is convert the concave polygon to triangles, which are then dealt with by the hardware. "Graphics hardware isn't only hardware" is one of the most nonsensical things I've read; it's either hardware or it's not, and triangle rasterization is currently implemented in hardware. If you want a serious, competitive alternative, then it must also be implemented in hardware; otherwise, forget about it.
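For reference, the GLU path looks roughly like this (a minimal compatibility-profile sketch; the error and combine callbacks that real code needs are left out, and on Windows the callbacks need the CALLBACK calling convention):

#include <GL/gl.h>
#include <GL/glu.h>

// Tessellate one concave contour and draw it via immediate mode.
void drawConcavePolygon(const GLdouble (*verts)[3], int count)
{
    GLUtesselator *tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (void (*)()) glBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (*)()) glVertex3dv);
    gluTessCallback(tess, GLU_TESS_END,    (void (*)()) glEnd);

    gluTessBeginPolygon(tess, NULL);
    gluTessBeginContour(tess);
    for (int i = 0; i < count; ++i)
        gluTessVertex(tess, (GLdouble *) verts[i], (GLdouble *) verts[i]);  // coords double as the per-vertex data
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);   // triangles/fans/strips are emitted through the callbacks here

    gluDeleteTess(tess);
}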

GClements
08-23-2013, 05:04 AM
Filling a triangle can be as simple as starting at one edge and drawing a line until you meet another edge.


It can be even simpler: fill one or more rectangles which collectively cover the triangle, discarding a fragment if any of its barycentric coordinates are negative.
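In code, that rule is roughly the following (an illustrative CPU sketch only; real hardware additionally worries about fill rules, sub-pixel precision and clipping). The same weights also answer the interpolation question: a vertex attribute is just w0*a0 + w1*a1 + w2*a2 with the normalized weights.

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Signed "edge function": > 0 when p lies to the left of the directed edge a->b.
static float edge(const Vec2 &a, const Vec2 &b, const Vec2 &p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Plot(x, y, w0, w1, w2) receives the pixel and its normalized barycentric weights.
template <typename Plot>
void rasterizeTriangle(Vec2 v0, Vec2 v1, Vec2 v2, Plot plot)
{
    const float area = edge(v0, v1, v2);          // 2 * signed area; assume CCW winding
    if (area <= 0.0f) return;

    const int xmin = (int)std::floor(std::min({v0.x, v1.x, v2.x}));
    const int xmax = (int)std::ceil (std::max({v0.x, v1.x, v2.x}));
    const int ymin = (int)std::floor(std::min({v0.y, v1.y, v2.y}));
    const int ymax = (int)std::ceil (std::max({v0.y, v1.y, v2.y}));

    for (int y = ymin; y <= ymax; ++y)
        for (int x = xmin; x <= xmax; ++x) {
            const Vec2 p = { x + 0.5f, y + 0.5f };             // sample at the pixel centre
            const float w0 = edge(v1, v2, p);                  // weight of v0
            const float w1 = edge(v2, v0, p);                  // weight of v1
            const float w2 = edge(v0, v1, p);                  // weight of v2
            if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue; // "discard" the fragment
            plot(x, y, w0 / area, w1 / area, w2 / area);       // the weights drive interpolation too
        }
}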

LuisAK
08-23-2013, 07:43 AM
It can be even simpler: fill one or more rectangles which collectively cover the triangle, discarding a fragment if any of its barycentric coordinates are negative.
Rendering a concave polygon is so simple:
1. Draw a rectangle around the concave polygon.
2. Draw the polygon's lines.
3. Scan the rectangle from xmin to xmax and, within this loop, from ymin to ymax.
4. When you pass the first line you get inside the polygon, at the next line you get outside, and so on.
5. Inside, you set the pixels, and so the concave polygon will be drawn.
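In code, the in/out toggle is essentially the classic even/odd crossing test. A rough CPU sketch, using an analytic crossing test rather than testing against drawn lines (illustrative only, not the exact implementation):

#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// True if a horizontal ray from p to +infinity crosses the polygon outline an odd number of times.
bool insideEvenOdd(const std::vector<Vec2> &poly, Vec2 p)
{
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        const Vec2 &a = poly[i];
        const Vec2 &b = poly[j];
        if ((a.y > p.y) != (b.y > p.y) &&                          // edge straddles the scanline
            p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x)   // crossing lies to the right of p
            inside = !inside;                                      // toggle in/out
    }
    return inside;
}

// Usage: loop over the bounding rectangle and set the pixel wherever
// insideEvenOdd(poly, {x + 0.5f, y + 0.5f}) returns true.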

kRogue
08-23-2013, 09:36 AM
Rendering a concave polygon is so simple:
1. Draw a rectangle around the concave polygon.
2. Draw the polygon's lines.
3. Scan the rectangle from xmin to xmax and, within this loop, from ymin to ymax.
4. When you pass the first line you get inside the polygon, at the next line you get outside, and so on.
5. Inside, you set the pixels, and so the concave polygon will be drawn.


I am going to try (again) to help you LuisAK.

What are the values of the interpolants going to be? Not only does one need to cover correctly where a polygon is on the screen, one also needs to compute the interpolants over the interior. Here is an example: you have a simple polygon comprising N points (p[1], p[2], ..., p[N]), and each vertex has a color value given by (c[1], c[2], ..., c[N]); i.e. vertex p[i] has color c[i]. For a point within the polygon, what is the correct color value to use? For the case of a triangle, one essentially uses barycentric coordinates.
The algorithm you give is not exactly efficient, as it essentially amounts to counting edge crossings iteratively. It does not parallelize well and thus maps poorly to a GPU; it is essentially a poor man's edge counter. If you detect edge crossings by testing against the drawn edges, the algorithm is going to break; if you do an analytic test, extra care is needed for horizontal or vertical edges (depending on whether by "cross" you mean crossing in y or in x) and for the additional corner cases that come along the way.


Now, if you really, really want to draw an arbitrary polygon in GL, you can do one of two things:

Triangulate it and then feed those triangles to GL

OR

use the stencil trick to compute the interior of a polygon. This trick also works for general paths, with some limitations. The trick is this: 1) clear the stencil buffer; 2) pick a point P (any point will actually work), set the stencil operation to invert, and draw a triangle fan centered at P using the vertices of the polygon for the non-center points of the fan, i.e. draw the triangles (P,p[1],p[2]), (P,p[2],p[3]), ..., (P,p[i],p[i+1]), ..., (P,p[N],p[1]). Where the stencil is non-zero is "inside" (according to the odd-even fill rule) and where it is 0 is outside. For simple polygons, the odd-even fill rule is equivalent to the non-zero fill rule. Going further, one can use increment and decrement with wrapping to implement a non-zero fill rule for general paths (subject to the complexity not exceeding 127). A rough sketch in GL calls follows.
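Something like this (a minimal compatibility-profile sketch, assuming a stencil buffer is present and depth testing is off; untested, for illustration only):

#include <algorithm>
#include <GL/gl.h>

// Fill a (possibly concave) polygon, odd/even rule, given as n 2D vertices.
void fillPolygonWithStencil(const GLfloat (*verts)[2], int n)
{
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    // Pass 1: touch only the stencil, inverting it under every triangle of the fan.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);
    glBegin(GL_TRIANGLE_FAN);                 // the fan centered at verts[0] plays the role of P
    for (int i = 0; i < n; ++i)
        glVertex2fv(verts[i]);
    glEnd();

    // Pass 2: draw the polygon's bounding rectangle wherever the stencil is non-zero;
    // that area is exactly the odd/even interior. Zeroing on pass also cleans the
    // stencil up for the next polygon.
    GLfloat xmin = verts[0][0], xmax = xmin, ymin = verts[0][1], ymax = ymin;
    for (int i = 1; i < n; ++i) {
        xmin = std::min(xmin, verts[i][0]); xmax = std::max(xmax, verts[i][0]);
        ymin = std::min(ymin, verts[i][1]); ymax = std::max(ymax, verts[i][1]);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    glRectf(xmin, ymin, xmax, ymax);

    glDisable(GL_STENCIL_TEST);
}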


That second option has been in the OpenGL Red Book for over a decade. You really need to start doing some reading; not doing so is going to leave you ignorant.

http://fly.cc.fer.hr/~unreal/theredbook/ (old online version of the Red book)
http://www.opengl.org/registry/doc/glspec44.core.pdf The OpenGL specification, a must read.
http://bps11.idav.ucdavis.edu/talks/04-realTimeRenderingArchitecture-BPS2011-houston.pdf excellent presentation explaining current desktop GPU architectures
http://www.arcsynthesis.org/gltut/ (in Alfonse's signature; a tutorial to get started doing 3D graphics with OpenGL)

LuisAK
08-23-2013, 12:56 PM
Thanks, but I don't need your help; maybe you could help the guys designing the next hardware for OpenGL.
I read the OpenGL SuperBible 14 years ago, and I know what I'm talking about when I make some suggestions.
You know? Suggestions!

My next suggestion: the whole graphics API of an operating system (Windows, Linux, ...) should be based on a 3D system like OpenGL or Direct3D.
There should be no difference between 3D and 2D. What does it matter if the z-coordinate is 0 in 2D?

That is why I also suggested that fonts should be supported by a 3D system, and therefore fonts should be supported by the hardware.


Here you can see what graphics hardware is able to do:
http://de.wikipedia.org/wiki/Grafikprozessor
http://de.wikipedia.org/wiki/OpenCL
http://de.wikipedia.org/wiki/PhysX
Given all these abilities, my suggestions should easily be possible.

You have to know that I'm often 20 years in the future.
"Eppur si muove!" ("And yet it moves!")

Have a nice day
;-)

mhagain
08-23-2013, 01:32 PM
My next suggestion: the whole graphics API of an operating system (Windows, Linux, ...) should be based on a 3D system like OpenGL or Direct3D.
There should be no difference between 3D and 2D. What does it matter if the z-coordinate is 0 in 2D?

You have to know that I'm often 20 years in the future.

...or 8 years in the past. This happened for every major OS a long time ago.

LuisAK
08-23-2013, 01:45 PM
...or 8 years in the past. This happened for every major OS a long time ago.
Happened? Maybe in a few aspects, but not consistently!

1. Why can the simple graphics API render polygons when OpenGL can't?
API: http://msdn.microsoft.com/en-us/library/windows/desktop/dd162814(v=vs.85).aspx
C#: http://msdn.microsoft.com/de-de/library/vstudio/89sks199.aspx
2. Why can't OpenGL supply fonts directly?
:
:

mhagain
08-23-2013, 02:08 PM
Happened? Maybe in a few aspects, but not consistently!

1. Why can the simple graphics API render polygons when OpenGL can't?
API: http://msdn.microsoft.com/en-us/library/windows/desktop/dd162814(v=vs.85).aspx
C#: http://msdn.microsoft.com/de-de/library/vstudio/89sks199.aspx
2. Why can't OpenGL supply fonts directly?
:
:

Because the "simple graphics API" is slow and implemented in software. OpenGL isn't. As I said in my very first reply to this thread - OpenGL is not the tool for what you want to achieve here.

Also note that the "simple graphics API" is a high-level abstraction, while OpenGL is a low-level abstraction. The "simple graphics API" doesn't render things directly; what it does is decompose things (in software) into lower-level primitives which are then rendered by a lower-level abstraction.

You don't want OpenGL. You want a high-level API. OpenGL is not that API.

Let's get this really straight. There will always be a need for a low-level abstraction. At the very least, one is needed to sit under the high-level abstraction you want (and that you seem to think OpenGL is - or should be). You're looking for high-level functionality in a low-level API and you're disappointed when you don't find it. Guess what? You're just looking in the wrong place. If you want to draw fonts, or concave polygons, or whatever, then just use a high-level abstraction and be done with it. OpenGL is not the solution you need for this.

LuisAK
08-23-2013, 02:12 PM
Hardware has certain capabilities. These capabilities are decided by the hardware manufacturers with occasional input from others.
OpenGL is a "software interface to graphics hardware". In other words, it provides a way to access those capabilities from software.

It seems that you're talking about hardware from 1985.


As you can see on the following pages, the actual "hardware" is a system of parallel processing systems:
http://de.wikipedia.org/wiki/Grafikprozessor
http://de.wikipedia.org/wiki/OpenCL
http://de.wikipedia.org/wiki/PhysX

Therefore the two suggestions I made (direct font support, and rendering of polygons like the simple graphics API can do)
could easily be implemented in the "hardware", and OpenGL could supply the interface.

LuisAK
08-23-2013, 02:18 PM
In #25 you write:
"...or 8 years in the past. This happened for every major OS a long time ago."

and in #27 you write:
"Because the "simple graphics API" is slow and implemented in software."

So in #27 you show that your own statement in #25 is wrong!

In #25:
"...or 8 years in the past. This happened for every major OS a long time ago."
-> This didn't happen! OK?

LuisAK
08-23-2013, 02:25 PM
You don't want OpenGL. You want a high-level API. OpenGL is not that API.


You're talking about the present; I'm talking about the future - about what could be. (Suggestions for the next release.)

mhagain
08-23-2013, 02:56 PM
In #25 you write:
"...or 8 years in the past. This happened for every major OS a long time ago."

and in #27 you write:
"Because the "simple graphics API" is slow and implemented in software."

So in #27 you show that your own statement in #25 is wrong!

In #25:
"...or 8 years in the past. This happened for every major OS a long time ago."
-> This didn't happen! OK?

No.

The "simple graphics API" you're referring to is GDI, which Windows maintains support for on account of backwards compatibility, but which it no longer uses for it's main drawing.


You're talking about the present; I'm talking about the future - about what could be. (Suggestions for the next release.)

So you're talking about taking everything that OpenGL is and has worked towards for the past 20 years and throwing it out?

Really, you have no idea what you're talking about. You should learn this stuff before you go mouthing off.

You don't want OpenGL, you want a high-level abstraction. So just use a high-level abstraction instead, OK?

Nowhere-01
08-23-2013, 02:59 PM
...
Why do you keep responding and elaborating to someone who clearly ignores all of your explanations, refuses to be educated (keeps assuming that he knows everything better) and keeps posting repetitive, utter nonsense and mumbling? It's obvious that he's either a troll or a complete dumbass. It's also clear that he's not trying to learn the API or to produce anything. Do you really want threads like this on this forum, especially in Suggestions?

I said exactly the same thing in his previous topic about six months ago. And now it's the same: a bunch of over-tolerant people trying to educate an obvious troll or an extremely dense, ignorant person. And why do you want any of them educated or treated well? Why do you educate in the first place? Maybe you're helping potential developers because you want OpenGL to be more popular, used in future projects, and backed by a decent, mature community and support? Do you think he's able to contribute to that? You keep writing thousands of words, drawing attention to this stupid thread, instead of showing good judgment and just saying GTFO in response to obvious ignorance and thickness. It's good when a community shows no tolerance for bullshit; it keeps itself clean and intelligent that way. And if he's actually not a troll, he may review his attitude and try to understand the things he's being told over and over. Why don't you just report him for trolling/flaming/flooding on the Suggestions forum and wait until these threads are (re)moved? And spend your time on someone who's actually willing to be educated and has some potential.

kRogue
08-23-2013, 03:09 PM
This is my last post in this thread and, as a rule now, I am not going to try to help you, LuisAK; you have repeatedly ignored what everyone has said, and you have repeated the same statements even though multiple contributors have given counter-arguments to them.

If you really want to suggest functionality to find its way into the hardware here are some tips:

Have a clear way to implement said functionality in a highly parallel fashion.
Have a clear notion of what the input and output are. Your request to "draw concave polygons" in no way addresses what the output should be: what are the interpolants, and what happens if the line loop defining the polygon is self-intersecting?
Be aware of the various constraints, i.e. the trade-off between flexibility and speed; generally speaking, more flexible implies slower.


Nothing of what you have suggested (font support or rendering of simple polygons beyond triangles) has done ANY of the above.

Going further, to repeat what mhagain said: OpenGL is a low-level API; it is supposed to map naturally and directly to GPU commands. If one looks at GL, one sees that essentially each command is one of the following:

Define data store (be it buffer objects or textures)
Set values of data store directly (i.e. glTexSubImage2D for example)
Set GL state to determine from what data stores to fetch data (binding textures and buffer objects for example)
Set GL state to what data stores to write (framebuffer objects and transform feedback for example)
Set GL state for parameters of fixed functionality (depth test, stencil test, color, depth, stencil masking and blending for example)
Specify how to process the input vertex stream into values for the data store writes, i.e. define GLSL programs
Limited queries (samples passed, syncs)
Memory barriers (for the random reads/writes introduced in GL4 hardware)
Execute draw commands


None of that is high level except for the creation and compiling of GLSL (and that is implemented on the CPU, with compiles often cached, and essentially done only once per GLSL program/shader).
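To make that concrete, a typical modern-GL draw is nothing more than a handful of such commands (all the names here are placeholders):

#include <GL/glew.h>

// Define a data store and set its values directly.
GLuint createVertexBuffer(const void *data, GLsizeiptr bytes)
{
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    return buf;
}

void drawMesh(GLuint program, GLuint vao, GLuint texture, GLsizei vertexCount)
{
    glUseProgram(program);                        // how the vertex stream is processed (GLSL)
    glBindVertexArray(vao);                       // which data stores vertices are fetched from
    glBindTexture(GL_TEXTURE_2D, texture);        // which data store the shader samples
    glEnable(GL_DEPTH_TEST);                      // a fixed-function parameter
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // execute a draw command
}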

There are plenty of interesting places to explore; for example, how much of what is currently fixed function can be made more flexible.

Other issues are related to API niceties to make the GL API easier to work with (almost always after a new spec is released, the begging for DSA comes).


I am out of this now; this is my last post to ANY thread you start LuisAK.

LuisAK
08-24-2013, 12:00 AM
I am not going to try to help you, LuisAK;

I didn't ask for help here; I just talked about suggestions.

And if you have a look at PhysX, you can see that the graphics hardware is far away from being a low-level abstraction.

LuisAK
08-25-2013, 01:22 AM
Rendering concave polygons directly is very easy.
Here is an example:
[attachment 1119]
This concave polygon is rendered at pixel level, the way it could be done by the graphics hardware,
as efficiently as triangles. Of course, there are a few little details in my algorithm to detect the crossing points.
But in summary it's much more efficient than triangulation.

My evidence is provided.

Alfonse Reinheart
08-25-2013, 02:10 AM
Because I enjoy feeding idiocy on this forum:


My evidence is provided.

Drawing a picture is not evidence. Anyone can draw a picture of an algorithm. That's not evidence that this algorithm can be implemented in hardware or implemented efficiently in hardware.

Unless you demonstrate an understanding of how this stuff is implemented in hardware, you cannot claim any kind of knowledge of whether this could be implemented reasonably.

And to preempt your linking me to PhysX or some other GPGPU library, let me stop you right there. That's all just executing programs on the GPU. You are not talking about shaders. You are talking about changing the rasterizer. The rasterizer is not done via shaders; it's hard-coded into the GPU.

What you are talking about is the equivalent of wanting to change how the division opcode on a CPU works. The fact that you can make the CPU do amazing things with a good compiler or other system has nothing to do with the feasibility of whatever else you wanted.

Lastly, you continue to miss the important question. Could IHVs implement this? Absolutely. WHY SHOULD THEY?! Nobody except you wants this. This solves a problem that nobody except the most lazy programmers have. Why should IHVs invest precious transistors solving a problem that nobody needs solving? That will be useful for one thousandth of a percent of their customers.

Why should they invest in this rather than anything else they could be doing? Like programmatic blending (rather than the user-heavy image load/store form). And so forth.

Every time you post a response that doesn't address this important question, I will simply repeat the question.

LuisAK
08-25-2013, 03:11 AM
Because I enjoy feeding idiocy on this forum:

OK, when the arguments run out, you continue with insults.



Drawing a picture is not evidence. Anyone can draw a picture of an algorithm. That's not evidence that this algorithm can be implemented in hardware or implemented efficiently in hardware.


Where should the problem be?
You draw some lines (something the hardware can already do),
and then you fill the pixels inside with color (something the hardware can already do with triangles).

LuisAK
08-25-2013, 04:18 AM
It could be done efficiently:
[attachment 1120]
Also in the "hardware", because only a very little trick is involved.

mhagain
08-25-2013, 04:48 AM
It could be done efficiently:
[attachment 1120]
Also in the "hardware", because only a very little trick is involved.

Drawing flat-shaded polygons that are fully on-screen is nice. You're still completely failing to answer these questions:



How do you interpolate texcoords, colours and other vertex attribs across a concave polygon?
How do you split a concave polygon that's partially clipped?
How do you deal with cases where vertices of the polygon are not all on the same plane?


These are utterly trivial with triangles. If you're claiming that concave polygons are just as easy, then you must have answers to these questions too. Or did they even occur to you?

LuisAK
08-25-2013, 05:35 AM
How do you interpolate texcoords, colours and other vertex attribs across a concave polygon?

You can interpolate them by the surrounding rectangle.



How do you split a concave polygon that's partially clipped?

That's what the stencil buffer exists for.



How do you deal with cases where vertices of the polygon are not all on the same plane?

When they are projected to the screen they are all flat, but normally they are on a plane anyway.



These are utterly trivial with triangles. If you're claiming that concave polygons are just as easy, then you must have answers to these questions too. Or did they even occur to you?

What's that?
[attachment 1121]

mhagain
08-25-2013, 06:26 AM
You can interpolate them by the surrounding rectangle.

And the calculations will be wrong.


That's what the stencil buffer exists for.

And what if the programmer is already using the stencil buffer for something else? Besides which, do you even know where the stencil test runs in the pipeline relative to clipping and polygon division? Or that the stencil test is dependent on clipping and polygon division? You really haven't thought this through at all, have you?

You're now moving well away from "direct support" and towards "crutched up by a load of fragile hacks that are going to break if you even look at them funny".

Alfonse Reinheart
08-25-2013, 08:01 AM
As promised:

WHY SHOULD IHVs IMPLEMENT THIS?!

LuisAK
08-25-2013, 08:26 AM
As promised:
WHY SHOULD IHVs IMPLEMENT THIS?!
'cause it's cool :biggrin-new:

LuisAK
08-26-2013, 12:50 AM
GL_LINE_LOOP can be rendered very fast:
http://wiki.delphigl.com/index.php/glBegin
[attachment 1122]
Where should the problem be in filling the inside of these areas with pixels just as
fast?
Ridiculous that it isn't possible!

thokra
08-26-2013, 03:48 AM
I move that a mod or admin close this thread. To me, the already next-to-zero value of this thread went straight to zero with "'cause it's cool".

LuisAK
08-26-2013, 11:11 AM
Buffers in OpenGL:
- Color buffers: front-left, front-right, back-left, back-right (and possibly others)
- Depth buffer (z-buffer)
- Stencil buffer
- Accumulation buffer

I suggest a new one:
- Polygon rendering buffer

That would be cool.

Nowhere-01
08-26-2013, 11:19 AM
I hereby announce that the next cosmic velocity of idiocy has just been achieved in this thread.

mhagain
08-26-2013, 11:33 AM
No, it's actually a great idea. You see, there are little magic elves that live inside your graphics card, and sometimes they can be a bit stubborn, but if you tell them "because it's cool" they get motivated and do some work for you. So, with a polygon rendering buffer they can take concave polygons, sprinkle magic elf dust all over them, and thereby make those concave polygons natively supported.

Or something.

LuisAK
08-26-2013, 11:53 AM
And here you can see how it works:
left side: polygon buffer (a construction aid that marks crossing points, endpoints and the individual lines),
right side: frame buffer/color buffer with the rendered polygon
[attachment 1123]
The polygon buffer is cleared in the same loop in which the frame buffer is filled,
so the next polygon can be rendered.
cool :cool:
http://www.opengl.org/discussion_boards/attachment.php?attachmentid=1123&d=1377539509

LuisAK
08-26-2013, 05:39 PM
Here I found an interesting discussion from Thursday, July 27, 2006;
a guy suggested using the stencil buffer for
hardware-accelerated polygon rendering:

".. So in the spirit of writing down about the things I'm working on on a daily basis today comes hardware accelerated rendering of complex polygons. Ignacio Castaņo finally convinced me to this ingenious method so all credit for it should go to him. I've spent last few moments at the office today looking into this method and it's just gorgeous so I'll give a brief overview of it..."


http://zrusin.blogspot.de/2006/07/hardware-accelerated-polygon-rendering.html
http://photos1.blogger.com/blogger/4524/2946/320/rendering1.png

Here is a comment from the linked page:
"The method is awesome, as it (in theory) doesn't involve any kind of client side computations (besides a trivial min/max test) and (again in theory) operates in whole on the hardware. A wonderful sideeffect of all of this is that we avoid robustness issues that tessellation introduces."

Who is Ignacio Castaño?:
https://twitter.com/castano/status/258414324949868545
http://www.google.de/#fp=2764204d11cc5912&q=Ignacio+Casta%C3%B1o+opengl
http://www.linkedin.com/in/castano
http://www.ludicon.com/castano/blog/about/resume/
currently: Senior Programmer at Thekla
formerly: Developer Technology Engineer at NVIDIA
3D Tools and Technology Engineer at NVIDIA
Game Programmer at Oddworld Inhabitants
Graphics Programmer at Relic Entertainment, Inc
Graphics Programmer at Nebula Technologies, SA
Lead Programmer at Crytek GmbH

----------------------------------------------------------------------------------------------------------------------
It's said "the method is awesome" - just as I say it's cool.
----------------------------------------------------------------------------------------------------------------------

If there were a new polygon buffer, it could be combined with the stencil buffer, which could do its native job.
That would be great.

mbentrup
08-26-2013, 11:44 PM
This "polygon buffer" method works great when you draw polygons sequentially, anything deserving the attribute "fast" on a GPU has to work parallelized.

LuisAK
08-27-2013, 02:05 AM
This "polygon buffer" method works great when you draw polygons sequentially, anything deserving the attribute "fast" on a GPU has to work parallelized.
It could be parallelized like existing tasks which belong to other buffers within a GPU.

LuisAK
08-27-2013, 02:09 AM
This example here shows that a polygon buffer could also directly support an optimal font supply within the GPU:
http://zrusin.blogspot.de/2006/07/hardware-accelerated-polygon-rendering.html

And here it's missed as well:
http://stackoverflow.com/questions/2071621/opengl-live-text-rendering

Let's do it!
letīs do it