PDA

View Full Version : index buffer still impractical for unique texture pts and normals?



MrUNOwen
08-09-2017, 09:52 PM
So, the part of me that cringes at wasted data, doesn't want to accept that I have to send the same vertex multiple times if multiple faces use them. Initially I thought index buffers would be the solution, but I now see all attributes are shared. So... is there still no practical way to get around this, like 3 buffer indexes (vertex, normal, texture)? If there is a way around (that is practical) what versions of OpenGL am I limited to?

Silence
08-09-2017, 10:52 PM
There aren't any other ways.

Are you really that concerned about memory? What exactly are you trying to do?

mhagain
08-10-2017, 12:01 AM
You also need to consider that indexing allows you to reduce draw calls by concatenating primitives, which is almost certainly a net performance gain despite the increased memory usage.

GClements
08-10-2017, 02:35 AM
So, the part of me that cringes at wasted data, doesn't want to accept that I have to send the same vertex multiple times if multiple faces use them. Initially I thought index buffers would be the solution, but I now see all attributes are shared. So... is there still no practical way to get around this, like 3 buffer indexes (vertex, normal, texture)? If there is a way around (that is practical) what versions of OpenGL am I limited to?
For a mesh which is an approximation to a smooth surface, vertices which share positions will usually also share texture coordinates and normals. Only at sharp edges will vertices share positions but have different normals, and only at texture seams will vertices share positions but have different texture coordinates. As the resolution of the mesh increases, the number of vertices increases quadratically but the number of vertices on sharp edges and texture seams only increases linearly, so the latter constitute a decreasing proportion of the total. And for low-resolution meshes, the number of vertices is small enough that it doesn't really matter about needing to duplicate vertices.

For meshes which are meant to be rendered as a series of flat faces, the normals will be per-face rather than per-vertex. This means that the corresponding vertex attribute should have the flat qualifier; for each triangle, the normal from only one of the three vertices (the "provoking" vertex) will be used, meaning that you only need a third as many vertices as may initially appear. Also, you don't actually need to use vertex attributes for normals for faceted meshes; you can calculate the face normal in the fragment shader using derivatives.
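The derivative trick mentioned above can be sketched as a fragment shader; `worldPos` here is an assumed varying carrying the interpolated world-space position, not something from the thread:

```glsl
// Fragment-shader sketch: for a flat triangle, the screen-space
// derivatives of the interpolated position span the triangle's plane,
// so their cross product recovers the face normal with no per-vertex
// normal attribute at all. (The sign depends on winding order.)
in vec3 worldPos;
out vec4 fragColor;

void main() {
    vec3 n = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
    fragColor = vec4(n * 0.5 + 0.5, 1.0);  // visualize the normal
}
```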

While it's possible to render an OBJ-style structure (separate indices for each attribute) directly, this is likely to be slower than converting the data, due to the fact that the lookups are performed as dependent fetches in the vertex shader rather than pre-fetched by the hardware.

MrUNOwen
08-10-2017, 04:29 PM
There aren't any other ways.

Are you really that concerned about memory? What exactly are you trying to do?

It just seems wasteful. It's not that I'm doing something taxing on the system. I just don't want to do something outdated and wasteful. If it's still standard practice to have a mesh sent where each triangle is composed of 8 values making up the vertex, normal & tex pt, I'm fine. I'm getting back to OpenGL after not doing it for a while and it just seems like there'd be a more compact way of doing things.

...Just one more thing though. VBO data is held in the GPU memory, it's not having to pipe it over each time I draw the mesh, right?

Silence
08-11-2017, 12:25 AM
It just seems wasteful. It's not that I'm doing something taxing on the system. I just don't want to do something outdated and wasteful. If it's still standard practice to have a mesh sent where each triangle is composed of 8 values making up the vertex, normal & tex pt, I'm fine. I'm getting back to OpenGL after not doing it for a while and it just seems like there'd be a more compact way of doing things.

Well, it is not outdated. As mhagain wrote, indexing gives a performance gain, not only from the reduced number of draw calls, but also because your hardware can use its post-transform (T&L) vertex cache: a vertex shared by several triangles may only need to be transformed once.
You do have other choices if you really want to shrink the attributes. You can store only the normal's x and y and reconstruct z in the shader (this assumes unit-length normals and a known sign for z). You can also use an object-space normal map, which frees you from per-vertex normals entirely (at the cost of a texture, which is also expensive, but which can be reused under some conditions).
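The x/y-only normal trick depends on those two assumptions (unit length, known sign of z). The reconstruction itself is just Pythagoras; shown here as plain C++ mirroring what the shader would compute:

```cpp
#include <cmath>

// Reconstruct the z component of a unit-length normal from x and y.
// Assumes z >= 0, which holds e.g. for tangent-space normals, since
// they always point away from the surface.
double reconstructNormalZ(double x, double y) {
    double zz = 1.0 - x * x - y * y;
    return zz > 0.0 ? std::sqrt(zz) : 0.0;  // clamp rounding error
}
```

For object-space normals the sign assumption does not hold, which is one reason this compression is usually applied to tangent-space normal data.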
Compactness is not the most important thing. What OpenGL (and Direct3D) focus on nowadays is rendering as fast as possible; the game market is probably responsible for that, and it often means compromising on memory consumption.
If you were already around in the days of 'simple' vertex arrays, compiled vertex arrays, or NVIDIA's vertex array range memory allocation, VBOs are essentially the same thing, but well standardized. There are also more functions for how to draw now.


...Just one more thing though. VBO data is held in the GPU memory, it's not having to pipe it over each time I draw the mesh, right?

Right. For some applications you might need to update some or all of the VBO contents every frame (or less often). For that, you generally create the buffer with a usage hint other than GL_STATIC_DRAW, such as GL_DYNAMIC_DRAW (though AFAIK this is only a hint to the driver). You can also map the buffer into your address space and write to it directly.
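As a sketch of that update path (this assumes a current GL context and a loaded GL function pointer set; `newVertexData` and `size` are placeholders, so it is not runnable standalone):

```c
/* Create a buffer with a dynamic usage hint, then refill it each
   frame by mapping it into client memory. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW);

/* Per-frame update: */
void* p = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(p, newVertexData, size);   /* write this frame's vertices */
glUnmapBuffer(GL_ARRAY_BUFFER);
```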