problem - almost blank display

I just finished a massive change to my 3D simulation/graphics/game engine, and now it doesn’t work. No surprise, I suppose, when thousands if not tens of thousands of lines were changed. However, I’m having difficulty figuring out the problem, and perhaps someone here will recognize the symptoms. Note that the zillions of changes did not alter how the application works; they were consequences of changing a fundamental coding decision… essentially how object identifiers are formulated, and thus how objects are accessed everywhere in the application (hence trivial changes in a zillion places).

Okay, here is what I see.

When I run a test app on the old code, it displays a couple dozen simple objects in the upper half of the display. Each object is a “disk”, “tube” or other simple shape. They display properly, rotate properly (in response to applied torque forces), etc.

When I run the test app on the new code, all I see is a dot in the middle of the display. I set breakpoints and examined the modelviewprojection matrix, and all 16 values are the same in both applications. I have printf statements out the wazoo, all over the place (in both old and new code so I can compare), and everything looks the same in both (other than the fact that variable/vector/matrix addresses are different).

For testing purposes, various keyboard keys move and rotate the camera, which obviously makes the images of the objects on the display move. When I press keys to rotate the camera on the new code, the dot moves. The spot that is dead center ahead at the start takes 26 clicks of the “rotate camera around its vertical axis” key to move that point just off the left or right side of the frame. This happens on both. When I click the key to move the camera forward (towards the objects), it takes 16 clicks on both old and new code before the object at field center vanishes (because the camera passes through the x,y,z position of the object at the center of the field).

As I type this, the problem seems obvious. Somehow all vertices are at 0,0,0 in object local coordinates. Except it doesn’t seem so based on print statement output (and besides, the code that generates the indices and vertices for each shape has not been changed). I’ll triple check that again, but it doesn’t seem likely, even though it partially matches the symptoms. And it doesn’t explain why two dozen different objects moved to two dozen different places in world coordinates would display as a single dot. The two dozen objects should at least display as two dozen dots, even if all their vertices are at 0,0,0 in their local coordinates.

Seems like this should be simple. But… that just means I’m missing something very, stupidly simple.

My print statement before glDrawElements() indicates each VAO, IBO and VBO contains the same number (and number of bytes) of indices/elements and vertices in both old and new code.

Any ideas based on these symptoms?

Thanks!

That’s one of the reasons I never do this. Find ways to test incrementally. If you do have to make a lot of changes, build them up and test incrementally. It’s a whole lot easier to test 5 components linearly, each with testing cost N (5*N total), than to throw them all together and end up testing something with N^5 complexity (each component is an unknown, and all the interactions between them are unknowns, because it’s all essentially unvalidated).

Somehow all vertices are at 0,0,0 in object local coordinates. Except it doesn’t seem so based on print statement output … Any ideas based on these symptoms?

Sure, lots. But without any more grounding in what your code is doing, they’re probably useless.

For instance, could it be your modeling transforms either aren’t being properly: 1) computed, 2) composited with other transforms, 3) sent to the shader, or 4) applied in the shader?

Failing any eureka moments, I’d suggest breaking down your code and testing your changed components incrementally from the bottom up. Another option you might consider is to bring up your program (before and after changes) in a GL call trace viewer/visualizer where you can actually take a look at the GL state active during your draw calls, watch your scene being rendered, and compare state between the two versions. That might help you to spot what’s going wrong.

–> use github or so to save between changes you make (or just 7zip)
–> wrap small blocks of code into classes, make use of .h AND .cpp files
–> if you’re not sure, make an “interface class” (only virtual methods) and code some variations; decide later what to use / keep

basically i structure my stuff into 3 main classes:
–> input = interact with everything
–> physics = animate everything
–> graphics = draw everything

scene graph structures are very useful, your scene can be wrapped into 1 node (root)
each node has several “components”, like transformation, meshes to draw, etc whatever you want (easier than deep class hierarchies)

define structures that describe what to draw, so that you can decouple the actual OpenGL rendering from building the render data. keep classes as small as possible …

Thanks for the comments, DarkPhoton and john_conner.

The engine still isn’t working completely right, but at least displays objects where they belong in the window.

Apparently the problem had something to do with specifying or mapping locations with attributes or uniforms in the engine.

The shaders contain layout and location specifications for every vertex attribute and every uniform… so the confusion was definitely not in the shaders. Somehow the engine wasn’t dealing with something correctly. After I posted this message, I putzed around with the code that queries this information from OpenGL and stores it in the engine (in the engine program object structure and the engine batch object structure), and part of that putzing was to move those sections of code around a bit.

I’m not sure exactly which of my changes caused the objects to display in the window, but this does raise a point… namely:

I HAVE NEVER FULLY UNDERSTOOD HOW THIS STUFF WORKS.

I refer-to:

#1: the relationship of vertex-attributes to the VAO.
#2: the relationship of vertex-attributes to the OpenGL program object.
#3: the relationship of uniform values (scalars, vectors, matrices) to the OpenGL program object.
#4: exactly when certain actions need to occur (actions relating to making VAOs, IBOs, VBOs active and ditto for uniforms).

Rather than pollute this thread with a detailed discussion of these topics, I will create a new thread about these related issues. These issues probably qualify as somewhat more advanced so I will write and post in the “OpenGL coding: advanced” section.

PS: I agree with both of you completely. This is one of those cases where I decided to change something massively fundamental to the engine… how objects (of all kinds) are identified. Given the engine structure, this means the code to access object information had to change for EVERY kind of object and EVERYWHERE in the program that did anything with any kind of object, including: gpu objects, display objects, window objects, shader objects, program objects, image objects (textures, bumpmaps, heightmaps, etc), [displayable] shape objects (including cameras and lights), socket objects, sound objects, video objects, etc. You get the picture… everything.

Previously I had one uniform set of identifiers that were integer values from 1 to 2^63… much like OpenGL identifiers, except every object identifier was unique (which is not true of OpenGL identifiers). This worked great in many ways, but I ran into too many situations where inefficiencies crop up when the engine supports large numbers of “shape objects” (which just means objects that get drawn… though cameras and lights are also shape objects that may have no indices/elements in an IBO and therefore may not be rendered).

Therefore, in the past, the code could take any objid (generic object identifier) and grab the structure that contains information about the object with this code:

ig_object* object = igstate.object[objid]; // the object identifier is the index into the array of object structures

The first 64-bytes of every kind of object structure is identical, so code would typically do something like the following next:

if (object->kind != IG_KIND_SHAPE) { return (ERROR_KIND_INVALID); } // because this function expects a “shape” object.
ig_shape* shape = (ig_shape*) object; // cast to shape structure

The rest of the function would process the shape object by reading, writing and processing variables in the ig_shape structure.

The key point is, objid was an index into that single array of objects of all kinds.

The fundamental change in objid broke the s64 objid variable into four fields:

bits 00 to 31 == index ::: this is an index into the array of structures for that kind of object, and only that kind of object
bits 32 to 39 == kind ::: this specifies which kind of object this is
bits 40 to 47 == extra ::: reserved
bits 48 to 63 == owner ::: owner/creator of this object (for network distributed applications like networked many-player games)
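To make that concrete, here is a minimal sketch of how such an objid can be packed and unpacked (the helper names are just for illustration, not my actual engine code):

#include <stdint.h>

typedef int64_t  s64;     // same fixed-size integer names used above
typedef uint64_t u64;

static inline s64 ig_objid_make (u64 index, u64 kind, u64 extra, u64 owner) {
    return (s64)((index & 0xFFFFFFFFu)        |   // bits 00 to 31 : index into the kind-specific array
                ((kind  & 0xFFu)       << 32) |   // bits 32 to 39 : kind of object
                ((extra & 0xFFu)       << 40) |   // bits 40 to 47 : reserved
                ((owner & 0xFFFFu)     << 48));   // bits 48 to 63 : owner/creator of this object
}

static inline u64 ig_objid_index (s64 objid) { return  (u64)objid        & 0xFFFFFFFFu; }
static inline u64 ig_objid_kind  (s64 objid) { return ((u64)objid >> 32) & 0xFFu;       }
static inline u64 ig_objid_extra (s64 objid) { return ((u64)objid >> 40) & 0xFFu;       }
static inline u64 ig_objid_owner (s64 objid) { return ((u64)objid >> 48) & 0xFFFFu;     }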
The obvious consequences are:

#1: code cannot just index into a single array of structures with objid like before (due to the higher 3 fields in objid).
#2: kind can be determined from the objid itself (but the kind field in all structures is also confirmation of this).
#3: now many object arrays exist for each kind of object, not just one igstate.object[] array of structures.
#4: code now accesses each object structure directly from its kind-specific igstate.kindname[] array.
#5: the low 32-bits of the objid can be an index into specialty arrays.

One (of many) reasons I made this change is so the code can create an array of transformation matrices (one per shape object), transfer them to the GPU, then have the shaders access the transformation matrix via a 16-bit integer field in each vertex (it would be nice if it was a 32-bit field, but there aren’t enough bits in a cache-efficient 64-byte vertex).
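To illustrate the constraint (the field names and sizes below are my own guesses for the sake of example, not my actual vertex layout), a cache-efficient 64-byte vertex only has a couple of bytes left over for that matrix index:

#include <stdint.h>

typedef struct ig_vertex {          // hypothetical 64-byte vertex, purely for illustration
    float    position[3];           // 12 bytes : x, y, z in object-local coordinates
    float    zenith[3];             // 12 bytes : surface "up" vector
    float    north[3];              // 12 bytes : surface "north" vector
    float    east[3];               // 12 bytes : surface "east" vector
    float    texcoord[2];           //  8 bytes : u, v
    uint32_t color;                 //  4 bytes : packed RGBA
    uint16_t matrix_index;          //  2 bytes : index into the GPU-side array of transformation matrices
    uint16_t flags;                 //  2 bytes : whatever bits remain
} ig_vertex;                        // 48 + 8 + 4 + 2 + 2 = 64 bytes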

Anyway… you probably didn’t want to read all that, but there you go.

Thanks again for your comments. Look for my questions about VAOs, program objects, layout, locations, etc.

[QUOTE=bootstrap;1288304]I HAVE NEVER FULLY UNDERSTOOD HOW THIS STUFF WORKS.

I refer-to:

#1: the relationship of vertex-attributes to the VAO.
#2: the relationship of vertex-attributes to the OpenGL program object.
#3: the relationship of uniform values (scalars, vectors, matrices) to the OpenGL program object.
#4: exactly when certain actions need to occur (actions relating to making VAOs, IBOs, VBOs active and ditto for uniforms).[/QUOTE]
Frame your thinking in terms of draw calls. It’s all about the draw calls. What state is active when you make them, and what processing happens in them.

#2: the relationship of vertex-attributes to the OpenGL program object.
When you make a draw call, the GPU/driver shovels vertex data from your enabled vertex attributes into the vertex shaders, one value for every vertex shader execution.

#1: the relationship of vertex-attributes to the VAO.
To issue a draw call, you need to set up the bindings and enables for a number of vertex attributes (up to 16 or so of them). Worst case, that’s a lot of state setup calls to GL – just to issue one draw call!

VAOs let you just make one GL call to do all that setup rather than invoking the 32-48 (worst case) GL calls you might otherwise need to do.

Think of VAOs like writing a function in C/C++. When you need to do A,B,C,D,E,F,G,H,… over and over, you can be a bad programmer and just copy/paste those steps around your code. OR you can just write a function f() that does A,B,C,D,E,F,G,H,… for you, and then when you want those steps executed just call f(). That’s exactly what a VAO is to vertex attribute setup in GL.
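For instance, a rough sketch (placeholder buffer names and offsets; assuming one interleaved VBO with a 64-byte vertex):

// One-time setup: record all the attribute bindings/enables in a VAO (like writing the function f()).
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);                              // vbo: vertex buffer created earlier
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);                      // ibo: index/element buffer (recorded in the VAO)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 64, (void*)0);   // attribute 0: vec3 at byte offset 0 of the 64-byte vertex
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 64, (void*)12);  // attribute 1: another vec3 at byte offset 12
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glBindVertexArray(0);

// Per draw: one call replaces all of the above (like calling f()).
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);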


#3: the relationship of uniform values (scalars, vectors, matrices) to the OpenGL program object.
You know what vertex-attributes are: within a single draw call, different values are piped in for every vertex shader execution. Uniforms are like that except that the “same” value is piped in for every vertex shader execution (same for fragment shaders, geometry shaders, …any shaders). They’re “uniform” (the same) for all executions within that draw call.
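For instance (a minimal sketch; the uniform name and matrix pointer are placeholders): a uniform’s value is stored in the program object, and you set it while that program is bound:

glUseProgram(program);                                                  // uniform values live in the program object
GLint mvp_loc = glGetUniformLocation(program, "modelviewprojection");   // placeholder uniform name
glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp_matrix);                   // the same 16 values feed every shader execution in the next draw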


#4: exactly when certain actions need to occur (actions relating to making VAOs, IBOs, VBOs active and ditto for uniforms).

Whenever you need them. But always in order to set up GL state for executing draw calls.
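For instance, a bare-bones per-draw sequence might look like this (placeholder names; the only point is that all of this state is in place before the draw call executes):

glUseProgram(program);                                           // which program object runs for this draw
glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp_matrix);            // uniforms for the bound program
glBindTextureUnit(0, texture);                                   // any textures the shaders sample
glBindVertexArray(vao);                                          // vertex attribute setup + IBO recorded in the VAO
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);   // the draw call consumes all of the above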

Thanks for the help. Part of the reason for my frustration is… the engine just stopped working (only drew one point in the center of the window). Though my changes were extensive, no code related to VAOs or drawing was changed. The only change was a slight difference in the order functions were executed (because I also made program objects full objects, which previously I put off until “someday” and finally implemented). Note: I don’t mean C++ objects, I mean “objects” in the sense that my C program operates in terms of objects of various kinds… much like OpenGL does, albeit somewhat differently.

Anyway, the fact that a slight difference in execution order made drawing stop working was a surprise to me, and demonstrated that I didn’t fully understand what needs to happen before what else for everything to function [properly].

Thanks to you and Alfonse (primarily), I understand better now. Maybe not 100%, but better. Of course, now that I better understand the whole VAO situation (in the old/normal approach), Alfonse points out that the “new” approach with glVertexArrayAttribFormat() (and its I/L variants), glVertexArrayVertexBuffer() and glVertexArrayAttribBinding() is better given my goals. That threw me back into confusion again, but after struggling to understand for hours in my dreams last night, I think I sorta “get it” now. We’ll see when I modify my code and click “run”.
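For reference (and so I don’t forget), here’s roughly what that separate-format-plus-binding setup looks like with the GL 4.5 direct-state-access calls… a sketch with placeholder buffer names, attribute locations and offsets (assuming a 64-byte interleaved vertex), not my actual engine code:

GLuint vao;
glCreateVertexArrays(1, &vao);
glVertexArrayVertexBuffer(vao, 0, vbo, 0, 64);                 // binding slot 0: vbo, starting byte offset 0, 64-byte stride
glVertexArrayElementBuffer(vao, ibo);                          // the IBO that glDrawElements will read
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);   // attribute 0: vec3 at byte offset 0 within each vertex
glVertexArrayAttribBinding(vao, 0, 0);                         // attribute 0 pulls from binding slot 0
glEnableVertexArrayAttrib(vao, 1);
glVertexArrayAttribFormat(vao, 1, 3, GL_FLOAT, GL_FALSE, 12);  // attribute 1: vec3 at byte offset 12
glVertexArrayAttribBinding(vao, 1, 0);
// at draw time: glBindVertexArray(vao) and glDrawElements(...) as before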

Honestly, there’s something seriously wrong with the way pretty much everyone describes OpenGL. Even the sources I like the best are confusing to me. Probably this is true because pretty much everyone picks up the approach from the specification documents… though nobody explains as terribly as the specification documents!

What would I do?

Well, I think you hit upon the correct approach in your message when you said, “it’s all about the draw calls” and “think in terms of draw calls”. For the VAO and related topics, that’s the correct approach. What I would probably do (not having thought this through carefully), is to say something like:

When an application wants to draw, the software driver and/or GPU hardware need to know the following to perform the draw operation:

  • what primitives to draw ::: points, lines or triangles (surfaces)
  • which IBO holds the indices/elements
  • datatype of each index/element (u08, u16, u32)
  • which index/element is the first to process (byte offset to the first)
  • how many indices/elements to process (and thereby how many points, lines, triangles to draw)
  • within each vertex…
    — where is each attribute in the vertex (byte offset)
    — what datatype is each attribute (s08, u08, s16, u16, s32, u32, s64, u64, f16, f32, f64)
    — how many variables of the specified datatype in each attribute (1,2,3,4)
    — what datatype to deliver the attribute to the GPU as (convert to f16, f32, f64 or not)
    — which 16-byte location (or locations, for matrices) to deliver each attribute to
    — which VBO to grab the vertex attributes from (or which VBOs if you’re a SoA fan)
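As a rough sketch of how those items map onto the classic GL calls (placeholder values; assuming a VAO is already bound, an interleaved 64-byte vertex, and 32-bit indices):

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);                      // which IBO holds the indices/elements
glBindBuffer(GL_ARRAY_BUFFER, vbo);                              // which VBO to grab the vertex attributes from
glVertexAttribPointer(                                           // describes one attribute within each vertex:
    0,                  // which 16-byte location (attribute index) to deliver it to
    3,                  // how many variables of the datatype (1,2,3,4)
    GL_FLOAT,           // what datatype the attribute is in the VBO
    GL_FALSE,           // whether to normalize when converting to float
    64,                 // vertex stride in bytes
    (void*)0);          // where this attribute is within the vertex (byte offset)
glEnableVertexAttribArray(0);
glDrawElements(
    GL_TRIANGLES,       // what primitives to draw
    index_count,        // how many indices/elements to process
    GL_UNSIGNED_INT,    // datatype of each index/element (u32 here)
    (void*)0);          // byte offset to the first index/element to process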

I probably forgot something, but you get the point. Truth is, the list above should probably also contain items like “the program object to perform the draw” and more (probably everything that impacts the draw should be mentioned, even if only to say “bind the one you want [and say where to bind it] to make it active/accessed/operational during the draw”).

Then explain how to specify all these items so the draw works. For example:

#1: The type of primitive to be drawn (points, lines, triangles) is specified by the “mode” argument of the glDrawElements() function, not by OpenGL state. Then refer to the section of the document that explains the different ways lines and triangles can be constructed.

#2: The number of indices processed (and thus the number of points, lines or triangles rendered) is specified by the “count” argument of the glDrawElements() function. A count of 6 will draw 6 points, 3 to 6 lines (depending on the specific line drawing mode specified in “mode”), or 2 to 4 triangles (depending on the specific triangle drawing mode specified in “mode”). See section yada-yada-yada for details.

#3: Most of the information required to draw is specified by the contents of the VAO that is currently made active by calling glBindVertexArray(vao). And this is where to go into great detail to explain what information needs to be inserted into the VAO, why the draw functions need that information, and how each piece impacts the draw operation. In other words, a page or several of text about VAO state follows.


Something that still isn’t clear to me… and maybe doesn’t even have a single answer for every way of working with OpenGL… is whether telling the draw functions how to send vertex attributes (and also the contents of uniform blocks and shader storage blocks) is fully independent of the program object or not (meaning, independent of what the program object expects to get based upon the layout location and binding syntax in the shaders).

I may be in the minority (not sure), but I’d prefer the whole VAO and draw function part be 100% independent of the program object (and what it expects). I mean, it is fine, great, wonderful and totally desirable to have (and call when desired) functions like glGetAttribLocation() and the equivalents for UBO and SSBO elements, for debugging and for programmers who always like to make sure “nothing screwy happened”. But as far as I’m concerned, if the draw function does what is specified in the VAO and its arguments, then it has done its job properly. Likewise, if the shaders grab attributes where specified by layout location and binding syntax, the shaders have done their jobs properly too.

If some moron writes OpenGL code that puts the position in 16-byte location #0 and the normal in 16-byte location #1, but the shader decides to call location #0 “normal” and location #1 “position”… and proceeds to write sensible code based upon those names, well, that is not my problem, that is not the problem of the VAO, that is not the problem of the OpenGL programmer, that is not the problem of the draw function, that is not the problem of GLSL, that is not the problem of the GPU… that is abject carelessness on the part of the shader writer. Of course it may be that the OpenGL programmer made this mistake, not the shader programmer, in which case the careless moron is someone else.

It is great to have those query functions available for debugging… to help the programmers figure out who was the careless moron and fix the problem quickly. But there is no reason to couple the VAO and draw functions with the shaders and program objects. At least in my opinion.

Just to add a bit of color to my opinion, note that my vertices contain “zenith”, “north” and “east” vectors… not “normal”, “tangent” and “bi-tangent” vectors — even though they more-or-less mean the same thing. You might ask why. Does bootstrap just like to be a jerk? Just like to be different? No, not really. In fact, I hate the proliferation of vague and imprecise language! Which, in fact, is why I chose “zenith”, “north”, “east” for my vertex vector names. Fact is, I can easily, naturally, intuitively imagine looking down on any vertex on any surface and seeing the vector pointing towards the “zenith” from that vertex, another vector pointing “north” along the surface, and another vector pointing “east” along the surface. And you know what? I immediately know I’m working with a right-handed coordinate system, because otherwise one of those vectors would be pointing the opposite direction. Try to imagine such a clear and precise visualization based upon “normal”, “tangent”, “bitangent”. Good luck with that. Plus, you cannot know whether you are working with a right-handed or left-handed coordinate system.

And so, if someone wants to write a shader that contains the terms “normal”, “tangent”, “bitangent” for the vectors… he or she will have no problem understanding exactly what the “zenith”, “north”, “east” vectors in the engine mean… and how to map his names to the engine names. Likewise, if I wanted to write a shader to work with an engine that had “normal”, “tangent”, “bitangent” vectors, I could figure that out. Not quite as easily, of course, due to the question of handedness, and also due to the fact that nothing absolutely forces the “tangent” vector to point “north” (could be north, south, east or west) and nothing absolutely forces the “bitangent” vector to point 90-degrees clockwise or counterclockwise from the “normal” vector.

Anyway, too much babble I suppose. I appreciate that you started your explanation from a wise, sensible, practical point of view. But honestly, that is quite rare for OpenGL documentation and also for OpenGL conversations (probably because people tend to talk in ways they learned from OpenGL documentation or books).

PS: I know very well that writing well is extremely difficult. That’s one reason why I don’t write — not counting software applications… or screenplays. But I truly believe that OpenGL has a much worse reputation than it should have due to the ineffective way OpenGL is described in writing. Maybe (not sure) the specifications need to be written the way they are. But nothing else about OpenGL needs to be written so non-intuitively.

Thanks for the help.

Oh, one last comment. If someone ever does write a new OpenGL book (and boy should they ever!!!), they could make OpenGL vastly, vastly, vastly easier to understand if every non-trivial discussion included one to a few drawings to present the elements and their connections and encapsulations in a fully visual way. The human brain is extremely good at comprehending visual configurations and representations… vastly better than abstract lingo, none of which presents the entire structure or configuration in a single glance. If someone were to write such a book, and present topics the way you approached this one (from an operational or functional perspective), OpenGL would be vastly easier to consume and comprehend. Oh, and leave out OpenGL history, or put it in an appendix somewhere (better yet, in some other book). Just present the latest and greatest and most powerful and lowest driver-overhead approaches… and leave out everything else. Let them read other books to accomplish things “the bad old lame ways”.

Short answer: yes, they can be.

Longer answer: Specification is (can be) independent. It’s only when you add in utility that there’s any dependence.

Your vertex attribute setup/enables are all the driver needs to shovel those attributes toward your shaders. However, whether your shader actually gets them is determined by whether they match (or are a superset of) the shader’s expectations. For instance, positions are shoveled in on vertex attribute 0 AND the shader is expecting to receive them on vertex attribute 0. Similarly for other vertex attributes. If you have an implicit, agreed-upon interface, then you don’t need code hooking up and verifying vertex attributes against program objects in the renderer. They just match, by design. Typically, this is all pre-baked into the assets you load.
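For instance, one way to express that implicit agreement is a single set of attribute-location constants that both your VAO setup code and your shader sources follow (hypothetical names, just to illustrate the idea):

// Shared convention: the engine and the shaders agree on these locations by design.
enum { ATTRIB_POSITION = 0, ATTRIB_ZENITH = 1, ATTRIB_NORTH = 2, ATTRIB_EAST = 3 };

// VAO setup side uses the constants...
glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, 64, (void*)0);

// ...and the vertex shader declares the same numbers:
//     layout(location = 0) in vec3 position;
//     layout(location = 1) in vec3 zenith;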

Same here. Except that:

  1. With uniforms/SSBOs, your shader tends to pull values into the shader rather than have the driver/GPU push them in.
  2. As uniforms/SSBOs, these will likely be managed at a higher level than the batch state in your engine.

Maybe because the normal vector is mathematically a normal of the surface at that vertex position. And maybe because the tangent vector is a tangent of the surface curve at that vertex position?
Zenith, north and east mean absolutely nothing, at both the mathematical and physical levels. Where is the north? North toward which vector direction?

I really advise you to stick with the mathematical and physical meanings when writing your 3D engine (at least for the public API, if you want to keep the internals an opaque black box). Unless you would like your engine’s users to tear their hair out using your software. But that wouldn’t be a good idea.

[QUOTE=Silence;1288373]Maybe because the normal vector is mathematically a normal of the surface at that vertex position. And maybe because the tangent vector is a tangent of the surface curve at that vertex position?
Zenith, north and east mean absolutely nothing, at both the mathematical and physical levels. Where is the north? North toward which vector direction?

I really advise you to stick with the mathematical and physical meanings when writing your 3D engine (at least for the public API, if you want to keep the internals an opaque black box). Unless you would like your engine’s users to tear their hair out using your software. But that wouldn’t be a good idea.[/QUOTE]

Actually, you didn’t quite think that all the way through. True enough, what constitutes “north” on a sphere or other shape object can be chosen at random (though if the object has a natural rotational axis (like a planet), one would almost certainly choose one of the rotational poles to be “north”). However, the combination of “north” and “east” says something, while the combination of “tangent” and “bitangent” says nothing (other than that both tend to be tangent to the surface for most purposes). You can’t tell from “normal”, “tangent” and “bitangent” what kind of coordinate system you have (right-handed or left-handed). You can tell from “zenith”, “north”, “east” that you are working with a right-handed coordinate system (otherwise you’d be working with “zenith”, “north”, “west” or “zenith”, “south”, “east”).

The other difference between “tangent”, “bitangent” and “north”, “east” is the fact that on some shapes (anything remotely like a sphere or ellipsoid, especially ones that tend to rotate around some axis), a reasonable person knows which direction “north” points (and thus “east” too). At worst he knows it is one of two directions, but a quick glance at positives and negatives will infer that the + direction points “north” and the - direction points “south” (along that z-axis).

The same is true for other shapes. And BTW, in graphics applications, a “tangent vector” is not just “any” of the infinite number of possible tangent vectors at each point on a surface… unless that point is the only point [of interest] on the surface. All the points on all surfaces of a physical object need to have a consistent, coordinated sense of the surface coordinate system, otherwise most graphics won’t work (certainly texturemaps, normalmaps, conemaps, specularmaps and othermaps won’t work). I thought you would have noticed that part of the point of “north” and “east” as surface vector names is precisely that they give a clue as to which of the infinite possible tangent directions each refers-to.

Anyway, obviously my clues don’t work for everyone, so I’ll be sure to be very clear and explicit in the documentation. Another factoid one would learn from the documentation is… the fact that the z-axis of object local coordinates is the axis of symmetry for shapes that have [just] one, and absent that the [natural, nominal, common, intuitive] axis of rotation. Of course that’s different than the surface coordinate system, but there is a natural, nominal, common, intuitive relationship that most people would recognize, guess or infer.

And BTW, “zenith”, “north”, “east” have specific physical and mathematical meaning, and in fact a more specific meaning than “normal”, “tangent”, “bitangent” for the reasons I stated. To some degree both are more specific than “x, y, z” vectors (where z is short for “zenith” of course… hahaha).

If the engine becomes available to the public, it will almost certainly be open-source. While I’ve been a self-employed scientist, engineer, inventor, product-developer for a living since before I finished school, an engine like this is not my idea of a “product” to make money on (except maybe by providing technical support and such… hahaha). But also, the engine will come with a “theory of operation” document that explains a great many aspects of the engine… including the vector naming convention. Such a document will be unavoidable, since the engine contains several uncommon features (like procedurally generated content in general, and how to construct complex procedurally generated objects from more basic procedurally generated shapes… including objects that have many levels of hierarchical articulation, which is quite easy and natural with this engine).

PS: If you don’t like the names I chose, that’s fine; we all have our own personal preferences. Just don’t confuse personal preferences with reality or fact.


Yeah… yikes… loading assets! That tends to have a different meaning or significance in an engine like mine that does (or will do) most everything with “procedurally generated content” approaches. While the engine can load artist-generated objects (just one format so far), and the engine can save and load “procedurally generated objects” that it created, the nominal approach is supposed to be… content is created procedurally (by code in the application that calls functions in the engine).

As a nod to artists who are truly “allergic” to programming, the API functions are designed so one can “specify” objects and object assembly with intuitive text that maps directly to object create, configure, manipulate functions. And so, “artists” will be able to specify “procedurally generated content” too… without writing code (engine functions will read the text and call the appropriate functions with the specified arguments to do what the artist desired).

OTOH, that part of the engine isn’t implemented yet. This is part of the mechanism by which remote applications (like another “player” in a multi-player game) can inform other engines on other machines elsewhere on the internet (or LAN) of newly created or destroyed objects as well as forces applied to them and pretty much everything else the engine can do too. I’m sure I’ll discover problematic situations when merging assets from all over the universe into single running instances of games or simulations. Theoretically everything was envisioned in the original design, but we know the realistic answer to that. As in “yeah, right” and “no such luck”.

As for UBOs and SSBOs… yes, they are very unlikely to change from batch to batch. In fact, so far the plan is that they are permanently fixed and specified. Of course, what is specified is a superset of everything needed by every program object. So probably no displayable object or program object will access every UBO or SSBO.

My next two pieces of work are to rework the VAO code I have, plus implement bindless textures for “images” (texturemaps, surfacemaps AKA normalmaps, conemaps, specularmaps, heightmaps, displacementmaps, othermaps). I’m hoping I can create two-element vectors of u64 texture handles so I can pack the maximum number of texture handles into the smallest UBO (with no gaps). However, I’m not even sure a two-element u64 vector has been defined in OpenGL or GLSL yet. I hate to create half-empty buffers, which may otherwise be what I’m faced with to pass in hundreds or thousands of u64 image/texture handles (in an array of u64 handle values).
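For what it’s worth, here is the rough shape of what I have in mind (a sketch based on my reading of ARB_bindless_texture, with placeholder names and counts… not working engine code). The C side packs the 64-bit handles tightly; the GLSL side can then view each 16-byte slot as two handles (via uvec4 and the sampler2D(uvec2) constructor, or via u64vec2 if GL_ARB_gpu_shader_int64 is available):

#define IG_MAX_IMAGES 1024                                       // placeholder count
GLuint64 handles[IG_MAX_IMAGES];                                 // tightly packed: two handles per 16-byte UBO slot, no gaps
for (int i = 0; i < image_count; i++) {
    handles[i] = glGetTextureHandleARB(textures[i]);             // ARB_bindless_texture handle for each image
    glMakeTextureHandleResidentARB(handles[i]);                  // handle must be resident before any shader uses it
}
GLuint ubo;
glCreateBuffers(1, &ubo);
glNamedBufferData(ubo, sizeof(handles), handles, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);                     // binding point 0 is a placeholder
// GLSL side (one option):  layout(std140, binding = 0) uniform ig_images { uvec4 packed_handles[IG_MAX_IMAGES/2]; };
// sample with:             sampler2D(packed_handles[i >> 1].xy)  or  .zw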

Then let us deal with “reality or fact”.

When dealing with mapping of textures to a surface, “tangent” and “bitangent” have established meanings. There is wide-ranging literature in the CG community which uses these terms. These are terms in common usage among graphics programmers. These terms are very much not “vague and imprecise language”; not to the people that use this stuff on a daily basis.

None of that is true of “north” and “east” with respect to texture mapping. Computer graphics researchers do not commonly use such terms, nor are they in common use among graphics programmers.

Established conventions very much are “reality or fact”. Conventions exist to make it easier for us to talk about complex topics while still being understood by those who need to understand. Choosing your “personal preferences” over established conventions makes it harder for people to communicate with you. Just look here; to get someone to understand what you’re talking about, you had to write a lengthy dissertation explaining what your words meant.

Which kind of defeats the purpose of using those words at all :wink:

If you’re coding for yourself, fine, do whatever you want. But if you expect anyone else to understand you, you have to adhere to the established conventions.

And on a “personal preference” note, I don’t know why you would care if a particular tangent space mapping was right handed or left handed. It doesn’t change the math one bit. Whether tangent cross bitangent goes into the object or out of it doesn’t change what those vectors mean or the math operations you do to convert between coordinate systems.

Mathematics is the lingua franca of our profession.

Yeah… yikes… loading assets! That tends to have a different meaning or significance in an engine like mine that does (or will do) most everything with “procedurally generated content” approaches.

Doing it procedurally makes things easier, not harder, since it’s all under your control.