Convert to HLSL clip-space as a last step in vertex shader?

From what I can tell OpenGL uses a -1,+1 clip space (along Z) and Direct3D uses a 0,1 clip space. (People also talk about a flipped Y axis, but as far as I can tell that comes from the window/texture origin conventions rather than clip space itself.)

I have this arrangement where the client app doesn’t know what the underlying vendor API is going to be, so ideally it is up to the shader to deal with the API-agnostic inputs.

Assume OpenGL conventions feeding an HLSL shader. The only real challenge, I think, is the clip space.

I am hoping that this can be wrapped up in the last few lines of the vertex shader, and thought it would not hurt to just ask :whistle:

FYI: For what it’s worth, I’ve always liked OpenGL (column major) matrices on the CPU side, though I know people bemoan them. The real value in my opinion is having the axes and position of the matrix accessible as contiguous memory. I don’t know if that was the original motivation for the arrangement, but if your code needs to be around for a long time I think this is best for everyone on the maintainer side. I’m not as crazy about the -1,1 clip space (at least it’s consistent in all 3 dimensions) but I reckon it would help to just pick a side, and OpenGL is obviously the better side from an openness standpoint.

For what it’s worth, after a little monkeying around I came up with the following technique…

//OpenGL->Direct3D clip-space
    Out.pos.z /= -Out.pos.w;          //to NDC depth (note the sign flip)
    Out.pos.z += 1; Out.pos.z *= 0.5; //remap [-1,1] to [0,1]
    Out.pos.z *= Out.pos.w;           //back to clip space
    return Out;

A different formulation might end up with w being 1; basically doing the perspective divide in the vertex shader. But I do not know if that is safe or not. EDITED: It’s not safe; see A.R.'s comments below. In my tests I also ended up with a divide by 0. I don’t know how likely that is in the real world, or what the end result would be. I am guessing this doesn’t happen in theory, due to clipping at the near plane before the division would normally take place.
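Incidentally the same remap can be written without the division at all, since it is affine in clip space. A minimal division-free sketch (same algebra as above, not extensively tested):

    //Division-free equivalent of the snippet above:
    //z/(-w), then +1, *0.5, *w collapses to (w - z) * 0.5,
    //so there is no divide-by-zero hazard at w == 0.
    Out.pos.z = (Out.pos.w - Out.pos.z) * 0.5;
    //If no depth reversal is intended, the usual GL->D3D
    //remap is (z + w) * 0.5 instead.
    return Out;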

Furthermore, I do realize that this gets computed every time the vertex shader runs. It’s easy to make an argument that the matrices should just be prepared for the shader; but on the other hand it also makes sense not to burden client applications with low-level details. Time will tell, I suppose, what makes sense. But a simple environment variable can always optimize the post-processing away.

I am mainly interested in whether or not this looks correct. I have not done extensive testing. The X axis may be flipped for all I know, but it looks normal enough.

I am also a little concerned about the divide by 0, but not really.

Wouldn’t it just make more sense to have OpenGL- and D3D-specific projection matrices? You know, have different functions to compute projection matrices and call them based on the renderer? After all, it is the projection matrix that defines the particular clip space.
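To illustrate, the whole remap is expressible as a constant fix-up matrix; a minimal sketch in HLSL (GL_TO_D3D_CLIP, modelViewProj, and In.pos are illustrative names, and in practice you would premultiply this into the projection once on the CPU):

    //GL clip -> D3D clip as a matrix: z' = 0.5*z + 0.5*w.
    //Premultiplied into the projection, the per-vertex cost disappears.
    static const float4x4 GL_TO_D3D_CLIP =
    {
        1, 0, 0,   0,
        0, 1, 0,   0,
        0, 0, 0.5, 0.5,
        0, 0, 0,   1,
    };
    Out.pos = mul(GL_TO_D3D_CLIP, mul(modelViewProj, float4(In.pos, 1)));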

Also, doing the perspective divide outside of hardware means that you won’t be getting perspective-correct interpolation of components.
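To make that concrete, here is the variant being warned against (a sketch only, reusing the names from the earlier snippet):

    //The unsafe formulation: perspective divide in the vertex shader.
    Out.pos.xyz /= Out.pos.w;
    Out.pos.w = 1; //the rasterizer now interpolates attributes linearly
                   //in screen space (no perspective correction), and
                   //homogeneous near-plane clipping is defeated; w == 0
                   //also divides by zero here.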

Yeah, you’re right there. I don’t count myself a genius, but the code in the second post I think just transforms between the clip conventions, and as you can see keeps the w component. If I see perspective oddities at some point I will be sure to report back. Right now I am just getting started with this render framework.

As for micromanaging the projection matrices etc., I touched on this above. I realize it would be better. But this is being implemented in a vendor-API-agnostic organization. I realize the only real games in town are probably OpenGL and Direct3D. But the project is intended to be long-term, and the abstraction layer is not concerned with the contents of the shader registers, for instance; the interfaces do not even mention vertex or pixel shaders.

I reckon the post-processing needs will be specified by preprocessor macros. And if the client application can reasonably guess the underlying implementation, it can do the necessary preprocessing; and if the guess is right, get a minor performance boost.



But this is being implemented in a vendor-API-agnostic organization.

I don’t see how that has anything to do with this. Being API-agnostic on the outside does not mean you’re API-agnostic on the inside. Indeed, that’s the whole point of a platform-neutral API: you put all your platform-specific code behind a wall of platform neutrality.

Your outside user should not be making projection matrices by themselves. Or if they are, then you need to modify those matrices before passing them along internally. And if your abstraction is too low-level to do this (that is, you don’t know where the projection matrix is or can’t otherwise identify it), then your abstraction is too low-level to be a proper API-agnostic abstraction. And then that is what you should be fixing.

The point of the middling API is to be pluggable. So the number one point of order for it is to be easy to implement and maintain, as there can easily be many platform targets… every version of every vendor API is a candidate, including remote rendering, diagnostic, and version-compatibility layers.

The API just assembles and runs GPU programs and loads registers. It doesn’t need to know about matrices, because those are shader variables and there can be any number of them, or none. It also manages local (e.g. video memory) buffers and accelerates presentation (SwapBuffers) via a portable OS layer. It’s maintainer-focused and intended to be easy to implement, with as few export (virtual method) requirements as possible.

If you must know :slight_smile:

So the number one point of order for it is to be easy to implement and maintain

So… how does that jibe with having to hack a vertex shader and back-door this conversion between GLSL’s clip space and HLSL’s clip space? Because that’s not easy to implement, and even you haven’t found a correct implementation yet. And let’s not forget that it becomes rather more complex to do if the user is using tessellation shaders or a geometry shader.

“Easy to implement and maintain” is not going to work at the level of abstraction you’re trying to work with. At least, it’s not going to work with “cross-platform from user code”. Because this won’t be the last of the irreconcilable differences between OpenGL and D3D, especially if someone tries to use older GL versions. So either your abstraction is going to leak (i.e., external code will have to do different things based on the platform) or implementing it is going to be a pain.

It’s maintainer-focused and intended to be easy to implement, with as few export (virtual method) requirements as possible.

The number of "virtual method"s is a terrible measure of ease of implementation. What matters is what those methods actually do.

Goodness.

Why would this matter for either of those shader stages, if they are not the same thing?

I don’t consider it a hack. You just advertise the convention in play. Don’t get me wrong: another layer can be implemented to do these things. But that is above and beyond actually interfacing with the vendor APIs, because you can implement this without touching the APIs. Also, the conventions are much broader than the actual APIs. If you are interested I can show you how this all works with some links, but it would be going further off-topic. Maybe PM if you really want to know.

“Easy to implement and maintain” is not going to work at the level of abstraction you’re trying to work with. At least, it’s not going to work with “cross-platform from user code”. Because this won’t be the last of the irreconcilable differences between OpenGL and D3D, especially if someone tries to use older GL versions. So either your abstraction is going to leak (i.e., external code will have to do different things based on the platform) or implementing it is going to be a pain.

Maybe we are talking about two different goals. I can’t really follow this line of questioning anyway, sorry. External code will have to do different things depending on what it wants to do. Some things may be redundant. The idea is just to be able to swap out the vendor API without rewriting the code that touches the abstraction API. If a plugin fails, it just fails; the end user can try another one and file a complaint.

The number of "virtual method"s is a terrible measure of ease of implementation. What matters is what those methods actually do.

Well, in my experience, if there are many, there is a lot more that must be done before a maintainer can start testing things, and that scares them away. It also makes headers a mess to follow and document. If you are making something monolithic, that is fine. But if you are making a specification for a plugin, keeping it simple makes things easier for everyone (including myself).

Just to be clear about something I said a little while back that reads a little confusingly.

In plain English: the “macros” would be set (they are; but you know what I mean) via an API that adds them to the shaders as a #define one way or another. And the “preprocessing” has nothing to do with the shader; that refers to prepping the projection (and/or transform) matrices pre-shader.

So the basic workflow is that the client application does things however it wants to. It may be in control of the shaders or it may not. It adds something like “OPENGL_CLIP” to the list of macros, and then the shader will handle the macro. Probably it will have an include header added to it, and be expected to add a footer to the end of the vertex shader to catch the OPENGL_CLIP. If the shader is GLSL, nothing is done. If it’s HLSL, then a little extra work is done by the shader; no big deal. If the client app can’t live with the work, it can define DIRECT3D_CLIP, rework its projection matrices itself, and hope for the best.
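To sketch what that footer might look like (OPENGL_CLIP is the macro above; everything else here is illustrative):

    //Hypothetical footer, #included at the end of the vertex shader
    #ifdef OPENGL_CLIP
        //Matrices follow OpenGL conventions but we are compiling as HLSL
        //for a D3D target: remap clip-space depth from [-w,w] to [0,w]
        Out.pos.z = (Out.pos.z + Out.pos.w) * 0.5;
    #endif
        return Out;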

The client app might guess D3D because it loaded a plugin called Direct3D or it might guess OpenGL because the plugin was able to assemble an OpenGL program.

To be honest, it looks pretty clear to me that we are moving towards configuring the entire device via text files, with the APIs doing not much more than managing video memory. So this kind of framework works out just fine.

At this stage the spec only does assembly, but at some point it will likely specify a pseudo shader language, most likely based on functional blocks that the plugins can reasonably implement.

The goal is to provide bare-bones tools for everyday people who want to make very presentable, full-fledged video games with a focus on the fundamentals.

I mean I was looking at the D3D11 documentation the other day, and there are just so many interfaces, methods, and constants all over the place that it is bewildering. But video games still look the same as they always have. So you have to imagine that most of it is just cruft.

The goal is to provide bare-bones tools for everyday people who want to make very presentable, full-fledged video games with a focus on the fundamentals.

Then you are really going about this the wrong way. You want to make a real abstraction, not a bunch of macros and other such nonsense.

People who want to make “very presentable, full-fledged video games” are either people with money (and therefore aren’t going to use your tool when they can hire a programmer to write and maintain their engine for them) or indies. And indies aren’t trying to use low-level tools; they’re using higher-level stuff like UDK, Unity3D, XNA, and so forth. They don’t want to deal with the low-level details, because low-level details are taking precious development time away from important things like gameplay and getting the game finished.

So the only people who would use this tool you’re developing are hobbyists. People who noodle around with game code in their spare time, with no real determination to make a better game.

I mean I was looking at the D3D11 documentation the other day, and there are just so many interfaces, methods, and constants all over the place that it is bewildering. But video games still look the same as they always have. So you have to imagine that most of it is just cruft.

… what? Let’s ignore the fact that “video games still look the same as they always have” is patently false. You are saying, in all seriousness, that most of D3D11’s API is cruft because, in your estimation, games don’t look any different. That the number of API functions is in any way related to the way games look. That if an API has a lot of functions, then that must mean games should look better, or else the API is broken.

There is no logical reasoning between “API has lots of functions” and “games don’t look better” that leads to “API with lots of functions that don’t matter.”

I am going to give you the benefit of the doubt here. I program C++ with include files, macros, and what have you. That’s how we program. Shaders work the same way. So I don’t understand the disconnect you have here.

People who want to make “very presentable, full-fledged video games” are either people with money (and therefore aren’t going to use your tool when they can hire a programmer to write and maintain their engine for them) or indies. And indies aren’t trying to use low-level tools; they’re using higher-level stuff like UDK, Unity3D, XNA, and so forth. They don’t want to deal with the low-level details, because low-level details are taking precious development time away from important things like gameplay and getting the game finished.

This is further off-topic. But many people don’t feel intellectually served by these institutions. I don’t personally consider “indie” games to even be relevant. I think we would have different conceptions of what “very presentable, full-fledged video games” means. I don’t personally even think most “AAA” titles nowadays would qualify for the distinction. I think what I mean is professional presentation, polished, and a real, full-featured 3D adventure game; you know, what people usually mean when someone says “real game”.

So the only people who would use this tool you’re developing are hobbyists. People who noodle around with game code in their spare time, with no real determination to make a better game.

Today’s hobbyists have resources above and beyond what businesses were doing with video games up to around 1995 and even up into 2000. People with opinions I respect consider the 90s the golden age of video games.

… what? Let’s ignore the fact that “video games still look the same as they always have” is patently false.

Again: games are still just texture-mapped triangles; for the last 5 years, with really ugly shadow techniques. The formula is remarkably stable. OpenGL ES is a great example of what a video game really amounts to.

You are saying, in all seriousness, that most of D3D11’s API is cruft because, in your estimation, games don’t look any different. That the number of API functions is in any way related to the way games look. That if an API has a lot of functions, then that must mean games should look better, or else the API is broken.

There is no logical reasoning between “API has lots of functions” and “games don’t look better” that leads to “API with lots of functions that don’t matter.”

It was not a critique of D3D11. Presumably it exposes hardware functionality, so one would assume it can’t be allowed to take liberties. But the basic gist of what a game needs to be able to do in the graphical department can be summed up in a remarkably small number of hypothetical API procedures (thanks to the advent of programmable GPUs, of course).

I am going to give you the benefit of the doubt here. I program C++ with include files, macros, and what have you. That’s how we program. Shaders work the same way. So I don’t understand the disconnect you have here.

I think you’re missing the point here. It has nothing to do with the fact that it’s a macro. My point is that “bare-bones tools” are not what people actually want.

If what you want to do is make a library to be used by hobbyists and “People with opinions I respect,” that’s your prerogative. I’m simply telling you the realities of the situation on the ground: the tool you are trying to make will be too technical for most people who might want to use it, and will be too inferior for most people with the technical know-how and resources to put it to use. That doesn’t mean nobody will use it, but you’re aiming in the wrong direction here.

Just look at XNA for an example of how to develop an API for hobbyist and indie game developers. That is what they want: plug-and-play. Simple, easy-to-use, powerful, fast, no messing around with shaders (unless they want to), no messing around with macros, API plugins, or other such nonsense. There is a reason why people have made successful games (though certainly not “real games” by your definition) with tools like XNA.

I think what I mean is professional presentation, polished, and a real, full-featured 3D adventure game; you know, what people usually mean when someone says “real game”.

Let’s ignore the “3D adventure game” part (where you effectively claim that every other genre of games is by definition not a “real game”). Where do you think the money for “professional presentation, polished” is going to come from?

Anyone who can afford “professional presentation, polished” graphics can also afford either a person to develop and maintain a graphics code base, or can buy a license for a 3rd-party engine that will be perfectly capable of “professional presentation, polished” graphics when properly used. In short, they don’t need your tool.

People who can afford to hire the artists and others needed to make “professional presentation, polished” graphics do not use some random code they found off of the Internet. You are aiming at a market that simply does not exist; your tech is not putting Transgaming out of business.

Today’s hobbyists have resources above and beyond what businesses were doing with video games up to around 1995 and even up into 2000.

No, hobbyists do not have more resources than actual game development companies in the 90s. Even back then, games cost millions of dollars to make. Granted, it was in the low millions, but it still cost more than most hobbyists are willing/able to spend. And it cost a lot more time than what hobbyists have to spend.

Yes, hobbyists have access to engines and codebases of a quality not seen before. And yes, those programming tools are more refined than what game developers of the 90s had. However, that alone is insufficient. You need money to hire artists; they’re still important because they’re what actually gives a game “professional presentation, polished” graphics.

Programmer art is not “professional presentation, polished.”

People with opinions I respect consider the 90s the golden age of video games.

The fact that you respect their opinions doesn’t make them right. Personally, I find nostalgic pap about “golden ages” of any medium to be what it is: garbage from old people who are too tied to the past to see and respect what they have now. I miss some of the non-linearity and some other gameplay elements of yore. But I don’t miss the rampant nonsense, ridiculous difficulty, tedious grinding, relentless padding, and brutally unfair gameplay that has absolutely no respect for the player as a human being.

Having lived through those days, I remember them for what they were. For good and for bad. There was no golden age; it was just an age with its own specific quality. And if that’s what you liked, fine. But that doesn’t make it superior to what we have now.

Most importantly of all… what does this have to do with your graphics programming tools? Do you honestly think that the reason the 90s were the golden age of videogames, and that modern games aren’t suited to your tastes, is because their graphics programming tools were crappier? And thus making poor-quality graphics tools will somehow make gameplay better? This is cargo-cult thinking, that if you build a runway and have people walk around waving sticks in the air, the planes will come back.

If you truly believe the 90s were the Golden Age of Videogames, and you want to do something to spur their return by helping hobbyist developers, this isn’t it. XNA and Unity3D, easy-to-use tools that take the grunt-work out of game development, have done far more to revive the elder days of 90s gaming than a few GLSL macros and “bare-bones tools” ever will.

Games are still just texture-mapped triangles; for the last 5 years, with really ugly shadow techniques. The formula is remarkably stable. OpenGL ES is a great example of what a video game really amounts to.

Games are just bitmaps blasted to the screen; for the last few years, with really ugly parallax background techniques. The formula is remarkably stable. The Super NES is a great example of what a video game really amounts to.

This statement is about as accurate as yours. You can reduce any computer process down to basic math, but that doesn’t mean nothing has changed in the history of computing. That doesn’t mean that those “bitmaps blasted onto the screen” games all look the same any more than those “texture mapped triangles” games all look the same.

And in all seriousness, if you can claim with a straight face that Quake looks the same as Portal 2, then I have to question your claim to being a graphics programmer.

Wow, Alfonse, this is really heating up!

I think the subject needs to be changed at this point. I am happy to have a conversation here. Maybe it can generate some publicity. And these things are always interesting to discuss. Even if the discussion is aggressive :slight_smile:

FYI: I need to run. But this subsystem we’ve been inadvertently discussing is way down deep in the belly of a larger system that is much, much simpler than XNA or any of the indie tools, and more interesting than games with “mod” functionality. In other words, it’s not the tool itself. It is like a plugin, though, that an end user and/or author chooses one way or another, if not directly according to preference.

First let me say that I am unsure how much of this post I can actually reply to, or even how much I want to reply to. I cannot edit the subject of this thread at this point (or even older posts, it seems).

If what you want to do is make a library to be used by hobbyists and “People with opinions I respect,” that’s your prerogative. I’m simply telling you the realities of the situation on the ground: the tool you are trying to make will be too technical for most people who might want to use it, and will be too inferior for most people with the technical know-how and resources to put it to use. That doesn’t mean nobody will use it, but you’re aiming in the wrong direction here.

A) If you do not know of any public figures who are disappointed with the state of video gaming, then you are living in a bubble.

B) We live in a time when all tools are becoming decentralized and accessible to everyone. No one says that an author is not a pro because they work mostly by themselves. Indeed, that’s why books are the basis for most big-budget media: it’s a medium that can still be done by an individual with a singular creative vision. That’s what makes the work have integrity. All of the for-profit businesses and institutions in the world have pretty much failed to deliver a tool that actually allows the everyman to make a video game. The process of making a game is in itself appealing to a lot of people, whether anyone else ever knows said game exists or not. Bottom line, your acerbic arguments are already irrelevant, whether you like it or not.

Video games are kind of complicated, but they are no different than anything else. It’s really more of a social problem than a technical one. Sooner or later we will get it together. I am just not one to sit around and wait.

Just look at XNA for an example of how to develop an API for hobbyist and indie game developers. That is what they want: plug-and-play. Simple, easy-to-use, powerful, fast, no messing around with shaders (unless they want to), no messing around with macros, API plugins, or other such nonsense. There is a reason why people have made successful games (though certainly not “real games” by your definition) with tools like XNA.

I did not invent the term “real game” … it’s a demographic consensus, though I have my own preferences. IMO people do not even want this; that’s why so many games are awful. What everyday people really want is something like the “Game Maker” games of the 90s, which disappeared once businesses realized it was a paradox in terms of a business model. I do not mean to air my laundry, but what I am working on (lately) is basically an emulator for a promising such tool, so that the games can be played with much better performance, and extended without limitation. With any luck it will become the de facto 3D adventure maker with built-in class. If it doesn’t, hopefully something else will.

Let’s ignore the “3D adventure game” part (where you effectively claim that every other genre of games is by definition not a “real game”).

I didn’t; everyone else does. E.g., a motion-control game is not a real game. A puzzle game is not a real game. We know what people mean when they say that. And so do AAA developers.

Where do you think the money for “professional presentation, polished” is going to come from?

Nowhere. Games are information. They have no material substance. So, like everything else with no material substance, eventually games will be of no material value. If people like a game, hopefully they will be life support for the artists.

Anyone who can afford “professional presentation, polished” graphics can also afford either a person to develop and maintain a graphics code base, or can buy a license for a 3rd-party engine that will be perfectly capable of “professional presentation, polished” graphics when properly used. In short, they don’t need your tool.

In my experience, any “one” with that mentality is statistically going to produce a crap game. So who cares?

People who can afford to hire the artists and others needed to make “professional presentation, polished” graphics do not use some random code they found off of the Internet. You are aiming at a market that simply does not exist; your tech is not putting Transgaming out of business.

You are being awfully presumptuous, don’t you think? To be honest, as we move towards everyone making games, games will have to become media, like an mp3 file, since we can’t just trust every executable file found around the internet, as you say. So there will be an interesting race to produce “the” common media player, so to speak. But I don’t have anything much to add to that.

No, hobbyists do not have more resources than actual game development companies in the 90s. Even back then, games cost millions of dollars to make. Granted, it was in the low millions, but it still cost more than most hobbyists are willing/able to spend. And it cost a lot more time than what hobbyists have to spend.

You are correct. It’s more of a social problem. We have to pool artwork and standardize a lot of things. But it will get done. And if it doesn’t, you’ll never see really compelling virtual-reality landscapes like in the movies. Huge game companies can’t produce anything you’d want to spend much time in, and it takes them forever to do it. There is just zero diversity there, and that is the killer of a species.

Yes, hobbyists have access to engines and codebases of a quality not seen before. And yes, those programming tools are more refined than what game developers of the 90s had. However, that alone is insufficient. You need money to hire artists; they’re still important because they’re what actually gives a game “professional presentation, polished” graphics.

I don’t think they have any real killer apps. Mainly we have various technologies, especially internet technologies. And we know so much more. You can learn more about 3D graphics in a week than most professionals even knew prior to 2000.

Programmer art is not “professional presentation, polished.”

Professional presentation doesn’t have a lot to do with art. It just means something looks finished and not slapped together. A lot of professional games look that way, so maybe professional is the wrong word. Basically it means standardized, if you are talking about amateur games, because amateurs can’t be expected to make something that is polished unless there is a framework already in place for them that cannot permit anything less.

The fact that you respect their opinions doesn’t make them right. Personally, I find nostalgic pap about “golden ages” of any medium to be what it is: garbage from old people who are too tied to the past to see and respect what they have now. I miss some of the non-linearity and some other gameplay elements of yore. But I don’t miss the rampant nonsense, ridiculous difficulty, tedious grinding, relentless padding, and brutally unfair gameplay that has absolutely no respect for the player as a human being.

All media has its golden ages and, if you wait long enough, renaissances. There is something cyclical to it. Anyway, I consider the 90s the high-water mark on artistic merit alone; not being stupid more often than not. The games before then were good too, but I think we reached the zenith of 2D pretty easily. 3D is an order of magnitude different, and I don’t think we’ve even begun with 3D. We never learned the fundamentals. So games become more and more crap as the amateur aspect of making them is lost. Thanks to the internet we can begin to reverse that and breathe some life into things. Most likely corporate games will implode all on their own anyway.

Having lived through those days, I remember them for what they were. For good and for bad. There was no golden age; it was just an age with its own specific quality. And if that’s what you liked, fine. But that doesn’t make it superior to what we have now.

I am 31 and counting, and I’ve made it a point to play every single game since the advent of games (I am a decade or two behind, but thankfully so much of the last decade does not even qualify), even Japanese PC games from before the consoles. I will tell you a secret: hyper-commercialization tends to spoil things.

Most importantly of all… what does this have to do with your graphics programming tools? Do you honestly think that the reason the 90s were the golden age of videogames, and that modern games aren’t suited to your tastes, is because their graphics programming tools were crappier? And thus making poor-quality graphics tools will somehow make gameplay better? This is cargo-cult thinking, that if you build a runway and have people walk around waving sticks in the air, the planes will come back.

They were the golden age because the studios were more or less amateurs, and they had just the right level of tools at their disposal to not get too lost in fruitless endeavors. Sometimes less is more. Also, games were not yet being targeted at a general-audience demographic, so you had games with personality that took risks… or maybe people just did not know what “everyone” wants. Who knows. It is what it is.

If you truly believe the 90s were the Golden Age of Videogames, and you want to do something to spur their return by helping hobbyist developers, this isn’t it. XNA and Unity3D, easy-to-use tools that take the grunt-work out of game development, have done far more to revive the elder days of 90s gaming than a few GLSL macros and “bare-bones tools” ever will.

Those tools are way, way too inaccessible, and have yet to produce anything that is particularly impressive, much less anything that looks and feels like a classic (again, you are showing some immaturity with the “GLSL macros” barbs).

Games are just bitmaps blasted to the screen; for the last few years, with really ugly parallax background techniques. The formula is remarkably stable. The Super NES is a great example of what a video game really amounts to.

Traditional 2D games are just as much. There are more than a few ways to render a scene, but we’ve pretty much settled upon triangle rasterizers, and with the hardware acceleration there is a lot of inertia there. It will probably be the fashion for a good while yet.

This statement is about as accurate as yours. You can reduce any computer process down to basic math, but that doesn’t mean nothing has changed in the history of computing. That doesn’t mean that those “bitmaps blasted onto the screen” games all look the same any more than those “texture mapped triangles” games all look the same.

To me this was a positive statement, about the bedrock being firm. I will post what this means before I tire of this thread :slight_smile:

And in all seriousness, if you can claim with a straight face that Quake looks the same as Portal 2, then I have to question your claim to being a graphics programmer.

Quake used a software rasterizer, I think. But yeah, conceptually they are the same. This did not really standardize until after hardware T&L was fully embraced, and with shaders the whole thing was really stripped down to the likes of OpenGL ES. I suspect that if history is any indicator of the future, games will basically look a lot like glDrawElements for a long time to come.

PS: Please! For the love of god do not reply to this post point by point :smiley:

Attachment:

You (Alfonse) seem very interested. So here is the plugin specification that you (we) are discussing.

(stripped down to the C++ API)

  
struct SOMPAINT_LIB(server);
#define SOMPAINT SOMPAINT_LIB(server)*

struct SOMPAINT_LIB(buffer); //video mem
#define SOMPAINT_PAL SOMPAINT_LIB(buffer)*

struct SOMPAINT_LIB(raster); //OS handle
#define SOMPAINT_PIX SOMPAINT_LIB(raster)*

struct SOMPAINT_LIB(window); //rectangle
#define SOMPAINT_BOX SOMPAINT_LIB(window)*

struct SOMPAINT_LIB(server) 
{
#ifndef __cplusplus 
     /*probably should not rely on this working
    //Use the SOMPAINT_LIB(Status) API instead*/
    void *vtable; /*debugger*/
#endif

    int status;
    
    float progress; //0~1

    const wchar_t *message; 

#ifdef __cplusplus

    SOMPAINT_LIB(server)()
    {
        status = 0; message = L""; progress = 0.0;
    }

    /*0: Normal idling status
    //Busy at work if greater
    //Fatal error if negative*/
    inline operator int(){ return this?status:-1; }

    /*the device is lost and cannot reset
    //should only occur if expose is used*/
    inline bool lost(){ return this?status==-2:false; }

    /*compiler: MSVC2005 has trouble calling 
    //virtual members from an object reference*/
    inline SOMPAINT operator->(){ return this; }
    
    /*See struct SOMPAINT_LIB(buffer)*/
    virtual SOMPAINT_PAL buffer(void **io)=0;

    template<typename T>
    inline SOMPAINT_PAL pal(T *t){ return buffer(&t->pal); }
    template<typename T>
    inline SOMPAINT_PAL pal(T &t){ return buffer(&t.pal); }

    /*See C API comments before using*/
    virtual bool share(void **io, void **io2)=0;    
    virtual bool format2(void **io, const char*,va_list)=0;
    virtual bool source(void **io, const wchar_t[MAX_PATH])=0;
    virtual bool expose2(void **io, const char*,va_list)=0;
    virtual void discard(void **io)=0;

    inline bool format(void **io, const char *f,...)
    {
        va_list v; va_start(v,f); bool o = format2(io,f,v); va_end(v); return o;
    }
    inline bool expose(void **io, const char *f,...)
    {
        va_list v; va_start(v,f); bool o = expose2(io,f,v); va_end(v); return o;
    }

    virtual void *lock(void **io, const char *mode, size_t inout[4], int plane=0)=0;
    virtual void unlock(void **io)=0; 

    virtual const wchar_t *assemble(const char *code, size_t codelen=-1)=0;

    virtual bool run(const wchar_t*)=0;        
    virtual int load2(const char*,va_list)=0;
    virtual int load3(const char*,void**,size_t)=0;

    inline int load(const char *f,...)
    {
        va_list v; va_start(v,f); int o = load2(f,v); va_end(v); return o;
    }

    virtual void reclaim(const wchar_t*)=0;        

    /*C-like preprocessor macros APIs*/
    virtual void **define(const char *d, const char *def="", void **io=0)=0;        
        
    inline void **define(const char *d, int e, void **io=0)
    {
        /*sprintf cannot fail on these; always forward the define*/
        char def[64]=""; sprintf(def,"%d",e); return define(d,def,io);
    }
    inline void **define(const char *d, double e, void **io=0)    
    {
        char def[64]=""; sprintf(def,"%f",e); return define(d,def,io);
    }
    inline void  **undef(const char *d, void **io=0)
    {
        return define(d,0,io);      
    }

    virtual const char *ifdef(const char *d, void **io=0)=0;

    /*painting abstraction layer APIs*/
    virtual bool reset(const char *unused=0)=0;    
    virtual bool frame(size_t inout[4])=0; 
    virtual bool clip(size_t inout[4])=0; 
      
    /*portable operating systems APIs*/
    virtual SOMPAINT_PIX raster(void **io, void *local=0, const char *typeid_local_raw_name=0)=0;    
    virtual SOMPAINT_BOX window(void **io, void *local=0, const char *typeid_local_raw_name=0)=0;    

    template<typename T> 
    inline SOMPAINT_PIX raster(void **io, T *local)
    {
        return raster(io,local,typeid(T).raw_name()); 
    }
    template<typename T>
    inline SOMPAINT_BOX window(void **io, T *local)
    {
        return window(io,local,typeid(T).raw_name()); 
    }

protected: /*See C API comments*/
    
    /*disconnect should return 0 unless an 
    //alternative unload protocol needs to
    //be used within the build environment*/
    virtual void *disconnect(bool host)=0;

#endif
};

struct SOMPAINT_LIB(buffer)
{
#ifdef __cplusplus

    virtual bool apply(int i=0)=0;
    virtual bool setup(int i=0)=0;
    virtual bool clear(int mask=~0)=0;    
    virtual bool sample(int i=0)=0;
    virtual bool layout(const wchar_t*)=0;
    virtual int stream(int n, const void *up, size_t up_s)=0;
    virtual bool draw(int start, int count, int vstart, int vcount)=0;

    /*portable operating systems APIs*/
    virtual bool present(int zoom, SOMPAINT_BOX src, SOMPAINT_BOX dst, SOMPAINT_PIX)=0;
    virtual bool fill(int m, int n, int wrap, int zoom, SOMPAINT_BOX, SOMPAINT_BOX, SOMPAINT_PIX)=0;
    virtual bool print(int width, int height, SOMPAINT_PIX)=0;

#endif
};

The comments for all of the methods are with the C API declarations; otherwise it would’ve been a nightmare to strip them out to fit in a post.

Anyway, bottom line, this is enough to render any kind of game I am able to conceive of. I do not claim brilliance, but I can make things happen in this problem domain with relative ease.

The define and ifdef methods are not needed (or even used internally, as no compiler is provided yet), and neither are fill or print. source is for remote caching, and share you can live without. expose is a backdoor, so you don’t need that either (unless you need it).

The format method can do a lot, as it comes with a parser. But shaders are text files, so why not format a buffer with a text string, right? The variadic methods expect printf formatting.
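To give a flavor of how these methods fit together, here is a hypothetical usage sketch using only the signatures listed above (the string contents are placeholders; the real grammars live with the C API comments):

    //Hypothetical usage; 'som' would come from the plugin loader
    //and 'hlsl_source' stands in for a real shader string.
    SOMPAINT som = 0; /*...*/
    const char *hlsl_source = "/*...*/";
    som->define("OPENGL_CLIP"); //advertise the clip convention
    const wchar_t *program = som->assemble(hlsl_source);
    if(program&&som->run(program))
    {
        //load registers and draw via the buffer APIs, then present
    }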

PS: The origin of this library actually has more to do with image processing. It’s supposed to be a replacement for MS Paint but it does 3D too.

EDITED: This, BTW, is what I mean by the bedrock. Visually, a game can be chalked up to just this much when you get down to brass tacks.