That’s the stuff I’m talking about.
Dark Photon is correct.
How would you do physics calculations that could generate an error, or become unsolvably large problems, when precision isn’t high enough?
(This can happen; these kinds of physics models exist. Don’t tell me it can’t, because there can always be bugs in someone’s code.)
Example:
Take the program Celestia, a free and open-source space simulator. It can simulate the universe (somewhat simplified) and renders a lot of stars. Many people want to place stars and planets in far-away galaxies in addons, but they can’t because of accuracy issues: the stars would get stacked on top of each other. The same goes for spacecraft orbits; single precision is not good enough for these, and in some future cases maybe even more precision than 64-bit floating point is needed. It is also a problem when using the telescope feature: trying to view stars with exoplanets, or far-away bodies in our own solar system, is a problem.
Zooming with the telescope feature on a spacecraft located in a solar-system addon for Celestia, in a galaxy cluster 10^16–10^20 light years away, definitely requires more precision than any current datatype can provide (even more than float128). :mad:
short-sighted:
I try never to say:
“I don’t see a use now, so let’s assume it’s never needed.”
It’s like the people who say:
“If I can’t see it, it doesn’t exist!”
(Then they should be able to see atoms, but nobody can.)
People who say this are very short-sighted and egocentric.
Those people should be ashamed of even suggesting this sort of anti-progressive behaviour.
To the people whining “I don’t need this [censored], it’s not useful”: get over it and realize other people might have other needs.
This forum is about discussing what could be improved, added in OpenGL.
Not just about whining and adding the stuff that’s missing compared to the newest DirectX version.
And 64-bit datatypes merely give better fixed precision; they are not adjustable to everybody’s needs.
I don’t need it currently, but maybe someone will need it.
And it’s important to realize that!
I don’t know for sure that it’s needed, but neither can you be sure that it’s not.
Datatypes and parameters:
Here is the solution to precision problems and also encoding problems in datatypes.
Datatypes with parameters!
e.g.
(These examples are just for illustration.)
integer: int(32) /* an integer with 32 bits, one bit reserved for the sign */
int() /* an integer with a default width, could be 32-bit signed */
int(u,16) /* an unsigned integer with 16 bits */
int(512) /* an integer with 512 bits */
float: float(256) /* a float with 256bit */
/* Strings have the additional issue of encoding.
There are a lot of encodings, and you just don’t know which one the language is using under the hood; or you may want to force a certain encoding. String encoding parameters can add this kind of flexible behaviour. */
string(UTF8) /* a string with UTF-8 encoding */
char(UTF8) /* a character with UTF-8 encoding */
These things also count for OpenCL.
They also solve the problem that it is sometimes unclear how many bits the compiler reserves for a data type, so exchanging source code produces different results on different computers, which makes debugging more difficult because more parameters are involved.
binding system once again
The binding system is totally useless.
Binding can be improved, replaced by atomic operations.
Binding makes code larger and therefore harder to debug.
Binding takes up space while being completely unnecessary, replaceable with something better.
Binding system is bloat.
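For what it’s worth, OpenGL’s direct state access work (the EXT_direct_state_access extension, later made core in GL 4.5) already moves in this direction: an object is modified through its handle without binding it first. A fragment for comparison (not a complete program; it assumes a valid GL context and a texture name `tex`):

```c
/* Classic bind-to-edit: the texture must be bound before it can be changed. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Direct state access (GL 4.5): the handle is passed directly, no bind needed. */
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```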
OGL ES:
There is a problem with expectations: how people think 1.0 and 2.0 relate. The fixed-function pipeline should be noticeable in the name, for clarity and to avoid confusion among the general public. Ignoring this can harm OGL’s reputation.