GL_EXT_abgr

GL_EXT_abgr is the oldest EXT and, as far as I can tell, universally supported. Yet somehow it has never made it into core, or even been promoted to an ARB extension.

Well, think about it: what good is it exactly?

The pixel transfer parameters GL_ABGR_EXT and GL_UNSIGNED_INT_8_8_8_8 are the exact same as GL_RGBA with GL_UNSIGNED_INT_8_8_8_8_REV. The same goes for 4444 and 4444_REV. The only place where there would be a difference is 5551 vs. 1555_REV.
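To make that concrete, both of these calls (a sketch only; the GL headers, a bound 2D texture and the WIDTH/HEIGHT constants are assumed) describe exactly the same client memory layout:

GLuint pixels[WIDTH * HEIGHT]; // one packed 32-bit pixel per element

// ABGR packed MSB-to-LSB within each uint: A, B, G, R
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_ABGR_EXT, GL_UNSIGNED_INT_8_8_8_8, pixels );

// RGBA packed LSB-to-MSB (the _REV packing), i.e. the same A, B, G, R order from the MSB down
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels );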

Do you need to do ABGR uploading for 16-bit per-channel formats? I doubt the hardware orders the channels that way, so I don’t think it’s going to help anything.

Just because something is an extension and widely supported doesn’t mean that it’s good or needs to be brought into core.

[QUOTE]Just because something is an extension and widely supported doesn’t mean that it’s good or needs to be brought into core.[/QUOTE]

Funny, that. One would think that being widely used and supported is exactly the evidence that makes something a candidate for core.

GL_UNSIGNED_INT_8_8_8_8_REV is not a solution when writing cross-platform code, because of endianness issues.

[QUOTE]Funny, that. One would think that being widely used and supported is exactly the evidence that makes something a candidate for core.[/QUOTE]

What is the evidence that it is widely used? Supported, yes. Used?

[QUOTE]GL_UNSIGNED_INT_8_8_8_8_REV is not a solution when writing cross-platform code, because of endianness issues.[/QUOTE]

And ABGR is? How are these two any different with regard to endian issues?

If you’re having endian issues, you should be using GL_PACK/UNPACK_SWAP_BYTES anyway.
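Roughly, that is just the following (assuming a current GL context; the PACK variant covers readbacks):

glPixelStorei( GL_UNPACK_SWAP_BYTES, GL_TRUE ); // swap the bytes within each multi-byte component on upload
glPixelStorei( GL_PACK_SWAP_BYTES, GL_TRUE );   // likewise for glReadPixels / glGetTexImage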

Because the order of bytes is preserved across different endian platforms but the order of unsigned ints is not. Thus GL_ABGR_EXT w/ GL_UNSIGNED_BYTE will give you the same results across platforms, which GL_UNSIGNED_INT_8_8_8_8 will not.

This suffers from the exact same endian problem as using unsigned ints. It introduces a separate code path across platforms and also obfuscates the meaning of the code.

[QUOTE]This suffers from the exact same endian problem as using unsigned ints. It introduces a separate code path across platforms and also obfuscates the meaning of the code.[/QUOTE]

Except that you only have to set it in one place at the beginning of your code. And from then on, everything works. That’s the problem with ABGR; you can already get the equivalent functionality without it.
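Something along these lines, once at init, is all that takes (a sketch that assumes, purely for illustration, that your image data is always stored big-endian):

static int host_is_little_endian( void )
{
    const unsigned int one = 1;
    return *(const unsigned char *)&one == 1;
}

void init_pixel_transfer( void )
{
    // swap multi-byte components only when the host byte order differs from the (assumed) big-endian file data
    GLint swap = host_is_little_endian() ? GL_TRUE : GL_FALSE;
    glPixelStorei( GL_UNPACK_SWAP_BYTES, swap );
    glPixelStorei( GL_PACK_SWAP_BYTES, swap );
}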

Also, I would point out that the swap-bytes solution will also work for 16-bit shorts, 16-bit floats, 32-bit ints, 32-bit floats, 16-bit 565, 16-bit 4444, 32-bit 10F_11F_11F, 32-bit 5999, … I can keep going, but I think my point is clear. ABGR solves this for exactly one format (i.e. bytes). Swap bytes solves it for everything.

Hence swap bytes is standard, and ABGR is not.

I’m not going to argue with you, Korval.

I agree with Alfonse on this; endianness issues are what the GL_PACK/UNPACK_SWAP_BYTES pixel transfer option is made for. Adding GL_EXT_abgr to core would simply result in redundant functionality.

The other option is to detect your endianness at startup and use formats and types as appropriate. That will also work, and will also be portable.
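For example, something like this hypothetical helper would do it, assuming the client data is laid out byte-wise as R,G,B,A in memory:

GLenum rgba8888_type_for_host( void )
{
    const unsigned int one = 1;
    // on a little-endian host, memory bytes R,G,B,A read back as a uint with R in the low byte, i.e. the _REV packing
    if ( *(const unsigned char *)&one == 1 )
        return GL_UNSIGNED_INT_8_8_8_8_REV;
    return GL_UNSIGNED_INT_8_8_8_8;
}

glTexImage2D( target, level, internalFormat, width, height, border, GL_RGBA, rgba8888_type_for_host(), data );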

To be honest portability seems a fairly weak excuse for saying “thou shalt not” when there are plenty of options available to deal with it.

SWAP_BYTES is terrible. It introduces separate code paths on different CPU hardware, breaking compatibility. It sets a global state that affects the operation of functions possibly far, far away in the source code, and that state may be overwritten in other places equally far away. It hides the intent of simple operations, making the code harder to read. And it’s probably not there at all in most code (since most code isn’t written with endian issues in mind), meaning that code will be broken and have to be debugged if moved to a platform with different endianness.

You may think it’s a trivial thing because you’ve never had to write multi-platform code and it’s only a few extra lines. But the problem is that everyone thinks that about their thing, and identifying all those “few extra lines” can end up being a rather large chunk of work.

and as for using UNSIGNED_INT_8_8_8_8 / _REV…

glTexImage2D( target, level, internalFormat, width, height, border, GL_ABGR_EXT, GL_UNSIGNED_BYTE, data );

expresses exactly what the function is supposed to do. It’s portable and can be written and forgotten.

#ifdef LITTLE_ENDIAN
glTexImage2D( target, level, internalFormat, width, height, border, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, data );
#else // BIG_ENDIAN
glTexImage2D( target, level, internalFormat, width, height, border, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, data );
#endif

Is a mess. How many people are going to be able to look at the first (little-endian) version and realize that it actually byte swaps and expects data organized ABGRABGRABGRABGR at a glance?

Why would I ever use something like that when I have a universally supported option already in GL_ABGR_EXT? And if it’s already universally supported and demonstrably better than alternative methods why not move it into the core?

Isn’t the whole point of OpenGL to be a portable graphics API?

Seriously! How many times would one set SWAP_BYTES in a project? Once, at most! Why would one change the pack/unpack byte-swap order in the middle of the application?

Oh hey! Are we arguing with punctuation now? Didn’t realize! Are you not aware that glPixelStorei will also affect things such as float textures, z-buffer transfers, etc., and that it’s probably not a good idea to set it globally?

If you upload float textures, you also have to use the same SWAP_BYTES configuration. Your CPU is either little endian or big endian. Based on that, you select the SWAP_BYTES state; then you can upload float textures, integer textures or whatever, as from then on it is guaranteed that the GL will take into account the endianness of the client, which is unlikely to change during the execution of the application :)

Although the endianness of the client is unlikely to change, the endianness of the data is not guaranteed to be uniform. My data may be a combination of files I’ve loaded from external storage (which may or may not have the same endianness as the client) and data I’ve procedurally generated at runtime (which will). Things such as reading a buffer back from the graphics card and downsampling it may also be affected, depending on the method.
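To illustrate: once sources differ, the swap state has to follow each individual piece of data rather than being set once (a sketch; the file_is_big_endian flag and the 16-bit RGBA16 upload are hypothetical):

void upload_rgba16( GLuint tex, GLsizei w, GLsizei h, const void *pixels, int file_is_big_endian )
{
    const unsigned int one = 1;
    const int host_is_little = ( *(const unsigned char *)&one == 1 );

    glBindTexture( GL_TEXTURE_2D, tex );
    // swap only when this particular source's byte order differs from the host's
    glPixelStorei( GL_UNPACK_SWAP_BYTES, ( file_is_big_endian == host_is_little ) ? GL_TRUE : GL_FALSE );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0, GL_RGBA, GL_UNSIGNED_SHORT, pixels );
}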

There’s also the problem of bugs caused by setting a global state with glPixelStorei. Not so much with a single programmer, but when working on a larger team you can lose days with one programmer trying to track down why some subset of textures suddenly isn’t loading, because a different programmer working on a different platform changed something in an unrelated file.

When working with a large cross-endian codebase you want to avoid doing things with toggles (swap/noswap, reverse/noreverse) as much as possible. Perhaps I should be asking for UNSIGNED_INT_8_8_8_8_REV to be replaced by UNSIGNED_INT_8_8_8_8_LE / UNSIGNED_INT_8_8_8_8_BE instead. But we already have a simple, ancient, universally exported extension…

GL_UNPACK_SWAP_BYTES and GL_ABGR are not interchangeable.

They address two completely different concepts:
While GL_UNPACK_SWAP_BYTES addresses the byte ordering within each single component,
GL_ABGR addresses the ordering of the components themselves.

Only for the GL_UNSIGNED_BYTE type does byte ordering happen to coincide with component ordering.

You can’t ‘emulate’ GL_ABGR+GL_FLOAT using GL_UNPACK_SWAP_BYTES.
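For instance (assuming an implementation that actually accepts GL_ABGR_EXT with GL_FLOAT, and float texture support for GL_RGBA32F), the components here are whole 4-byte floats stored A,B,G,R; SWAP_BYTES would only reverse the bytes inside each float, never reorder the floats themselves:

GLfloat pixel[4] = { 1.0f, 0.0f, 0.5f, 0.25f }; // one pixel, components stored as A, B, G, R
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, 1, 1, 0, GL_ABGR_EXT, GL_FLOAT, pixel );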

I agree on that, but let us just check the overview of the EXT_abgr extension:

EXT_abgr extends the list of host-memory color formats. Specifically, it provides a reverse-order alternative to image format RGBA. The ABGR component order matches the cpack Iris GL format on big-endian machines.

Based on that, I’m pretty unsure whether implementations supporting the EXT_abgr extension do actually support this for floating point textures and such.

The extension itself is pretty ancient and the language used is pretty far from what an extension going into core GL 4.2+ should look like. What I want to say is that I wouldn’t suggest it be included in the next release of OpenGL (which is what this post should be about).

The extension explicitly mentions that it targets the difference between LE and BE, which is also what Pop N Fresh suggests. But we already have the SWAP_BYTES semantics for that.

Also, I don’t see any real-life use cases where one would want to load textures with components ordered ABGR, except in the case of byte-sized components, where we already have the UNSIGNED_INT_8_8_8_8_REV format.

[QUOTE]Isn’t the whole point of OpenGL to be a portable graphics API?[/QUOTE]

The whole point is to be a graphics API. Portability is a bonus. If OpenGL works on your hardware it’s because someone invested the time and effort required to do an implementation, not because of any inherent magical qualities in OpenGL itself.

OK, can we agree that if portability to architectures with different endianness is not an objective of your program, then there is no reason whatsoever not to use formats and types that are endian-sensitive? And that if such portability is an objective, then there are options available to help you with that too? And that neither situation requires the other to adopt its approach?

Is this ABGR a GPU feature, or is it some driver thing?
I suspect it isn’t a hardware feature since, AFAIK, GPUs tend to only support BGRA.

Might as well do the swapping yourself and send the BGRA format on certain machines and ABGR on others (if such GPUs exist).
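A rough sketch of doing the swapping yourself (the helper name is hypothetical); this rewrites ABGR-ordered bytes in place into the BGRA order mentioned above, after which the data can be uploaded with GL_BGRA / GL_UNSIGNED_BYTE:

void abgr_to_bgra( unsigned char *px, unsigned int pixel_count )
{
    unsigned int i;
    for ( i = 0; i < pixel_count; ++i, px += 4 )
    {
        unsigned char a = px[0], b = px[1], g = px[2], r = px[3];
        px[0] = b; px[1] = g; px[2] = r; px[3] = a;
    }
}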

Portability is only great if you are talking about the same hardware but a different flavor of OS (e.g. Windows vs. Linux on x86).