GL_EXT_abgr



Pop N Fresh
06-07-2011, 12:10 AM
GL_EXT_abgr is the oldest EXT extension and, as far as I can tell, universally supported. Yet somehow it has never made it into core, or even been promoted to an ARB extension.

Alfonse Reinheart
06-07-2011, 12:18 AM
Well, think about it: what good is it exactly?

The pixel transfer parameters GL_ABGR_EXT and GL_UNSIGNED_INT_8_8_8_8 are the exact same as GL_RGBA with GL_UNSIGNED_INT_8_8_8_8_REV. The same goes for 4444 and 4444_REV. The only place where there would be a difference is 5551 vs. 1555_REV.
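
To make the equivalence concrete, here is a minimal sketch (w, h and pixels are hypothetical placeholders):

/* Both calls describe the same 32-bit layout: A in the most
   significant byte, R in the least significant byte. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_ABGR_EXT, GL_UNSIGNED_INT_8_8_8_8, pixels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);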

Do you need to do ABGR uploading for 16-bit per-channel formats? I doubt the hardware orders the channels that way, so I don't think it's going to help anything.

Just because something is an extension and widely supported doesn't mean that it's good or needs to be brought into core.

kRogue
06-07-2011, 03:57 AM
Just because something is an extension and widely supported doesn't mean that it's good or needs to be brought into core.

Funny, that. One would think that being widely used and supported is exactly the evidence that makes something a candidate for core.

Pop N Fresh
06-07-2011, 09:28 AM
GL_UNSIGNED_INT_8_8_8_8_REV is not a solution when writing cross-platform code, because of endianness issues.

Alfonse Reinheart
06-07-2011, 11:12 AM
Funny, that. One would think that being widely used and supported is exactly the evidence that makes something a candidate for core.

What is the evidence that it is widely used? Supported, yes. Used?


GL_UNSIGNED_INT_8_8_8_8_REV is not a solution when writing cross-platform code, because of endianness issues.

And ABGR is? How are these two any different with regard to endian issues?

If you're having endian issues, you should be using GL_PACK/UNPACK_SWAP_BYTES anyway.
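
For instance, a minimal sketch of setting it once at startup (data_is_big_endian and is_big_endian() are hypothetical placeholders for however you track and detect byte order):

/* Swap bytes during pixel transfers whenever the image data's byte
   order differs from the client's. Set once, then forget about it. */
if (data_is_big_endian != is_big_endian()) {
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE);  /* uploads   */
    glPixelStorei(GL_PACK_SWAP_BYTES,   GL_TRUE);  /* readbacks */
}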

Pop N Fresh
06-07-2011, 04:18 PM
And ABGR is? How are these two any different with regard to endian issues?
Because the order of bytes in memory is preserved across platforms of different endianness, but how those bytes map into an unsigned int is not. Thus GL_ABGR_EXT with GL_BYTE will give you the same results across platforms, which GL_UNSIGNED_INT_8_8_8_8 will not.
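
A minimal C illustration of that point:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The byte sequence 01 02 03 04 is the same in memory on every
       platform, but the uint32_t it aliases is 0x01020304 on a
       big-endian CPU and 0x04030201 on a little-endian one. */
    const unsigned char bytes[4] = { 0x01, 0x02, 0x03, 0x04 };
    uint32_t value;
    memcpy(&value, bytes, sizeof value);
    printf("0x%08X\n", (unsigned)value);
    return 0;
}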


If you're having endian issues, you should be using GL_PACK/UNPACK_SWAP_BYTES anyway.
This suffers from the exact same endian problem as using unsigned ints. It introduces a separate code path across platforms and also obfuscates the meaning of the code.

Alfonse Reinheart
06-07-2011, 04:39 PM
This suffers from the exact same endian problem as using unsigned ints. It introduces a separate code path across platforms and also obfuscates the meaning of the code.

Except that you only have to set it in one place at the beginning of your code. And from then on, everything works. That's the problem with ABGR; you can already get the equivalent functionality without it.

Also, I would point out that the swap-bytes solution will also work for 16-bit shorts, 16-bit floats, 32-bit ints, 32-bit floats, 16-bit 565, 16-bit 4444, 32-bit 10F_11F_11F, 32-bit 5999, ... I can keep going, but I think my point is clear. ABGR solves this for exactly one format (i.e. bytes). Swap bytes solves it for everything.

Hence swap bytes is standard, and ABGR is not.

Pop N Fresh
06-07-2011, 05:19 PM
I'm not going to argue with you, Korval.

aqnuep
06-08-2011, 02:10 AM
I agree with Alfonse on this; endianness issues are exactly what the GL_PACK/UNPACK_SWAP_BYTES pixel transfer options are for. Adding GL_EXT_abgr to core would simply result in redundant functionality.

mhagain
06-08-2011, 03:40 AM
The other option is to detect your endianness at startup and use formats and types as appropriate. That will also work, and will also be portable.
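
A sketch of one way to do that (just one possible check; other approaches work too):

/* Detect client endianness at startup... */
static int is_little_endian(void)
{
    const unsigned int one = 1u;
    return *(const unsigned char *)&one == 1;
}

/* ...then pick the type that gives ABGR byte order in memory when
   used with format GL_RGBA. */
static GLenum abgr_equivalent_type(void)
{
    return is_little_endian() ? GL_UNSIGNED_INT_8_8_8_8
                              : GL_UNSIGNED_INT_8_8_8_8_REV;
}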

To be honest portability seems a fairly weak excuse for saying "thou shalt not" when there are plenty of options available to deal with it.

Pop N Fresh
06-08-2011, 04:12 AM
SWAP_BYTES is terrible. It introduces separate code paths for different CPU hardware, breaking compatibility. It sets a global state that affects the operation of functions possibly far, far away in the source code, and that state may be overwritten in other places equally far away. It hides the intent of simple operations, making the code harder to read. And it's probably not set at all in most code (since most code isn't written with endianness in mind), meaning that code will be broken and have to be debugged if moved to a platform of different endianness.

You may think it's a trivial thing because you've never had to write multi-platform code and it's only a few extra lines. But the problem is that *everyone* thinks that about their own things, and identifying all those "few extra lines" can end up being a rather large chunk of work.

and as for using UNSIGNED_INT_8_8_8_8 / _REV...

glTexImage2D( target, level, internalFormat, width, height, border, GL_ABGR_EXT, GL_BYTE, data );

expresses exactly what the function is supposed to do. It's portable and can be written and forgotten.

#ifdef LITTLE_ENDIAN
glTexImage2D( target, level, internalFormat, width, height, border, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, data );
#else // BIG_ENDIAN
glTexImage2D( target, level, internalFormat, width, height, border, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, data );
#endif

Is a mess. How many people are going to look at the first (little-endian) version and realize at a glance that it effectively byte-swaps and expects data organized ABGRABGRABGRABGR in memory?

Why would I ever use something like that when I have a universally supported option already in GL_ABGR_EXT? And if it's already universally supported and demonstrably better than alternative methods why not move it into the core?

Pop N Fresh
06-08-2011, 04:16 AM
To be honest portability seems a fairly weak excuse for saying "thou shalt not" when there are plenty of options available to deal with it.
Isn't the whole point of OpenGL to be a portable graphics API?

aqnuep
06-08-2011, 04:31 AM
SWAP_BYTES is terrible. It introduces separate code paths for different CPU hardware, breaking compatibility. It sets a global state that affects the operation of functions possibly far, far away in the source code, and that state may be overwritten in other places equally far away...

Seriously! How many times would one set SWAP_BYTES in a project? Once, at most! Why would one change the pack/unpack byte-swap setting in the middle of the application?

Pop N Fresh
06-08-2011, 04:48 AM
Oh hey! Are we arguing with punctuation now? Didn't realize! Are you not aware that glPixelStorei will also affect things such as float textures, z-buffer transfers, etc., and that it's probably not a good idea to set it globally?

aqnuep
06-08-2011, 04:51 AM
If you upload float textures, you also have to use the same SWAP_BYTES configuration. Your CPU is either little-endian or big-endian. Based on that, you select the SWAP_BYTES state, and then you can upload float textures, integer textures or whatever, since from then on the GL is guaranteed to take the endianness of the client into account, which is unlikely to change during the execution of the application :)

Pop N Fresh
06-08-2011, 05:23 AM
Although the endianness of the client is unlikely to change, the endianness of the data is not guaranteed to be uniform. My data may be a combination of files I've loaded from external storage (which may or may not match the client's endianness) and data I've procedurally generated at runtime (which will). Operations such as reading back a buffer from the graphics card and downsampling it may also be affected, depending on the method.

There's also the problem of the bugs that setting a global state with glPixelStorei may cause. Not so much with a single programmer, but when working on a larger team you can lose days with one programmer trying to track down why some subset of textures suddenly isn't loading, because a different programmer working on a different platform changed something in an unrelated file.

When working with a large cross-endian codebase you want to avoid doing things with toggles (swap/noswap, reverse/noreverse) as much as possible. Perhaps I should be asking for UNSIGNED_INT_8_8_8_8_REV to be replaced by UNSIGNED_INT_8_8_8_8_LE / UNSIGNED_INT_8_8_8_8_BE instead. But we already have a simple, ancient, universally exported extension...

skynet
06-08-2011, 08:09 AM
GL_UNPACK_SWAP_BYTES and GL_ABGR are not interchangeable.

They address two completely different concepts: GL_UNPACK_SWAP_BYTES addresses the byte ordering within each single component, while GL_ABGR addresses the ordering of the components themselves.

Only for the GL_UNSIGNED_BYTE type does it happen that byte ordering and component ordering mean the same thing.

You can't 'emulate' GL_ABGR+GL_FLOAT using GL_UNPACK_SWAP_BYTES.
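
To make the distinction concrete, here is a sketch of the CPU-side fallback for ABGR floats (the function name is hypothetical):

#include <stddef.h>

/* SWAP_BYTES only reverses the four bytes inside each float; to turn
   ABGR-ordered floats into RGBA for upload, the components themselves
   have to be reordered on the CPU. */
static void abgr_to_rgba_floats(float *px, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i, px += 4) {
        const float a = px[0], b = px[1], g = px[2], r = px[3];
        px[0] = r;
        px[1] = g;
        px[2] = b;
        px[3] = a;
    }
}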

aqnuep
06-08-2011, 08:24 AM
You can't 'emulate' GL_ABGR+GL_FLOAT using GL_UNPACK_SWAP_BYTES.

I agree on that, but let us just check the overview of the EXT_abgr extension:


EXT_abgr extends the list of host-memory color formats. Specifically, it provides a reverse-order alternative to image format RGBA. The ABGR component order matches the cpack Iris GL format on big-endian machines.

Based on that, I'm pretty unsure whether implementations supporting the EXT_abgr extension actually support it for floating-point textures and such.

The extension itself is pretty ancient, and its language is pretty far from what an extension going into core GL 4.2+ should look like. What I want to say is that I wouldn't suggest it be included in the next release of OpenGL (which is what this post should be about).

The extension explicitly mentions that it targets the difference between LE and BE, which is also what Pop N Fresh suggests. But we already have the SWAP_BYTES semantics for that.

Also, I don't see any real-life use case where one would want to load textures with components in ABGR order, except in the case of byte-sized components, where we already have the UNSIGNED_INT_8_8_8_8_REV type.

mhagain
06-08-2011, 11:40 AM
To be honest portability seems a fairly weak excuse for saying "thou shalt not" when there are plenty of options available to deal with it.
Isn't the whole point of OpenGL to be a portable graphics API?

The whole point is to be a graphics API. Portability is a bonus. If OpenGL works on your hardware it's because someone invested the time and effort required to do an implementation, not because of any inherent magical qualities in OpenGL itself.

OK, can we agree that if portability to architectures of different endianness is not an objective of your program, then there is no reason whatsoever not to use formats and types that are endian-sensitive? And that if such portability is an objective, then there are options available to help you with that too? And that the approach suitable for one situation doesn't have to be imposed on the other?

V-man
06-08-2011, 07:19 PM
Is this ABGR a GPU feature, or is it some driver thing? I'm thinking it isn't a GPU feature, since AFAIK GPUs tend to only support BGRA.

Might as well do the swapping yourself and send BGRA data on some machines and ABGR on others (if such GPUs exist).

Portability is only great when you are talking about the same hardware but a different flavor of OS, such as Windows vs. Linux on x86.

Gedolo
07-29-2011, 08:30 AM
ABGR is there so that you can put textures and such in an ordering that DirectX applications can use. Some versions of DirectX use ABGR ordering.

EXT_abgr is not interchangeable with the texture swizzle stuff. It will produce different results depending on the size of the components, e.g. 16-bit components.

Gedolo
07-31-2011, 06:07 AM
The extension explicitly mentions that it targets the difference between LE and BE, which is also what Pop N Fresh suggests. But we already have the SWAP_BYTES semantics for that.

The EXT_abgr extension does NOT mention that it targets the difference between little and big endian.
The EXT_abgr extension does NOT do this.

The text of the EXT_abgr extension merely mentions that its component order matches some other texture format on a big-endian machine. That does not mean or imply that the extension has anything to do with endianness.

aqnuep
07-31-2011, 09:04 AM
The EXT_abgr extension does NOT mention that it targets the difference between little and big endian.
The EXT_abgr extension does NOT do this.

The text of the EXT_abgr extension merely mentions that its component order matches some other texture format on a big-endian machine. That does not mean or imply that the extension has anything to do with endianness.

Without further explanation:


EXT_abgr extends the list of host-memory color formats. Specifically,
it provides a reverse-order alternative to image format RGBA. The ABGR
component order matches the cpack Iris GL format on *big-endian* machines.

Gedolo
07-31-2011, 09:32 AM
What do you mean by "without further explanation"?

When I say the extension has nothing to do with endianness, I mean that it was not made to work around or because of endianness. It was made so you could exchange textures with DirectX, because that's the ordering DirectX uses.

aqnuep
07-31-2011, 03:37 PM
EXT_abgr wasn't made because of DirectX; it's far too old an extension for that, as at the time DirectX wasn't really a match for OpenGL.

It was made, as stated in the extension overview, to provide a component ordering that matches the cpack Iris GL format on big-endian machines.

So it was not made for DX, and by now we already have alternative ways to specify textures with ABGR component ordering.

Gedolo
08-01-2011, 01:05 PM
I now see what you mean.
I thought it was about something else.

Thanks for explaining.

mhagain
09-27-2011, 07:45 AM
GL_UNSIGNED_INT_8_8_8_8_REV is not a solution when writing cross-platform code, because of endianness issues.

Bump.

Have you actually tested this or are you just speaking theoretically?

I dispute that there is an endianness issue at all.

Firstly, we're not talking about CPUs here, we're talking about GPUs.

Secondly, the OpenGL spec clearly defines the bit layout for this type (and others). Even if you go back to the old GL_EXT_packed_pixels (http://www.opengl.org/registry/specs/EXT/packed_pixels.txt) spec you will see clearly defined layouts. Endianness doesn't even come into it.

Thirdly, this is not actually an "int" or "unsigned int" data type; it's a 32-bit data type that may be conveniently represented by an int. If you want to, you can use bitwise operations to write the data in directly, or use another 32-bit data type.
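
A sketch of what that looks like (assuming C99 and <stdint.h>):

#include <stdint.h>

/* Build one GL_RGBA / GL_UNSIGNED_INT_8_8_8_8_REV texel with shifts.
   The value always has R in the low byte and A in the high byte; only
   the in-memory byte order of the 32-bit word depends on the CPU. */
static uint32_t pack_rgba8_rev(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
           ((uint32_t)g << 8)  |  (uint32_t)r;
}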

Fourthly, the OpenGL specification does not contain a single instance of the word "endian". If the spec states that data is laid out in a certain manner, then data is laid out in that manner and endianness is irrelevant. The implication is that GL_UNSIGNED_INT_8_8_8_8_REV is portable across architectures of different endianness.

Unless a test case can demonstrate that GL_UNSIGNED_INT_8_8_8_8_REV is problematic on an architecture of different endianness, and unless that test case is otherwise correct (i.e. it doesn't manipulate the usage of the type just to prove a point), I call bull.

Pop N Fresh
09-28-2011, 03:58 AM
This issue is actually taken care of in OpenGL 3.3 with GL_ARB_texture_swizzle, which effectively obsoletes GL_EXT_abgr (along with GL_BGR and GL_BGRA).

I hadn't gotten up to speed with the new standards and totally missed its existence.

(Do you make a habit of bumping dead threads so you can respond to months-old posts and "call bull"?)

Eosie
09-30-2011, 04:32 PM
Not correct.

GL_ARB_texture_swizzle only works with GPU data.

GL_UNSIGNED_INT_8_8_8_8_REV, GL_BGR, GL_BGRA, GL_ABGR_EXT, etc. only work with CPU data, i.e. the data you specify in glTexImage and similar functions. You can allocate an RGBA8 texture and call TexSubImage to fill one half of it with BGRA data and the other half with RGBA data. It's like swizzling for your CPU data.
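
A sketch of that (w, h and the data pointers are hypothetical placeholders):

/* Same GL_RGBA8 texture, two uploads with different client-side
   formats; the GPU-side storage format is unaffected. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,     0, w / 2, h,
                GL_BGRA, GL_UNSIGNED_BYTE, bgra_pixels);
glTexSubImage2D(GL_TEXTURE_2D, 0, w / 2, 0, w / 2, h,
                GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);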

The 'format' and 'type' parameters of glTexImage have nothing to do with the real texture format the GPU is using.

Alfonse Reinheart
09-30-2011, 05:46 PM
What he's saying is that you could use the texture swizzle mask to rearrange the color components. So if your data was actually stored in ABGR order but you uploaded it as RGBA, you can use the swizzle mask to put the right components back into the right place.
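
For example, a minimal sketch for that ABGR-bytes-uploaded-as-RGBA case (GL 3.3 / ARB_texture_swizzle):

/* Memory was laid out A,B,G,R but declared GL_RGBA, so stored red
   actually holds alpha, stored green holds blue, and so on. The
   swizzle mask routes each sampled channel back to the component
   that really holds it. */
const GLint swizzle[4] = { GL_ALPHA, GL_BLUE, GL_GREEN, GL_RED };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);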

aqnuep
10-01-2011, 11:18 AM
So GL_EXT_abgr is actually subsumed by multiple separate pieces of functionality already in core. Topic closed :)