Why doesn't the OpenGL ES 2 spec support the BGRA texture format?

Reading the OpenGL ES 2 spec, I realized that the only internal formats supported are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA.

I had thought RGBA was not the best way to store data on the GPU, and since OpenGL ES 2 targets embedded systems (with limited power), I thought “maybe that’s no longer the case”.

So two questions:

  1. Advice from you embedded-system experts, please: are mobile/embedded GPUs more efficient with BGRA or with RGBA, or does it no longer matter?
  2. If BGRA is still the way to go, what was the reasoning of the OpenGL spec authors when they chose not to support BGRA, while desktop OpenGL 2 (the version OpenGL ES 2 is supposed to “work with”, hum hum…) supports it?

A big thanks in advance! :slight_smile:

PS: The original concern is here.

First, as you have pointed out, question #1 is irrelevant, since you cannot choose GL_BGRA for your pixel transfer format in OpenGL ES. The option that actually works is by definition faster than the one that refuses to execute. :wink:

Second, your post suggests that you have an unfortunate, significant, and (sadly) common misunderstanding about how desktop OpenGL works. Textures do not have, and never have had, a “texture format” of GL_BGRA. What you’re talking about is the pixel transfer format, which describes the format of the data you are passing to OpenGL. It says absolutely nothing about how the texture will store the data. That is defined by the texture’s internal format, and GL_BGRA is not a legal internal format.

FYI: the reason that passing pixel data as GL_BGRA is faster (assuming both your internal format and your data format are 32 bits per pixel) is that the driver doesn’t have to reorder bytes when it transfers the data to internal texture memory. Why would the hardware store things that way? Because Intel chips are little endian: if you read a whole 32-bpp BGRA pixel as a single word, you get ARGB, which tends to be the preferred little-endian storage order for 32-bpp data.
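
To make that concrete, here is a minimal sketch of that desktop GL fast path (assuming a bound GL_TEXTURE_2D; `width`, `height`, and `pixels` are placeholders):

```c
/* Desktop GL fast path (sketch): 32-bpp internal format, 32-bpp BGRA
 * source data. On little-endian machines the driver can usually copy
 * this straight through without reordering bytes. */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGBA8,                    /* internal format: how GL stores it     */
             width, height, 0,
             GL_BGRA,                     /* pixel transfer format: how we pass it */
             GL_UNSIGNED_INT_8_8_8_8_REV, /* one 32-bit word per pixel             */
             pixels);
```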

Also, everything I said above applies only to desktop OpenGL. OpenGL ES differs because:

Third, the reason OpenGL ES does it this way is that OpenGL ES wants to minimize the amount of driver-side pixel transfer conversion. Desktop GL implementations have to be able to cope with any pixel data you throw at them. You could use an internal format of GL_R16 while passing pixel data using a format of GL_RGB, encoded as 3/3/2, and the implementation just has to deal with it, culling the G and B channels while expanding the 3-bit red data to 16 bits.
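
A sketch of that exact mismatch in desktop GL (the enums are real desktop GL enums; `width`, `height`, and `pixels` are placeholders):

```c
/* Desktop GL must convert whatever you hand it (sketch): the internal
 * format is 16-bit red-only, but the source data is packed 3/3/2 RGB.
 * The driver drops G and B and widens the 3-bit red channel to 16 bits. */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_R16,                 /* stored: red only, 16 bits */
             width, height, 0,
             GL_RGB,                 /* passed: RGB...            */
             GL_UNSIGNED_BYTE_3_3_2, /* ...packed 3/3/2 per byte  */
             pixels);
```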

In GL ES, your pixel data’s format and type parameters actually define how OpenGL ES stores the texture. The internal format parameter in this case is irrelevant. Furthermore, ES implementations live in ARM-land, and ARM chips are big endian. There, RGBA is the preferred storage order.
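
For contrast, an ES 2 sketch of the same call; here the format and type parameters dictate storage, and the spec requires internalformat to match format:

```c
/* OpenGL ES 2 (sketch): format/type define how the texture is stored.
 * internalformat must equal format, or the call errors out. */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGBA,           /* internalformat...            */
             width, height, 0,
             GL_RGBA,           /* ...must equal format in ES 2 */
             GL_UNSIGNED_BYTE,
             pixels);
```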

So that’s why it’s done.

A big thanks for the very clear answer! :slight_smile:

I forgot to say that I use the GL_EXT_texture_format_BGRA8888 extension, supported by 98% of devices according to the OpenGL ES Hardware Database. But I get your idea. :slight_smile:
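
For reference, a sketch of how that extension is typically used (GL_BGRA_EXT comes from the extension header; the runtime check and the `width`/`height`/`pixels` variables are illustrative):

```c
/* Sketch: GL_EXT_texture_format_BGRA8888 in ES 2. GL_BGRA_EXT (0x80E1)
 * serves as both internalformat and format. Check the extension string
 * before relying on it. */
#include <string.h>

const char *exts = (const char *)glGetString(GL_EXTENSIONS);
if (exts && strstr(exts, "GL_EXT_texture_format_BGRA8888")) {
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_BGRA_EXT,   /* internalformat and format must match */
                 width, height, 0,
                 GL_BGRA_EXT,
                 GL_UNSIGNED_BYTE,
                 pixels);
}
```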

Thanks again! :slight_smile:

After some investigation, this is not completely true: ARM chips are bi-endian. Worse: GCC’s default ARM flags target little endian, and it seems the “good practice” when writing low-level cross-platform code is to stay little endian, even on a bi-endian processor. :frowning:
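
(For anyone who wants to check what their toolchain actually produces, a minimal probe in C:)

```c
/* Minimal endianness probe: inspect the first byte in memory of a
 * known 32-bit value. Prints "little" on default GCC ARM builds. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t word = 0x01020304u;
    const unsigned char *bytes = (const unsigned char *)&word;
    printf("%s endian\n", bytes[0] == 0x04 ? "little" : "big");
    return 0;
}
```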

So my question remains:

Why does the OpenGL ES 2 spec only provide the RGBA component order for pixel transfer?

I’m very curious to know who chose that, and why. :slight_smile:

Thanks in advance! :slight_smile:

Not GL ES in general, but GL ES 2 specifically:

GLES2 is a bit odd in that it supports only unsized internal formats (the storage size is inferred, as a hint, from the format and type parameters). GLES3 fixes this, restoring the familiar concept of sized internal formats.
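
A sketch of the ES 3 form, where the internal format carries an explicit size (`width`, `height`, `pixels` are placeholders):

```c
/* OpenGL ES 3 (sketch): sized internal formats are back, so the stored
 * size is explicit rather than inferred from format/type. */
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_RGBA8,          /* sized internal format */
             width, height, 0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             pixels);
```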

> Furthermore, ES implementations live in ARM-land, and ARM chips are big endian. There, RGBA is the preferred storage order.

ES implementations live in ARM-land?

Some do, but quite a few don’t! PowerVR, Qualcomm, NVIDIA, etc. Not to mention all the desktop implementations.

PowerVR for instance prefers BGRA.
