
Why doesn't the OpenGL ES 2 spec support the BGRA texture format?



Narann
12-18-2014, 10:46 PM
Reading the OpenGL ES 2 spec (https://www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.25.pdf), I realized the only internal formats supported are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA.

I thought RGBA was not the best way to store data on the GPU, and since OpenGL ES 2 targets embedded systems (with limited power) I thought "maybe that's not the case anymore".

So two questions:
1) I'd like your advice, embedded-system experts: are mobile/embedded GPUs more efficient with BGRA or RGBA, or do we not care anymore?
2) If BGRA is still the way to go, what was the reasoning of the OpenGL spec authors when they chose not to support BGRA, while desktop OpenGL 2 (the version OpenGL ES 2 is supposed to "work with", hum hum...) supports it?

A big thanks in advance! :)

PS: The original concern is here (https://github.com/mupen64plus/mupen64plus-video-rice/issues/29).

Alfonse Reinheart
12-18-2014, 11:47 PM
First, as you have pointed out, question #1 is irrelevant, since you cannot choose GL_BGRA for your pixel transfer format in OpenGL ES. The option that actually works is by definition faster than the one that refuses to execute. ;)

Second, your post suggests that you have an unfortunate, significant, and (sadly) common misunderstanding about how desktop OpenGL works. Textures do not have, and never have had, a "texture format" of GL_BGRA. What you're talking about is the pixel transfer format (https://www.opengl.org/wiki/Pixel_Transfer), which describes the format of the data you are passing to OpenGL. It says absolutely nothing about how the texture will store the data. That is defined by the texture's internal format (https://www.opengl.org/wiki/Image_Format). And GL_BGRA is not a legal internal format.

FYI: the reason that passing pixel data with GL_BGRA is faster (assuming your internal format is 32-bits-per-pixel, and your data format is 32-bits-per-pixel) is because the driver doesn't have to reorder bytes when it transfers data to the internal texture memory. Why would it store things that way? Because Intel chips are little endian, so if you look at a whole 32-bit-per-pixel pixel in BGRA, then turn it around, you get ARGB. Which tends to be the preferred little endian storage for 32-bpp data.

Also, everything I said above applies only to desktop OpenGL. OpenGL ES differs because:

Third, the reason OpenGL ES does it this way is because OpenGL ES wants to minimize the amount of driver-side pixel transfer conversion. Desktop GL implementations have to be able to cope with any pixel data you want. You could use an internal format of GL_R16, while passing pixel data using a format of RGB, encoded as 3/3/2, and the implementation just has to deal with it, culling the G and B channels while expanding the 3-bits-per-red data to 16-bits.

In GL ES, your pixel data's format and type parameters actually define how OpenGL ES stores the texture. The internal format parameter in this case is irrelevant. Furthermore, ES implementations live in ARM-land, and ARM chips are big endian. There, RGBA is the preferred storage order.

So that's why it's done.

Narann
12-19-2014, 12:17 AM
A big thanks for the very clear answer! :)

First, as you have pointed out, question #1 is irrelevant, since you cannot choose GL_BGRA for your pixel transfer format in OpenGL ES. The option that actually works is by definition faster than the one that refuses to execute. ;)
I forgot to say that I use the GL_EXT_texture_format_BGRA8888 extension, supported by 98% of devices according to the OpenGL ES Hardware Database (http://delphigl.de/glcapsviewer/gles_extensions.php?orderby=by%20coverage%20desc). But I get your idea. :)

Thanks again! :)

Narann
12-20-2014, 10:46 AM
ARM chips are big endian.
After some investigation (https://github.com/mupen64plus/mupen64plus-video-rice/issues/29#issuecomment-67727813) this is not completely true: ARM chips are bi-endian. Worse: GCC's default ARM flags (https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html) target little endian, and it seems the "good practice" when writing low-level cross-platform code is to stay little endian, even on bi-endian processors. :(

So my question remains:

Why does the OpenGL ES 2 spec only provide the RGBA component order for pixel transfer?

I'm very curious to know who chose that, and why. :)

Thanks in advance! :)

Dark Photon
12-20-2014, 04:12 PM
In GL ES, your pixel data's format and type parameters actually define how OpenGL ES stores the texture. The internal format parameter in this case is irrelevant.

Not GL ES, but GL ES2 specifically:


The GL stores the resulting texture with internal component resolutions of its own choosing. The allocation of internal component resolution may vary based on any TexImage2D parameter (except target), but the allocation must not be a function of any other state... Components are then selected from the resulting R, G, B, or A values to obtain a texture with the base internal format specified by internalformat, which must match format; no conversions between formats are supported during texture image processing.


GLES2 is a bit odd in that it only supports unsized internal formats (size hints are taken from format and type instead). GLES3 fixes this, restoring the familiar concept of sized internal formats.


Furthermore, ES implementations live in ARM-land, and ARM chips are big endian. There, RGBA is the preferred storage order.

ES implementations live in ARM-land?

Some do, quite a few don't! PowerVR, Qualcomm, nVidia, etc. Not to even mention all the desktop implementations.

PowerVR for instance prefers BGRA.