RGB or BGR for textures?



zed
12-07-2004, 07:51 PM
I've been using BGR(A) as the base internal format for my textures since the dawn of time. Now, checking the spec, it doesn't seem to mention this as a valid base internal format (though it works), although it does mention it being used for draw/readpixels. Quite why I switched over to BGR from RGB I'm not exactly sure. Are there any benefits to using BGR? I seem to recall it's the native format for Windows, so there might be performance benefits. Is this correct?

cheers zed

Marc
12-07-2004, 11:06 PM
Most images are stored in BGR format, so it is simpler to load an image and use it as a texture directly. For RGB you would have to swap the R and B bytes. The BGR(A) format is a quite old extension, if I remember right. It's so old that even the Microsoft OpenGL implementation supports it.
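
A minimal sketch of that swap, assuming tightly packed 24-bit pixels (the function name and packing are my own choice):

#include <stddef.h>

/* Swap the R and B bytes of a packed 24-bit BGR image in place,
   turning it into RGB (the operation is symmetric, so it also
   converts RGB back to BGR). */
void swap_red_blue_24(unsigned char *pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i)
    {
        unsigned char tmp = pixels[i*3 + 0];
        pixels[i*3 + 0]   = pixels[i*3 + 2];
        pixels[i*3 + 2]   = tmp;
    }
}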

Marc

knackered
12-08-2004, 01:12 AM
BGR is not an internal format; it's a format enum that tells the driver what layout your source image data is in, not how the driver should store it in the texture. It is the native format for Win32 bitmaps, so if you're uploading Win32 image data you should use the BGR enum.
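
As a sketch of that distinction (width, height and pixels stand in for your own image data):

/* The internal format (GL_RGBA8) asks the driver for an 8-bit-per-channel
   RGBA texture; format/type (GL_BGRA / GL_UNSIGNED_BYTE) only describe
   the layout of the source data you are handing it. */
glTexImage2D(GL_TEXTURE_2D,
             0,                /* mip level */
             GL_RGBA8,         /* internal format: what the driver stores */
             width, height,
             0,                /* border */
             GL_BGRA,          /* external format: what you supply */
             GL_UNSIGNED_BYTE, /* type of each component */
             pixels);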

yjh1982
12-08-2004, 04:00 PM
Using BGR lets textures load more quickly on the Win32 platform, but it only affects load speed.

Korval
12-08-2004, 05:51 PM
It's not just Win32 machines. Any little-endian system (i.e. non-Macs) will benefit from it. It has more to do with hardware than software.

knackered
12-09-2004, 12:05 AM
It's nothing to do with the endian of the hardware.

V-man
12-09-2004, 06:59 AM
Originally posted by knackered:
It's nothing to do with the endian of the hardware.

I think so. On Windows, the decision to swap red and blue stems from some compatibility issue.
They wanted to make DIBs "device independent".

At least that's what a very old MSDN document said.

Somehow, a few image formats chose the swapped red-and-blue approach.

Note that a DIB memory layout is 0xXXRRGGBB, while the RGBA format means 0xXXBBGGRR. Nothing to do with endianness.
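
To make the layout point concrete, a small sketch (little-endian machine assumed) showing that a pixel stored byte-by-byte as B,G,R,A reads back as a 0xAARRGGBB integer:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* one pixel in memory, byte order B,G,R,A */
    uint8_t bgra[4] = { 0x33, 0x66, 0x99, 0xFF };
    uint32_t as_int;
    memcpy(&as_int, bgra, 4);

    /* on a little-endian machine this prints 0xFF996633,
       i.e. the 0xAARRGGBB layout of a Win32 DIB */
    printf("0x%08X\n", (unsigned)as_int);
    return 0;
}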

Upload your textures as GL_BGRA, which tends to be faster. I think textures are commonly stored this way.

Also, some processors are switchable between little and big endian. PowerPC is one of them.

CatAtWork
12-09-2004, 09:23 AM
Does uploading from a BGR format really speed things up?

arekkusu
12-09-2004, 01:16 PM
GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV is the fastest upload format on Macintosh.

jra101
12-09-2004, 01:27 PM
Originally posted by CatAtWork:
Does uploading from a BGR format really speed things up?

On NVIDIA GPUs, yes.

zed
12-09-2004, 10:33 PM
Ran a few tests on my GFFX 5900. intf == internal format. NOTE: when it says RGBA12 or something, I most likely get given RGBA8, etc. Also, the slight discrepancies (~0.1%) in the timings are due to the timer.

Now, BGRA is faster than RGBA, BUT something must be wrong: RGB with packed pixels is giving faster results than BGR.
So, is my code flawed? (I can't see how.)

glTexSubImage2D Mpix/sec( 624.2) size( 128) intf(GL_RGBA8 ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 623.5) size( 128) intf(GL_RGBA12 ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 623.5) size( 128) intf(GL_RGBA16 ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 622.8) size( 128) intf(GL_RGBA ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 622.8) size( 128) intf(GL_RGBA8 ) format(GL_BGRA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 622.8) size( 128) intf(GL_RGB10_A2 ) format(GL_BGRA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 622.8) size( 128) intf(GL_RGB10_A2 ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 622.1) size( 128) intf(GL_RGBA16 ) format(GL_BGRA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 619.9) size( 128) intf(GL_RGBA ) format(GL_BGRA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 619.9) size( 128) intf(GL_RGBA12 ) format(GL_BGRA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 246.2) size( 128) intf(GL_RGBA12 ) format(GL_BGRA ) type(GL_UNSIGNED_INT_8_8_8_8)
..

glTexSubImage2D Mpix/sec( 390.1) size( 128) intf(GL_RGBA16 ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8)
glTexSubImage2D Mpix/sec( 388.5) size( 128) intf(GL_RGBA ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8)
glTexSubImage2D Mpix/sec( 387.3) size( 128) intf(GL_RGB10_A2 ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8)
glTexSubImage2D Mpix/sec( 386.8) size( 128) intf(GL_RGBA12 ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8)
glTexSubImage2D Mpix/sec( 385.1) size( 128) intf(GL_RGBA8 ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8)
glTexSubImage2D Mpix/sec( 272.0) size( 128) intf(GL_RGBA16 ) format(GL_RGBA ) type(GL_UNSIGNED_INT_8_8_8_8_REV)
glTexSubImage2D Mpix/sec( 271.7) size( 128) intf(GL_RGBA16 ) format(GL_RGBA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 271.7) size( 128) intf(GL_RGBA12 ) format(GL_RGBA ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 271.1) size( 128) intf(GL_RGB10_A2 ) format(GL_RGBA ) type(GL_UNSIGNED_BYTE)
..

glTexSubImage2D Mpix/sec( 384.6) size( 128) intf(GL_RGB10 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 384.6) size( 128) intf(GL_RGB ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 382.9) size( 128) intf(GL_RGB8 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 381.8) size( 128) intf(GL_RGB12 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 380.2) size( 128) intf(GL_RGB16 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 219.3) size( 128) intf(GL_RGB5 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 217.9) size( 128) intf(GL_R3_G3_B2 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 215.4) size( 128) intf(GL_RGB4 ) format(GL_BGR ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 41.2) size( 128) intf(GL_RGB4 ) format(GL_BGR ) type(GL_UNSIGNED_INT )
..

glTexSubImage2D Mpix/sec(1819.9) size( 128) intf(GL_RGB ) format(GL_RGB ) type(GL_UNSIGNED_SHORT_5_6_5)
glTexSubImage2D Mpix/sec(1819.9) size( 128) intf(GL_RGB4 ) format(GL_RGB ) type(GL_UNSIGNED_SHORT_5_6_5)
glTexSubImage2D Mpix/sec(1819.9) size( 128) intf(GL_R3_G3_B2 ) format(GL_RGB ) type(GL_UNSIGNED_SHORT_5_6_5)
glTexSubImage2D Mpix/sec(1813.7) size( 128) intf(GL_RGB5 ) format(GL_RGB ) type(GL_UNSIGNED_SHORT_5_6_5)
glTexSubImage2D Mpix/sec( 411.1) size( 128) intf(GL_RGB8 ) format(GL_RGB ) type(GL_UNSIGNED_BYTE)
glTexSubImage2D Mpix/sec( 411.1) size( 128) intf(GL_RGB ) format(GL_RGB ) type(GL_UNSIGNED_BYTE)
..
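
For reference, numbers like these come from a timing loop of roughly this shape (a sketch only; clock() is used for brevity, and a size x size texture is assumed to be bound already):

#include <time.h>
#include <GL/gl.h>

/* Rough upload throughput in Mpix/sec for one format/type combination. */
double measure_upload_mpix_per_sec(GLenum format, GLenum type,
                                   const void *pixels, int size, int iterations)
{
    clock_t t0 = clock();
    for (int i = 0; i < iterations; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, size, size,
                        format, type, pixels);
    glFinish(); /* make sure all uploads have actually completed */
    clock_t t1 = clock();

    double seconds = (double)(t1 - t0) / CLOCKS_PER_SEC;
    return (double)size * size * iterations / seconds / 1.0e6;
}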

SeskaPeel
12-10-2004, 07:43 AM
Ahem ... Zed ... do you mean uploading using BGR is twice as fast as RGB? Can this possibly be true?

SeskaPeel.

zed
12-10-2004, 03:40 PM
I changed this demo as well:
http://www.adrian.lark.btinternet.co.uk/GLBench.htm

It backs up my results: RGB is faster than BGR, but according to everyone here this shouldn't be true. So what's going on?

Zengar
12-10-2004, 10:25 PM
Video cards tend to implement BGR textures in hardware (since Windows machines are the most popular ones). It looks like they implement RGB, but that's only faked. Actually, all OpenGL drivers are full of hacks ;)

jwatte
12-11-2004, 08:30 AM
To answer the original question:

The only formats that are valid for "internal format" are 1, 2, 3, 4 (for compatibility), GL_RGB, GL_RGBA, GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, and the various depth, bump, compression etc. extensions, plus their sized (GL_RGB8 and friends) versions.

GL_BGR/GL_BGRA are not valid as internal formats -- you should specify GL_RGB/GL_RGBA.

However, this doesn't matter, because the component ordering of the internal format (what's stored on the card) is not visible to you through the API. All "GL_RGB" means for internal format is that the card, internally, uses one of each of the R, G and B channels.

Meanwhile, for external formats, as everyone has said, GL_BGRA/GL_UNSIGNED_BYTE, or GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV, is usually the format that hardware optimizes the most for. This is because most image file formats typically store pixels in BGRA format in memory (so that, on a little-endian machine, you can write them as 0xaarrggbb as an integer).

zed
12-11-2004, 09:29 AM
Originally posted by jwatte:
GL_BGR/GL_BGRA are not valid as internal formats -- you should specify GL_RGB/GL_RGBA.

Sorry about the confusion in my first post; somehow the word 'internal' slipped in there. I meant base format.
Though the question still remains: why is RGB (base format) faster than BGR? Change the above demo and see (it takes 2 minutes).

V-man
12-11-2004, 03:42 PM
IV. Texturing
25. How can I maximize texture downloading performance?
Best RGB/RGBA texture image formats/types in order of performance:

Image Format   Image Type                      Texture Internal Format
GL_RGB         GL_UNSIGNED_SHORT_5_6_5         GL_RGB
GL_BGRA        GL_UNSIGNED_SHORT_1_5_5_5_REV   GL_RGBA
GL_BGRA        GL_UNSIGNED_SHORT_4_4_4_4_REV   GL_RGBA
GL_BGRA        GL_UNSIGNED_INT_8_8_8_8_REV     GL_RGBA
GL_RGBA        GL_UNSIGNED_INT_8_8_8_8         GL_RGBA

Bear in mind that the NVIDIA GPUs store all 24-bit texels in 32-bit entries, so try using the
spare alpha channel for something worthwhile, or it will just be wasted space. Moreover, 32-bit
texels can be downloaded at twice the speed of 24-bit texels. Single or dual component texture
formats such as GL_LUMINANCE, GL_ALPHA and GL_LUMINANCE_ALPHA are also very
effective, as well as space efficient, particularly when they are blended with a constant color (e.g.
grass, sky, etc.). Most importantly, always use glTexSubImage2D instead of glTexImage2D
(and glCopyTexSubImage2D instead of glCopyTexImage2D) when updating texture images.
The former call avoids any memory freeing or allocation, while the latter call may be required to
reallocate its texture buffer for the newly defined texture.
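
A sketch of the allocate-once, update-many pattern the FAQ recommends (the texture object tex, the 256x256 size and new_pixels are illustrative):

/* Allocate the storage once; passing NULL defines size and
   internal format without uploading any data. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);

/* Each update, refresh the contents in place; no reallocation. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, new_pixels);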

Cab
12-14-2004, 10:34 AM
In http://developer.nvidia.com/object/nv_ogl_texture_formats.html

there is an interesting list of texture formats supported by NVIDIA hw.
I'm surprised that, except for the GF6200, the RGB8 format is stored as RGBA8...

Obli
12-14-2004, 01:36 PM
I'm quite surprised there's still a difference between RGB and BGR.
Well, since I'm used to RGB I won't change my mind now, but has someone measured upload performance for real? Having RGB upload at half the speed of BGR is definitely beyond my expectations. It's just a swap, after all.

zeckensack
12-15-2004, 08:20 AM
Originally posted by zed:
Though the question still remains: why is RGB (base format) faster than BGR? Change the above demo and see (it takes 2 minutes).

AFAICS that's only the case with an "R5G6B5" internal format. This doesn't exist as an enumerant, but RGB5 (which the plain GL_RGB internal format may be promoted to if you don't request anything else) is a close enough match.

To be sure of what you're actually measuring, you should query the internal component resolution of the texture (after loading the image data).

#include <stdio.h>
#include <GL/gl.h>

void
print_texture_2d_components()
{
    int red_bits=0;
    int green_bits=0;
    int blue_bits=0;
    int alpha_bits=0;
    int luma_bits=0;
    int intensity_bits=0;
    int depth_bits=0;

    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_RED_SIZE,&red_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_GREEN_SIZE,&green_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_BLUE_SIZE,&blue_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_ALPHA_SIZE,&alpha_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_LUMINANCE_SIZE,&luma_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_INTENSITY_SIZE,&intensity_bits);

    //requires ARB_depth_texture
    glGetTexLevelParameteriv(GL_TEXTURE_2D,0,GL_TEXTURE_DEPTH_SIZE,&depth_bits);

    //print only the components that are actually present, e.g. "R5G6B5"
    if (red_bits)       printf("R%d",red_bits);
    if (green_bits)     printf("G%d",green_bits);
    if (blue_bits)      printf("B%d",blue_bits);
    if (alpha_bits)     printf("A%d",alpha_bits);
    if (luma_bits)      printf("L%d",luma_bits);
    if (intensity_bits) printf("I%d",intensity_bits);
    if (depth_bits)     printf("D%d",depth_bits);

    printf("\n");
}
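
Called right after loading a texture, it shows what the driver actually allocated (the texture name and image data here are placeholders):

glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 128, 128, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
print_texture_2d_components(); //might print e.g. "R5G6B5" or "R8G8B8"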

CatAtWork
12-15-2004, 08:59 AM
This doesn't return the internal precision substitution indicated by NVIDIA's documents.

NitroGL
12-15-2004, 10:49 AM
Originally posted by Cab:
In http://developer.nvidia.com/object/nv_ogl_texture_formats.html

there is an interesting list of texture formats supported by NVIDIA hw.
I'm surprised that, except for the GF6200, the RGB8 format is stored as RGBA8...

I'm surprised that they don't actually support RGB(A)16 formats; those are really useful! At least ATI supports them.