PBO + ATI + compression -> exception

I’ve encountered some very strange behavior and I hope someone can help me.

The following code (OpenTK syntax) works on my nVidia 8600 (I have an uncompressed .jpg image (bgr format) in someBuffer) :

int pbo;
GL.GenBuffers(1, out pbo);
GL.BindBuffer(BufferTarget.PixelUnpackBuffer, pbo);
GL.BufferData(BufferTarget.PixelUnpackBuffer, (IntPtr)someBuffer.Length, someBuffer, BufferUsageHint.StaticDraw);

// the culprit:
GL.TexImage2D(TextureTarget.Texture2D,
    0,
    PixelInternalFormat.CompressedRgbS3tcDxt1Ext,
    someWidth,
    someHeight,
    0,
    PixelFormat.Bgr,
    PixelType.UnsignedByte,
    IntPtr.Zero);

The same code crashes on an ATI 4250 (which does support GL_EXT_texture_compression_s3tc) on the last call -> “attempted to read or write protected memory”.

If I change PixelInternalFormat.CompressedRgbS3tcDxt1Ext to PixelInternalFormat.Bgr it works fine (but I want the compression).

If I do not use the PBO it works fine as well (but I would miss the performance gain of using the PBO).

So what could make the specific combination of using a PBO + ATI (OpenGL 3.3) + CompressedRgbS3tcDxt1Ext cause the exception?

Any help is greatly appreciated!

If you want to upload data that is compressed, you have to use glCompressedTex(Sub)Image.

The image is NOT compressed yet. The TexImage2D call compresses it and it works flawlessly on nVidia.

You are using a 3-byte pixel type, and OpenGL has a default value of 4 for GL_UNPACK_ALIGNMENT. Have you ensured your rows are 4-byte aligned, or called:


glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

Otherwise OpenGL is expecting that you have inserted extra bytes at the end of each row to ensure alignment, and hence if you haven’t, it will read beyond the buffer you have provided.

http://www.opengl.org/wiki/Common_Mistakes#Texture_upload_and_pixel_reads

Keeping the default value of 4 for GL_UNPACK_ALIGNMENT will only work for certain widths, namely those for which each row ends up a multiple of 4 bytes.

e.g. when width = 20, each row would be 20 * 3 = 60 bytes, which is a multiple of 4. When width = 30, each row would be 30 * 3 = 90 bytes, which is not a multiple of 4.
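The row-size arithmetic above can be checked with a small sketch (the helper name is mine, not part of any GL binding) that computes the row stride OpenGL will assume for a given GL_UNPACK_ALIGNMENT:

```python
def padded_row_size(width, bytes_per_pixel, alignment):
    """Bytes per row OpenGL expects to read, given GL_UNPACK_ALIGNMENT."""
    row = width * bytes_per_pixel
    # round up to the next multiple of the alignment
    return (row + alignment - 1) // alignment * alignment

print(padded_row_size(20, 3, 4))  # 60: already a multiple of 4, no padding
print(padded_row_size(30, 3, 4))  # 92: 90 bytes rounded up, 2 padding bytes per row
print(padded_row_size(30, 3, 1))  # 90: alignment 1 means tightly packed
```

If your buffer holds tightly packed rows but OpenGL assumes the padded stride, it reads past the end of the buffer on the last row.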

The spec does state that when a pixel unpack buffer is bound and the transfer would read beyond the end of the buffer, an INVALID_OPERATION error is generated.

So if ATI is crashing instead of returning the error, it’s a bug, and if NVidia isn’t returning INVALID_OPERATION, it’s a bug too.

Sorry, I should have mentioned that I of course tried glPixelStorei(GL_UNPACK_ALIGNMENT, 1), but to no avail. The source image IS a power of two (256x256).

Again, it DOES work on nVidia, it DOES work if not using compression, it DOES work when not using PBO.

It’s the combination that fails.

PS How do I put code in a code block (new to the forum)?

I tried reproducing the problem on ATi Mobility Radeon HD 5650 + Catalyst 11.8 drivers in Delphi with:

const
  twidth = 256;
  theight = 256;
var
  PBO: GLuint;
  tex: GLuint;
  data: array[0..theight-1, 0..twidth-1, 0..2] of Byte;
begin
  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

  glGenBuffers(1, @PBO);
  glBindBuffer(GL_PIXEL_UNPACK_BUFFER, PBO);
  glBufferData(GL_PIXEL_UNPACK_BUFFER, twidth*theight*3, @data[0,0,0], GL_STATIC_DRAW);

  glGenTextures(1, @tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT, twidth, theight, 0, GL_BGR, GL_UNSIGNED_BYTE, 0);
end;

This worked ok for me, so apart from the following, I’m not sure what to suggest:

  1. Have you got up to date drivers?
  2. Is it all compressed formats that crash, or do other ones work?
  3. Have you tried a larger than necessary buffer?
  4. Are you modifying any other texture properties before calling glTexImage2D?

Use square brackets [ code ] #### [ /code]

@Dan Bartlett: you seem indeed to be doing exactly what I do. I’ve got no idea what the cause of my error could be.

1. Of course, all drivers etc. are up to date
2. Not a single compressed format works, only RGB (except, again, if I don’t use the PBO, then compressed works fine as well)
3. Yep, still crashes
4. No

Again, it is the very combination that goes wrong:

* On nVidia cards it’s all ok
* Without PBO it’s all ok
* Without compression it’s all ok

So it’s like this:


if (ATI && usingPBO && pixelInternalFormat != RGB)
  Crash();
else
  Allfine();

A quick follow-up: though Windows Update assured me I had the latest ATI drivers installed on the system with the ATI 4250, I installed the latest Catalyst drivers and now the crash is gone.

There is still a problem, however, because now the combination of ATI + S3TC compression (and others like CompressedRgb) + use of a PBO just gives me an uncompressed image!

As before, not using a PBO gives the correct result.

Why oh why does the use of a PBO make ATI refuse to compress the image?

Edit: I’m not the only one who has encountered this problem: http://www.pouet.net/topic.php?which=6270

There is still a problem, however, because now the combination of ATI + S3TC compression (and others like CompressedRgb) + use of a PBO just gives me an uncompressed image!

How do you know it gives you an uncompressed image?

Why oh why does the use of a PBO make ATI refuse to compress the image?

Because quite frankly it is a terrible idea.

The entire purpose of using PBOs with glTex(Sub)Image* is to do asynchronous uploads of pixel data. Asynchronous in this case meaning, “the CPU doesn’t have to get involved.” Your GPU cannot perform S3TC texture compression; only the CPU can do that. So basically, the driver (all on the CPU and therefore not async) has to take your PBO pixel data, compress it, store that into an internal buffer which it uses to actually upload to the GPU.

You are giving OpenGL contradictory commands: upload asynchronously, but also do this CPU-based compression. You will get virtually nothing out of using PBOs by doing this; you may as well just hand glTex(Sub)Image a pointer to client memory.

To get anything from PBOs, you must match the given data format with the internal format you choose. If you want the driver to do format conversion for you, then you lose most of the advantage PBOs might have given you.

The reason that few people have run into this problem is because most people who don’t do proper format matching also aren’t using semi-advanced features like PBOs to do uploading. This is not well-covered ground for actual applications. Games will always match formats, and most non-game applications that are given an image to load will look at the format of the image and pick the OpenGL internal format that matches it.
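The format-matching rule above can be boiled down to a tiny sketch (the helper and format names are mine for illustration, not OpenGL API): a PBO upload only stays asynchronous when the client data already matches the internal format, so the driver can hand the buffer straight to the GPU.

```python
# Illustrative helper (not an OpenGL call): an upload from a PBO stays
# asynchronous only when no CPU-side conversion or compression is needed,
# i.e. when the client data format already matches the internal format.
def upload_stays_async(client_format, internal_format):
    return client_format == internal_format

# The failing case from this thread: raw BGR pixels, DXT1 internal format.
print(upload_stays_async("BGR8", "DXT1_RGB"))      # False: driver must compress on the CPU
# The async-friendly path: compress on the CPU yourself (or offline),
# then upload the ready-made DXT1 blocks with glCompressedTexImage2D.
print(upload_stays_async("DXT1_RGB", "DXT1_RGB"))  # True
```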

How do you know it gives you an uncompressed image?

Because I do

GL.GetTexLevelParameter(TextureTarget.Texture2D, 0, GetTextureParameter.TextureCompressed, out compressed);

and

GL.GetTexLevelParameter(TextureTarget.Texture2D, 0, GetTextureParameter.TextureCompressedImageSize, out compressedImageSize);

These tell me whether the texture was compressed and what the compressed size is. And even though compressed can be 1, I noticed compressedImageSize is equal to the size of the uncompressed image (in the case of ATI).
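For reference, the expected DXT1 size is easy to compute by hand (one 8-byte block per 4x4 texel tile), so it can be compared against what TextureCompressedImageSize reports; a quick sketch:

```python
def dxt1_size(width, height):
    """DXT1/BC1 storage: one 8-byte block per 4x4 texel tile, rounded up."""
    return ((width + 3) // 4) * ((height + 3) // 4) * 8

print(dxt1_size(256, 256))  # 32768 bytes if the texture is really DXT1-compressed
print(256 * 256 * 3)        # 196608 bytes for raw 24-bit BGR, a 6:1 difference
```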

Your comment does make a lot of sense though! I was already wondering why it worked in the first place (using nVidia), but since it did work, I thought the problem was with ATI.

I guess it does make sense to use a PBO if no compression is needed, but if, as you say, CPU-based compression is required, a PBO should not be used.

Thanks again for your reply. It all makes sense now.