Loading bitmap with SOIL then convert to int[][]

I know how to load a bitmap using the SOIL library and have textured a quad with it, but now I want to load a bitmap and get the color value stored at each pixel of the 2D texture so that I can use this data. I can't seem to find a good example of extracting pixel data from a GL_TEXTURE_2D using OpenGL 4.


Texture texture = Texture();

texture = texture.loadTexture(filename);

glBindTexture(GL_TEXTURE_2D, texture.getTextureID());
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);


How do I store this into pixels[][] for any width and height?

If you want the image data in client memory, you’re better off using SOIL_load_image() to get the data, then (if necessary) uploading it to a texture yourself.

The main case for using glGetTexImage() is if the texture was generated by the video hardware (e.g. render to texture or glGenerateMipmap() etc) (you can also use glReadPixels() to read from a texture which is attached to a framebuffer).

In order to use glGetTexImage(), you first have to know the texture dimensions so that you can allocate a suitable buffer (a pointer to which is passed as the last parameter). These can be obtained using glGetTexLevelParameteriv() with GL_TEXTURE_WIDTH and GL_TEXTURE_HEIGHT.
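Putting those pieces together, a readback might look like the sketch below. This is a hypothetical helper (the function name and signature are mine, not from the thread), and it assumes a current GL context and that `tex` names a complete 2D texture:

```cpp
#include <GL/gl.h>
#include <vector>

// Query the texture's dimensions, then read its level 0 back as tightly
// packed RGB bytes into client memory.
std::vector<unsigned char> readTextureRGB(GLuint tex, int &width, int &height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);

    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows tightly packed, no padding
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
```

Note the GL_PACK_ALIGNMENT of 1; the default of 4 would pad each row of a GL_RGB image whose width*3 isn't a multiple of 4.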

Well, I was doing this:


Texture Texture::loadTexture(char* filename)
{
	/* load an image file directly as a new OpenGL texture */
	setTextureID(SOIL_load_OGL_texture
					(
					filename,
					SOIL_LOAD_AUTO,
					SOIL_CREATE_NEW_ID,
					SOIL_FLAG_INVERT_Y
					));

	return *this;
}

Thanks, but from your response I still don't understand where I access the color values of each pixel. I need them physically in an int array, not a pointer to a texture stored on the driver. I'm just reading in a 64x64 .bmp, so attaching the texture to a framebuffer and reading it back with glReadPixels() sounds excessive. Also, I won't be using the texture at all later; I just need to read the pixel values once and be done with it. Looking for a simple method.

If you use SOIL_load_image(), it returns a pointer to the pixel data (as opposed to SOIL_load_OGL_texture(), which creates a texture from it and returns the texture name).

Otherwise, you need to allocate an array and use glGetTexImage() to read the pixel data into it.
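For the SOIL_load_image() route, the usual pattern is roughly the following (a sketch, assuming the common `SOIL/SOIL.h` header path; adjust the include to your install):

```cpp
#include <SOIL/SOIL.h>

int width = 0, height = 0, channels = 0;
unsigned char *pixels = SOIL_load_image("level1.bmp", &width, &height,
                                        &channels, SOIL_LOAD_RGB);
if (pixels) {
    // pixels now holds width*height*3 bytes of RGB data in client memory
    // ... read whatever you need ...
    SOIL_free_image_data(pixels);   // SOIL owns the allocation
}
```

No texture object is ever created here, which matches your "read once and be done with it" use case.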

How do I use this pointer to the pixel data though?


Bitmap::Bitmap(char* filename)
{
	unsigned char* pixels_ = SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);

	for (int i = 0; i < width; i++)
	{
		for (int j = 0; j < height; j++)
		{
			if (pixels_[i + j * width] == 0)
			{
				//Do Something
			}
		}
	}
}

Returns a pointer to an array of 1 with '\0'. The file path is valid too.

An RGB image will have 3 bytes per pixel, so the innermost loop might start with e.g.


    unsigned char r = pixels_[(i + j * width) * 3 + 0];
    unsigned char g = pixels_[(i + j * width) * 3 + 1];
    unsigned char b = pixels_[(i + j * width) * 3 + 2];

Also, unless you specifically need to operate column-by-column rather than row-by-row, the order of the loops would normally be reversed, so that you’re scanning through memory sequentially.
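Combining both points, a scan over the image might look like this sketch (the function name and the "count black pixels" body are just placeholders for whatever you do with r, g, b):

```cpp
#include <cassert>
#include <vector>

// Scan an RGB image row-by-row (cache-friendly order), counting pure-black
// pixels. Layout assumed: 3 bytes per pixel, row-major, as SOIL_load_image
// returns with SOIL_LOAD_RGB.
int countBlackPixels(const unsigned char *pixels_, int width, int height)
{
    int count = 0;
    for (int y = 0; y < height; y++)        // outer loop over rows
        for (int x = 0; x < width; x++) {   // inner loop over columns
            const unsigned char *p = &pixels_[(y * width + x) * 3];
            if (p[0] == 0 && p[1] == 0 && p[2] == 0)
                count++;
        }
    return count;
}
```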

That still doesn’t help me. I know how to operate on the data. I need to get the data.


level = new Bitmap("level1.bmp");

...

Bitmap::Bitmap(char* filename)

{
	unsigned char* pixels_ = SOIL_load_image(filename, &width, &height, 0, SOIL_LOAD_RGB);

	for (int i = 0; i < width; i++)
	{
		for (int j = 0; j < height; j++)
		{
			if (pixels_[i + j * width] == 1)
			{
				//Do Something With Data
				unsigned char r = pixels_[(i + j * width) * 3 + 0];
				unsigned char g = pixels_[(i + j * width) * 3 + 1];
				unsigned char b = pixels_[(i + j * width) * 3 + 2];

				//...
			}
		}
	}
}

...


returns the following:

width = 64
height = 64

pixels_ = ‘\0’

The file is in the same folder as the project. What do?

[QUOTE=ParagonArcade;1262857]returns the following:

width = 64
height = 64

pixels_ = ‘\0’
[/QUOTE]
The code should work fine (other than that “if” statement, which doesn’t make sense). If you’re examining pixels_ with a debugger, the debugger won’t know how much data it points to, so it won’t show the entire array.

If the image is 64x64 and you’re forcing it to RGB, it will have the same layout as


unsigned char pixels[64][64][3];
unsigned char *pixels_ = &pixels[0][0][0];

If you know that the image will always have that size, you could cast the pointer, e.g.:


unsigned char (*pixels)[64][3] = (unsigned char (*)[64][3]) pixels_;
int y, x;
for (y = 0; y < 64; y++) {
    for (x = 0; x < 64; x++) {
        unsigned char r = pixels[y][x][0];
        unsigned char g = pixels[y][x][1];
        unsigned char b = pixels[y][x][2];
        // etc
    }
}

But this won’t work if you need to cope with arbitrarily-sized images, because C array access requires that the strides are fixed at compile time.
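For arbitrary dimensions you just keep the offset arithmetic explicit. A tiny wrapper (hypothetical, not part of SOIL or GL) keeps it in one place:

```cpp
#include <cassert>

// Non-owning view over an interleaved image in client memory.
// Offset of pixel (x, y) = (y * width + x) * channels.
struct ImageView {
    const unsigned char *data;
    int width, height, channels;

    const unsigned char *pixel(int x, int y) const {
        return &data[(y * width + x) * channels];
    }
};
```

With this, `view.pixel(x, y)[0..2]` gives r, g, b for any image size without casting to a fixed-size array type.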