Texture arrays in OpenGL 3.3

I’m having trouble getting texture arrays to work in my scene. I’ve found a few examples, like this, but they’re all using later OpenGL versions, so I haven’t been able to learn much from them.

My code is based on this, except I’m using glTexImage3D in place of glTexStorage3D. It doesn’t give me any error messages, but it just displays black.

To generate the textures:

	int twidth, theight;
	unsigned char* image1 = SOIL_load_image("awesomeface.png", &twidth, &theight, 0, SOIL_LOAD_RGB);
	unsigned char* image2 = SOIL_load_image("container.jpg", &twidth, &theight, 0, SOIL_LOAD_RGB);

	glGenTextures(1, &textures);
	glBindTexture(GL_TEXTURE_2D_ARRAY, textures);
	glTexImage3D(GL_TEXTURE_2D_ARRAY,
		0,                // level
		GL_RGBA8,         // Internal format
		twidth, theight, 1, // width,height,depth
		0,                // border (must be 0)
		GL_RGBA,          // format
		GL_UNSIGNED_BYTE, // type
		0);               // pointer to data
	glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, twidth, theight, 2, GL_RGBA, GL_UNSIGNED_BYTE, image1);
	glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, twidth, theight, 2, GL_RGBA, GL_UNSIGNED_BYTE, image2);

	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Passing the texture to the shaders:

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D_ARRAY, textures);
	glUniform1i(glGetUniformLocation(simulationObjectsShaderProgram, "textures"), 0);

And my fragment shader. I’m just hardcoding things for now; I’ll worry about getting the right texture coordinates once I can actually load the texture.

#version 330 core
in vec3 vertexColor;
in vec3 TexCoord;

out vec4 color;
uniform sampler2DArray textures;

void main()
{
    //color = vec4(vertexColor, 1.0f);
    color = texture(textures, vec3(.5, .5, 0.0));
}

Any help is much appreciated!

A few questions:

  1. Do you know that the width and height of your two textures are the same?
  2. Do you know that they are loading successfully (e.g., image1 and image2 are not NULL)?
  3. If you have two 2D images, why are you allocating a texture array with only 1 slice?
  4. In your 2D image subloads into the 2D texture array, why are you providing a depth (width in the slice dimension) of 2? Each 2D image is 1 slice deep, right?
  5. Why aren’t you checking for GL errors? I suspect you’re causing GL to throw some.
  1. Yes. Both are 512 x 512.
  2. I’m pretty sure they’re loading successfully. They aren’t null, and I’ve also used those same lines successfully in an earlier iteration, before I started looking into texture arrays.
  3. (and 4.) I know this isn’t a good answer, but if I change either of those parameters I get an exception: “Access violation reading location” and “nvoglv32.pdb not loaded”.
  5. Probably a dumb question, but how do I check for GL errors?

That leads me to suspect that any valid glTexSubImage* call will generate an exception, and you’re basically relying upon OpenGL rejecting the call in order to avoid the exception.

Given that you created the texture with a depth of 1, the second call is invalid due to the zoffset of 1, and both calls are invalid due to the depth of 2.

[QUOTE=Sunny_Lime;1286882]
5) Probably a dumb question, but how do I check for GL errors?[/QUOTE]
glGetError().

Related links from the OpenGL Wiki:

When you want to know something, the OpenGL Wiki is a good place to start searching. It’s not the authoritative source (the OpenGL spec is), but for most folks it’s easier to read. It also tends to give more contextual information to help you make sense of why some part of OpenGL works the way it does. Just be aware that its most thorough coverage is for Modern OpenGL. Much of Legacy OpenGL (e.g. deprecated APIs) isn’t covered.

I appreciate your patience. So here’s where I’m at right now:

	int twidth, theight;
	unsigned char* image1 = SOIL_load_image("awesomeface.png", &twidth, &theight, 0, SOIL_LOAD_RGB);
	unsigned char* image2 = SOIL_load_image("container.jpg", &twidth, &theight, 0, SOIL_LOAD_RGB);

	glGenTextures(1, &textures);
	glBindTexture(GL_TEXTURE_2D_ARRAY, textures);
	glTexImage3D(GL_TEXTURE_2D_ARRAY,
		0,                // level
		GL_RGBA8,         // Internal format
		twidth, theight, 2, // width,height,depth
		0,                // border (must be 0)
		GL_RGBA,          // format
		GL_UNSIGNED_BYTE, // type
		0);               // pointer to data

	glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, twidth, theight, 0, GL_RGBA, GL_UNSIGNED_BYTE, image1);
	glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, twidth, theight, 0, GL_RGBA, GL_UNSIGNED_BYTE, image2);

	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

	GLenum errorCode;
	while ((errorCode = glGetError()) != GL_NO_ERROR)
	{
		std::string error;
		switch (errorCode)
		{
		case GL_INVALID_ENUM:                  error = "INVALID_ENUM"; break;
		case GL_INVALID_VALUE:                 error = "INVALID_VALUE"; break;
		case GL_INVALID_OPERATION:             error = "INVALID_OPERATION"; break;
		case GL_STACK_OVERFLOW:                error = "STACK_OVERFLOW"; break;
		case GL_STACK_UNDERFLOW:               error = "STACK_UNDERFLOW"; break;
		case GL_OUT_OF_MEMORY:                 error = "OUT_OF_MEMORY"; break;
		case GL_INVALID_FRAMEBUFFER_OPERATION: error = "INVALID_FRAMEBUFFER_OPERATION"; break;
		}
		std::cout << error << std::endl;
	}

It does not throw any errors, but still just gives me a black screen. I had thought that the depth parameters for the glTexSubImage3D should be 1 rather than 0, but that throws the same “Access violation reading location” as previously. The wiki says “The texels referenced by data replace the portion of the existing texture array with x indices xoffset and xoffset+width−1, inclusive, y indices yoffset and yoffset+height−1, inclusive, and z indices zoffset and zoffset+depth−1, inclusive. This region may not include any texels outside the range of the texture array as it was originally specified”.

My best guess as to what that means is that my zoffset + depth - 1 = 1 is out of range. But out of range of what? It’s not the depth parameter in glTexImage3D.
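As I understand the wiki’s rule, the check would be something like this (my own sketch of the bounds test, not actual GL code; the helper name is made up):

```cpp
#include <cassert>

// Sketch of the bounds test the wiki paragraph describes: a
// glTexSubImage3D update is only valid if the updated slice range
// [zoffset, zoffset + depth - 1] lies entirely inside the range
// allocated by glTexImage3D. (Illustration only; OpenGL performs
// this check internally and raises GL_INVALID_VALUE when it fails.)
bool subimage_in_range(int zoffset, int depth, int allocated_depth)
{
    return zoffset >= 0 && depth >= 0 && zoffset + depth <= allocated_depth;
}
```

Note that a depth of 0 passes this check but updates zero slices, which would also leave the texture black.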

Anytime your program crashes, you need to learn to debug the crash.

Read the documentation for your debugger, and learn how to examine your image1, image2 pointers.
For example, you should be able to examine all twidth * theight * 4 bytes of image1. If that doesn’t work in the debugger, then it CAN’T work in OpenGL either, because your image loader really didn’t allocate what you think it did.

For instance: you’ve asked for SOIL_LOAD_RGB but then told OpenGL the data is RGBA, so you really only have 75% of the bytes you think you do.
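The arithmetic, as a quick sketch (the helper name is mine, just for illustration):

```cpp
#include <cstddef>

// Bytes in a tightly packed image: width * height * bytes-per-pixel.
// SOIL_LOAD_RGB allocates 3 bytes per pixel, but a GL_RGBA upload
// makes OpenGL read 4 bytes per pixel from the same buffer.
std::size_t image_bytes(std::size_t width, std::size_t height,
                        std::size_t bytes_per_pixel)
{
    return width * height * bytes_per_pixel;
}
```

For a 512 x 512 image that’s 786,432 bytes allocated versus 1,048,576 bytes read, so OpenGL runs 262,144 bytes past the end of the allocation.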

See also previous thread.

Your fragment shader also has to sample at varying texture coordinates (otherwise the sampled color would be constant everywhere):

#version 330 core
in vec3 vertexColor;
in vec3 TexCoord;
 
out vec4 color;
uniform sampler2DArray textures;
 
void main()
{
    int layer = 0; /* or 1 */
    color = texture(textures, vec3(TexCoord.xy, layer));
}

then you have to decide whether you want an RGB or an RGBA texture array:

if RGB, then use SOIL_LOAD_RGB as the parameter in the SOIL function
and choose GL_RGB8 as the internal format, and GL_RGB as the format in glTexSubImage3D(…)

if RGBA, then use SOIL_LOAD_RGBA as the parameter in the SOIL function
and choose GL_RGBA8 as the internal format, and GL_RGBA as the format in glTexSubImage3D(…)

finally, choose a texture unit, bind the texture there, and leave it bound:


GLuint texture_unit = 4; /* for example */

glActiveTexture(GL_TEXTURE0 + texture_unit);
glBindTexture(GL_TEXTURE_2D_ARRAY, texture);

GLint location = glGetUniformLocation(program, "textures");
glUseProgram(program);
glUniform1i(location, texture_unit);
glUseProgram(0);

Just adding to what everyone else is saying here.

For a texture array in non-DSA OpenGL:

void glTexSubImage3D(GLenum target,    // GL_TEXTURE_2D_ARRAY
    GLint level,                    // mipmap level to load
    GLint xoffset,                    // offset in an existing texture to load to
    GLint yoffset,                    // offset in an existing texture to load to
    GLint zoffset,                    // first array slice to load to
    GLsizei width,                    // width of data to be loaded
    GLsizei height,                    // height of data to be loaded
    GLsizei depth,                    // number of array slices to load
    GLenum format,                    // format of data to be loaded
    GLenum type,                    // type of data to be loaded
    const GLvoid * pixels)            // pointer to data to be loaded

The format and type parameters must match the data pointer you supply; these tell OpenGL how to interpret and unpack the data, so if they mismatch you’ll either get a bad texture or a crash. You’re loading textures as RGB via SOIL but telling OpenGL that you’re supplying it with RGBA data. OpenGL will take you at your word, assume that the data is RGBA, and attempt to read past the ends of buffers. Either tell SOIL to load as RGBA, or tell OpenGL that you’re supplying it with RGB data; whichever is appropriate for your program, but just ensure that they match.

You appear to be confused about the zoffset and depth parameters.

In your glTexImage3D call, depth is the number of array slices to create. 2 in your case and you’re getting that part correct.

In your glTexSubImage3D calls, zoffset is the first array slice to update and depth is the number of array slices to be updated.

For your example, updating a 2 slice array via 2 glTexSubImage3D calls, where each call updates 1 slice, the parameters are:

  • First glTexSubImage3D call uses zoffset 0, depth 1.
  • Second glTexSubImage3D call uses zoffset 1, depth 1.
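The same pattern, generalized to any number of slices, might be sketched like this (the struct and helper names are mine, just to tabulate the parameters; they’re not part of any API):

```cpp
#include <vector>

// Parameters for one glTexSubImage3D call when each 2D image fills
// exactly one slice of the array.
struct SliceUpload {
    int zoffset;  // first array slice to update
    int depth;    // number of slices this call updates (always 1 here)
};

// One call per slice: zoffset walks 0 .. num_slices-1, depth stays 1.
std::vector<SliceUpload> per_slice_uploads(int num_slices)
{
    std::vector<SliceUpload> calls;
    for (int i = 0; i < num_slices; ++i)
        calls.push_back({i, 1});
    return calls;
}
```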

Ok, it’s working now. Thanks for the help and the explanations everyone.