Problem adding a 2d texture array (GL 4.0)

I’m having a really hard time adding a 2D texture array to my OpenGL code: my shader won’t output anything except black. I’ve looked through various tutorials and many threads, but I simply can’t see what’s wrong with what I’m doing.

Here I’m creating the texture:

    prog->setUniformValue("matTextures", 0);

    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &textures);
    glBindTexture(GL_TEXTURE_2D_ARRAY, textures);

    // texCount + 1 to leave room for the white fallback texture
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA,
                 firstImg.width, firstImg.height, texCount + 1,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);


    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, firstSampler.magFilter);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, firstSampler.minFilter);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, firstSampler.wrapS);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, firstSampler.wrapT);

    // fallback texture at layer 0: solid opaque white
    std::vector<unsigned char> fallBackTexData;
    for (int row = 0; row < firstImg.height; row++) {
        for (int column = 0; column < firstImg.width; column++) {
            fallBackTexData.push_back(255);   // R
            fallBackTexData.push_back(255);   // G
            fallBackTexData.push_back(255);   // B
            fallBackTexData.push_back(255);   // A
        }
    }
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                    firstImg.width, firstImg.height, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, &fallBackTexData[0]);

    for (int texture = 0; texture < texCount; texture++) {

        ...

        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, texture + 1,
                        img.width, img.height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, &img.image[0]);
    }

And then I render with:

    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, 3 * num_tris, index_type, (void*)index_offset);
    glBindVertexArray(0);

My shader code:


//== PROGRAM LINK STATUS = TRUE
//== PROGRAM VALIDATE STATUS = TRUE

//======================================================
//   Vertex Shader 5 
//======================================================

//== SHADER COMPILE STATUS = TRUE
#version 400
#define lowp
#define mediump
#define highp
#line 1

in vec3 pos;
in vec3 vertnormal;
in vec2 UV;

out vec3 vPos;
out vec3 normal;
out vec2 texCoord;

uniform mat3 normalMat;
uniform mat4 m,v,p;

void main(void)
{
    vPos = vec3(v * m * vec4(pos, 1.0));
    vec3 norm = normalize(normalMat * vertnormal);

    normal = norm;
    texCoord = UV;
    gl_Position = p * vec4(vPos, 1.0);
}

//======================================================
//   Fragment Shader 6 
//======================================================

//== SHADER COMPILE STATUS = TRUE
#version 400
#define lowp
#define mediump
#define highp
#line 1
struct Material
{
    float diffuseFactor[4];
    int   diffuseTexture;
    float specularFactor[3];
    int   specularTexture;
    float shininessFactor;
    int   shininessTexture;
};

out vec4 frag;

in vec3 vPos;
in vec3 normal;
in vec2 texCoord;

uniform mat4 m,v,p;
uniform vec3 lightPos;
uniform float lightInt;

uniform int materialIndex;

layout (std140) uniform MaterialBlock {
    Material materials[256];
};
uniform sampler2DArray matTextures;

void main(void)
{

    Material mat = materials[materialIndex];

    vec3 spec = vec3(mat.specularFactor[0], mat.specularFactor[1], mat.specularFactor[2]);
    vec4 diff = vec4(mat.diffuseFactor[0], mat.diffuseFactor[1], mat.diffuseFactor[2], mat.diffuseFactor[3]);

    vec3 vLightPos = vec3(p * v * m * vec4(lightPos, 1.0));
    vec3 lightDir = normalize(vLightPos - vPos);
    vec3 reflection = reflect(-lightDir,normal);
    vec3 viewDir = normalize(-vPos);

    vec4 kd = diff * texture(matTextures, vec3(texCoord,mat.diffuseTexture));
    vec4 dPart = kd * lightInt * max(dot(normal, lightDir), 0.0);

    vec4 ka = 0.1 * kd;
    vec4 aPart = ka * lightInt;

    vec4 ks = vec4(spec,1.0) * texture(matTextures, vec3(texCoord, mat.specularTexture));
    float n = mat.shininessFactor * texture(matTextures, vec3(texCoord, mat.shininessTexture)).x;
    vec4 sPart = ks * lightInt * pow(max(dot(reflection, viewDir), 0.0), n);

    vec4 col = aPart + dPart + sPart;
    frag = col;
}

I tried outputting my UVs as the color and they seem fine, and the texture indices seem fine too. The model simply renders all black. I checked the GL logs with GLIntercept and it throws no errors at all. I’m kind of out of ideas. This is my first time working with OpenGL at all, so please go easy on me :smiley:

Thanks in advance

Debug this how you’d debug anything: simplify, simplify, simplify … until it works (or misbehaves differently).

For instance, try bypassing your entire fragment shader by putting this as the last line:

    frag = texture(matTextures, vec3(0.0, 0.0, 0.0));

Also, I’d suggest you use a sized internal format (GL_RGBA8) for the texture allocation. And it looks like you may be allocating one too many slices.
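For example, a minimal sketch of that allocation with a sized format, keeping your existing variables:

    // GL_RGBA8 instead of GL_RGBA: explicitly 8 bits per channel.
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 firstImg.width, firstImg.height, texCount + 1,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);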

Sorry if I didn’t make this clear, but I had already debugged everything I could think of and traced the error down to the texture binding.

However, I already found the error! I used tinygltf to load the data, and also the texture filter parameters, and somehow those values were not correct. At first glance in the GLIntercept log the values looked fine, but when I changed the min and mag filters to GL_NEAREST and GL_LINEAR it worked out fine.
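For reference, a minimal sketch of the explicit filter state that made it work (assuming the same bound texture as above):

    // Explicit, known-good filter state instead of the values loaded
    // from the glTF sampler (which didn't work here).
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);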

Thanks for your tips, though. The documentation for glTexImage3D states that using GL_RGBA is fine for the internal format. Is there any difference when using the sized format GL_RGBA8?

[QUOTE=mourthag;1292946]… also the texture filter parameters, and somehow those values were not correct.
At first glance in the GLIntercept log the values looked fine, but when I changed the min and mag filters to GL_NEAREST and GL_LINEAR it worked out fine.[/QUOTE]

Yes, this is a common mistake. The default min filter is GL_LINEAR_MIPMAP_LINEAR (except for rectangle textures), and if you don’t provide mipmap images (used for fast minification) for the texture, the texture is considered incomplete. Sampling an incomplete texture returns black, which is exactly the all-black rendering you saw.
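So either provide the mipmap chain or use a min filter that only reads the base level. A sketch of both options:

    // Option 1: keep a mipmapping min filter, but actually create the
    // mipmap levels (call this after uploading the base-level data).
    glGenerateMipmap(GL_TEXTURE_2D_ARRAY);

    // Option 2: switch to a min filter that doesn't need mipmaps.
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);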

[QUOTE=mourthag;1292946]The documentation for glTexImage3D states that using GL_RGBA is fine for the internal format. Is there any difference when using the sized format GL_RGBA8?[/QUOTE]

The latter just tells the GPU exactly what format you want, while the former leaves it guessing.

Newer texture storage allocation APIs (e.g. glTexStorage3D) require the sized texture format.
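For instance, a sketch of the equivalent allocation with glTexStorage3D (GL 4.2+ or ARB_texture_storage), using the dimensions from your glTexImage3D call:

    // Immutable storage: one mip level, sized format required.
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8,
                   firstImg.width, firstImg.height, texCount + 1);
    // Uploads still go through glTexSubImage3D exactly as before.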

Yes, I just wasn’t aware that the default filter is a mipmap one.

[QUOTE=Dark Photon;1292949]
The latter just tells the GPU exactly what format you want, while the former leaves it guessing.[/QUOTE]

But is there any downside to simply using the unsized formats? Why did this change in the new APIs?

Well, if you use glTexStorage*(), specifying an unsized format is an error.
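For example, this fails (the dimensions here are arbitrary):

    // Unsized internal format with immutable storage is rejected.
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA, 64, 64, 4);
    GLenum err = glGetError();   // err == GL_INVALID_ENUM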

Are you saying that it genuinely wouldn’t be a problem if the texture only had one bit per component? Because when you specify an unsized format, that’s what you’re saying.

Code using unsized formats in modern OpenGL is essentially lying, and the API shouldn’t encourage that. The unsized formats are retained for compatibility with code written before sized internal formats existed, but that clearly doesn’t apply to code using glTexStorage*.
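If you’re curious what the implementation actually chose for an unsized request, you can query it after allocation, e.g.:

    // Ask what the driver really picked for level 0 of the bound texture.
    GLint fmt = 0, redBits = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D_ARRAY, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &fmt);
    glGetTexLevelParameteriv(GL_TEXTURE_2D_ARRAY, 0,
                             GL_TEXTURE_RED_SIZE, &redBits);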