Looking for Batch Rendering Resources

Howdy!

So basically I’m a crazy person who wants to make a 2D game with just base OpenGL and C. I have a not-terrible understanding of everything up to making a draw call on a single object. I’m at the point where I can draw whatever I want, but it’d be with single draw calls, swapping entire textures, and binding everything every time I call draw, and obviously that’s not gonna work for long, so I want to skip past it into making a long-term solution. I’m manually loading OpenGL functions and pulling supported features from the graphics card, have my program set up to draw in sRGB if it’s supported, have basic transparency and depth testing, and have basic matrices, transformations, and an understanding of what I’m doing up to this point. So here’s the list of things I’m looking to do and having trouble finding a good resource for, in order of least to most important:

1. Automatically pack textures into an atlas
2. Automatically get the texture coords for an individual part of that atlas / automatically convert literal pixel coordinates to 0-1 texture coords
3. Have a way to tell my program what part of the atlas to draw on each triangle
4. Actually efficiently swap between the textures w/o re-binding them
5. Actually draw things with a single draw call
6. Have a way to do all that in an efficient way, such as -> Draw(sprite, spriteLocation, spriteOrigin, spriteSize, (etc)); (see the sketch after this list)
7. …in as few actual draw calls as possible
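Just to illustrate #6, here’s roughly the shape of the API I’m imagining — every name in this sketch is made up, none of it exists yet:

    /* hypothetical sprite-batch API, just to show the shape I want */
    typedef struct { int x, y, w, h; } AtlasRegion;   /* region in atlas pixels  */
    typedef struct { AtlasRegion region; } Sprite;
    typedef struct { float x, y; } v2;

    /* queue a sprite for this frame; no GL calls happen here */
    void Draw(Sprite sprite, v2 spriteLocation, v2 spriteOrigin, v2 spriteSize);

    /* once per frame: upload the queued vertices, issue as few draw calls as possible */
    void Flush(void);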

So I guess I’m looking to be pointed in the right direction, a guide, specific opengl documentation, even keywords to search up and learn about, that can get me to where I’m looking to be.
I greatly appreciate any help :)

Well, it’s your call. If you want to dig deeper just for the learning value, go for it! Otherwise, you might make sure you have a problem first before you go trying to fix a problem you might not have.

> 1. Automatically pack textures into an atlas
> 2. Automatically get the texture coords for an individual part of that atlas / automatically convert literal pixel coordinates to 0-1 texture coords
> 3. Have a way to tell my program what part of the atlas to draw on each triangle
> 4. Actually efficiently swap between the textures w/o re-binding them
> 5. Actually draw things with a single draw call
> 6. Have a way to do all that in an efficient way, such as -> Draw(sprite, spriteLocation, spriteOrigin, spriteSize, (etc));
> 7. …in as few actual draw calls as possible

For #1-#4, texture arrays or bindless textures are the more current methods to batch across multiple textures. These get around many of the limitations and headaches of texture atlases.

With texture arrays, you just pass in a slice index as a 3rd component of the texcoord. With bindless textures, you provide a list of bindless texture handles you can look up into.
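In shader terms, a rough sketch of the two lookups — the bindless half assumes ARB_bindless_texture, and all the names here are illustrative:

    #version 450
    #extension GL_ARB_bindless_texture : require

    in vec3 TexCoord;      // .z carries the array slice
    flat in int MatIndex;  // must be dynamically uniform per the bindless spec (e.g. derived from gl_DrawID)
    out vec4 FragColor;

    uniform sampler2DArray Atlas;                             // texture-array route
    layout (std140) uniform Handles { sampler2D Tex[256]; };  // bindless route: 64-bit handles in a UBO

    void main()
    {
        FragColor = texture(Atlas, TexCoord);        // slice picked by TexCoord.z
        // or, bindless:
        // FragColor = texture(Tex[MatIndex], TexCoord.xy);
    }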

But as you realize (#5-#7), changing textures is just one reason why you might otherwise have to split a draw call. NVidia has had some pretty useful presentations over the years on improving batching and reducing state changes. Some of it is geared toward NVidia extensions (e.g. bindless buffers, NV_command_list, etc.). However, much of it is cross-vendor applicable (e.g. MultiDrawIndirect, bindless texture, persistent buffer maps, etc.). Much of this you’ll find by web-searching “NVidia AZDO” (Approaching Zero Driver Overhead). The “OpenGL like Vulkan” presentation (also from NVidia) has a good hit-list to look through as well.
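To give you a taste of the MultiDrawIndirect piece, a minimal sketch using only core GL 4.3 calls (on the shader side you’d pair it with gl_DrawID, which needs GL 4.6 or ARB_shader_draw_parameters, to fetch per-draw data):

    /* one GPU-resident command per sub-draw; field order is fixed by the spec */
    typedef struct {
        GLuint count;          /* vertices in this draw            */
        GLuint instanceCount;  /* usually 1                        */
        GLuint first;          /* offset into the shared VBO       */
        GLuint baseInstance;   /* handy per-draw integer           */
    } DrawArraysIndirectCommand;

    DrawArraysIndirectCommand cmds[2] = {
        { 6, 1, 0, 0 },        /* quad 0 */
        { 6, 1, 6, 1 },        /* quad 1 */
    };

    GLuint cmdBuf;
    glGenBuffers(1, &cmdBuf);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmdBuf);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmds), cmds, GL_STATIC_DRAW);

    /* both draws issued with a single call; stride 0 = tightly packed */
    glMultiDrawArraysIndirect(GL_TRIANGLES, 0, 2, 0);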



I appreciate the feedback, thanks :)

OK, so upon trying to implement texture arrays by swapping from GL_TEXTURE_2D to GL_TEXTURE_2D_ARRAY and binding it, I’m running into an issue where it seems like I’m not properly binding the texture; the shader just pulls solid black from the texture array. I’m not sure what I’m doing wrong, so I’ll just dump what I’ve got, if anyone can point out what I’m doing incorrectly.

This runs a single time, before the loop:



    GLuint arrayTexture;
    glGenTextures(1, &arrayTexture);

    Texture leaf1 = LoadTexture("leaf1.png");
    Texture leaf2 = LoadTexture("leaf2.png");
    Texture leaf3 = LoadTexture("leaf3.png");
    Texture leaf4 = LoadTexture("leaf4.png");

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, openGLDefaultInternalTextureFormat, leaf1.width, leaf1.height, 1);

    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    if (leaf1.pixels)
    {
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                        0,                            // number of mipmaps
                        0, 0, 0,                      // x y & z offset
                        leaf1.width, leaf1.height, 1, // texture width height and depth
                        openGLDefaultInternalTextureFormat, // internal texture format
                        GL_UNSIGNED_BYTE,             // type
                        &leaf1.pixels);               // pointer to actual texture
        stbi_image_free(leaf1.pixels);
    }
    else
    {
        LogError("GLTemp() failed to load image");
    }

    glUniform1i(glGetUniformLocation(ID, "Sampler"), arrayTexture);
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);

My draw loop:



    glUniform1i(glGetUniformLocation(ID, "Sampler"), arrayTexture);
    glUniformMatrix4fv(glGetUniformLocation(ID, "view"), 1, GL_FALSE, &view.Elements[0][0]);
    glUniformMatrix4fv(glGetUniformLocation(ID, "projection"), 1, GL_FALSE, &proj.Elements[0][0]);

    // for some reason a scale of 8 is one to one
    s32 onetoonescaling = 8;

    for (s32 i = 0; i < 1; ++i)
    {
        hmm_mat4 model =
            HMM_Translate(HMM_Vec3(0, 0, -1)) *
            HMM_Rotate(0, HMM_Vec3(0, 0, 0)) *
            HMM_Scale(HMM_Vec3(onetoonescaling*2, onetoonescaling*2, onetoonescaling*2));

        glUniformMatrix4fv(glGetUniformLocation(ID, "model"), 1, GL_FALSE, &model.Elements[0][0]);

        glUseProgram(ID);
        glDrawArrays(GL_TRIANGLES, 0, 6);
    }


My fragment shader:

#version 330 core
out vec4 FragColor;

in vec2 TexCoord;

//uniform sampler2D ourTexture;

uniform sampler2DArray Sampler;

void main()
{
    FragColor = texture(Sampler, vec3(TexCoord.xy, 0));

    //FragColor = texture(ourTexture, TexCoord);
}

Where’s your glGenTextures() call? I expected to see this before the first glBindTexture() for a texture.

Are you checking for GL errors?

For your glTexSubImage3D() call, the 2nd parameter is a MIPmap level number, not a number of MIPmaps. And the format parameter (3rd from last) isn’t an “internal format” (like GL_RGBA8) but rather an “external format” (e.g. GL_RGBA).

For your glUniform1i() call, the value you should pass in here isn’t the texture handle number, but rather the texture unit number to which it is bound (e.g. 0 in this case).
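In other words, something like this for the sampler plumbing (a minimal sketch, assuming your program object is ID):

    glActiveTexture(GL_TEXTURE0);                         /* select texture unit 0                 */
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);     /* attach the array texture to that unit */
    glUseProgram(ID);                                     /* glUniform* acts on the bound program  */
    glUniform1i(glGetUniformLocation(ID, "Sampler"), 0);  /* 0 = unit index, not the texture name  */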

I added the line to my snippet; I was doing it, I just missed grabbing it by accident.
However, running glGetError() does return error 1280 immediately after that part, which appears to be GL_INVALID_ENUM.

I moved it around a bit to see where it was picking up; it appears to be from:

    glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,                            // number of mipmaps
                    0, 0, 0,                      // x y & z offset
                    leaf1.width, leaf1.height, 1, // texture width height and depth
                    openGLDefaultInternalTextureFormat, // internal texture format
                    GL_UNSIGNED_BYTE,             // type
                    &leaf1.pixels);               // pointer to actual texture

0 - I am not looking for mipmaps
0, 0, 0 - no offset (maybe I’m confused about these)
leaf1.width - width of the texture in pixels, 8
leaf1.height - height of the texture in pixels, 8
openGLDefaultInternalTextureFormat - which is going to be either GL_RGBA8 or GL_SRGB8_ALPHA8, depending on whether the system has sRGB support
GL_UNSIGNED_BYTE
&leaf1.pixels - pointer to the actual pixel data of the texture

Edit:

Changing the format to GL_RGBA fixes the error, but I’m still only drawing a black square.

My guess is your “openGLDefaultInternalTextureFormat” isn’t a valid external format. This is one of the issues I flagged above.

I’m fairly sure that & in &leaf1.pixels shouldn’t be there. You’re passing a pointer to a pointer.
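Putting the fixes from this thread together, the upload would look something like this (still assuming leaf1 holds tightly-packed RGBA bytes from stb_image):

    glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,                            /* MIPmap level, not a count    */
                    0, 0, 0,                      /* x, y offsets and layer index */
                    leaf1.width, leaf1.height, 1, /* one layer's worth of pixels  */
                    GL_RGBA,                      /* external (pixel data) format */
                    GL_UNSIGNED_BYTE,             /* type                         */
                    leaf1.pixels);                /* the pointer itself, no &     */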

I found the solution to the issue in this thread: https://www.opengl.org/discussion_boards/showthread.php/199544-texture-arrays-in-3-3
It appears I misunderstood; I should be using glTexImage3D instead of glTexStorage3D, at least for what I’m trying to do. Now it’s time to stress test this. See you in a couple million triangles. :)

So, for people who find this in the future and are looking to do the same thing:

Do this part when you load textures (a fleshed-out sketch follows the outline):

// load textures
glGenTextures();
glBindTexture();
glTexImage3D();
glTexSubImage3D();
glTexParameteri();
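Fleshed out, that load step looks something like this — a sketch assuming four same-sized RGBA images already decoded into a pixels[] array with a given width and height (glTexStorage3D needs GL 4.2 / ARB_texture_storage, which is why glTexImage3D is the 3.3-friendly route):

    enum { LAYER_COUNT = 4 };

    GLuint arrayTexture;
    glGenTextures(1, &arrayTexture);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);

    /* allocate every layer up front; the internal format goes here */
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 width, height, LAYER_COUNT, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* copy each image into its own layer; external format + pixel pointer go here */
    for (int layer = 0; layer < LAYER_COUNT; ++layer)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layer,        /* x, y offsets and destination layer */
                        width, height, 1,   /* one layer's worth of pixels        */
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);

    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);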

Make sure the texture array is bound and you have the right layer being sent to your shader when you draw:

glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTexture);
glUniform1i(glGetUniformLocation(ID, "Sampler"), 0);
  

And in your shader, it should look something like this:

#version 330 core
out vec4 FragColor;

in vec2 TexCoord;

uniform sampler2DArray Sampler;

void main()
{
    FragColor = texture(Sampler, vec3(TexCoord.xy, 0));
}
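And if you don’t want layer 0 hardcoded, pass the layer in from your vertex data instead. A sketch of the vertex-shader side (the attribute location is just an example); the fragment shader would then take `in vec3 TexCoord` and sample `texture(Sampler, TexCoord)`:

    #version 330 core
    layout (location = 0) in vec3 aPos;
    layout (location = 1) in vec2 aTexCoord;
    layout (location = 2) in float aLayer; // per-vertex array layer

    out vec3 TexCoord;

    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;

    void main()
    {
        gl_Position = projection * view * model * vec4(aPos, 1.0);
        TexCoord = vec3(aTexCoord, aLayer); // layer rides in the 3rd component
    }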