Alternate way to store texture data?

My application is intended to be a simple 2D game with textures for rocks, characters, etc.

The way I’m currently handling this texture data is to load the textures into application memory, pack them into giant atlases that are filled up to GL_MAX_TEXTURE_SIZE before creating more atlases, and then upload those atlases to the GPU. When drawing these textures at runtime, no sorting is done to keep texture switches to a minimum: if every draw call uses a texture from a different atlas, it has to glBindTexture/glDraw for every object. Now, I don’t see this being a likely problem in my case, because my textures are small and I can fit hundreds of them in a single atlas on modern graphics cards.
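For what it’s worth, the bookkeeping side of that approach can be sketched like this: each sub-texture keeps its pixel rectangle from pack time, and drawing just remaps that rectangle into normalized atlas coordinates (the struct and function names here are illustrative, not from my actual code):

```cpp
// Pixel-space rectangle of one sub-texture inside an atlas.
struct AtlasRect { int x, y, w, h; };

// Normalized UV rectangle, ready to go into vertex data.
struct UVRect { float u0, v0, u1, v1; };

// Map a sub-texture's pixel rect to normalized atlas coordinates.
UVRect atlasUVs(const AtlasRect& r, int atlasW, int atlasH) {
    return {
        static_cast<float>(r.x)       / atlasW,
        static_cast<float>(r.y)       / atlasH,
        static_cast<float>(r.x + r.w) / atlasW,
        static_cast<float>(r.y + r.h) / atlasH,
    };
}
```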

Now, if these texture switches were a problem for me, how could I minimize or eliminate them without doing any draw ordering? Assuming all of the texture data is the same format, would a texture array of atlases be the ideal solution, or are there other alternatives?

There are a number of ways to optimize this. First, combine the textures into texture arrays (I’d drop atlases, if you do any texture filtering), and organize your scene batches such that objects using the same texture arrays (and other GL state) are discovered together. This gives you a presort for free and saves you from having to sort after (or in the process of) discovering the batches. You could also look at bindless textures, which free you from having to bind textures at all.

(I’d drop atlases, if you do any texture filtering)

Because of the texture bleeding? That shouldn’t be an issue using the method described here. Are there other issues involved?

organize your scene batches such that objects using the same texture arrays (and other GL state) are discovered together. This gives you a presort for free and saves you from having to sort after (or in the process of) discovering the batches.

Is that really a reasonable way of handling it? It seems overly complex to set myself up to always have to organize the batches so as to get the most out of each draw call for each texture array. The presort is free for the computer, but it costs me a lot of time during development.

Note: my understanding is that texture arrays can only hold textures of the same dimensions. So for example all my 16x16 textures would go in one array, all my 32x64 textures would go in another, etc.
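To make that concrete, here’s a rough sketch (illustrative names, no GL calls) of what I mean: textures get bucketed by their dimensions, each bucket would become one texture array, and each texture is addressed by its size bucket plus a layer index within it:

```cpp
#include <map>
#include <utility>

// A texture's address: which size bucket (= which array) it lives in,
// and which layer it occupies within that array.
struct TextureAddress {
    std::pair<int, int> size;  // (width, height) bucket
    int layer;                 // layer index within the array
};

class ArrayPacker {
    // (width, height) -> number of layers already assigned.
    std::map<std::pair<int, int>, int> layerCount_;
public:
    // Register a texture of the given dimensions; returns its address.
    TextureAddress add(int w, int h) {
        int layer = layerCount_[{w, h}]++;  // next free layer in this bucket
        return { {w, h}, layer };
    }
};
```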

As long as the filtering on atlases isn’t an issue, I can’t find a reason that a texture array of atlases isn’t the better solution. It wouldn’t require any texture switches, everything can be drawn in 1 draw call, and I don’t have to worry about organizing my batches.

Please correct me if I’m absolutely wrong here.

One thing the atlas approach can’t do reasonably for you is mipmapping. At some stage you’re going to want to use mipmaps, and then you’ll definitely need to consider alternatives.

Sorting need not be overly difficult. The simplest approach is to use a qsort (or std::sort, or whatever your favourite sort API is) on an array of drawable objects. A second approach is to arrange your drawables as a set of linked lists starting from the texture that each uses.
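As a minimal sketch of the first approach (the struct members here are stand-ins, not anything from your code): sort the drawables by the texture they use, so that consecutive draws share a binding.

```cpp
#include <algorithm>
#include <vector>

struct Drawable {
    unsigned textureId;  // the key we batch by
    int      spriteId;   // stand-in for the rest of the per-object data
};

// Group drawables by texture so consecutive draws share a binding.
void sortByTexture(std::vector<Drawable>& items) {
    std::sort(items.begin(), items.end(),
              [](const Drawable& a, const Drawable& b) {
                  return a.textureId < b.textureId;
              });
}
```

If draw order matters to you, std::stable_sort is a drop-in replacement that preserves the submission order within each texture group.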

One thing you’ll find is that using smaller textures is friendlier for your GPU’s caches too. If an extremely large texture needs to be swapped in, and objects drawn with effectively random access to that texture, it’s not going to be as fast as more coherent access to a smaller texture.

a texture array of atlases

Why an array of atlases? You remove the beneficial properties of an array if you fill it with atlases again. How many textures are you working with? Does the amount surpass the number of supported layers per array on the target hardware?

I can’t find a reason that a texture array of atlases isn’t the better solution. It wouldn’t require any texture switches, everything can be drawn in 1 draw call, and I don’t have to worry about organizing my batches.

Ask it the other way round: what does a texture atlas offer you that a texture array doesn’t? Do you have many textures of significantly different dimensions, formats, and so on that would lead to wasting a lot of memory or necessitate switching between an unreasonable number of arrays?

BTW, if you really want the best bang for the buck, there’s also the so-called sparse/bindless texture approach.

In general: for reducing overhead check out this most valuable presentation from GDC14.

[QUOTE=mhagain;1259365]One thing the atlas approach can’t do reasonably for you is mipmapping. At some stage you’re going to want to use mipmaps, and then you’ll definitely need to consider alternatives.

Sorting need not be overly difficult. The simplest approach is to use a qsort (or std::sort, or whatever your favourite sort API is) on an array of drawable objects. A second approach is to arrange your drawables as a set of linked lists starting from the texture that each uses.

One thing you’ll find is that using smaller textures is friendlier for your GPU’s caches too. If an extremely large texture needs to be swapped in, and objects drawn with effectively random access to that texture, it’s not going to be as fast as more coherent access to a smaller texture.[/QUOTE]

Hadn’t thought about mipmapping; that is a downside, though. If I needed to scale the textures, I just planned on doing it without any generated mipmaps.

I don’t think I can easily sort in my case here. I’m not depth testing, so the order things are batched is the order they’re drawn to the screen; sorting would mess that all up. Using linked lists for drawables probably won’t be cache-friendly at all. I could try it, but I think I’d see a considerable performance hit.

[QUOTE=thokra;1259366]Why an array of atlases? You remove the beneficial properties of an array if you fill it with atlases again. How many textures are you working with? Does the amount surpass the number of supported layers per array on the target hardware?

Ask it the other way round: what does a texture atlas offer you that a texture array doesn’t? Do you have many textures of significantly different dimensions, formats, and so on that would lead to wasting a lot of memory or necessitate switching between an unreasonable number of arrays?

BTW, if you really want the best bang for the buck, there’s also the so-called sparse/bindless texture approach.

In general: for reducing overhead check out this most valuable presentation from GDC14.[/QUOTE]

I think a lot of the textures I’m using can be grouped into a small number of texture arrays, but even though that’s true now, it could change later on, and I’d rather not have to worry about it.

I’ve read that presentation and watched the video on it; it all looks awesome. You’re the second person here, along with Dark Photon, to recommend bindless textures, so obviously I should be looking into it. I’m new to OpenGL and reading the docs: is this telling me bindless textures are only supported in OpenGL 4.0 and higher? If so, should I be concerned about the lack of support for OpenGL 4.0 on a lot of graphics cards? Or is that not as big of an issue as I think?

Actually, bindless textures are not part of OpenGL 4.4. They are supported only by NVIDIA Kepler and Maxwell GPUs, and they require a hardware change to be supported (plus NV would have to license them to other vendors :wink: ).

So, I really don’t see any reason not to stick to texture arrays and live happily ever after. There are at least 80 texture units (96 for NV) and at least 64 layers in each texture array (512 for NV). That looks like a pretty big number for any of your needs: at least 5 thousand textures for any vendor, or more than 49 thousand for NV. You’ll probably run out of memory sooner than you’ll spend all available texture addresses.
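(The arithmetic behind those figures, for anyone checking: the upper bound on addressable textures is simply the unit count times the layers per array.)

```cpp
// Upper bound on simultaneously addressable textures:
// texture units x layers per bound array.
constexpr long long textureCapacity(long long units, long long layersPerArray) {
    return units * layersPerArray;
}
```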

I’m sorry for the misinformation. I’ve just found that bindless textures are supported on Radeon R7/R9 with Catalyst 14.4.

So bindless textures really are only available on bleeding-edge hardware. Why does thokra talk as if they’re available for me to use right now…

BTW, if you really want the best bang for the buck, there’s also the so-called sparse/bindless texture approach.

In general: for reducing overhead check out this most valuable presentation from GDC14.

So frustrating.

I’ve been reading around Google and found that a lot of people ditch atlases in favor of texture arrays, so I guess that’s the best option at the moment.

I didn’t say anything about availability. You also never stated which GL version or hardware you are targeting.