On volume texture atlases

We all know the advantages of texture atlases and bigger batch sizes.
I’ve always liked the idea of storing multiple 2D textures in a 3D texture, but, as confirmed in an NV whitepaper, mipmapping would screw everything up. By contrast, mipmapping is not a real problem with 2D atlases.

Now, I’ve already seen games with 1Kx1K textures, and since ATi cards handle only up to 2Kx2K, this is bad news for me: a 2D atlas would hold only four such textures. NV is somewhat better.
The real sticking point is the fragment math I would need to implement to remap the texcoords. While I hardly believe it would introduce a serious performance drop, I don’t like it.
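To show what I mean, here is a minimal sketch of the remap, assuming a 2Kx2K atlas holding four 1Kx1K tiles (the `atlas` and `tileOffset` names are made up for the example):

```glsl
// Per-fragment remap into a 2D atlas: a 2Kx2K atlas holding four
// 1Kx1K tiles, so one tile covers half the atlas along each axis.
uniform sampler2D atlas;
uniform vec2 tileOffset;           // tile's lower-left corner, e.g. (0.5, 0.0)
const vec2 tileScale = vec2(0.5);  // tile extent in atlas space

void main()
{
    // Emulate GL_REPEAT inside the tile, then squeeze the result
    // into the tile's sub-rectangle of the atlas.
    vec2 uv = fract(gl_TexCoord[0].st) * tileScale + tileOffset;
    gl_FragColor = texture2D(atlas, uv);
}
```

In practice the tiles would also need some border padding so the coarser mip levels don’t bleed across neighbouring tiles.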

The more I think about it, the more I convince myself it would be great if 3D textures could be mipmapped on (s,t) only, leaving p as a “texture slice selector”.
Is there any chance someone is evaluating this feature? What implications would it have for hardware? (I.e., does the hardware really care if the mipmap levels are not resized “in the standard fashion”?)
This could sound strange, but after ARB_npot and the rumors about conditional_render, it doesn’t seem that strange to me.

I realize the maximum 3D texture size is 512³ on most video cards, but I believe this is a pixel limit (so I believe I could also have 1024x1024x128, 2Kx2Kx32, and so on).

Does anyone have experience with these topics?
I would appreciate some feedback, if only to help me make up my mind.

I tried D3D9 once and, if I remember correctly, it was possible to set filtering modes per axis, meaning you could have linear filtering on the x and y axes and nearest filtering on the z axis.

I was wondering why OpenGL doesn’t support this.

So, if we are lucky, hardware already supports it and we only need an extension for the API.

Jan.

Originally posted by Jan:
I tried D3D9 once and, if I remember correctly, it was possible to set filtering modes per axis, meaning you could have linear filtering on the x and y axes and nearest filtering on the z axis.

I was wondering why OpenGL doesn’t support this.

So, if we are lucky, hardware already supports it and we only need an extension for the API.

Jan.
This wouldn’t help, since the mipmap of the volume texture still contains only half the number of slices: the first mip level of a 256x256x16 volume is 128x128x8, so pairs of adjacent slices get averaged together.

/A.B.

Originally posted by Jan:
So, if we are lucky, hardware already supports it and we only need an extension for the API.
That would be my biggest wish right now.

Originally posted by brinck:
This wouldn’t help, since the mipmap of the volume texture still contains only half the number of slices.
This is exactly the problem: when the texture is mipmapped along p, pairs of adjacent slices get blended together.

Thank you anyway for replying. I fear I’ll have to go for 2D atlases (ack).
Thinking about them a bit, I guess they won’t be that much harder anyway, although I’ll need some more complex per-fragment math. Luckily, this is not a problem yet.

Why not do the mipmapping yourself, setting up your algorithm so that the slices don’t blend together and the depth doesn’t change? It doesn’t sound terribly difficult to me. It basically amounts to computing the mipmaps for each 2D texture by itself and then copying them into the 3D volume texture.

-SirKnight

The problem is that mip levels have to be half the size along every axis: level k of a WxHxD volume must be max(1, W/2^k) x max(1, H/2^k) x max(1, D/2^k), so a chain that keeps the depth constant isn’t a legal mipmap…

Yeah, GL might not like my idea. :) I’m guessing it would check each mip level you load to see whether you’re submitting the correct sizes. Too bad.

What about doing some kind of LOD-based filtering in a fragment program?

-SirKnight

Originally posted by Jan:
I tried D3D9 once and, if I remember correctly, it was possible to set filtering modes per axis, meaning you could have linear filtering on the x and y axes and nearest filtering on the z axis.
I’m pretty sure you can’t.

Originally posted by SirKnight:
What about doing some kind of LOD-based filtering in a fragment program?
This is another idea that had crossed my mind, and I’m rather happy you pointed out the possibility. I’m not very used to this kind of processing; I should reread that book on texture synthesis.
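Just to check that I understand the idea, here is a rough sketch of what I imagine, assuming each mip level of the 2D images is stored in its own 3D texture with the full slice count (only two levels are shown, and `level0`, `level1` and `baseSize` are made-up names; both samplers would use plain GL_LINEAR filtering, no mipmaps):

```glsl
// Manual LOD selection in (s,t) only: every level keeps all N
// slices, so nothing is ever blended along the slice axis.
uniform sampler3D level0;   // e.g. 1024 x 1024 x N
uniform sampler3D level1;   // e.g.  512 x  512 x N (depth NOT halved)
uniform float baseSize;     // level-0 resolution, e.g. 1024.0

void main()
{
    vec2  st    = gl_TexCoord[0].st;
    float slice = gl_TexCoord[0].p;  // must address a slice centre,
                                     // (i + 0.5) / N, so the linear
                                     // filter can't bleed between slices

    // Standard isotropic LOD estimate, in level-0 texels.
    vec2 dx = dFdx(st) * baseSize;
    vec2 dy = dFdy(st) * baseSize;
    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

    // Manual "trilinear" blend between the two nearest levels.
    vec4 c0 = texture3D(level0, vec3(st, slice));
    vec4 c1 = texture3D(level1, vec3(st, slice));
    gl_FragColor = mix(c0, c1, clamp(lod, 0.0, 1.0));
}
```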

By the way, the problem with this approach is that it would surely be slower. This kind of thing is usually done for procedural shaders, where the added complexity is balanced by the reduced bandwidth requirements. I fear that doing it on a texture which also has to be fetched would cost more than it gains… I think.
Do you think this could really work nicely?