2D Texture Array vs. 3D Texture



Betrayal
01-30-2015, 11:40 AM
Hi,

this has been bothering me for a while and it's difficult to find a clear answer.

What are the differences between a 2D texture array and a 3D texture?
(Similar question for 1D texture array and 2D texture)

What's possible with one type that isn't possible with the other and vice versa?
What are the pros and cons of either type?

Thank you.

Alfonse Reinheart
01-30-2015, 12:49 PM
They have no similarities at all. At least, conceptually speaking. A 2D array texture is a 2D texture where each mipmap level contains an array of 2D images. A 3D texture is a texture where each mipmap level contains a single three-dimensional image.

The purpose of a 2D array texture is to be able to have multiple 2D textures in a single object. That way, the shader itself can select which image to use for each particular object. Texture coordinates mean the same thing as 2D textures: representing a location in 2D space. It's just that you also have a selector that says which 2D image in the array to use.

3D textures are all about sampling from within a volume of data. That is, you have texture coordinates which are three-dimensional in nature, perhaps representing a position within the texture's volumetric space.

The two texture types do not contend with one another. That is, there's no problem that you could solve with a 2D array texture that could also be solved with a 3D texture (or at least, not without having to dodge some form of visual artifact). And vice-versa; if you need a 3D texture, it's in a problem space where a 2D array texture would not be appropriate.

Oh sure, internally in the hardware there are only a few differences between them. Specifically, lower mipmap levels of an array texture keep the same depth (layer count) while the widths and heights shrink, and filtering for arrays is always nearest for the Z component. But as far as OpenGL itself (and the use of either texture type) is concerned, they're two different things which solve two completely distinct sets of problems.

So there should never be a case where you're trying to select between one or the other.
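To make the distinction concrete, here is a minimal GLSL sketch (the uniform and variable names are made up for illustration). Sampling a 2D array texture takes two normalized coordinates plus an un-normalized layer index; sampling a 3D texture takes three normalized coordinates into a volume:

```glsl
#version 330 core

uniform sampler2DArray spriteArray; // N separate 2D images
uniform sampler3D      volume;      // one solid block of texels

in vec3 vTexCoord;
out vec4 fragColor;

void main()
{
    // Array: s,t are normalized [0,1]; the third component is a
    // layer *index* (0, 1, 2, ...), never filtered between layers.
    vec4 a = texture(spriteArray, vec3(vTexCoord.st, 2.0));

    // 3D: all three components are normalized [0,1] positions
    // inside the volume; filtering can interpolate along all axes.
    vec4 b = texture(volume, vTexCoord);

    fragColor = a + b;
}
```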

arekkusu
01-30-2015, 01:08 PM
Look at Microsoft's pretty pictures (https://msdn.microsoft.com/en-us/library/windows/desktop/ff476906(v=vs.85).aspx) to understand the mipmap layout differences.

Look at Nvidia's sample code (http://docs.nvidia.com/gameworks/content/gameworkslibrary/graphicssamples/opengl_samples/texturearrayterrainsample.htm) to understand the applications.

Betrayal
01-30-2015, 01:41 PM
Hello Alfonse,




The purpose of a 2D array texture is to be able to have multiple 2D textures in a single object. That way, the shader itself can select which image to use for each particular object. Texture coordinates mean the same thing as 2D textures: representing a location in 2D space. It's just that you also have a selector that says which 2D image in the array to use.

Okay, so let's take this code snippet:


#extension GL_EXT_texture_array : enable

uniform sampler2DArray arrayImage;
in vec3 texCoords;
out vec4 vFragColor;

void main()
{
    vFragColor = texture2DArray(arrayImage, texCoords.stp);
}

So the third coordinate for an array texture is actually an index into the array?
Is this third texture coordinate (a float value) rounded to an integer to explicitly select one item in the array?
If I understand you correctly, there is also no linear (or whatever) filtering between two texture-array layers, right? I mean, what happens when the third coordinate is between two texture array elements: is it then just rounded down or up, without any filtering between layers?
By the way, is there any source online where texture2DArray and other texel fetch functions are listed and explained? Sounds stupid, but it's really hard to find detailed information.



3D textures are all about sampling from within a volume of data. That is, you have texture coordinates which are three-dimensional in nature, perhaps representing a position within the texture's volumetric space.

Alright. But with a 2D array I also have 3 coordinates. I assume that for a 3D texture, filtering is a big issue.
I never really understood how to render 3D textures, by the way, and never tried, because I have no data. Could you please describe (briefly) how to render 3D texture data? Of course I could render a lot of quads in a row, but that doesn't feel right.



The two texture types do not contend with one another. That is, there's no problem that you could solve with a 2D array texture that could also be solved with a 3D texture (or at least, not without having to dodge some form of visual artifact). And vice-versa; if you need a 3D texture, it's in a problem space where a 2D array texture would not be appropriate.

Please give me some examples of typical usages.
A natural example for a 2D array might be some textured point sprites (or quads, whatever) like in old-school 2D games, like Super Mario. So a few images to represent an animation.
For 3D textures, well, maybe computed tomography data.

Alfonse Reinheart
01-30-2015, 02:26 PM
So the third coordinate for an array texture is actually an index into the array?

Yes. (https://www.opengl.org/wiki/Array_Texture#Access_in_shaders)


Is this third texture coordinate (a float value) rounded to an integer to explicitly select one item in the array?

Yes. (https://www.opengl.org/wiki/Array_Texture#Access_in_shaders) When the texture coordinate is a float, of course.


By the way, is there any source online where texture2DArray and other texel fetch functions are listed and explained?

Well, there is no texture2DArray function in core OpenGL 3.0 or better. But to your general question about whether there are online resources that explain how texture accessing with array textures work, yes (https://www.opengl.org/wiki/Sampler_%28GLSL%29).


But with a 2D array I also have 3 coordinates.

And? Cubemaps (https://www.opengl.org/wiki/Cubemap_Texture) use 3 components in their texture coordinates too. But they represent a direction, not a volume in space. And the only reason 2D array textures happen to use 3 components is because 2 + 1 = 3. Cubemap array textures use 4D texture coordinates; 3 components for a direction, one for the array index.
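A short GLSL sketch of those coordinate meanings (the sampler names are invented for illustration; samplerCubeArray requires OpenGL 4.0 or ARB_texture_cube_map_array):

```glsl
#version 400 core

uniform samplerCube      sky;      // 3 components = a direction
uniform samplerCubeArray skies;    // 4 components = direction + layer

in vec3 vDir;
out vec4 fragColor;

void main()
{
    vec4 c0 = texture(sky, vDir);                 // direction only
    vec4 c1 = texture(skies, vec4(vDir, 1.0));    // direction, layer 1
    fragColor = c0 + c1;
}
```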


Could you please describe (briefly) how to render 3D texture data?

You don't "render 3D texture data" any more than you "render 2D texture data". Textures (of any kind) aren't pictures slapped onto a triangle. They're look-up tables that store values. The texture and the texture coordinate could represent anything.

3D textures can be used for any number of things, none of which necessarily represents "rendering 3D texture data" in any simple way. I'd say one of the most common uses of 3D textures is to represent a three-dimensional function, like a complex BRDF lighting function (https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function). In this case, the three texture coordinates are (normalized versions of) the parameters to a lighting equation. Things like angles and such. The returned value is the light intensity. Such textures are used to represent a function, much like people used to use sin/cos tables to speed up sin/cos operations. Only it's a three-dimensional function, so you need a three-dimensional array of values.

A 3D texture.

But you don't "render 3D texture data" with the texture. You use it to determine the lighting over a surface, based on parameters you calculate per-fragment.

If you want objects to be able to break, so that you can see inside of the object, you could use a 3D texture to represent a volume of diffuse colors. The texture coordinate would then be some transformation of the (relative) position of the vertex, so that as vertices shift around within the model, you can see the interior of the stone or whatever. That's the closest you would get to "rendering 3D texture data".
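As a rough GLSL sketch of that look-up-table idea (the parameterization and all names here are invented for illustration, not a real BRDF), the fragment shader just remaps its lighting parameters into a 3D texture coordinate:

```glsl
#version 330 core

// 3D look-up table: each texel holds a precomputed intensity for
// (N.L, N.V, third parameter), each remapped into [0,1].
uniform sampler3D brdfLUT;

in vec3 vNormal;
in vec3 vLightDir;
in vec3 vViewDir;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);
    float nl = dot(n, normalize(vLightDir)) * 0.5 + 0.5; // [-1,1] -> [0,1]
    float nv = dot(n, normalize(vViewDir))  * 0.5 + 0.5;
    float p  = 0.5; // placeholder for the third function parameter

    float intensity = texture(brdfLUT, vec3(nl, nv, p)).r;
    fragColor = vec4(vec3(intensity), 1.0);
}
```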


A natural example for a 2D array might be some textured point sprites (or quads, whatever) like in old-school 2D games, like Super Mario. So a few images to represent an animation.

No. Well, yes, you could, but you wouldn't do it in as simplistic a manner as that. Why?

Because the array depth for textures has limitations, just like the width and height do. And those limits are usually "fairly small", maybe a few thousand or so. So if you make a 256x256x1024 array texture, you can only store 1024 256x256 sprites. However, if you make an 8192x8192x1024 array texture, you can store 1,048,576 such sprites (8192/256 = 32 sprites per row/column, so 32 * 32 = 1024 sprites per layer; 1024 sprites per layer * 1024 layers = 1,048,576 sprites total).
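The layer and sub-tile bookkeeping for that kind of layout can be sketched in a few lines (a hypothetical helper, assuming 256x256 sprites packed into 8192x8192 layers):

```python
TEX_SIZE = 8192      # width/height of each array layer
SPRITE_SIZE = 256    # width/height of one sprite
PER_ROW = TEX_SIZE // SPRITE_SIZE   # 32 sprites per row/column
PER_LAYER = PER_ROW * PER_ROW       # 1024 sprites per layer

def sprite_location(index):
    """Map a flat sprite index to (layer, x, y) texel offsets."""
    layer, slot = divmod(index, PER_LAYER)
    row, col = divmod(slot, PER_ROW)
    return layer, col * SPRITE_SIZE, row * SPRITE_SIZE
```

With 1024 layers this addresses PER_LAYER * 1024 = 1,048,576 sprites in total.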

So you could and would use array textures for something like that, but only as an extension of sprite sheets, not a replacement for them. Fonts would be a good place for such things.

GClements
01-30-2015, 11:18 PM
If I understand you correctly, there is also no linear (or whatever) filtering between two texture-array layers, right?

Correct.


I mean, what happens when the third coordinate is between two texture array elements: is it then just rounded down or up, without any filtering between layers?

It's rounded to the nearest integer. From the GLSL specification (https://www.opengl.org/registry/doc/GLSLangSpec.4.50.pdf):


For Array forms, the array layer used will be

max(0,min(d−1, floor(layer+0.5)))

where d is the depth of the texture array and layer comes from the component indicated in the tables
below.
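That clamping-and-rounding rule is easy to model outside a shader; a plain Python transcription of the formula (not OpenGL code):

```python
import math

def array_layer(layer, d):
    """GLSL rule for picking the array layer from a float coordinate:
    max(0, min(d - 1, floor(layer + 0.5)))."""
    return max(0, min(d - 1, math.floor(layer + 0.5)))
```

So a coordinate of 2.6 in an 8-layer array selects layer 3, and out-of-range values clamp to the first or last layer.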




By the way, is there any source online where texture2DArray and other texel fetch functions are listed and explained? Sounds stupid, but it's really hard to find detailed information.

GLSL functions are described in the GLSL specification linked above. OpenGL functions are described in the main OpenGL specification (https://www.opengl.org/registry/doc/glspec45.core.pdf). All of the specifications can be found via the OpenGL Registry (https://www.opengl.org/registry/).


I never really understood how to render 3D textures, by the way, and never tried, because I have no data. Could you please describe (briefly) how to render 3D texture data? Of course I could render a lot of quads in a row, but that doesn't feel right.

You can use it however you want.


For 3D textures, well, maybe computed tomography data.
That would be one application, allowing you to show an arbitrary "slice" through the data.

Another possibility would be to model a block of wood as a 3D texture; rendering a model where each vertex's texture coordinates are equal to (or an affine transformation of) its spatial coordinates would result in a surface texture corresponding to having carved the model from the block.

In most cases, you'd be better off (for reasons of memory consumption) pre-calculating a 2D surface texture. But a 3D texture would be useful if you wanted to be able to change the position or orientation of the slice or model in real time.
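A minimal vertex-shader sketch of that "carved from a block" idea (the attribute and uniform names are invented for illustration):

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition; // object-space position
uniform mat4 uMVP;
out vec3 vVolumeCoord;

void main()
{
    // Reuse the (remapped) object-space position as the 3D texture
    // coordinate, so the surface looks carved out of the solid volume;
    // a rotated or moved model samples a different slice of the block.
    vVolumeCoord = aPosition * 0.5 + 0.5; // map [-1,1] -> [0,1]
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```

The fragment shader would then simply sample a sampler3D with vVolumeCoord.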