So the third coordinate for an array texture is actually an index into the array?
Yes.
Is this third texture coordinate (a float value) rounded to an integer, to explicitly select one item in the array?
Yes. When the texture coordinate is a float, of course.
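To make that concrete, here is a minimal GLSL fragment-shader sketch (the uniform and varying names are illustrative, not from the thread):

```glsl
#version 330 core

uniform sampler2DArray sprites;  // 2D array texture
in vec2 uv;                      // normalized 2D coordinate within a layer
in float layerF;                 // float third coordinate
out vec4 color;

void main()
{
    // The third component selects the layer: the implementation rounds it
    // to the nearest integer and clamps it to [0, layerCount - 1].
    color = texture(sprites, vec3(uv, layerF));
}
```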
By the way, is there any source online where texture2DArray and other texel fetch functions are listed and explained?
Well, there is no [var]texture2DArray[/var] function in core OpenGL 3.0 or better; array texture lookups go through the overloaded [var]texture[/var] function, selected by the sampler type. But to your general question about whether there are online resources that explain how texture accessing with array textures works, yes.
But with a 2D array I also have 3 coordinates.
And? Cubemaps use 3 components in their texture coordinates too. But they represent a direction, not a volume in space. And the only reason 2D array textures happen to use 3 components is that 2 + 1 = 3. Cubemap array textures use 4D texture coordinates: 3 components for a direction, one for the array index.
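For comparison, a cubemap array lookup in GLSL takes a 4-component coordinate (a sketch; the names are illustrative, and [var]samplerCubeArray[/var] requires GL 4.0 or ARB_texture_cube_map_array):

```glsl
#version 400 core

uniform samplerCubeArray envMaps;
in vec3 direction;   // first three components: a direction, not a position
in float envIndex;   // fourth component: the array index
out vec4 color;

void main()
{
    color = texture(envMaps, vec4(direction, envIndex));
}
```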
Could you please describe (shortly) how to render 3d texture data?
You don’t “render 3d texture data” any more than you “render 2d texture data”. Textures (of any kind) aren’t pictures slapped onto a triangle. They’re look-up tables that store values. The texture and the texture coordinate could represent anything.
3D textures can be used for any number of things, none of which necessarily represents “rendering 3d texture data” in any simple way. I’d say one of the most common uses of 3D textures is to represent a three-dimensional function, like a complex BRDF lighting function. In this case, the three texture coordinates are (normalized versions of) the parameters to a lighting equation. Things like angles and such. The returned value is the light intensity. Such textures are used to represent a function, much like people used to use sin/cos tables to speed up sin/cos operations. Only it’s a three-dimensional function, so you need a three-dimensional array of values.
A 3D texture.
But you don’t “render 3d texture data” with the texture. You use it to determine the lighting over a surface, based on parameters you calculate per-fragment.
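The look-up-table idea can be sketched in GLSL like this, assuming the lighting function’s three parameters have already been normalized to [0, 1] (everything here is illustrative):

```glsl
#version 330 core

uniform sampler3D brdfTable;  // precomputed three-dimensional function table
in vec3 brdfParams;           // three normalized lighting parameters
in vec4 baseColor;
out vec4 color;

void main()
{
    // The 3D texture acts like a sin/cos table: three parameters go in,
    // one precomputed intensity comes out.
    float intensity = texture(brdfTable, brdfParams).r;
    color = vec4(baseColor.rgb * intensity, baseColor.a);
}
```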
If you want objects to be able to break, so that you can see inside of the object, you could use a 3D texture to represent a volume of diffuse colors. The texture coordinate would then be some transformation of the (relative) position of the vertex, so that as vertices shift around within the model, you can see the interior of the stone or whatever. That’s the closest you would get to “render 3d texture data”.
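Sketched in GLSL, with the texture coordinate derived from the fragment’s model-space position (the scale/bias transform and all names are assumptions for illustration):

```glsl
#version 330 core

uniform sampler3D interiorColors;  // volume of diffuse colors
uniform vec3 volumeScale;          // maps model space into [0, 1]^3
uniform vec3 volumeBias;
in vec3 modelPos;                  // interpolated model-space position
out vec4 color;

void main()
{
    // As vertices shift around within the model, newly exposed fragments
    // sample deeper into the volume, revealing the material's interior.
    vec3 coord = modelPos * volumeScale + volumeBias;
    color = vec4(texture(interiorColors, coord).rgb, 1.0);
}
```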
A natural example for a 2D array might be textured point sprites (or quads, whatever) like in old-school 2D games such as Super Mario. So a few images to represent an animation.
No. Well, yes, you could, but you wouldn’t do it in as simplistic a manner as that. Why?
Because the array depth for textures has limitations, just like the width and height do. And those limits are usually “fairly small”, maybe a few thousand or so. So if you make a 256x256x1024 array texture, you could only store 1024 256x256 sprites. However, if you make an 8192x8192x1024 array texture, you could store 1,048,576 such sprites (8192/256 = 32 sprites per row/column, so 32 * 32 = 1024 sprites per layer; 1024 sprites per layer * 1024 layers = 1,048,576 sprites total).
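The layout arithmetic above can be sketched as a GLSL helper that turns a flat sprite index into an array-texture coordinate (sizes hard-coded to the 8192x8192x1024 example; the function name is hypothetical):

```glsl
// 8192 / 256 = 32 sprites per row/column, 32 * 32 = 1024 sprites per layer.
const int SPRITES_PER_SIDE  = 32;
const int SPRITES_PER_LAYER = SPRITES_PER_SIDE * SPRITES_PER_SIDE;

// Map a flat sprite index (0 .. 1048575) plus a local [0, 1]^2 coordinate
// within that sprite to a coordinate for the 8192x8192x1024 sampler2DArray.
vec3 spriteCoord(int index, vec2 localUV)
{
    int layer   = index / SPRITES_PER_LAYER;
    int inLayer = index % SPRITES_PER_LAYER;
    int col     = inLayer % SPRITES_PER_SIDE;
    int row     = inLayer / SPRITES_PER_SIDE;
    vec2 uv = (vec2(col, row) + localUV) / float(SPRITES_PER_SIDE);
    return vec3(uv, float(layer));
}
```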
So you could and would use array textures for something like that, but only as an extension of sprite sheets, not a replacement for them. Fonts would be a good place for such things.