Which one is better for creating a 3D texture? And when I do the sampling in GLSL, should I multiply by 16.0 to normalize the 12-bit values to (0.0 - 1.0)?
Use GL_SHORT, otherwise the 12-bit negative values will become large positive values in your GLSL sampler, which will be hard to detect and ugly to interpolate.
Edit: scratch that, you should set your negative 12-bit values to 0 in any case.
Then, to keep precision, you have to specify something better for internalFormat than the default GL_LUMINANCE (which will probably be interpreted as GL_LUMINANCE8). I advise at least GL_LUMINANCE16, if your hardware supports it; otherwise the x16 in the shader will amplify precision errors.
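A minimal sketch of that upload path in C, assuming a context where glTexImage3D is available; the function and variable names are illustrative, and `data` is assumed to hold signed 16-bit samples with 12 significant bits:

```c
#include <GL/gl.h>  /* glTexImage3D may also need glext.h or a loader */

/* Illustrative upload: clamp negative 12-bit samples to 0 (per the edit
 * above), then upload with a sized 16-bit luminance internal format. */
void upload_volume(GLshort *data, GLsizei w, GLsizei h, GLsizei d)
{
    long i, n = (long)w * h * d;
    for (i = 0; i < n; ++i)
        if (data[i] < 0)
            data[i] = 0;

    glTexImage3D(GL_TEXTURE_3D, 0,
                 GL_LUMINANCE16,   /* sized format, keeps the 12 bits */
                 w, h, d, 0,
                 GL_LUMINANCE,     /* one source channel */
                 GL_SHORT,         /* source data type */
                 data);
}
```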
Sorry, I can't seem to parse your question.
Short and ushort differ only in whether large values are interpreted as negative or not.
Say you enter 0x00FF as a short value in the data; it will come out in the GLSL shader as 255/65535 ≈ 0.00389.
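A quick standalone check of that arithmetic (illustrative only):

```c
#include <stdio.h>

int main(void)
{
    unsigned short texel = 0x00FF;    /* 255 */
    /* GL_UNSIGNED_SHORT normalization: value / 65535 */
    printf("%f\n", texel / 65535.0);  /* prints ~0.003891 */
    return 0;
}
```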
GL_UNSIGNED_SHORT means values in the range [0, 65535], which will be normalized to the range [0.0, 1.0] (0 -> 0.0, 65535 -> 1.0).
GL_SHORT means values in the range [-32768, 32767], which in your case will also be normalized to the range [0.0, 1.0], but every negative value will be clamped to zero (0 -> 0.0, 32767 -> 1.0). If you use a signed texture internal format then you can use the negative values too, but not with GL_LUMINANCE.
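To make the x16 from the question concrete: with 12-bit samples in the low bits of a GL_UNSIGNED_SHORT texel, the largest value normalizes to 4095/65535 ≈ 1/16, so multiplying by 16.0 in the shader stretches it back to roughly [0.0, 1.0]. A minimal legacy-GLSL sketch, with uVolume as an assumed sampler name:

```glsl
// Sketch: rescale a 12-bit sample stored in a 16-bit texel.
// uVolume is a hypothetical uniform name; adjust to your own setup.
uniform sampler3D uVolume;

void main()
{
    float raw = texture3D(uVolume, gl_TexCoord[0].stp).r;  // at most ~1/16
    gl_FragColor = vec4(vec3(raw * 16.0), 1.0);            // back to ~[0.0, 1.0]
}
```

(If you upload with GL_SHORT instead, then per the ranges above 4095 normalizes to 4095/32767 ≈ 1/8, so the factor would be about 8.0 rather than 16.0.)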
As a side note: GL_LUMINANCE is deprecated. Use GL_RED with sized internal formats like GL_R8 (or GL_R16 here), and use ARB_texture_swizzle to map the single red channel to the others (i.e. swizzle as (R, R, R, 1)).
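A sketch of that modern path, assuming GL 3.3+ (or ARB_texture_swizzle) and the same illustrative names as above:

```c
#include <GL/gl.h>  /* plus glext.h or a loader for GL 3.x entry points */

/* Illustrative: single-channel GL_R16 texture, with a swizzle that
 * replicates red into green and blue to emulate luminance. */
void upload_volume_modern(const GLushort *data, GLsizei w, GLsizei h, GLsizei d)
{
    const GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
    glTexParameteriv(GL_TEXTURE_3D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);

    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, w, h, d, 0,
                 GL_RED, GL_UNSIGNED_SHORT, data);
}
```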