I’d encode positions into an RGBA16F or RGBA32F texture (floating-point formats allow values outside [0,1]).
Alternatively, if you still want to use a fixed-point format, you can pass the mesh bounding box into the shader and scale vertex positions into [0,1] according to it.
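A minimal sketch of that bounding-box rescaling (Python for illustration only; in practice this runs in the vertex/fragment shader, and all names here are my own):

```python
def encode_position(pos, bbox_min, bbox_max):
    # Map a position into [0,1]^3 using the mesh AABB so it fits an
    # unsigned-normalized (fixed-point) texture format like RGBA8/RGBA16.
    return tuple((p - lo) / (hi - lo)
                 for p, lo, hi in zip(pos, bbox_min, bbox_max))

def decode_position(enc, bbox_min, bbox_max):
    # Inverse mapping, done when sampling the G-buffer in the lighting pass.
    return tuple(e * (hi - lo) + lo
                 for e, lo, hi in zip(enc, bbox_min, bbox_max))
```

Note the lighting pass then needs the same bounding box as a uniform to decode, which is part of the bookkeeping pain mentioned below.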
For my small deferred renderer I use RGBA16F and that works just fine. I think encoding positions into fixed-point buffers would not be accurate enough, and would be quite painful to handle.
DmitryM and mokafolio are correct. If you want to build a deferred shading renderer (or something similar), consider not storing the vertex positions in a texture at all. In my project I reconstruct the eye-space position dynamically from the depth texture (and a few other things). I found it's faster that way.
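To make the depth-reconstruction idea concrete, here is a sketch of the standard unprojection math (Python for illustration; in a real renderer this lives in the lighting-pass fragment shader). It assumes an OpenGL-style symmetric perspective projection with a [0,1] depth buffer; the parameter names (`fov_y`, `aspect`, `near`, `far`) are my own:

```python
import math

def view_z_from_depth(depth, near, far):
    # Recover view-space z (negative in front of the camera) from a [0,1]
    # depth-buffer value written by a standard perspective projection.
    ndc_z = depth * 2.0 - 1.0  # [0,1] -> [-1,1]
    return -(2.0 * near * far) / (far + near - ndc_z * (far - near))

def reconstruct_eye_pos(uv, depth, fov_y, aspect, near, far):
    # uv: screen coordinates in [0,1]; returns (x, y, z) in eye space.
    ndc_x = uv[0] * 2.0 - 1.0
    ndc_y = uv[1] * 2.0 - 1.0
    z = view_z_from_depth(depth, near, far)
    tan_half = math.tan(fov_y / 2.0)
    # Undo the perspective divide: ndc.xy was scaled by 1/(-z) and by the
    # projection's focal terms, so multiply both back out.
    x = ndc_x * tan_half * aspect * -z
    y = ndc_y * tan_half * -z
    return (x, y, z)
```

The payoff is a G-buffer that stores only depth (which you usually have anyway) instead of a full RGBA16F/RGBA32F position target, trading a little ALU in the lighting pass for bandwidth.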