Floating-point to Fixed-point coordinates

I was reading this post from this very good series:

I understand most of it but I don’t understand the end of the post:

“But the key realisation is that we’re still taking steps of one pixel at a time: all the p’s we pass into orient2d are an integral number of pixel samples apart. This, together with the incremental evaluation we’re gonna see soon, means that we only have to do a full-precision calculation once per triangle. All the pixel-stepping code always advances in units of integral pixels, which means the sub-pixel size enters the computation only once, not squared. Which in turn means we can actually cover the 2048×2048 render target with 8 bits of subpixel accuracy, or 8192×8192 pixels with 4 bits of subpixel resolution. You can squeeze that some more if you traverse the triangle in 2×2 pixel blocks and not actual pixels, as our triangle rasterizer and any OpenGL/D3D-style rasterizer will do, but again, I digress.”

  1. I am really confused by this part. First, at the beginning of the chapter the author mentions that the floating-point coordinates are converted to integers. But rather than rounding the floating-point value to the nearest integer, we first multiply it by 16 or 256 (depending on whether we use 4 or 8 bits of sub-pixel precision) and then round the result to the nearest integer. The post then says that with 32-bit integers you can only encode values in the range [-16384, 16383]. That seemed incorrect to me: if the formula for the range is [-2^(k-1), 2^(k-1)-1], then for k = 32 you get [-2147483648, 2147483647], a much larger range, so I wasn't sure where [-16384, 16383] was coming from.
    EDIT: Oops, I got this point. The range actually comes from the edge function. Since the edge function has the form (a - b)(c - d) - (e - f)(g - h), the products of differences eat into the bit budget, so the coordinates only get k = (32 - 2) / 2 = 15 bits, and the maximum magnitude in that case is 16384.

  2. Then, I understand that this conversion happens after the coordinates have been transformed by the viewport transform. So my question is: at this stage of the pipeline, shouldn't all coordinates be positive? Why do we even care about negative numbers? If they are all positive, the usable range should be even greater. Also, assuming the image width is 2048, a vertex in the right corner of the image would get the fixed-point coordinate 2048 * 256 = 524288.
    EDIT: so we do have an overflow in this case: 524288 > 16383, and even if all coordinates are positive, 524288 > 32768. I guess the author of the post tries to explain how it's actually done, but I really don't understand his explanation (see the sketch after this list for the overflow I mean).
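To make the overflow concrete, here is a small C++ sketch of how I currently picture the conversion and the edge function (the names and the 64-bit intermediate are my own, not from the post):

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// 8 sub-pixel bits, as in my point 1 (these names are mine, not the post's).
static const int SUBPIXEL_BITS  = 8;
static const int SUBPIXEL_SCALE = 1 << SUBPIXEL_BITS; // 256

// Snap a floating-point viewport coordinate to the fixed-point grid.
static int32_t to_fixed(float v)
{
    return (int32_t)std::lround(v * SUBPIXEL_SCALE);
}

// Edge function of the (a - b)(c - d) - (e - f)(g - h) form, evaluated
// directly on fixed-point coordinates.
static int64_t orient2d_fixed(int32_t ax, int32_t ay,
                              int32_t bx, int32_t by,
                              int32_t cx, int32_t cy)
{
    // With a 2048-pixel target and 8 sub-pixel bits a coordinate can reach
    // 2048 * 256 = 524288 (about 20 bits), so the product of two differences
    // needs roughly 40 bits: it no longer fits in a 32-bit intermediate,
    // hence the 64-bit cast here.
    return (int64_t)(bx - ax) * (cy - ay) - (int64_t)(by - ay) * (cx - ax);
}

int main()
{
    int32_t ax = to_fixed(10.25f),   ay = to_fixed(12.5f);
    int32_t bx = to_fixed(2000.75f), by = to_fixed(8.0f);
    int32_t cx = to_fixed(1024.0f),  cy = to_fixed(2040.125f);
    std::printf("orient2d = %lld\n",
                (long long)orient2d_fixed(ax, ay, bx, by, cx, cy));
    return 0;
}
```

If the edge function is evaluated like this at every pixel, the sub-pixel scale enters the product twice, which I think is the "squared" problem the quoted paragraph refers to; the incremental, once-per-triangle setup is apparently what avoids it, and that is the step I don't fully follow.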

It would be great if someone could clarify these points. What I read in this post:

Is that the GPU would actually use 16 bits for the conversion: 12 bits for the integral part of the coordinates and 4 bits for the sub-pixel part (16 sub-positions). First, is this correct? And I am still interested to know how it works with respect to my two previous points (are the coordinates in raster space and therefore positive, can we have negative integer coordinates at this stage, etc.).
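For reference, this is how I picture that 16-bit, 12.4 representation: 12 bits for the integer part and 4 sub-pixel bits, i.e. 16 sub-positions per pixel (again just my own sketch with made-up names):

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// 12.4 fixed point: 4 sub-pixel bits give 16 sub-positions per pixel.
static const int SUB_BITS  = 4;
static const int SUB_SCALE = 1 << SUB_BITS; // 16

// Snap a floating-point coordinate to the 12.4 grid.
static int16_t snap_12_4(float v)
{
    return (int16_t)std::lround(v * SUB_SCALE);
}

int main()
{
    // A signed 16-bit value holds [-32768, 32767] sub-pixel units, i.e.
    // pixel coordinates of roughly [-2048, 2047.9375] in 12.4 form.
    float x = 1023.37f;
    int16_t fx = snap_12_4(x);
    std::printf("%f snaps to %d/16 = %f\n", x, fx, fx / (float)SUB_SCALE);
    return 0;
}
```

In particular I am not sure whether that grid is meant to be signed (allowing negative, off-screen coordinates), which is basically my point 2 again.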

Thanks a lot.