I'm fairly new to OpenGL rendering stuff, so this might be quite a basic thing to solve, or alternatively quite difficult. I'm not entirely sure, so it would certainly be useful to have the perspective of an expert!
For rendering large digital microscopy images, we need to split them up due to their huge size. Some of the 2D images are over 100000×100000 pixels, so even if the GPU had sufficient memory, they would exceed the maximum texture size by a long way. For 3D volumetric images we can again exceed the maximum texture size, and for multi-gigabyte volumes the GPU memory as well. The obvious thing to do is to split the images into a series of small tiles and render them as a set of aligned quads (which is our current approach). In the 3D case, small cubes are used instead of tiles, though we haven't finished this yet; here we would use proxy geometry and/or ray casting to render the volume.
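For context, the 2D tiling is roughly the following. This is only a minimal sketch, assuming an 8-bit greyscale image already in host memory and a GL 3+ context with the entry points coming from your usual loader; names such as uploadTiles, imagePixels and tileSize are purely illustrative:

Code :
#include <algorithm>
#include <cstddef>
#include <vector>
// GL headers / loader (glad, GLEW, ...) assumed to be included elsewhere.

struct Tile { GLuint texture; int x, y, w, h; };

// Split a large greyscale image into tiles no larger than GL_MAX_TEXTURE_SIZE
// and upload each tile as its own 2D texture.
std::vector<Tile> uploadTiles(const unsigned char *imagePixels,
                              int imageWidth, int imageHeight)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
    const int tileSize = std::min<int>(maxSize, 4096); // keep tiles manageable

    std::vector<Tile> tiles;
    std::vector<unsigned char> tilePixels;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    for (int y = 0; y < imageHeight; y += tileSize)
    {
        for (int x = 0; x < imageWidth; x += tileSize)
        {
            const int w = std::min(tileSize, imageWidth - x);
            const int h = std::min(tileSize, imageHeight - y);

            // Copy this tile's rows out of the big image.
            tilePixels.resize(std::size_t(w) * h);
            for (int row = 0; row < h; ++row)
                std::copy_n(imagePixels + std::size_t(y + row) * imageWidth + x,
                            w, tilePixels.data() + std::size_t(row) * w);

            Tile tile = { 0, x, y, w, h };
            glGenTextures(1, &tile.texture);
            glBindTexture(GL_TEXTURE_2D, tile.texture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                         GL_RED, GL_UNSIGNED_BYTE, tilePixels.data());
            glGenerateMipmap(GL_TEXTURE_2D); // per-tile mipmaps (see below)
            tiles.push_back(tile);
        }
    }
    return tiles;
}

Each tile is then drawn as a quad at (x, y) with size (w, h) in image coordinates, so the quads sit exactly edge to edge.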
Within a single texture tile we get correct interpolation between pixels using GL_LINEAR for magnification and GL_LINEAR_MIPMAP_LINEAR for minification, and it looks just fine in both the 2D tile and 3D cubic texture cases. However, this all changes when we split the image into separate textures. With GL_CLAMP_TO_EDGE, you get seams between quads where there is no interpolation, unlike within the quads (bad UTF-8 art):
Code :
┌────┐ ┌────┐      ┌────┬────┐      ┌────────┐
│····│▒│····│▒     │····│····│▒     │········│▒
│····│▒│····│▒     │ab··│····│▒     │········│▒
│····│▒│····│▒     │····│····│▒     │········│▒
│····│▒│····│▒     │····│····│▒     │········│▒
└────┘▒└────┘▒ ─→  ├────┼────┤▒ ─→  │········│▒
 ▒▒▒▒▒▒ ▒▒▒▒▒▒     │····│····│▒     │········│▒
┌────┐ ┌────┐      │···c│d···│▒     │········│▒
│····│▒│····│▒     │····│····│▒     │········│▒
│····│▒│····│▒     │····│····│▒     └────────┘▒
│····│▒│····│▒     └────┴────┘▒      ▒▒▒▒▒▒▒▒▒▒
│····│▒│····│▒      ▒▒▒▒▒▒▒▒▒▒▒
└────┘▒└────┘▒
 ▒▒▒▒▒▒ ▒▒▒▒▒▒
Here, we have split an 8×8 texture into four 4×4 textures. When these are rendered as adjacent quads, samples a and b will be interpolated correctly. But there will be no interpolation between c and d, because the 2D sampler has no access to, or knowledge of, the bordering texture. The same applies to all shared faces of the cubes for 3D textures. What we really want is for the output to be pixel-for-pixel identical to the original case, where the texture was not split and the samples were adjacent in the same texture. In the 3D case we would need to be able to ray cast through all the sub-cubes along the light path to do correct volume rendering; the cast ray can be restarted at the boundaries using proxy geometry, but we still have the same requirement for the 3D sampler to sample correctly at the texture boundaries and at different mipmap levels.
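For reference, the per-tile sampling state described above (the setup that produces the seams in the diagram) is just the following; tile.texture is from the tiling sketch earlier:

Code :
// Filtering and wrap state used for each tile. With GL_CLAMP_TO_EDGE, texture
// coordinates in the outer half-texel of a tile clamp to the edge texel, so a
// fragment falling between c and d only ever sees c (or only d); the blend the
// unsplit texture would have produced never happens.
glBindTexture(GL_TEXTURE_2D, tile.texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);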
It also gets more complex when considering mipmaps, since they will be split as well (and, if computed independently for each tile, will also differ from the mipmaps of the unsplit image).
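To make that concrete with the 8×8 example: level 3 of the unsplit image's mipmap is a single texel averaging all 64 pixels, whereas each independently mipmapped 4×4 tile bottoms out at its own 1×1 level after level 2, and nothing ever averages across the tile boundaries. A toy 2×2 box reduction of the kind glGenerateMipmap typically performs (the exact filter is left to the implementation) shows the dependency; this is only a sketch:

Code :
#include <vector>

// One level of a 2x2 box-filtered reduction (w and h assumed even). Applied
// per tile, every output texel depends only on that tile's pixels, so the
// coarse levels of the full image, which average across tile seams, can never
// be reconstructed from the per-tile chains alone.
std::vector<float> downsampleLevel(const std::vector<float> &src, int w, int h)
{
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            dst[y * (w / 2) + x] =
                0.25f * (src[(2 * y)     * w + (2 * x)] +
                         src[(2 * y)     * w + (2 * x + 1)] +
                         src[(2 * y + 1) * w + (2 * x)] +
                         src[(2 * y + 1) * w + (2 * x + 1)]);
    return dst;
}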
I'm not sure if this is a common problem for other OpenGL users; the artifacts are probably not noticeable visually, but given that it's for scientific visualisation, accuracy is a primary concern. I've read a few papers using OpenGL for splitting large volumes into smaller subsampled cubes, but they don't appear to acknowledge or address the above issue.
If it's possible to do more accurate sampling in the fragment shader using multiple textures, that would be something we could look at, but from my limited knowledge of samplers, the lookups, including the mipmap interpolation, are done in hardware by the texture unit. If we could make that sampling interpolate between adjacent textures, that might be a solution. But maybe I'm not on the right track and there's an even simpler solution.
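To illustrate what I mean by sampling and interpolating between adjacent textures ourselves, something along these lines might work for the magnification case across a single vertical seam between two 2D tiles. This is only a sketch: the uniform names are made up, the right-hand quad would need the mirror-image logic, horizontal seams and corners need the same treatment, and minification would need an explicitly chosen LOD (textureLod/textureGrad) so both lookups use consistent mipmap levels:

Code :
const char *seamFragmentShader = R"GLSL(
#version 330 core

uniform sampler2D tileLeft;    // the tile this quad draws
uniform sampler2D tileRight;   // its neighbour across the vertical seam
uniform float     tileTexels;  // tile width in texels (e.g. 4096.0)

in  vec2 uv;                   // 0..1 across the left tile
out vec4 fragColour;

void main()
{
    float halfTexel = 0.5 / tileTexels;

    if (uv.x < 1.0 - halfTexel) {
        // Interior of the tile: the hardware bilinear filter is already correct.
        fragColour = texture(tileLeft, uv);
    } else {
        // Last half-texel before the seam: blend the edge column of this tile
        // with the first column of the neighbour, i.e. the interpolation the
        // unsplit texture would have done between samples c and d. Sampling at
        // the texel-centre x keeps the hardware filtering along y.
        vec4 c = texture(tileLeft,  vec2(1.0 - halfTexel, uv.y));
        vec4 d = texture(tileRight, vec2(halfTexel,       uv.y));
        float t = (uv.x - (1.0 - halfTexel)) / (2.0 * halfTexel); // 0..0.5 here
        fragColour = mix(c, d, t);
    }
}
)GLSL";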
If you have any insight into what we could do here, it would be very much appreciated!