I'm sorry, I didn't mention it before. Yes, I have zoom in and out, pan, and rotate.
I'll work on the assumption the user is able to zoom in and out.
I have two systems: one for development and the other for testing. The development system has a maximum texture resolution of 1024x1024; any image up to that size renders quite well. The problem arises only when the image resolution exceeds it. On the testing system, the maximum texture resolution is 4096x4096. I map the entire image onto the texture with texture coordinates from 0 to 1. Now I have a question: can we fit a 2048x2048 image on a 1024x1024 texture? And if so, won't image quality be lost?
Even low-end machines can render 8192x8192 (which equates to 256 MB for uncompressed RGBA: 8192 x 8192 pixels x 4 bytes).
You have a max texture size of 1024x1024? Crikey!
What hardware are you using? And are you sure you're getting hardware acceleration? The reason I ask is that the MS GL software renderer is limited to 1024x1024... maybe you need to update your drivers.
In answer to your question, compression won't help: although a compressed texture takes less memory, you're still limited to GL_MAX_TEXTURE_SIZE.
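On the 2048-on-1024 question: at full quality you tile the image instead. A minimal sketch of how many tiles a given limit implies (illustrative helper only; in a real program `max_tex` would come from `glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)`):

```c
/* How many tiles are needed along one axis when an image dimension
 * exceeds the maximum texture size. Ceiling division, so a dimension
 * that fits exactly needs one tile. */
static int tiles_needed(int image_dim, int max_tex)
{
    return (image_dim + max_tex - 1) / max_tex;
}
```

So a 2048x2048 image on a machine limited to 1024x1024 becomes a 2x2 grid of full-resolution tiles, with no quality loss at all.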
Like I said, when you're zoomed out, you can use a much smaller image (downsized on the CPU), and only load in full resolution sections when you need to zoom to 100%.
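The CPU downsize mentioned above could be as simple as a 2x2 box filter, halving the image each pass (a sketch for a single grayscale channel; the function name and even-dimension restriction are my own simplifications, not from the thread):

```c
#include <stdlib.h>

/* Halve a grayscale image with a 2x2 box filter: each destination
 * pixel is the average of a 2x2 block of source pixels.
 * w and h must be even. Caller frees the returned buffer. */
static unsigned char *downsample_half(const unsigned char *src, int w, int h)
{
    int dw = w / 2, dh = h / 2;
    unsigned char *dst = malloc((size_t)dw * dh);
    if (!dst) return NULL;
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++) {
            int sum = src[(2*y)     * w + 2*x] + src[(2*y)     * w + 2*x + 1]
                    + src[(2*y + 1) * w + 2*x] + src[(2*y + 1) * w + 2*x + 1];
            dst[y * dw + x] = (unsigned char)(sum / 4);
        }
    return dst;
}
```

Applying it repeatedly gives you a pyramid of ever-smaller versions, so a zoomed-out view never needs the full-resolution data in memory.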
I work on the rack server. The development is being done on the server. It is Dell Poweredge Rack server with windows 2008 server, enterprise edition. I tried upgrading the graphics driver, but my admin says that the rack servers are not meant for high end graphics usage. Is that so? As far as I know, we can have high end graphics on the servers.
Originally Posted by markds
Anyway, so basically I first need to check whether the image fits into the texture. If it fits, render it directly; otherwise, break it into four parts and repeat. This solution is recursive, isn't it?
Yeah - recurse till you get the right size. Just store enough image data to fill what's visible, plus some extra so the user can scroll a short distance before running out of image (while in the background you pre-emptively load more data centred on what's visible on the screen).
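The recursion described above can be sketched like this (names are illustrative; `upload_tile` is a stand-in for whatever creates a texture and draws a quad, and a tile counter stands in for real bookkeeping):

```c
#include <stdio.h>

static int g_tiles = 0;  /* counts tiles produced, for illustration */

/* Stand-in for texture creation + draw; here it just reports the tile. */
static void upload_tile(int x, int y, int w, int h)
{
    g_tiles++;
    printf("tile at (%d,%d) size %dx%d\n", x, y, w, h);
}

/* Recursively split a region into quadrants until each tile fits
 * within the maximum texture size, then upload it. */
static void split_and_upload(int x, int y, int w, int h, int max_tex)
{
    if (w <= max_tex && h <= max_tex) {
        upload_tile(x, y, w, h);
        return;
    }
    int hw = w / 2, hh = h / 2;
    split_and_upload(x,      y,      hw,     hh,     max_tex);
    split_and_upload(x + hw, y,      w - hw, hh,     max_tex);
    split_and_upload(x,      y + hh, hw,     h - hh, max_tex);
    split_and_upload(x + hw, y + hh, w - hw, h - hh, max_tex);
}
```

Calling `split_and_upload(0, 0, 2048, 2048, 1024)` yields the 2x2 grid of 1024x1024 tiles; a 4096x4096 image on the same machine recurses one level deeper into 16 tiles.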
The reason it's beneficial to store a low-res version of the image is simply that if the user scrolls a long way, you have *something* to display while the full-resolution data is still loading from the hard drive. This strategy lets literally any sized image be streamed in. You get this effect:
Last edited by markds; 12-12-2013 at 09:51 AM.
Yeah, right, that's true. But I don't think loading the entire image into memory is sufficient; paging to disk is very much needed, isn't it? Well, I was going through quadtrees recently, which are used for multi-level image compression. Will they be helpful here?
As you rightly said, I should store a low-res version - but how do I get it? Just by re-scaling the image? Secondly, at what point would I come to know that I need to switch from the low-res version to the high-res or an intermediate-res version?