rendering large images
I'm trying to render images using OpenGL. To begin with, I took a very small image and rendered it with glDrawPixels. I got the desired results, but I observed that the output was clipped once part of the image moved outside the viewport. Later I found that this can be done more easily with textures, so I tried them, and it worked. But again, resolution became a problem when I tried a slightly bigger image. I started researching various options and came across the tiled rendering (TR) library, but I was not able to achieve what I wanted with it. So I am now trying to understand more about textures. I have learnt the following facts:
We have the following constraints:
1. Maximum number of texture objects.
2. Maximum texture resolution supported by the hardware.
After googling this topic, I came across two solutions for the case where the image resolution exceeds the maximum texture resolution:
1. Image tiling
2. Image resizing
I am really not sure how image tiling can be implemented when we have a restriction on the number of textures. If the image is broken into m blocks, where m is greater than the maximum number of textures, I would again fail to render the image.
Image resizing means scaling a bigger image down to the texture resolution. In this case, I feel I'll lose image quality.
Please correct me if I have understood this wrongly. Many image-viewing applications render large images, and I want to know how this can be implemented efficiently. Do any other modern techniques exist? Ultimately, my application should be able to load and render an image of any resolution. Of course the performance will drop a bit, but that doesn't matter at this stage.
Again, please correct me if I am wrong.
Those are separate constraints. There is nothing stopping you from issuing multiple draws, each rendering part of the large image from its own texture.
Do you mean that we can have any number of textures at a time, i.e. an array of n textures?
Originally Posted by tonyo_au
No. At some point all your memory is used up.
But you can have plenty of them depending on their size and the memory available.
Let me put down what I understood. Suppose I have a card that supports a maximum texture resolution of 1024x1024. Is it the case that I can allocate 50 textures of size 1024x1024, provided I have enough memory available?
Originally Posted by Cornix
You said I can have plenty of them depending on their size and the memory available. The memory you are referring to is texture memory, right?
As far as I know your video card can use your normal RAM if the VRAM is used up.
However, when the video card uses your RAM it will get slower.
By the way, textures aren't *that* big. A 1024x1024 RGBA texture is about 4 MB (1024 x 1024 x 4 bytes).
OK, that is what is happening. I have divided the image (pixel) data into n chunks of size 1024x1024 and bound those chunks to that many textures. What happens is that after a certain m chunks, the remaining n-m chunks repeat the output of the first m chunks.
Originally Posted by Cornix
In some cases, the remaining n-m chunks show white quads instead. Any suggestions as to why this happens?
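One thing I am double-checking on my side: each chunk needs its own texture object from glGenTextures, with a glBindTexture before each glTexImage2D, and the offset math for pulling each chunk out of the packed pixel buffer is easy to get wrong. A sketch of what I believe the extraction should look like, assuming tightly packed RGBA8 data and tile-aligned image dimensions (a hypothetical helper, not my exact code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy tile (tx, ty) out of a tightly packed RGBA8 image into `dst`.
 * img_w is the full image width in pixels; `tile` is the tile size.
 * Source row `row` of the tile starts at byte offset
 * ((ty*tile + row) * img_w + tx*tile) * 4. Edge tiles are assumed to
 * be padded so the image divides evenly into tiles. */
static void extract_tile(const unsigned char *src, int img_w,
                         int tile, int tx, int ty, unsigned char *dst)
{
    for (int row = 0; row < tile; row++) {
        size_t src_off = ((size_t)(ty * tile + row) * (size_t)img_w
                          + (size_t)tx * (size_t)tile) * 4u;
        memcpy(dst + (size_t)row * (size_t)tile * 4u,
               src + src_off,
               (size_t)tile * 4u);
    }
}
```

(An alternative is to skip the copy entirely and set glPixelStorei(GL_UNPACK_ROW_LENGTH, img_w) before glTexImage2D, pointing it at the tile's first pixel.)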
How big are the images you're talking about - are we talking gigapixel images streamed from a web server, or something more like a standard digital photograph?
I ask because different sizes from different sources require different rendering strategies. Give us the actual parameters you're using, and an idea of how you're trying to render them.
As of now, I am trying to load images from the system disk. With the technique I explained above, I am able to load images of up to roughly 6000x6000. I have a high-resolution image of 13000x13500 pixels. For now, the application should be capable of loading an image of any type and size from the system disk. In a later version I'll add a feature for loading images from a remote system (over the web). In either case, the source of the image is a hard disk, whether on the local machine or a remote one. No image comes directly from a device such as a camera.
Originally Posted by markds
Presently I am breaking the images into blocks of 64x64 pixels. Looping over the blocks, I bind each one to a texture and render it as a GL_QUADS primitive. This is done only once, inside a display list; for rendering, I call the compiled display list.
Please note: once the image is loaded, no editing or processing of any kind is done on it. It is static the whole time, which is why I used display lists.
I'll work on the assumption the user is able to zoom in and out.
Firstly, 64x64 is far too small - even low-end machines can handle 8192x8192 textures (which equates to 256 MB at 4 bytes per texel). If the image fits into a single texture, use that - it's simpler and faster. Also, create mipmaps for zooming purposes.
If the image is too big, keep halving it until it fits into a single texture: e.g. a 20000x20000 source image should first be reduced to 10000x10000; if that doesn't fit, reduce it again to 5000x5000, and so on. Only when the user needs to view at 100% should you page in the full-resolution data for the area on screen.