Rendering large images

Hello,

I’m trying to render images using OpenGL. To begin with, I took a very small image and rendered it using glDrawPixels. I got the desired result, but I observed that the output was clipped once part of the image moved outside the viewport. Later I found that this can be done easily with textures, so I tried them, and they worked. But again, resolution became a problem when I tried a slightly bigger image. I started researching various options and came across the TR (tiled rendering) library, but I was not able to achieve what I wanted with it. So I am now trying to understand more about textures. I have learned the following:

We have the following constraints:

  1. The maximum number of textures.
  2. The maximum texture resolution supported by the hardware.

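Both limits can be queried at runtime. A minimal sketch, assuming a current GL context (the GL_MAX_TEXTURE_IMAGE_UNITS token needs GL 2.0-era headers or an extension loader such as GLEW):

[CODE]
#include <stdio.h>
#include <GL/gl.h>

void printTextureLimits(void)
{
    GLint maxSize = 0, maxUnits = 0;

    /* Constraint 2: the largest width/height a single 2D texture may have. */
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

    /* Constraint 1: how many textures can be bound at the same time.
       Note this does not limit how many texture objects can exist. */
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxUnits);

    printf("max texture size:         %d x %d\n", maxSize, maxSize);
    printf("max simultaneously bound: %d\n", maxUnits);
}
[/CODE]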
After googling this topic, I came across two solutions for the case where the image resolution exceeds the maximum texture resolution:

  1. Image tiling
  2. Image resizing

I really don’t see how image tiling can be implemented when there is a restriction on the number of textures. If the image is broken into ‘m’ blocks, where ‘m’ is greater than the maximum number of textures, I would again fail to render the image.

Image resizing means scaling the bigger image down to the texture resolution. In this case, I feel I’ll lose image quality.

Please correct me if I have misunderstood. Many image-viewing applications do render large images; I want to know how this can be implemented efficiently. Do any other modern techniques exist? Ultimately, my application should be able to load and render an image of any resolution. Of course performance will drop a bit, but that doesn’t matter at this stage.

Again, please correct me if I am wrong.

Thanks

[QUOTE]The maximum number of textures.[/QUOTE]

That limit applies to concurrently bound textures. There is nothing stopping you from issuing multiple draw calls, each drawing part of the large image from its own texture.
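For example, a minimal fixed-function sketch drawing one image split across two textures; texLeft and texRight are hypothetical texture objects holding the left and right halves:

[CODE]
glEnable(GL_TEXTURE_2D);

/* Left half of the image, from its own texture... */
glBindTexture(GL_TEXTURE_2D, texLeft);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 0, -1);
    glTexCoord2f(1, 1); glVertex2f( 0,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();

/* ...then the right half. Only one texture is bound at any moment. */
glBindTexture(GL_TEXTURE_2D, texRight);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(0, -1);
    glTexCoord2f(1, 0); glVertex2f(1, -1);
    glTexCoord2f(1, 1); glVertex2f(1,  1);
    glTexCoord2f(0, 1); glVertex2f(0,  1);
glEnd();
[/CODE]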

Do you mean that we can have any number of textures at a time, i.e. an array of ‘n’ textures?

Thanks

No. At some point all your memory is used up.
But you can have plenty of them depending on their size and the memory available.

[QUOTE=Cornix;1256789]No. At some point all your memory is used up.
But you can have plenty of them depending on their size and the memory available.[/QUOTE]

Let me put down what I understood. Suppose I have a card that supports a maximum texture resolution of 1024x1024. Does that mean I can allocate 50 textures of size 1024x1024, provided I have enough memory available?

You said I can have plenty of them depending on their size and the memory available. The memory you are referring to is texture memory, right?

Thanks

As far as I know, your video card can use normal system RAM if the VRAM is used up.
However, when the video card uses system RAM it will get slower.

By the way, textures aren’t that big. A 1024x1024 RGBA texture is around 4 MB ((1024 x 1024 x 4 bytes) / (1024 x 1024)).
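Spelling the arithmetic out:

[CODE]
/* RGBA8 = 4 bytes per texel. */
size_t bytes = (size_t)1024 * 1024 * 4;   /* 4,194,304 bytes             */
size_t mb    = bytes / (1024 * 1024);     /* = 4 MB for one such texture */
[/CODE]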

[QUOTE=Cornix;1256797]As far as I know, your video card can use normal system RAM if the VRAM is used up.
However, when the video card uses system RAM it will get slower.

By the way, textures aren’t that big. A 1024x1024 RGBA texture is around 4 MB ((1024 x 1024 x 4 bytes) / (1024 x 1024)).[/QUOTE]

OK, that is what is happening. I have divided the image (pixel) data into ‘n’ chunks of size 1024x1024 and bound those chunks to that many textures. What happens is that after a certain ‘m’ chunks, the remaining n-m chunks get repeated; that is, the output of chunk m is repeated for the remaining n-m chunks.
In some cases the remaining n-m chunks show white quads instead. Any suggestions as to why this happens?
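Here is a simplified version of what I am doing (chunkPixels(i) is shorthand for however I fetch chunk i’s RGBA data; I added the glGetError checks while trying to track this down):

[CODE]
GLuint *texIDs = malloc(n * sizeof(GLuint));
glGenTextures(n, texIDs);

for (int i = 0; i < n; ++i) {
    glBindTexture(GL_TEXTURE_2D, texIDs[i]);

    /* Set the filters explicitly: without mipmaps, the default
       MIN_FILTER leaves a texture incomplete, which can show up
       as white quads. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, chunkPixels(i));

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        fprintf(stderr, "chunk %d: upload failed (0x%04x)\n", i, err);
}
[/CODE]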

Thanks

How big are the images you’re talking about - are we talking gigapixel images streamed from a web server, or something more like a standard digital photograph?

I ask because different sizes from different sources require different rendering strategies. Give us the actual parameters you’re using, and an idea of how you’re trying to render them.

[QUOTE=markds;1256806]How big are the images you’re talking about - are we talking gigapixel images streamed from a web server, or something more like a standard digital photograph?

I ask because different sizes from different sources require different rendering strategies. Give us the actual parameters you’re using, and an idea of how you’re trying to render them.[/QUOTE]

As of now, I am loading images from the local disk. With the technique I explained above, I am able to load images up to roughly 6000x6000. I also have a high-resolution image of 13000x13500 pixels. For now, the application should be capable of loading an image of any type and size from the local disk; in a later version I’ll add a feature for loading images from a remote system (over the web). In either case the source of the image is a hard disk, whether on the local machine or a remote one; no image comes directly from a device such as a camera.

Presently I am breaking the images into blocks of 64x64 pixels. Running through each of them, I bind each block to a texture and render it as a quad (GL_QUADS). This is done only once, inside a display list; for rendering, I simply call the compiled display list.

Please note: once the image is loaded, no editing or processing of any kind is done on it. It is static the whole time, which is why I used display lists.
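In simplified form, the setup looks like this (tileTex[i] and tileRect[i] stand in for my per-tile texture IDs and screen rectangles):

[CODE]
/* Compile the textured quads into a display list once... */
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
for (int i = 0; i < tileCount; ++i) {
    glBindTexture(GL_TEXTURE_2D, tileTex[i]);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(tileRect[i].x0, tileRect[i].y0);
        glTexCoord2f(1, 0); glVertex2f(tileRect[i].x1, tileRect[i].y0);
        glTexCoord2f(1, 1); glVertex2f(tileRect[i].x1, tileRect[i].y1);
        glTexCoord2f(0, 1); glVertex2f(tileRect[i].x0, tileRect[i].y1);
    glEnd();
}
glEndList();

/* ...then every frame just replay it. */
glCallList(list);
[/CODE]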

Thanks

I’ll work on the assumption the user is able to zoom in and out.

Firstly, 64x64 is far too small - even low-end machines can handle an 8192x8192 texture (which equates to 256 MB of uncompressed RGBA). If the image fits into a single texture, use that - it’s simpler and faster. Also, create mipmaps for zooming purposes.

If the image is too big, keep halving it until it fits into a single texture: e.g. a 20000x20000 source image should first be reduced to 10000x10000; if that doesn’t fit, reduce it again to 5000x5000, etc. Only when the user needs to view at 100% should you page in the full-resolution data for the area on screen.
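In code, the halving step might look like this (gluScaleImage from GLU is used for brevity; any decent CPU downsampler will do):

[CODE]
GLint maxTex = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);

/* Halve the dimensions until they fit the card's limit... */
int w = srcW, h = srcH;                 /* e.g. 20000 x 20000     */
while (w > maxTex || h > maxTex) {
    w = (w + 1) / 2;                    /* 20000 -> 10000 -> 5000 */
    h = (h + 1) / 2;
}

/* ...then downsample the source pixels once on the CPU. */
void *scaled = malloc((size_t)w * h * 4);
gluScaleImage(GL_RGBA, srcW, srcH, GL_UNSIGNED_BYTE, srcPixels,
              w, h, GL_UNSIGNED_BYTE, scaled);
[/CODE]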

[QUOTE]I’ll work on the assumption the user is able to zoom in and out.[/QUOTE]

I’m sorry, I didn’t mention it before. Yes, I have zoom in/out, pan, and rotate.

[QUOTE]even low-end machines can handle an 8192x8192 texture (which equates to 256 MB of uncompressed RGBA)[/QUOTE]

I have two systems, one for development and the other for testing. The development system has a maximum texture resolution of 1024x1024; any image up to this size renders quite well, and the problem comes only when the image resolution grows beyond that. On the testing system the maximum texture resolution is 4096x4096. I map the entire image with texture coordinates from 0 to 1. Now I have a question: can we fit a 2048x2048 image onto a 1024x1024 texture? If so, won’t the image quality be lost?

You have a max texture size of 1024x1024? Crikey!

What hardware are you using? And are you sure you’re getting hardware acceleration? The reason I ask is that the MS GL software renderer is limited to 1024x1024… maybe you need to update your drivers.

In answer to your question, compression won’t help: although it takes less memory, you’re still limited to GL_MAX_TEXTURE_SIZE.

Like I said, when you’re zoomed out you can use a much smaller image (downsized on the CPU), and only load in full-resolution sections when you need to zoom to 100%.

[QUOTE=markds;1256813]You have a max texture size of 1024x1024? Crikey!

What hardware are you using? And are you sure you’re getting hardware acceleration? The reason I ask is that the MS GL software renderer is limited to 1024x1024… maybe you need to update your drivers.

In answer to your question, compression won’t help: although it takes less memory, you’re still limited to GL_MAX_TEXTURE_SIZE.

Like I said, when you’re zoomed out you can use a much smaller image (downsized on the CPU), and only load in full-resolution sections when you need to zoom to 100%.[/QUOTE]

I work on a rack server; the development is being done on the server itself. It is a Dell PowerEdge rack server running Windows Server 2008 Enterprise Edition. I tried upgrading the graphics driver, but my admin says rack servers are not meant for high-end graphics use. Is that so? As far as I know, we can have high-end graphics on servers.

Anyway, so basically I first need to check whether the image fits into a texture. If it fits, render it directly; otherwise, break it into four parts and do the same to each part. This solution seems to be recursive, isn’t it?

Thanks

Yeah - recurse till you get the right size. Just store enough image data to be visible, plus some extra so the user can scroll a short distance before running out of image (while in the background you pre-emptively load more data centred on what’s visible on the screen).
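A sketch of that recursion (uploadRegion is a hypothetical helper that creates one texture for a region of the source image):

[CODE]
typedef struct { int x, y, w, h; } Region;

void uploadRegion(Region r);   /* hypothetical: one texture per region */

/* If the region fits in one texture, upload it; otherwise split it
 * into four quadrants and recurse - effectively building a quadtree
 * of tiles over the image. */
void buildTiles(Region r, GLint maxTex)
{
    if (r.w <= maxTex && r.h <= maxTex) {
        uploadRegion(r);
        return;
    }
    int hw = r.w / 2, hh = r.h / 2;
    Region q[4] = {
        { r.x,      r.y,      hw,       hh       },
        { r.x + hw, r.y,      r.w - hw, hh       },
        { r.x,      r.y + hh, hw,       r.h - hh },
        { r.x + hw, r.y + hh, r.w - hw, r.h - hh },
    };
    for (int i = 0; i < 4; ++i)
        buildTiles(q[i], maxTex);
}
[/CODE]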

The reason it’s beneficial to store a low-res version of the image is simply that if the user scrolls a long way, you have something to display while the full-resolution data is still loading from the hard drive. This strategy allows literally any size of image to be streamed in.
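Roughly, the per-frame draw order becomes (drawQuad, lowResTex, and the resident tile list are all hypothetical):

[CODE]
/* Always draw the low-res version first, stretched over the whole
 * image area, so something sensible is visible immediately... */
drawQuad(lowResTex, wholeImageRect);

/* ...then draw whatever full-resolution tiles have finished loading
 * on top of it. Tiles still streaming from disk simply aren't drawn
 * this frame. */
for (int i = 0; i < residentTileCount; ++i)
    drawQuad(residentTile[i].tex, residentTile[i].rect);
[/CODE]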

Yeah, right, that’s true. But I don’t think loading the entire image into memory is sufficient; paging to disk is very much needed, isn’t it? Also, I was going through quadtrees recently, which are used for image compression with multiple levels. Will they be helpful here?

You rightly said to store the low-res version, but how do I get the low-res version? Just by rescaling? Secondly, at what stage would I know that I need to switch from the low-res version to the high-res (or an intermediate-res) version?

Thanks