How to use a big texture?

How do I use a big texture in OpenGL (possibly without hardware acceleration)? The texture is a high-resolution (4096x4096 or bigger) geographical map superimposed on a landscape.

If your card supports that texture size and you have enough memory, then you’re fine. If not, you can either resample the texture down to a supported size, or break the big texture into smaller tiles.

Resampling and then clamping it would probably look ugly (blurry), whereas using a subdivided image would make it a pain to map onto your landscape (modulo arithmetic in order…)

int maxTexSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize); /* largest texture dimension the implementation claims to support */

should get you the maximum size supported by your implementation, which you can then use as a divisor (with / and %) when computing your glTexCoord values. The max size is always a power of 2, so if your texture dimensions are powers of 2 as well, you need not be concerned about “leftover” partial tiles.
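Something along these lines (just a sketch; the function and variable names are made up for illustration):

/* Hypothetical helper: map a global texel coordinate onto the tile */
/* grid with / and %, giving the tile indices and the within-tile   */
/* texture coordinate to pass to glTexCoord2f.                      */
void globalToTile(int texelX, int texelY, int tileSize,
                  int *tileX, int *tileY, float *s, float *t)
{
    *tileX = texelX / tileSize;   /* which tile column */
    *tileY = texelY / tileSize;   /* which tile row    */
    *s = (texelX % tileSize) / (float)tileSize;
    *t = (texelY % tileSize) / (float)tileSize;
}

Bind the texture object belonging to (tileX, tileY), then call glTexCoord2f(s, t).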

The problem will be actually cutting your 4096x4096 pixel data up into squares; most likely you just have one flat array of pixels, in which case you’ll probably have to create a lot of new pixel-data buffers, delete the original one, and then make a bunch of calls to glTexImage2D. So when loading your image, it helps to already know the max texture size; then you can organize your data into squares right away (again, using division and modulo).
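A minimal sketch of that cutting-up loop, assuming a flat, row-major RGBA image whose dimensions are exact multiples of the tile size (all names here are illustrative):

#include <stdlib.h>
#include <string.h>
#include <GL/gl.h>

/* Cut a flat imgW x imgH RGBA image into tileSize x tileSize squares */
/* and upload each square as its own texture object.                  */
void uploadTiles(const unsigned char *pixels, int imgW, int imgH,
                 int tileSize, const GLuint *texIDs /* tilesX*tilesY ids */)
{
    int tilesX = imgW / tileSize, tilesY = imgH / tileSize;
    unsigned char *tile = malloc(tileSize * tileSize * 4);
    for (int ty = 0; ty < tilesY; ty++) {
        for (int tx = 0; tx < tilesX; tx++) {
            /* copy the tile out of the big image, row by row */
            for (int row = 0; row < tileSize; row++) {
                const unsigned char *src = pixels +
                    ((ty * tileSize + row) * imgW + tx * tileSize) * 4;
                memcpy(tile + row * tileSize * 4, src, tileSize * 4);
            }
            glBindTexture(GL_TEXTURE_2D, texIDs[ty * tilesX + tx]);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, tileSize, tileSize,
                         0, GL_RGBA, GL_UNSIGNED_BYTE, tile);
        }
    }
    free(tile);
}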

[This message has been edited by Pa3PyX (edited 08-20-2002).]

I have successfully mapped two 4k x 4k textures on a terrain, but the only machine I had with that kind of texture memory was an SGI-540, which could parse out part of its main memory as graphics memory.

I have successfully mapped a single 4k x 4k texture on a piece of terrain on an Nvidia card. Check the maximum size as per above; the results may surprise you. Some cards allow up to 4k x 4k, others only 1k, etc. If your card allows it, just use it. However, remember that your graphics card often has to share memory with rasterization. Wildcats parse them out individually; others often share. So a 64Meg card will theoretically hold a 4k x 4k image (16Meg single-channel, 48Meg tri-color), but some cards default to 32-bit color texture storage, which means 4k x 4k == 64Meg and no room for rasterization of the image. So 4k is impossible for that card even though it seems it should be able to handle it. And you are also sharing that 64Meg with all your other textures, so watch your memory.
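To make that arithmetic explicit (a back-of-envelope sketch; it ignores mipmaps, padding and compression):

/* Uncompressed footprint of a single texture level. */
unsigned int texBytes(unsigned int w, unsigned int h, unsigned int bytesPerTexel)
{
    return w * h * bytesPerTexel;  /* 4096*4096*1 = 16Meg, *3 = 48Meg, *4 = 64Meg */
}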
just me,
jeff
P.S. If you are lucky enough to have an SGI Infinite Reality system, you could use image clipmapping… wish someone else would see the need for that… I would love to texture with 32k x 32k terrain images again.

If you have hardware-accelerated texture borders (GeForce3 and up), then you can cut your texture into several smaller pieces and tile it seamlessly. Do a LOD scheme where textures in the distance are only uploaded at lower resolution, and you’ll do okay.
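For example (a sketch, assuming ‘tileWithBorder’ is a hypothetical buffer already holding the tile plus a 1-texel fringe copied from its neighboring tiles, so that GL_LINEAR filtering fetches the right texels across tile seams):

glBindTexture(GL_TEXTURE_2D, tileTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
             tileSize + 2, tileSize + 2, /* width/height include the border */
             1,                          /* border = 1 texel                */
             GL_RGBA, GL_UNSIGNED_BYTE, tileWithBorder);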

You can go 4k x 4k easily on consumer cards by storing the image as band-separate 8-bit luminance textures and using color write masks and multipass rendering to draw the terrain. If you want to go larger, then you have to get clever about paging texture.
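Roughly like this (illustrative names; redTex, greenTex and blueTex are GL_LUMINANCE8 textures each holding one color band, drawTerrain() is a hypothetical draw call, and the later passes need a depth test that passes on equal depth, e.g. glDepthFunc(GL_LEQUAL)):

/* Pass 1: write the red band only */
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_FALSE);
glBindTexture(GL_TEXTURE_2D, redTex);
drawTerrain();

/* Pass 2: green band */
glColorMask(GL_FALSE, GL_TRUE, GL_FALSE, GL_FALSE);
glBindTexture(GL_TEXTURE_2D, greenTex);
drawTerrain();

/* Pass 3: blue band */
glColorMask(GL_FALSE, GL_FALSE, GL_TRUE, GL_FALSE);
glBindTexture(GL_TEXTURE_2D, blueTex);
drawTerrain();

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); /* restore */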

Hmm… I’d say even if the textures are larger than available video memory, you are probably still OK as long as the texture sizes do not exceed the maximum; it’s up to the driver to decide what to do when textures don’t fit, and that probably means moving a lot of data back and forth across the PCI bus, so it will just be horribly slow. But I think nVidia drivers are smart enough to handle larger-than-local-memory texture loads (they cache all textures in system memory anyway). S3 drivers (to my knowledge), on the other hand, are not, and neither are most of the OpenGL wrappers like GLDirect.

As for actual maximum texture sizes, I believe it’s: (nVidia) 1024 for Riva 128 and Riva TNT, 2048 for TNT2 and GeForce, 4096 for GeForce 2 and above; (ATI) 512 for Rage Pro, don’t know about the others; (3Dfx) 256 for Voodoo, Voodoo 2 and Voodoo 3, 1024 for Voodoo 4 and 5; and I think it’s 1024 for Microsoft’s software OpenGL. (Correct me if I’m wrong on any.)

>>Correct me if I’m wrong on any<<

Size is not the only thing that matters; texture format does too, e.g. my TNT2 can do 2048x2048 as LUMINANCE but not as RGB. Check this with texture proxies.
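A proxy-texture query looks like this: you describe the texture to GL_PROXY_TEXTURE_2D without supplying data, then read back the width; zero means the implementation cannot handle that size/format combination.

GLint proxyW = 0;
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGB8, 2048, 2048, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &proxyW);
if (proxyW == 0) {
    /* 2048x2048 RGB8 is not supported; fall back to something smaller */
}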

Exactly. Using band-separate luminance textures and multipass with write masks raises the maximum supported texture size before you have to do anything fancy with splitting textures and all the associated edge-filtering problems. A TNT2 is an old card; you can go higher res on newer models :)

[This message has been edited by dorbie (edited 08-21-2002).]