>100 textures on one polygon?

Is it possible to place a large number (~100) of images on a single polygon? I looked into multitexturing, but there seems to be a limit of 32 images. Is there another way?

The alternative would be to mosaic the images first and then apply the mosaic to the polygon. However, it would be nice to do it all in one step within OpenGL.

The limit is even lower - 16 texture image units on today's hardware.
The most straightforward trick would be to stack the images into a 3D texture and write a fragment shader that runs through the slices per pixel.
If you need filtering on the individual 2D slices, you currently have to do that yourself.
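To make the per-pixel slice loop concrete, here is a CPU-side sketch of what such a fragment shader would compute for one pixel: walk the slices of the stacked 3D texture and alpha-blend them front to back. The names and the front-to-back "over" compositing are illustrative assumptions, not actual OpenGL API.

```python
def composite_slices(slices, s, t):
    """Sketch of the shader's per-pixel loop over 3D texture slices.

    slices: list of callables (s, t) -> (r, g, b, a), each sampling one
    2D layer of the stacked 3D texture at coordinates (s, t).
    """
    out_rgb = (0.0, 0.0, 0.0)
    out_a = 0.0
    for sample in slices:
        r, g, b, a = sample(s, t)
        w = a * (1.0 - out_a)          # front-to-back "over" weight
        out_rgb = tuple(c + w * x for c, x in zip(out_rgb, (r, g, b)))
        out_a += w
        if out_a >= 1.0:               # early out once fully opaque
            break
    return out_rgb, out_a
```

A real shader would do the same loop over `texture3D` lookups, with the slice count as a uniform.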

And how about using multipass with blending?
You could place 8-16 textures in every pass. Would that do the trick for you?

You could pack all your images into a single 3D texture.

Robert.

With that large a number of images, speed is going to be an issue.

Using a large texture atlas is one way to do it.
The best way is either to pre-make the entire mosaic in Photoshop, or to assemble it in an FBO at load time if you need to change it slightly every now and then.

@zeoverlord: Well I can’t pre-make anything, so Photoshop is out.

@Relic & Robert OK I’ll look into 3D textures. I’ll need that for another part of my project anyway.

@k_szczech Multipass with blending sounds promising. I suppose I can build up the texture layer by layer with this technique?

The basic idea is that the user will import a height-map and then place images on it. The images can be anywhere, have any orientation, and they may even overlap. The images are expected to be small. We are talking on the order of 40x40. However, there will be a bunch of them.

Multipass with blending sounds promising. I suppose I can build up the texture layer by layer with this technique?
Yes, but additionally use multitexturing to get reasonable performance.

The images can be anywhere, have any orientation, and they may even overlap
Note that even if you use a 3D texture, you will have a lot of texture coordinates to handle. A vertex shader could compute them, but it won't be able to pass that many varying variables to the fragment shader, so you will end up passing texture coordinate generation planes as uniforms to the fragment shader - 2*vec4 for every texture unit, and lots of math in the fragment shader.
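The plane-based coordinate generation mentioned above works like GL_OBJECT_LINEAR texgen: each image carries two vec4 planes, and the texture coordinates come from dot products with the surface position. A minimal sketch of that math (names illustrative):

```python
def texgen(plane_s, plane_t, pos):
    """GL_OBJECT_LINEAR-style texgen: s = dot(plane_s, P), t = dot(plane_t, P),
    where P is the surface position extended to (x, y, z, 1)."""
    x, y, z = pos
    p = (x, y, z, 1.0)
    s = sum(a * b for a, b in zip(plane_s, p))
    t = sum(a * b for a, b in zip(plane_t, p))
    return s, t
```

For a 40x40 image axis-aligned at the origin, the planes would be (1/40, 0, 0, 0) and (0, 1/40, 0, 0); rotation and translation of the image fold into those eight coefficients, which is exactly why two vec4 uniforms per unit suffice.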

This is why I think that using a combination of multipass + multitexturing will be the best solution.

In your first implementation just use the multipass rendering (no multitexturing) to render everything layer by layer.

Then optimize your application so that on every layer you draw only those polygons that are affected by that layer's texture (from what you say I assume you use GL_CLAMP, not GL_REPEAT) - small images will require only a few polygons to be rendered on their layer. It is very important to draw polygons from different layers in exactly the same place, with exactly the same clip planes (if you use any).
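Finding the affected polygons can be as simple as intersecting the image's footprint with the height-map grid. A sketch, assuming the terrain is triangulated on a regular grid and the image's footprint is reduced to an axis-aligned bounding box after rotation (cell size and AABB convention are assumptions for illustration):

```python
import math

def cells_touched(aabb_min, aabb_max, cell_size):
    """Return the (cx, cy) grid cells overlapped by an axis-aligned
    bounding box; only the polygons in these cells need to be drawn
    on the image's layer."""
    x0 = math.floor(aabb_min[0] / cell_size)
    y0 = math.floor(aabb_min[1] / cell_size)
    x1 = math.floor(aabb_max[0] / cell_size)  # inclusive end cell
    y1 = math.floor(aabb_max[1] / cell_size)
    return [(cx, cy) for cy in range(y0, y1 + 1) for cx in range(x0, x1 + 1)]
```

With ~40x40 images this typically yields a handful of cells per layer, which is what keeps the multipass approach affordable.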

The next step would be to combine multiple layers into a single pass with multitexturing - this part would be the most difficult. For example, imagine you have 10 layers, and layers 5 and 8 share a common area - you could draw them in a single pass as long as neither layer 6 nor layer 7 collides with both 5 and 8 at the same time. You only need to redo these calculations when layers change (are edited).
And of course the layer currently being modified by the user does not get combined with any other - if the user switches to another layer, you have to calculate the layer combinations again.
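The combination test described above can be sketched in a few lines: layers i and j (with i < j) may share a multitexturing pass only if no layer strictly between them overlaps both, since such an intermediate layer would have to be blended in between. Here `overlaps` is an assumed, symmetric footprint-intersection test:

```python
def can_combine(i, j, overlaps):
    """True if layers i and j (i < j) can be drawn in one multitexturing
    pass without breaking the blending order: no intermediate layer k
    may overlap both of them."""
    return all(not (overlaps(k, i) and overlaps(k, j))
               for k in range(i + 1, j))
```

In the 10-layer example: if layer 6 overlaps only layer 5, layers 5 and 8 can be combined; if layer 6 overlaps both 5 and 8, they cannot.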

Ok, I’ve done enough research to know that you’ve set me on the right course. Thanks.