New Seamless Texturing Technique



Cignoni
05-08-2004, 01:15 PM
Hello,

I just wanted to get the hardcore coders' opinion of this community about a new technique, called PolyCube-Map, for texture parameterization that is: seamless, mesh-independent (e.g. LOD-robust), low-distortion, texture-space efficient, mipmap-robust, and, if you need it, user-controlled (YOU decide which portions of the mesh are important).
This technique was recently developed by our group (vcg.isti.cnr.it) (http://vcg.isti.cnr.it) and we will present it at the next SIGGRAPH. From the abstract:
Standard texture mapping of real-world meshes suffers from the presence of seams that need to be introduced in order to avoid excessive distortions and to make the topology of the mesh compatible with that of the texture domain. In contrast, cube maps provide a mechanism that could be used for seamless texture mapping with low distortion, but only if the object roughly resembles a cube. We extend this concept to arbitrary meshes by using as texture domain the surface of a polycube (http://vcg.isti.cnr.it/polycubemaps/gallery/bunny3.jpg) whose shape is similar to that of the given mesh (http://vcg.isti.cnr.it/polycubemaps/gallery/bunny2.jpg).

Obviously, the technique can be implemented in OpenGL on current hardware.

More info, pictures, a paper and a short movie at the following link:

http://vcg.isti.cnr.it/polycubemaps/

Your comments, opinions and criticisms are very welcome.

Korval
05-08-2004, 02:39 PM
I haven't seen any real problems with seams in modern games. This is more of a solution looking for a problem.

Pop N Fresh
05-08-2004, 05:34 PM
I thought the paper was excellent. I'll be awaiting the downloads.

I'm curious to see how your technique would work with texture-space rendering, something like what's described in ATI's skin shading paper (http://www.ati.com/developer/techpapers.html#GDC04) from this year's GDC. Have you experimented using PolyCube-Map with any of these techniques?

rgpc
05-09-2004, 06:36 PM
Originally posted by Korval:
I haven't seen any real problems with seams in modern games. This is more of a solution looking for a problem.

Could that be because the artists spend a great deal of time making sure they've fixed the seams? Rather than having the code do it for you?

Cignoni
05-09-2004, 07:33 PM
@ rgpc
Yes! You got the main point: until now, seams were mainly an artist issue. From the coders' point of view, you just need to handle the fact that the same point, with the same position and normal, can have more than one texture coordinate.

Now, with the arrival of render-to-texture algorithms, seams are an issue for us too. Look at the ATI slides (http://www.ati.com/developer/gdc/D3DTutorial_Skin_Rendering.pdf) on skin rendering (slide 21, first item: "texture seams can be a problem..."): they have to cope with the discontinuity of the parameterization in an explicit way.

With polycube maps, texture coordinates become something that is really 'per vertex' (no need to send the same vertex to the GPU with different texture coordinates), so: simpler data structures, better stripping, etc.
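
To make this concrete, here is a rough C++ sketch (hypothetical struct names, not code from our paper) of what seamless per-vertex coordinates buy you on the data-structure side:

#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// With a seamed 2D atlas, a vertex lying on a seam must be duplicated:
// same position and normal, but a different (u, v) on each side of the cut.
struct SeamedVertex {
    Vec3  position;
    Vec3  normal;
    float u, v;          // differs across the seam -> duplicated vertices
};

// With a PolyCube-Map, the lookup coordinate is unique per surface point,
// so one record per vertex suffices and index buffers/strips stay clean.
struct PolycubeVertex {
    Vec3 position;
    Vec3 normal;
    Vec3 polycubeCoord;  // single 3D coordinate into the polycube domain
};

// The index buffer can then reference each vertex exactly once,
// regardless of where the old atlas seams used to run.
using IndexBuffer = std::vector<std::uint32_t>;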

CatAtWork
05-09-2004, 07:44 PM
Agreed. Every modeller I know constantly bemoans texturing. One of my coworkers loves Z-Brush, but the mappings it generates are less than ideal.

plasmonster
05-09-2004, 07:45 PM
In my opinion, the motivation for a technique like this is a no-brainer. The closer the intermediate surface resembles the actual surface being mapped, the better. This is an excellent solution.

The paper mentions the problem with the shift operation in current hardware, and the associated fix. I wonder if near-future hardware will support bit operations, and have the fragment horsepower to handle this technique. I certainly hope so.
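
For the curious, that fix boils down to the standard trick of faking integer shifts with float math. A minimal CPU-side sketch in C++ (my own illustration, not the paper's code):

#include <cmath>

// x >> n for non-negative integers stored in floats:
float shiftRight(float x, int n) {
    return std::floor(x / std::exp2(float(n)));
}

// x << n:
float shiftLeft(float x, int n) {
    return x * std::exp2(float(n));
}

// Extracting a bit field (bits [lo, lo+count)) costs a divide, a floor,
// and a fract-style subtraction -- cheap on a CPU, but several fragment
// instructions per field on today's GPUs, which lack bit operations.
float extractBits(float x, int lo, int count) {
    float shifted = std::floor(x / std::exp2(float(lo)));
    return shifted - std::floor(shifted / std::exp2(float(count))) * std::exp2(float(count));
}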

davepermen
05-09-2004, 09:46 PM
great work. can't imagine how happy my favourite artist friend will be once he gets that link later today. currently he's sleeping, so i hope he has nice dreams, and not nightmares about texturing :D

and i, on my side, am rather happy with it too.. this could help programmer art as well, as programmers are even less keen on texturing :D

Korval
05-09-2004, 09:53 PM
This is an excellent solution.

Except for the whole performance issue. This isn't like parallax bump mapping, which has a fairly negligible impact on performance (compared to having bump mapping at all) for a massive impact on visual quality.

This technique trades per-fragment performance for artist time and some small per-vertex performance benefit in terms of stripping. I, for one, don't believe in trading performance for anything less than a non-trivial visual quality improvement. We're talking about operations that are non-trivially complex.

Artists, while they may bemoan doing it, still can do it. As such, the problem is solved.

Now, if hardware starts implementing this, such that my texture accesses can automatically do it, that's a different story. But, until then, I can find more important things to spend performance on.

davepermen
05-09-2004, 10:43 PM
korval: this is RESEARCH. we know you don't give a fuzz about that, but this is research, and it was never meant to be fast from the start. once we have the demo, people can start to optimize and compare against it, and next-gen hw can start to add features that help optimize it.

but we're seeing more and more of the invisible part of the power of fragment shaders: the ability to do anything with texturing. different shaders to sample different sorts of shadow depth maps, different shaders for autogenerated texture maps in this case.

the more we can automate this whole process, the more artists can do art. this can help solve most of the lightmapper seam issues, it can help autogenerate doom3-style multi-lod bump/parallax-mapped characters out of high-res models, etc.

let's not start a flamewar about this just because of performance. performance is never an issue when you develop something new. once it's working, it starts to be one. never before.

just delay your game because of some source leaks for another half or full year, and hw will have advanced enough to be fast enough again :D

CrazyButcher
05-10-2004, 03:07 AM
would love to see this in "normal" 3d apps (3dsmax, lw...) as well, because uv mapping hi-res stuff for texture baking is definitely a "no fun" thing.

davepermen
05-10-2004, 03:37 AM
definitely. instead, creating a lowres "voxel" version of your mesh and using that as a base for polycubemapping could be made very easily doable in an editor.

damn artist, still sleeping :D

castano
05-10-2004, 05:37 AM
Hmm... This is an interesting technique, especially because it's very different from the traditional automatic texture mapping algorithms. However, it requires additional cost at the fragment level. I haven't read the paper yet, but at first glance it seems that you have to do the texture filtering manually... it would be nice if this were hardwired and handled by the hardware, or if we had enough fragment power to handle it at lower cost.

V-man
05-10-2004, 09:28 AM
The thought of using cubemaps to do standard texturing had crossed my mind as well, but I rejected it quickly :)

I think this idea has very good potential, except for the encoding in the upper left, which seems ugly.

Instead, having a kind of index number per vertex would seem cleaner. One could use the fourth coordinate since it is wasted on accessing cubemaps.

I will read the paper right after this, but I was wondering how big a polycube map can be with the current approach.

eirikhm
05-10-2004, 09:41 AM
any ETA on the running demo (and code)?

Cignoni
05-10-2004, 01:21 PM
@Korval,
yes, performance is an issue, the fragment program is rather long (57 instructions), but both the speed of hw and the length of allowed fragment programs are increasing a lot. Moreover, the cost is amortized if you are using more than one texture with the same parameterization; this is probably the most common case for complex shaders.
The advantages of this technique (apart from being artist-friendly) are many. Probably one of the most important is the independence from the mesh topology: the same texture can work for different LOD models, and you can simplify your model without constraining the mesh to maintain the same seam topology.

Moreover, we (and many others here :) ) think that polycube maps can lead to more artist-friendly texturing tools. And this can lower the overall cost of game development (and producers need more artists than engine designers...).

Obviously if hw could support polycubemaps, everything would be easier and hidden to the users.

@davepermen,
remember to let us know the opinion of your favourite artist friend! :)

@plasmonster,
Yes, having bitwise operations in fragment shaders will help a lot.

@V-man,
i agree, encoding the LUT in the corner of the texture is not a wonderfully clean design choice; in theory it is something conceptually different from the texture itself, but with current hw and fragment programs, the texture itself was the best place to store it.

@eirikhm
Availability of the code? Surely before SIGGRAPH :) , we are cleaning up our library ( vcg.sf.net (http://vcg.sf.net) ), so stay tuned!

Thanks to everyone for the insightful comments!
Go on please! I really would like to hear some informal comments from the HW guys...

castano
05-10-2004, 01:38 PM
Moreover, we (and many others here) think that polycube maps can lead to more artist-friendly texturing tools. And this can lower the overall cost of game development (and producers need more artists than engine designers...).

Well, most artists still paint textures manually. There are some tools for 3d painting, but most artists prefer standard tools like Photoshop that work only with 2d texture maps. Automatic parameterization is still nice for other applications, like extraction of appearance attributes, i.e. baking.

Korval
05-10-2004, 01:44 PM
the fragment program is rather long (57 instructions), but both the speed of hw and the length of allowed fragment programs are increasing a lot.

It's going to be quite some time before a 57-instruction fragment program can be considered worth the expense. Possibly even post-R500 (though some of the high-end R500s can probably handle it).

Even with a 500-instruction shader, another 57 opcodes is still more than 10% of that, and adding 10% more opcodes is going to cause a slowdown if you're pushing the hardware.


Moreover, we (and many others here) think that polycube maps can lead to more artist-friendly texturing tools.

That isn't the issue. The issue is taking performance to solve an art-creation problem. It is very difficult to justify 57 per-fragment opcodes just to make your modellers'/texture artists' lives easier.


And this can lower the overall cost of game development (and producers need more artists than engine designers...).

Except that you now need better engines (and engine designers), because you have to optimize your code more/remove features to make up for the 57 opcodes per fragment that you've taken up.


Obviously if hw could support polycubemaps, everything would be easier and hidden to the users.

Behind you 100% on the integration-into-texture-units front.

CatAtWork
05-10-2004, 02:38 PM
Content creation is becoming the bottleneck in games; at least that's what prominent developers have been saying recently.

While this might not be immediately applicable across the board, due to the performance impact, it's still a step in the right direction.

Cignoni
05-10-2004, 02:38 PM
@castano
yes, i know, artists usually draw textures by hand and want pixel-wise control of their work. Polycube maps can give the artist good control over how to unwrap their models, without incurring seams. For example, let's take a look at a model (totally random entry: gothicgirl (http://www.cgtalk.com/showthread.php?s=&threadid=133299); consider the unwrapping of the head). Often artists unwrap models along their main feature shapes (a head is unwrapped like a cube, a leg as a cylinder, and so on); with polycube maps this approach still works: the underlying polycube has whatever structure the artist likes best. In this way it is straightforward to imagine a sw tool that assembles the squarelets into a planar bitmap for straightforward classical editing, but without seams, etc.

Polycube maps are not totally automatic (for now): the shape of the polycube can be chosen by the artist. On the other hand, polycube maps also work quite well for re-detailing low-poly meshes with normal maps (aka detail preserving (http://citeseer.ist.psu.edu/cignoni98general.html) ).

JustHanging
05-11-2004, 12:05 AM
For offline renderers this would be great, but for realtime applications I have to agree with Korval.

I'm sure artists would like to get rid of the nasty uv mapping process, but not if in return they have to throw away half of their detail. As for cutting down on development costs, you'll also be cutting down the number of potential buyers, thanks to increased system requirements.

This technique would also create a tangent space that isn't continuous even across a single polygon, right? So using bump mapping would make the fragment shader even more complicated.

-Ilkka

davepermen
05-11-2004, 02:07 AM
@ Korval and all performance freaks.

you support the idea, but want it in hardware? because it.. slows down your engine. understandable, but heck, we are at the RESEARCH STAGE. you can't get everything right from the start. first, get it working; then, get some (demo) support; and then, get hw features that make it more hw-friendly.

i've seen cubemapping demos before cubemaps existed, they took 12 passes, blending, and all the fuzz. but they still showed one thing: it works, and looks useful. today, a cubemap is there for all.

i've seen envbump and similar stuff emulated with glBegin GL_POINTS foreach(pixel on screen) glVertex, which was ****ing slow, but showed: heck, it looks cool, and powerful.

today, we have both, for free. and based on these technologies, new technology rises, like this one, heavily using both as a given.

it's all just a matter of time. IF we at least support the effort, and the research, and DON'T look at it from a "bah, it doesn't run fast now, we can't use it ever" side.

doom3 wasn't playable on any hw the day carmack started.. once it's out, people will play it at highest res.. hw is evolving fast.

there is research in hw-accelerated raytracing, beamtracing, and other techniques, to finally solve the lighting equation in arbitrary scenes in realtime. you can't just go there and say "does it work yet at 60fps? no, so it's bullshit, drop support". it gets faster and faster, and better and better. one day, it will be good enough.

you're very closed-minded, korval. too money-horny: "i need it now, i need it fastest, and i don't want to do anything for it". research and slow, hacky effort are what brought you the hw of today, with all its power. you should really support it.

i'm unsure about my artist. have to drive to him today to check what's going on..

castano
05-11-2004, 08:42 AM
Ok, we should be realistic. It seems that the main advantage of this technique is that it provides an automatic parameterization. Additionally, that parameterization has other advantages: it's seam-free, and it has low distortion because the surface is mapped onto a polycube, a surface whose shape is similar to that of the original mesh.

In my opinion those additional benefits aren't so important. There are other parameterization algorithms that also produce low distortion but have to break the mesh into pieces. That means you have to duplicate vertices and that the texture usage is not as good. But I still prefer that over the disadvantages of polycube maps.

I think that the problem with current algorithms is that the mesh segmentation is not controllable. The segmentation process should be guided by the artist. It should allow the artist to set the seeds of a region growing algorithm and to guide the growing process by tweaking parameters of the growing metric.

Pentagram
05-11-2004, 09:16 AM
I haven't read the paper, but the "Multiresolution Mesh Atlases" parameterisation in
http://www.csee.umbc.edu/~olano/s2001c24/ch10b.pdf
seems to provide similar capabilities with normal 2D textures. (There is some other paper by the same author that provides more info, but I can't find it right now.)
(I'm mainly thinking about render-to-"skin" style approaches here)

Charles

Cignoni
05-11-2004, 12:05 PM
@JustHanging
Good question. Tangent-space discontinuity: you probably refer to the case of computing normal maps and storing them directly in the tangent space of every face, for easier normal mapping of non-rigid models.
This is not an issue peculiar to polycube maps; it arises for every parameterization that is not face-aware (like, for example, Hoppe's spherical parameterization) and works independently of the mesh topology.

The problem is that when you modify the geometry of your mesh, you have to change the directions of the normals stored in the normal map accordingly. This is quite easy if you store normals in tangent space, but it can also be done if the normals are computed in object space and you know the original normals of the vertices of your mesh.
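
To make the object-space case concrete, a minimal C++ sketch (invented names, not code from the paper): if you keep the original per-vertex frames, updating a stored object-space normal after a deformation is just a change of basis.

struct Vec3 { float x, y, z; };
struct Mat3 { Vec3 c0, c1, c2; };   // columns: tangent, bitangent, normal

// Multiply a column-matrix by a vector.
Vec3 mul(const Mat3& m, const Vec3& v) {
    return { m.c0.x * v.x + m.c1.x * v.y + m.c2.x * v.z,
             m.c0.y * v.x + m.c1.y * v.y + m.c2.y * v.z,
             m.c0.z * v.x + m.c1.z * v.y + m.c2.z * v.z };
}

// For an orthonormal frame, the transpose is the inverse.
Mat3 transpose(const Mat3& m) {
    return { Vec3{ m.c0.x, m.c1.x, m.c2.x },
             Vec3{ m.c0.y, m.c1.y, m.c2.y },
             Vec3{ m.c0.z, m.c1.z, m.c2.z } };
}

// oldFrame/newFrame: the surface frame at the vertex before and after the
// mesh was deformed. Express the stored normal in the old local frame,
// then push it back out through the new frame.
Vec3 updateObjectSpaceNormal(const Mat3& oldFrame, const Mat3& newFrame,
                             const Vec3& storedNormal) {
    return mul(newFrame, mul(transpose(oldFrame), storedNormal));
}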

@Pentagram
I know the paper you refer to well, but I do not think it is closely related to our technique.
Mainly, it is a variation of the classical 'per-triangle' parameterization, where you parameterize every triangle independently.

This kind of parameterization is the one most closely tied to the mesh topology (remesh something and your texture probably has to be thrown away).

On the other hand, one of the strengths of our approach is that the resulting parameterization is not tied to the mesh topology, but only to the overall shape of the object.

Incidentally, the url that you have given does not link to the final version of the paper, which can be retrieved from the home page of John C. Hart (http://graphics.cs.uiuc.edu/~jch/papers/) and is: Meshed Atlases for Real-Time Procedural Solid Texturing (TOG 2002) (http://graphics.cs.uiuc.edu/~jch/papers/rtpst.pdf).

The final version of the paper is much clearer and contains a nice state of the art, where they cite the first author to use this kind of 'per-triangle' parameterization (Maruya, almost 10 years ago!) and other authors who have used the same approach to bake solid textures into a 2d texture (our group, in '98 :) )

dorbie
05-11-2004, 01:56 PM
I think the tangent space is continuous. There has been a misunderstanding. The tangent space is defined by the geometric and texture derivative parameters as usual. The polycube texture topology is merely a coordinate fetch; there is no reason it cannot exist in the same tangent space across polycube faces.

Consider an object-space vector map and this is obvious. Consider a tangent-space map and I think it can still work. The texture derivatives won't change moving across faces, and the tangent-space normals in the map should match the tangent-space basis used for their generation; and of course, with tangent-space normals you have an interpolated coordinate frame that works for meshing etc.

dorbie
05-11-2004, 09:10 PM
P.S. Independence from geometric LOD implies object-space normal maps; simpler systems can support this too, even with deformations. It is only when tangent-space normals are used that there is a strong dependence on the underlying tangent-space basis of the textured mesh. The decision as to the space in which the fragment lighting is performed is rather arbitrary when it comes to bump-map reconstruction of detail. Only more traditional tiled bump-mapping approaches demand tangent-space treatment, and they have no use for this scheme. I like the polycube idea; it has a penalty, but I get uncomfortable when the merits are oversold. It's a simple question of practicality & cost, and different people may have different answers to that question; that doesn't make them luddites.

davepermen
05-11-2004, 09:40 PM
don't the demos show bumpmapping? they look at least "good enough". imho, there isn't an issue with tangent-space bumpmapping. it's just a different way to fetch texels, something similar to paletted textures: instead of getting an index into a colour palette, we get one into a cubemap palette.

btw, the artist had mixed feelings, as he didn't really understand what's going on. interestingly, he showed me a bit of farcry afterwards and ranted about the now "so cool bumpmapped models, that still have those same issues", and he showed me texture seams, and places where models don't look continuous because of that.. :D

i then explained to him why this is; he understood, and realised that, and why, this new approach doesn't have that problem. he's interested in seeing tools for setting it up, and is waiting for the demos now, too

plasmonster
05-12-2004, 12:32 AM
Okay, I think we all agree that the idea is good, if not great. :) There is the pesky matter of performance, but isn't it a tad premature to get into that when we don't even have an implementation yet? I'm as performance-conscious as the next guy, but jeez, we don't have anything firmly quantitative to discuss here in terms of, er...um... anything! I'd like to see a _real_ implementation of this thing before I get into a twist over cycles. And realistically, when could one expect to see something like this in hardware, maybe in the geforce 10 or 15?

It seems to me that time would be better spent trying to think of all the cool things we could do with a technique like this. Look at what cube maps gave us; surely there are some real possibilities here. Surely being able to discretize a space into a single polycube will have many applications besides making an artist's life easier. Think of baking a diffuse lighting solution for a non-convex space, for example. This could be a replacement for light maps, if set up properly. Okay, maybe that's a stretch, but this is a supercube, F(direction) gone insane! I'm still trying to think of new ways to use cube maps.

New technology is driven pragmatically. As Korval rightly points out, if there's nothing visually to gain, why bother? That's just it: getting the development community behind this thing will require visually compelling arguments. Seeing is believing; I think some good demos would go a long way towards convincing people that the cycles would be well spent.

I think davepermen summed it up nicely when he said this is research, and mighty fine research I might add.

davepermen
05-12-2004, 03:14 AM
i'm especially interested in implementing simple, clean, proper solutions for on-surface rendering with this. back-transfer the pixel you work on into 3d, move around, and re-transfer to your adjacent pixels.. for on-mesh blurring effects, for subsurface-scattering effects, for whatever effects.. this is a 1:1 mapping of a 2d texture to a 3d mesh, and back, with the possibility of doing this without any seam. quite a powerful thing imho. rendering decals onto the mesh, raindrops falling onto it, etc.. you collide, get the position, position = object2pct(position) (polycube texture), and begin to draw..

could get quite interesting for a lot of things, as i said.. game of life on a 3d mesh, anyone? :D
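
that object2pct() would look roughly like this, i imagine (cpu-side c++, names all invented by me -- the paper's actual atlas encoding differs):

#include <cmath>
#include <map>
#include <tuple>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// for each unit cell of the polycube and each of its 6 face directions,
// record where that face's "squarelet" starts in the 2d atlas. (in the
// paper this lookup table is packed into the texture itself.)
using CellFace = std::tuple<int, int, int, int>;   // cell (i,j,k) + face id

Vec2 object2pct(const Vec3& p, float squareletSize,
                const std::map<CellFace, Vec2>& squareletOrigin) {
    // 1. find the unit cell containing p (p is assumed to lie on the
    //    polycube surface).
    int i = int(std::floor(p.x)), j = int(std::floor(p.y)), k = int(std::floor(p.z));
    // 2. the offset from the cell center picks the dominant axis -> face 0..5.
    float fx = p.x - i - 0.5f, fy = p.y - j - 0.5f, fz = p.z - k - 0.5f;
    float ax = std::fabs(fx), ay = std::fabs(fy), az = std::fabs(fz);
    int face; float s, t;                          // position within the face, in [0,1)
    if (ax >= ay && ax >= az) { face = fx > 0 ? 0 : 1; s = fy + 0.5f; t = fz + 0.5f; }
    else if (ay >= az)        { face = fy > 0 ? 2 : 3; s = fx + 0.5f; t = fz + 0.5f; }
    else                      { face = fz > 0 ? 4 : 5; s = fx + 0.5f; t = fy + 0.5f; }
    // 3. place (s,t) inside that face's squarelet in the atlas.
    Vec2 o = squareletOrigin.at(std::make_tuple(i, j, k, face));
    return { o.u + s * squareletSize, o.v + t * squareletSize };
}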

plasmonster
05-12-2004, 04:06 AM
...and render to polycube, and depth polycubes. I wonder if they could be used in shadowing? There must be some volumetric effects that would benefit from this as well...

Adruab
05-12-2004, 01:10 PM
Render to polycube would be sweet. Given what cube maps are used for (env. maps, light approximations), polycube maps could represent a much closer version of this.

Theoretically you could get some sort of self-reflection while just using a polycube map (it wouldn't be exact, but it could get a lot closer). Of course, that might not work, since the nature of polycube maps seems to be based around positional topology rather than normal topology. But it would be really neat :) .

And think of extending all the other paradigms that cube maps serve.... Cube-map lighting approximations are restricted to viewing lighting from a single point; try boosting this to the accuracy of the model.

Of course, performance jumps right back into the picture (even render-to-cube-map requires rendering to the individual faces). Then again, maybe this could somehow be optimized to take advantage of the fact that the polycube faces have only 6 directions (offset of the projection plane for each one... maybe...). *sigh* That would be really cool too. How soon before siggraph do you plan to release the demo code for this stuff?

krychek
05-12-2004, 10:31 PM
PolyCube maps would probably make great environment maps. Consider a big indoor maze-like level: the environment could be completely captured in a (low-res) PolyCube map. Unlike traditional envmaps, this would assign texture space depending on the size of the geometric feature. The reflections on a model moving through the level would seamlessly move across the model, correctly depicting the reflections. The current way of doing that would be blending between different traditional envmaps, each corresponding to a different location. I haven't looked at the paper, so I don't know how tough it is to calculate the texcoords for reflection. If it is doable, we can finally have reflections that change quickly due to translation (as opposed to only rotation) of a model while using an envmap. :D

JustHanging
05-13-2004, 02:35 AM
Are you talking about raycasting in a pixel shader against the polycube topology? Sounds doable, but the reflected image would resemble the polycube, not the actual scenery. Perhaps you could associate a displacement term with the maps, but raytracing against that would definitely kill performance (oops, a bad word...)

-Ilkka

krychek
05-13-2004, 06:50 AM
I was hoping it could be done with a method better than raycasting, but looking at it again, I can't think of anything that works directly with these PolyCubes.
Raycasting per fragment sounds atrocious. We might be better off with lots of cubemaps placed around the level :(

Adruab
05-13-2004, 01:04 PM
I don't think you would need to do raycasting. Normal reflection cube maps are done by rendering the scene to each face and then using the normal to index into the result. If you did the same thing, but used the normal as the direction to project against the cube (perhaps it would be a different coordinate), I think it could work.

The biggest restriction in this type of system would be locking and rendering to each individual face (since they are split up all over the texture). It would probably kill you on vertex performance. And that's why you'd need some sort of system to take advantage of the coherency of the faces. It might not work because each would have a different center of projection, but that's the only real gotcha.

Perhaps you could keep everything but maintain 4d offsets for each different projection point (in fact you could precalculate those; it is linear after all). The thing you'd really need is to be able to bind different positions for different texture targets (multi-element rendering only renders to one place). It wouldn't even need to be arbitrary: as long as the hardware knew which input stream to the pixel shader to treat as the position for each individual texture target, it could be done (ooooh, that would be cool). Of course, with that generalization ALL sorts of things would be different (you could render to many of your surfaces at once, though they might be in different places). The problem is that all the other coordinates need to be interpolated along with the position, so it seems like this would be more of an instancing api for use in a pixel shader (though it would be MUCH more difficult to implement, I'd imagine).

You'd still have some distortion because the projection point would move as you crossed from tile to tile (you might have to interpolate neighboring tiles...), but it could still be really cool anyway, I think. You could also do it with the center of projection at the center of the model, but I don't think that would work as well.

All this is very "what if" fodder, but it's still awesome to think about. Of course, considering this is only just implementable, a lot of the things that would take advantage of it might have to come much later.

plasmonster
05-13-2004, 02:45 PM
Adruab,

I think what JustHanging is referring to is the problem of determining the cube face hit by a reflected ray, if I'm not mistaken.

Consider a right-angle hallway in a typical level:


       -----------------------|
           /\                 |
          /  \                |
    start/    \               |
    ---------------\          |
                  | \         |
                  |  \        |
                  |   \       |
                  |    \|
                  |    end

I think it's clear from my meager illustration that this situation is untenable in the current context; although ray tracing is an intriguing, if somewhat *costly*, proposition.

This does not preclude the possibility of diffuse lighting, or other view independent uses. :)

BTW, I too had initial hopes for reflection; I just don't see how it could be done.

chrispy
05-15-2004, 06:15 AM
Is seamless blur possible on the 2d texture?

Adruab
05-17-2004, 02:35 PM
Yeah, I was thinking you'd end up with problems like that. In retrospect, you would have to do ray tracing given the normals and positions stored at the sampled location on the polycube map. And that would be way too slow.

Seamless blurring could be done, but the lookup into the texture would have to be done many, many times (it would only work on cards that support an inordinate number of instructions). Unless it was directly accelerated by hardware, this probably wouldn't be worth it. It's pretty much like the prospect of doing bilinear filtering, except way, way more intense :) .

Marco Tarini
05-21-2004, 05:07 AM
Originally posted by Adruab:

Seamless blurring could be done, but the lookup into the texture would have to be done many, many times (it would only work on cards that support an inordinate number of instructions).

Of course, you could also access the final texture at an artificially coarser MIPmap level, with bilinear interpolation on, and thus get a cheap but effective texture blur. Bilinear interpolation can be made to work with just a few extra lookups, and mipmapping works straight away.

(There are some shortcuts, being worked on, to avoid re-running the entire texture-lookup code in order to perform bilinear interpolation on a polycubemap texture. Additional texture lookups would cost around 10 additional instructions each. Still a little expensive. But just a little branching in the fragment shader will probably cut the cost of the general technique by a lot! More about all this to come.)

davepermen
05-21-2004, 05:57 AM
you _could_ store, around each cube face, a border with indices to the adjacent texels, or something similar. a space-speed trade-off (definitely worth paying on non-branching hw, i guess..)..

Adruab
05-21-2004, 09:11 AM
Now that I think about it... Since all the texture squares are in the same texture, and are all of a certain dimension, you could do bilinear filtering purely through math on the final indices.

I think that's sort of what some of you were implying. Anyway, all it would really take is to transform into a space relative to the square you're originally sampling from. Then you'd just find overflows and underflows, and jump to adjacent squares. You could simulate the implied branching, as a lot of compilers do (especially when the operation is simple), by interpolating the resulting texture coordinates using either 0 or 1. That would certainly make it much faster :) . A sketch of the idea follows.
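
Something like this, say (a CPU-side C++ sketch with invented helper names; in a real fragment shader the branches inside resolveTap would become the 0/1 interpolation I mentioned):

#include <cmath>

struct Vec2 { float u, v; };

// Stub: remap a (possibly out-of-range) coordinate local to one squarelet
// into a final atlas coordinate. A real version consults the polycube
// adjacency info to jump into the neighboring squarelet; identity here.
Vec2 resolveTap(int square, float lu, float lv) {
    (void)square;
    return { lu, lv };
}

// Stub: one point-sampled fetch from the atlas.
float fetchTexel(Vec2 atlasCoord) {
    (void)atlasCoord;
    return 0.0f;
}

// Manual bilinear filter around local texel position (lu, lv): each of the
// four taps may fall outside the squarelet, and resolveTap() hides the
// jump to the adjacent square, so the filtered result shows no seam.
float bilinearAcrossSquares(int square, float lu, float lv) {
    float fu = lu - 0.5f, fv = lv - 0.5f;          // shift to texel centers
    float iu = std::floor(fu), iv = std::floor(fv);
    float wu = fu - iu, wv = fv - iv;              // bilinear weights

    float t00 = fetchTexel(resolveTap(square, iu + 0.5f, iv + 0.5f));
    float t10 = fetchTexel(resolveTap(square, iu + 1.5f, iv + 0.5f));
    float t01 = fetchTexel(resolveTap(square, iu + 0.5f, iv + 1.5f));
    float t11 = fetchTexel(resolveTap(square, iu + 1.5f, iv + 1.5f));

    // The four resolves are the expensive part -- roughly the ~10 extra
    // instructions per lookup that Marco mentions above.
    return (t00 * (1 - wu) + t10 * wu) * (1 - wv)
         + (t01 * (1 - wu) + t11 * wu) * wv;
}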

Has anyone looked at the quality of blurring based on sampling lower mip levels? It seems like that might not produce as good a look as an actual blur, since it would be constrained to linear interpolation. Despite this, it would probably be the only speed-acceptable thing to do. And if you're doing trilinear interpolation of sorts, it would probably be ok. Heheh, you could probably also optimize away the full extra calculations for mipmap sampling using rounding and such (even simpler than the above...).