Carmack's MegaTextures

Are there any specs about John Carmack’s implementation of his MegaTextures?

http://www.beyond3d.com/interviews/etqw/

There was a thread about this recently at gamedev: http://www.gamedev.net/community/forums/topic.asp?topic_id=390751

OK, what I think he does is that he first creates a 1024x1024 texture (personally I would choose maybe 2k or 4k; hint: two 1k RGBA textures use up 8MB) that he repeats all across the map.
Then he makes sure that you can only see a bit less than 512 texels’ worth of terrain ahead.

Finally he uses glTexSubImage2D() to add and replace something like 32x32 parts of the texture as you move along it.

Doing this only once per frame still makes it possible to have a 32kx32k texture without any overwhelming performance loss.
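If that’s roughly the idea, the per-frame update could look something like the sketch below. This is just my guess at the bookkeeping, assuming the 1024 window and 32x32 refresh pieces described above; none of the names or sizes come from id.

```cpp
// Rough sketch (my own guesses, not anything id has published): one
// resident 1024x1024 texture repeated across the map, refreshed in
// 32x32 pieces with glTexSubImage2D as the viewer crosses tile borders.
#include <GL/gl.h>
#include <cstring>

const int TEX_SIZE  = 1024;                // resident, wrapping texture
const int TILE_SIZE = 32;                  // size of each refreshed piece
const int TILES     = TEX_SIZE / TILE_SIZE;

// Placeholder: produce the RGBA texels for world tile (wx, wy),
// e.g. by decompressing them from disk.
static void fetchWorldTile(int wx, int wy, unsigned char* dst)
{
    std::memset(dst, (wx ^ wy) & 0xFF, TILE_SIZE * TILE_SIZE * 4);
}

// Wrap a world tile coordinate into the resident texture (toroidally).
static int wrapTile(int t) { return ((t % TILES) + TILES) % TILES; }

// When the viewer steps into a new tile column, refresh that column of
// the resident texture with the texels it should now hold.
void refreshColumn(GLuint tex, int worldTileX, int worldTileY0)
{
    static unsigned char buf[TILE_SIZE * TILE_SIZE * 4];
    glBindTexture(GL_TEXTURE_2D, tex);
    for (int i = 0; i < TILES; ++i) {
        int wy = worldTileY0 + i;
        fetchWorldTile(worldTileX, wy, buf);
        glTexSubImage2D(GL_TEXTURE_2D, 0,
                        wrapTile(worldTileX) * TILE_SIZE,
                        wrapTile(wy) * TILE_SIZE,
                        TILE_SIZE, TILE_SIZE,
                        GL_RGBA, GL_UNSIGNED_BYTE, buf);
    }
}
```

An analogous refreshRow() would handle movement along the other axis.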

zeoverlord, that would not work when the whole 32k² is seen (i.e. the map view for the officer class or whatever). I have heard megatextures are more like clipmaps: http://www.sgi.com/products/software/performer/presentations/clipmap_intro.pdf

The key idea is to use a fixed-size texture window and toroidally scroll it, filling in the newly exposed parts with subtex loads. With texture wrapping and a texture matrix or fragment shaders, it’s trivial to move the origin of the texture window. And stacking these like a clipmap makes it work at oblique angles, even without fragment shaders or blending (though blending helps).
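For the “move the origin” part, in fixed-function GL it can be as small as a texture-matrix translation. Again, just a sketch of the idea, not anything taken from the patent or from id:

```cpp
// Sketch: with GL_REPEAT wrapping on the resident texture, shifting the
// window origin is just a translation of texture coordinates.  The
// wrap-around of the toroidal window falls out of GL_REPEAT for free.
#include <GL/gl.h>

void setWindowOrigin(float originU, float originV)  // origin in window units
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(-originU, -originV, 0.0f);
    glMatrixMode(GL_MODELVIEW);
}
```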

Unfortunately, software clipmapping (aka Universal Texturing) is patented (6,618,053), which is why I can describe it. The inventor is Chris Tanner, who did the work for Intrinsic Graphics, which folded and sold their assets (patents included) to Vicarious Visions.

Paradigm’s “virtual texture” patent (6,924,814, referred to in the gamedev thread) seems to be the same thing, but is more recent. The patent office may have f**ked up again and ignored prior art in their own system. The newer patent cites a paper by Tanner and Jones from their SGI days, but not the prior patent. There could be some differences I don’t see, but the technique is not that complicated.

Anyway, I’m not going to argue for software patents, as I think they generally stifle innovation. But if Carmack’s “MegaTexture” is the same thing, he should be pretty embarrassed to claim it’s new (legalities aside), since he was given a private demo of universal texturing (as used in Google Earth) back in 2000 or so.

I can only assume he’s done something new and different to warrant his claims. But I don’t see enough details to tell.

z-buffer: yeah, my guess is that another texture is used for that, both for the map view and distant terrain.
Although I don’t think it uses clipmaps, or at least not much of them; JC did say that it all had to do with texture management.
And clipmapping still has the problem of needing to take many smaller textures and stitch them together, and with megatextures that is obviously not happening; it’s all treated by the hardware as one big texture.
So I don’t think you need any type of clipmapping, besides perhaps for deciding how to render objects that are really far away.

Software clipmapping could use a stack of textures, say 16 levels of 256x256 each, which represent powers of two filtering down a common “line of sight”. That’s the clip stack (plus a mipmap pyramid for the bottom).

Or you could arrange those same textures as tiles within one bigger texture. The challenge there is to write a shader that allows each subtile to be treated as a self-wrapping entity with a scrollable origin. But blending would probably be a lot easier that way, and you trade away the per-poly binds.

I think those are conceptually similar, but the second version is probably a better solution nowadays.
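The per-tile addressing the second version needs is mostly just coordinate math; a fragment shader would do something like this per pixel. Here is a rough C++ sketch of that math, assuming the 16-level layout above packed as a 4x4 atlas; everything else is invented for illustration.

```cpp
#include <cmath>

// Illustrative layout only: 16 clip levels of 256x256, packed as a 4x4
// grid of tiles inside one 1024x1024 atlas texture.
const int   LEVELS        = 16;
const int   GRID          = 4;
const float TILE_FRACTION = 1.0f / GRID;

struct Vec2 { float u, v; };

// Map a world-space texture coordinate into the atlas, for one clip
// level.  'origin' is that level's scrollable window origin in world
// units; each coarser level covers a window twice the size.
Vec2 atlasCoord(Vec2 world, Vec2 origin, int level)
{
    float windowSize = std::ldexp(1.0f, level);    // 2^level world units

    // Self-wrapping local coordinate inside this level's window.
    float lu = (world.u - origin.u) / windowSize;
    float lv = (world.v - origin.v) / windowSize;
    lu -= std::floor(lu);                          // fract(): the toroidal wrap
    lv -= std::floor(lv);

    // Offset into this level's sub-tile of the atlas.
    int tx = level % GRID;
    int ty = level / GRID;
    return { (tx + lu) * TILE_FRACTION, (ty + lv) * TILE_FRACTION };
}
```

Filtering across the sub-tile edges is the messy part in practice; you’d either leave a border gutter around each tile or clamp the sample region in the shader.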

Originally posted by Cyranose:
since he was given a private demo of universal texturing (as used in Google Earth) back in 2000 or so.
Could you tell us a bit more about this? What was this private demo, and what do they use in Google Earth, and how do you know about all this??

Hah, ok, I did some research, and it’s all starting to come together:

Intrinsic Graphics was founded by members of the SGI Performer team. They implemented software clipmapping, then were bought by Vicarious Visions. Google Earth uses their Intrinsic Alchemy engine, and Vicarious Visions also handled the Doom 3 port for Xbox, so they work closely with johnc, who just did megatexturing! Surprise! :slight_smile:

Andras, I was present at the demo – I helped develop GE till 2001.

Again, I don’t know if “MegaTexturing” is the same thing as what’s patented. I hope it isn’t, since there’s more than one way to skin a planet. But I doubt JC simply licensed the tech from VV, renamed it, and then said it was his baby. I imagine it’s either something new, or the patent holder isn’t interested in pursuing the matter, which is fair, since JC is good about not patenting his stuff.

However, I’d wonder what implications that might have for a company that wants to compete with GE with whole-planet rendering?

I suppose there’s no need for the clipmaps to be used only to map a continuous planar surface. I’m trying to get my head round this arbitrary mesh stuff. What if there are two objects stacked on top of one another, so you can see the underside of both… they’d demand more texture than can be stored on their footprint, so more texels would have to be dragged inwards to accommodate the extra surface area…

Ah… but there’s no need for the clipmap to store continuous texture data… the toroidal texture object could just be an arbitrary cache, containing an atlas of textures used in that world quadrant.
So what makes this different from standard texture streaming?
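To make the “arbitrary cache” reading concrete: treated that way, the big texture is just an atlas of fixed-size slots, plus a lookup from virtual tile to slot. A sketch of that bookkeeping (my invention, not anything from id or Intrinsic):

```cpp
#include <map>
#include <vector>

// The resident texture viewed as an atlas of fixed-size slots, used as
// a cache over the virtual texture.  Tiles are identified by
// (mip level, x, y) in virtual-texture space.
struct TileId {
    int level, x, y;
    bool operator<(const TileId& o) const {
        if (level != o.level) return level < o.level;
        if (x != o.x)         return x < o.x;
        return y < o.y;
    }
};

class TileCache {
public:
    explicit TileCache(int slotCount) {
        for (int i = slotCount - 1; i >= 0; --i) freeSlots.push_back(i);
    }

    // Return the atlas slot holding this tile, uploading it on a miss.
    // Returns -1 when the atlas is full (a real cache would evict the
    // least recently used slot instead).
    int request(const TileId& id) {
        auto it = resident.find(id);
        if (it != resident.end()) return it->second;
        if (freeSlots.empty()) return -1;
        int slot = freeSlots.back();
        freeSlots.pop_back();
        upload(id, slot);
        resident[id] = slot;
        return slot;
    }

private:
    void upload(const TileId& /*id*/, int /*slot*/) {
        // Placeholder: decode the tile and glTexSubImage2D it into the
        // slot's rectangle of the atlas texture.
    }
    std::map<TileId, int> resident;   // virtual tile -> atlas slot
    std::vector<int> freeSlots;
};
```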

Andras, I was present at the demo – I helped develop GE till 2001.
Ah, I’ve just realized that you’re Avi Bar! I remembered you from your posts on the OpenGL list, I just didn’t recognise your alias.

Again, I don’t know if “MegaTexturing” is the same thing as what’s patented. I hope it isn’t, since there’s more than one way to skin a planet. But I doubt JC simply licensed the tech from VV, renamed it, and then said it was his baby. I imagine it’s either something new, or the patent holder isn’t interested in pursuing the matter, which is fair, since JC is good about not patenting his stuff.
Yes, I’m pretty sure his stuff is something different, but still, a demo showing GE technology couldn’t hurt! :wink: I mean, I’d love to see that myself, I’m sure I’d learn something useful.

However, I’d wonder what implications that might have for a company that wants to compete with GE with whole-planet rendering?
Well, I’m sure that there are other, possibly even better ways to render entire planets… :wink: I’m just curious about what this MegaTexture stuff actually is…

My guess is that GE uses something that I could best describe as “chunked” clipmapping (toroidal update by bigger chunks).

I believe that clipmapping has several problems, though. One of them is the inability to zoom (decrease the FOV) while looking at an oblique angle and still have lots of detail… Correct me if I’m wrong.

You’d just make the clipmap centre relative to the field of view. A quick zoom would have the same artifacts as a quick move.

Clipmapping is primarily a texture caching mechanism. Moving the whole clipmap pyramid is not cheap. Now, what you say is that the center of the pyramid is not based on camera position, but based on camera view. Well, what happens if I just turn the camera around?? The entire clipmap will have to move, even though the camera stays in place! Sure, it works, but basically you’ve lost all the advantages of clipmapping.

Well, no, it’s not such a performance-killing suggestion. Clipmaps only update the nearest level when there’s enough time to do it… move too fast and you get the lower mipmap levels. So, seeing as a viewpoint is pretty much always moving, offsetting it based on the FOV is not such a ridiculous idea.

Sure, you can throw anything at it, and it will never kill performance (it has been designed that way); the image will just be blurry all the time… Can you imagine standing on top of a mountain looking around, while continuously moving the pyramid, as if you were flying at 10 miles a second?? Also, the pyramid structure itself has been designed to support a view from the center of the pyramid. Let’s imagine that you zoom in on a peak far away, but there’s another peak in the view at half the distance. Guess what, the peak closer to you will be blurrier, because it’s farther from the center of the pyramid… It just doesn’t work.

Never heard of depth of field? :slight_smile:

Okay, it seems to me that this mega texture thing is probably just a simple streaming scheme: take a giant texture, divide it up into manageable chunks (say 1024x1024 in size), and stream them in. This could allow several gigabytes of textures to be used over a large surface.

Now, using some math, we see the following: a 16384x16384 virtual texture is approximately 1.34GB of memory. If we skip three mip levels, we end up with a 2048x2048 virtual texture, which only uses about 21.9MB. So, say we have our map divided into 256 1024x1024 chunks (a 16x16 chunk grid) and we skip the first 3 mip levels but load the remaining texture; then we really only need 256 128x128 textures to have a reasonable amount of the terrain in memory.

Then we can take the eye position and determine which chunks need to be higher res. We keep some 1024x1024, 512x512, and 256x256 textures lying around and stream in the higher-res textures that are needed. If you can’t stream something in time, you only get a lower-res texture. All you would need to make something like this work is the approximate screen-space object size and the eye position.

That’s my guess as to what they’re doing. A very simple and reliable streaming system for textures.
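To put rough numbers on that guess, the per-chunk resolution pick could be as simple as a distance test. A minimal sketch; the thresholds, names, and chunk size in world units are all made up:

```cpp
#include <cmath>

// Sketch of the streaming scheme described above: a 16x16 grid of
// 1024x1024 chunks, everything resident at 128x128 (three mip levels
// skipped), and higher resolutions streamed in near the eye.  The
// distance thresholds are invented.
const int   GRID_CHUNKS = 16;
const float CHUNK_SIZE  = 1024.0f;  // world units per chunk, for illustration

int desiredChunkResolution(float eyeX, float eyeZ, int cx, int cz)
{
    float centerX = (cx + 0.5f) * CHUNK_SIZE;
    float centerZ = (cz + 0.5f) * CHUNK_SIZE;
    float d = std::hypot(centerX - eyeX, centerZ - eyeZ);

    if (d < 1.0f * CHUNK_SIZE) return 1024;   // full resolution nearby
    if (d < 2.0f * CHUNK_SIZE) return 512;
    if (d < 4.0f * CHUNK_SIZE) return 256;
    return 128;                               // always-resident base level
}
```

A streamer would then walk the grid each frame, compare the desired size against what’s resident, and issue loads; a missed deadline just means drawing the lower mip for a frame or two, exactly as described.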

Kevin B