View Full Version : Carmack's MegaTextures



Hampel
05-10-2006, 06:05 AM
Are there any specs about John Carmack's implementation of his MegaTextures (http://www.gamerwithin.com/?view=article&article=1319&cat=2) ?

CatAtWork
05-10-2006, 07:33 AM
http://www.beyond3d.com/interviews/etqw/

lodder
05-10-2006, 08:47 AM
There was a thread about this recently at gamedev: http://www.gamedev.net/community/forums/topic.asp?topic_id=390751

zeoverlord
05-10-2006, 09:28 AM
OK, what I think he does is first create a 1024x1024 texture (personally I would choose maybe 2k or 4k; hint: two 1k RGBA textures use up 8MB) that he repeats all across the map.
Then he makes sure that you can only see a bit less than 512 pixel units ahead.

Finally he uses glTexSubImage2D() to add and replace something like 32x32 parts of the texture as you move along it.

Doing this only once per frame still makes it possible to have a 32kx32k texture without any overwhelming performance loss.
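A minimal sketch of that incremental update, assuming the resident window is treated as a toroidal grid of 32x32 tiles (all names and numbers here are illustrative guesses, not from Carmack; in a real renderer each returned tile would become one glTexSubImage2D upload):

```python
TILE = 32            # pixels per tile edge
GRID = 1024 // TILE  # 32x32 tiles resident in the 1024x1024 window

def tiles_to_update(old_origin, new_origin):
    """Return {(slot, world_tile)} pairs for tiles newly exposed when the
    window origin moves from old_origin to new_origin (in tile units)."""
    def resident(origin):
        ox, oy = origin
        return {(ox + i, oy + j) for i in range(GRID) for j in range(GRID)}
    fresh = resident(new_origin) - resident(old_origin)
    # Toroidal addressing: world tile (wx, wy) always lives in slot
    # (wx % GRID, wy % GRID), so scrolling never copies existing texels --
    # it only overwrites the slots that just fell out of range.
    return {((wx % GRID, wy % GRID), (wx, wy)) for wx, wy in fresh}
```

Moving one tile forward only dirties a single 32-tile row or column, which is why a few small sub-image uploads per frame are enough.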

ZbuffeR
05-10-2006, 04:59 PM
zeoverlord, that would not work when the whole 32k² is seen (i.e. the map view for the officer class or whatever). I have heard megatextures are more like clipmaps: http://www.sgi.com/products/software/performer/presentations/clipmap_intro.pdf

Cyranose
05-10-2006, 11:14 PM
The key idea is to use a fixed-size texture window and toroidally scroll it, filling in the newly exposed parts with subtexture loads. With texture wrapping and a texture matrix or fragment shaders, it's trivial to move the origin of the texture window. And stacking these like a clipmap makes it work at oblique angles, even without fragment shaders or blending (though blending helps).
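The "stack" can be pictured as a set of same-sized windows whose world coverage doubles per level. A hedged sketch of how a sample point picks its level (window size and level count are assumptions for illustration, not taken from any patent):

```python
def finest_level(sample, center, window=256, levels=16):
    """Pick the finest clip level whose window (window * 2**level texels
    across, centered on the viewer) still covers the sample point, given
    sample and center in world texel coordinates."""
    dx = abs(sample[0] - center[0])
    dy = abs(sample[1] - center[1])
    for level in range(levels):
        half = (window << level) // 2
        if dx < half and dy < half:
            return level  # finest usable level; all coarser ones also cover it
    return levels - 1     # fall back to the coarsest level
```

Samples near the viewer resolve to level 0; each doubling of distance pushes them one level coarser, which is exactly the mipmap-like falloff you want at oblique angles.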

Unfortunately, software clipmapping (aka Universal Texturing) is patented (6,618,053), which is why I can describe it. The inventor is Chris Tanner, who did the work for Intrinsic Graphics, which folded and sold their assets (patents included) to Vicarious Visions.

Paradigm's "virtual texture" patent (6,924,814, referred to in the gamedev thread) seems to be the same thing, but is more recent. The patent office may have f**ked up again and ignored prior art in their own system. The newer patent cites a paper by Tanner and Jones from their SGI days, but not the prior patent. There could be some differences I don't see, but the technique is not that complicated.

Anyway, I'm not going to argue for software patents, as I think they generally stifle innovation. But if Carmack's "MegaTexture" is the same thing, he should be pretty embarrassed to claim it's new (legalities aside), since he was given a private demo of universal texturing (as used in Google Earth) back in 2000 or so.

I can only assume he's done something new and different to warrant his claims. But I don't see enough details to tell.

zeoverlord
05-11-2006, 06:30 AM
z-buffer: yeah, my guess is that another texture is used for that, both for the map view and distant terrain.
Although I don't think it uses clipmaps, or at least not much of them; JC did say that it all had to do with texture management.
And clipmaps still have the problem of needing to take many smaller textures and stitch them together, and with megatextures that is obviously not happening; it's all treated by the hardware as one big texture.
So I don't think you need any type of clipmapping, besides perhaps for deciding how to render objects that are really far away.

Cyranose
05-11-2006, 08:00 AM
Software clipmapping could use a stack of textures, say 16 levels of 256x256 each, which represent power-of-two filterings down a common "line of sight". That's the clip stack (plus a mipmap pyramid for the bottom).

Or you could arrange those same textures as tiles within one bigger texture. The challenge there is to write a shader that allows each subtile to be treated as a self-wrapping entity with a scrollable origin. But blending would probably be a lot easier that way, and you trade away the per-poly binds.

I think those are conceptually similar, but the second version is probably a better solution nowadays.
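For a sense of scale, a quick back-of-the-envelope for that 16-level, 256x256 arrangement (RGBA assumed):

```python
def clip_stack_bytes(window=256, levels=16, bpp=4):
    # Every clip level is the same window-sized texture; only its world
    # coverage doubles, so resident memory grows linearly with levels.
    stack = levels * window * window * bpp
    # Plus an ordinary mip pyramid below the coarsest level.
    pyramid = sum((window >> i) ** 2 * bpp for i in range(1, window.bit_length()))
    return stack + pyramid

# World coverage of the finest texels spanned by the coarsest level:
virtual_edge = 256 << 15   # 8,388,608 texels across
```

Roughly 4.3MB resident for an 8-megatexel-wide virtual texture is the whole appeal of the scheme.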

andras
05-11-2006, 08:17 AM
Originally posted by Cyranose:
since he was given a private demo of universal texturing (as used in Google Earth) back in 2000 or so.

Could you tell us a bit more about this? What was this private demo, what do they use in Google Earth, and how do you know about all this?

andras
05-11-2006, 08:39 AM
Hah, ok, I did some research, and it's all starting to come together:

Intrinsic Graphics was founded by members of the SGI Performer team. They implemented software clipmapping, then were bought by Vicarious Visions. Google Earth uses their Intrinsic Alchemy engine, and Vicarious Visions also handled the Doom 3 port for Xbox, so they work closely with johnc, who just did megatexturing! Surprise! :)

Cyranose
05-11-2006, 02:17 PM
Andras, I was present at the demo -- I helped develop GE till 2001.

Again, I don't know if "MegaTexturing" is the same thing as what's patented. I hope it isn't, since there's more than one way to skin a planet. But I doubt JC simply licensed the tech from VV, renamed it, and then said it was his baby. I imagine it's either something new, or the patent holder isn't interested in pursuing the matter, which is fair, since JC is good about not patenting his stuff.

However, I'd wonder what implications that might have for a company that wants to compete with GE with whole-planet rendering?

knackered
05-11-2006, 04:40 PM
I suppose there's no need for the clipmaps to be used just to map a continuous planar surface. I'm trying to get my head round this arbitrary mesh stuff. What if there are two objects stacked on top of one another, so you can see the underside of both? They'd demand more texture than can be stored in their footprint, so more texels would have to be dragged inwards to accommodate the extra surface area.

knackered
05-11-2006, 04:45 PM
Ah... but there's no need for the clipmap to store continuous texture data. The toroidal texture object could just be an arbitrary cache, containing an atlas of textures used in that world quadrant.
So what makes this different from standard texture streaming?

andras
05-11-2006, 11:35 PM
Andras, I was present at the demo -- I helped develop GE till 2001.

Ah, I've just realized that you're Avi Bar! I remembered you from your posts on the OpenGL list, I just didn't recognise your alias.


Again, I don't know if "MegaTexturing" is the same thing as what's patented. I hope it isn't, since there's more than one way to skin a planet. But I doubt JC simply licensed the tech from VV, renamed it, and then said it was his baby. I imagine it's either something new, or the patent holder isn't interested in pursuing the matter, which is fair, since JC is good about not patenting his stuff.

Yes, I'm pretty sure his stuff is something different, but still, a demo showing GE technology couldn't hurt! ;) I mean, I'd love to see that myself; I'm sure I'd learn something useful.


However, I'd wonder what implications that might have for a company that wants to compete with GE with whole-planet rendering?

Well, I'm sure that there are other, possibly even better ways to render entire planets. ;) I'm just curious about what this MegaTexture stuff actually is.

My guess is that GE uses something I could best describe as "chunked" clipmapping (toroidal updates in bigger chunks).

I believe that clipmapping has several problems, though. One of them is the inability to zoom (decrease the FOV) while looking at an oblique angle and still have lots of detail. Correct me if I'm wrong.

knackered
05-12-2006, 03:03 AM
you'd just make the clipmap centre relative to the field of view. a quick zoom would have the same artifacts as a quick move.

andras
05-12-2006, 07:29 AM
Clipmapping is primarily a texture caching mechanism, and moving the whole clipmap pyramid is not cheap. Now, what you're saying is that the center of the pyramid is based not on camera position but on camera view. Well, what happens if I just turn the camera around? The entire clipmap will have to move, even though the camera stays in place! Sure, it works, but basically you've lost all the advantages of clipmapping.

knackered
05-12-2006, 08:45 AM
Well, no, it's not such a performance-killing suggestion. Clipmaps only update the nearest level when there's enough time to do it; move too fast and you get the lower mipmap levels. So, seeing as a viewpoint is pretty much always moving, offsetting it based on the FOV is not such a ridiculous idea.

andras
05-12-2006, 09:32 AM
Sure, you can throw anything at it and it will never kill performance (it has been designed that way); the image will just be blurry all the time. Can you imagine standing on top of a mountain looking around, while continuously moving the pyramid as if you were flying at 10 miles a second? Also, the pyramid structure itself has been designed to support a view from the center of the pyramid. Let's imagine that you zoom in on a peak far away, but there's another peak in the view at half the distance. Guess what: the peak closer to you will be blurrier, because it's far from the center of the pyramid. It just doesn't work.

knackered
05-12-2006, 09:56 AM
never heard of depth of field? :)

ebray99
05-26-2006, 11:03 AM
Okay, it seems to me that this mega texture thing is probably just a simple streaming scheme: take a giant texture, divide it up into manageable chunks (say 1024x1024 in size), and stream them in. This could allow several gigs of textures to be used over a large surface.

Now, doing some math, we see the following: a 16384x16384 virtual texture is approx 1.34GB of memory. If we skip three mip levels, we end up with a 2048x2048 virtual texture, which only uses about 21.9MB. So say we have our map divided into 256 1024x1024 chunks (a 16x16 chunk grid), and we skip the first 3 mip levels but load the remaining texture; then we really only need 256 128x128 textures to have a reasonable amount of the terrain in memory.

Then we can take the eye position and determine which chunks need to be higher res. We keep some 1024x1024, 512x512, and 256x256 textures lying around, and stream in the higher res textures as needed. If you can't stream something in time, you only get a lower res texture. All you would need to make something like this work is approximate screen-space object size and the eye position.
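Those figures check out, reading GB/MB as GiB/MiB and counting a full mip chain as roughly a third on top of the base level; a quick sanity check in Python:

```python
def tex_bytes(edge, bpp=4, with_mips=True):
    """Bytes for a square RGBA texture; the mip chain adds ~1/3 on top."""
    base = edge * edge * bpp
    return base * 4 // 3 if with_mips else base

full = tex_bytes(16384)        # ~1.33 GiB, matching the ~1.34GB quoted
skip3 = tex_bytes(16384 >> 3)  # 2048x2048 -> ~21 MiB resident
chunks = (16384 // 1024) ** 2  # 256 chunks in a 16x16 grid
```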

That's my guess as to what they're doing. A very simple and reliable streaming system for textures.

Kevin B

Zulfiqar Malik
05-26-2006, 12:48 PM
I totally agree with you ebray (Kevin), but coming up with a streaming technique is not trivial, especially if you want incremental changes in the texture, e.g. texture LOD changing from L to L + 1. An efficient scheme needs to be devised to handle this gracefully, especially with the viewer moving quickly across the terrain surface. I, for one, do not like texture clipmaps, simply for their enormous software overhead. Geometry clipmaps are a better implementation of the clipmap technique, but they suffer from an inherently conservative culling technique and poor batch submission (in comparison to a fixed-size patch approach) and almost always get CPU bound.

ebray99
05-26-2006, 02:09 PM
Basically, what I'm saying is:

Preprocessing:
Divide the large texture into smaller ones.
Correct UVs on the geometry (adding new verts when necessary) to use the textures.

When running:
Load all textures ignoring their top 3 mip levels.
When an object passes a certain size threshold on the screen, send an order to a separate thread to load the top mip levels needed.
When that order is finished, change to use the texture that contains the mip levels needed.

The worst case for this algorithm is that someone gets a lower res texture than desired and the higher res one pops in later. With a naive implementation of this, I'm sure you'd see a lot of popping. A lot of work would probably be needed to ensure that the texture data is stored in such a way that it is fast to read off the disk.

Another limitation is seams that can't be filtered across, since you obviously can't filter across multiple textures, but this probably isn't the worst problem in the world, especially with noisy textures like those you'd see on terrain.
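The runtime half of that scheme could look something like this (all class names, thresholds, and the request queue are my own assumptions for illustration; the loader thread itself is omitted):

```python
import math
import queue

class ChunkStreamer:
    """Every chunk keeps a copy with its top SKIP mip levels dropped; when a
    chunk grows large enough on screen, the missing levels are requested."""
    SKIP = 3  # top mip levels absent from the always-resident copy

    def __init__(self):
        self.requests = queue.Queue()   # consumed by a loader thread (not shown)
        self.resident_skip = {}         # chunk id -> mip levels still missing

    def desired_skip(self, screen_size_px, chunk_size_px=1024):
        # One texel per pixel is enough: each skipped mip level halves detail.
        if screen_size_px >= chunk_size_px:
            return 0
        return min(self.SKIP, int(math.log2(chunk_size_px / max(screen_size_px, 1))))

    def update(self, chunk, screen_size_px):
        want = self.desired_skip(screen_size_px)
        have = self.resident_skip.get(chunk, self.SKIP)
        if want < have:
            self.requests.put((chunk, want))  # loader fills in the top mips
```

If a request isn't serviced in time, the chunk simply keeps rendering from its low-res copy, which is exactly the popping worst case described above.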

Kevin B.

marksibly
05-26-2006, 04:37 PM
ebray,

I was thinking about something like that too.

A huge percentage of a scene is made up of lower-rez mipmaps, so many higher-rez ones aren't really needed. By 'decoupling' mipmap levels from each other, and perhaps doing mipmap filtering by hand in a shader, you should be able to make much more efficient use of texture memory. The worst-case mipmap level for each 'block' of a terrain/scene could be computed from the block's distance from the camera, and you could let texture management do all the work!
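That per-block worst-case level is cheap to compute; a sketch assuming a simple pinhole projection (block size, texture resolution, and FOV are all made-up numbers, not from the post):

```python
import math

def block_mip(distance, block_size=64.0, texels=256,
              screen_height_px=768, fov_y=math.radians(60)):
    """Worst-case mip level for one terrain block: compare its projected
    height in pixels against its mip-0 texture resolution."""
    if distance <= 0:
        return 0
    projected = block_size * screen_height_px / (2 * distance * math.tan(fov_y / 2))
    # Each mip level halves resolution, so the level is the log2 of the ratio.
    return max(0, int(math.log2(texels / max(projected, 1.0))))
```

Nearby blocks resolve to level 0 and never need a finer level fetched, while distant blocks can safely live in memory with only their coarse levels resident.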

But perhaps hardware already does this? When a texture is 'evicted' from vidmem, does the whole thing plus mipmap levels get tossed, or just the LRU mipmap level? I suspect the former, but the latter would make for a nice optimization!

ZbuffeR
05-27-2006, 06:35 AM
When a texture is 'evicted' from vidmem, does the whole thing plus mipmap levels get tossed, or just the LRU mipmap level?
I think a texture is still deeply linked to all its mipmap levels.

Carmack already talked about this six years ago, in a .plan stating:
"Virtualized video card local memory is The Right Thing."

The whole discussion is still very interesting :
http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000429013039
(bottom of the page)

"The primary problem is that textures are loaded as a complete unit,
from the smallest mip map level all the way up to potentially a 2048 by
2048 top level image. Even if you are only seeing 16 pixels of it off
in the distance, the entire 12 meg stack might need to be loaded."

Funny how with MegaTexture shaders it seems to be possible to re-implement the video card's low-level cache policies :D