
Megatextures?



yooyo
08-24-2005, 04:59 PM
JC: What we’re doing in Quake Enemy Territory is sort of our first cut at doing that over a simple case, where you’ve got a terrain model that has these enormous, like 32000 by 32000, textures going over them. Already they look really great. There’s a lot of things that you get there that are generated ahead of time, but as the tools are maturing for allowing us to let artists actually go in and improve things directly, those are going to be looking better and better.

Enemy Territory: Quake Wars: ...This is due at least in part to the huge and surprisingly detailed outdoor areas that are possible, thanks to an all-new "megatexture" mapping technology developed by id Software programming guru John Carmack. The megatexture is essentially one huge, continuous texture map that can stretch all the way to the horizon, without any need for fog or other effects to mask a limited draw distance or texture tiles that repeat and show seams at the edges.

Can somebody explain how megatextures work?

yooyo

SirKnight
08-24-2005, 05:26 PM
Ok I really don't know for sure (and I doubt any one else does besides id and the engine licensees) but to me, it sounds an awful lot like Clip-Maps, which are nothing all that new. Well for games maybe. Chapter 2 of GPU Gems 2 talks all about them and from that, this MegaTexture thing sounds similar. Too bad someone at QuakeCon didn't ask John Carmack when he was taking questions at the end of his talk.

I think there is a paper about them, but I don't have it. I do remember seeing them mentioned on vterrain.org.

-SirKnight

Gorg
08-25-2005, 05:44 AM
What we’re doing in Quake Enemy Territory is sort of our first cut at doing that over a simple case

That part was after his talk about texture virtualization, which means that instead of having texture objects, you simply have one texture memory space and the texture coordinates look into it.

Texture virtualization would help developers quite a bit:

-You won't have to bind samplers to the shader program; you simply read from one texture space. That means you could read from as many textures as you want, until the swapping in and out of the texture space kills performance (just like swapping lots of memory pages from the hard drive kills CPU performance).

Of course, you would still need to pass the base address of a texture if you wanted to read from
it in a shader and did not have an object with texture coordinates to read from it.

-All textures can be non-power of 2.

-You can maximise batching on the application side. As long as geometry is drawn with the same shaders, you can batch it. Right now, you have to bind different textures even if you are using the same states or shaders to render geometry.

This also implies that artists can customize different sides of a wall without killing batching.
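For what it's worth, here is a minimal C/OpenGL sketch of the status quo this would remove; the Surface struct and draw loop are purely illustrative, not anyone's actual engine code:

#include <GL/gl.h>

typedef struct {
    GLuint texture;        /* texture object for this surface */
    GLsizei count;         /* number of indices */
    const GLvoid *indices; /* offset into the index buffer */
} Surface;

/* Today, one glBindTexture per surface splits the batch even when the
   shader and every other piece of state are identical. */
void draw_surfaces(const Surface *s, int n)
{
    for (int i = 0; i < n; ++i) {
        glBindTexture(GL_TEXTURE_2D, s[i].texture);  /* batch breaker */
        glDrawElements(GL_TRIANGLES, s[i].count, GL_UNSIGNED_INT,
                       s[i].indices);
    }
}

With a single virtualized texture space, the per-surface bind would go away and the n draws could collapse into one.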

Tzupy
08-25-2005, 09:44 AM
AFAIK the maximum texture size is 2k on Ati and 4k on nVidia. I don't understand how one can create these 'megatextures'. In my applications I use images larger than 4k by 4k, but am using tiling, usually at 512 by 512.

skynet
08-25-2005, 09:53 AM
I'm REALLY waiting for virtual graphics card memory, too. I have to deal with geometry data that is many times larger than main memory (and graphics card memory). We have to do a lot of data management. And it is very hard, almost impossible, to tell the hardware _exactly_ how to manage the data you put into the VBOs.
Virtual memory would relieve us of many headaches and would probably perform better.

Gorg
08-25-2005, 01:07 PM
Originally posted by Tzupy:
AFAIK the maximum texture size is 2k on Ati and 4k on nVidia. I don't understand how one can create these 'megatextures'. In my applications I use images larger than 4k by 4k, but am using tiling, usually at 512 by 512.

It really only works well for something like a terrain, where you can still get batches of a reasonable size.

You simply tell your artist: you have 32k by 32k to draw into.

Then you pass it through a tool that splits the texture down to sizes the video card supports and assigns the textures and texture coordinates to the right pieces of the terrain.
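Something like this hypothetical offline pass, in C/OpenGL (read_block stands in for whatever image I/O the tool uses, 512 is just a size every card of the day supports, and dimensions are assumed to be multiples of the tile size):

#include <GL/gl.h>

#define TILE 512  /* must not exceed the card's GL_MAX_TEXTURE_SIZE */

extern void read_block(int x, int y, int w, int h, unsigned char *rgba);

void split_megatexture(int width, int height)
{
    static unsigned char block[TILE * TILE * 4];

    for (int ty = 0; ty < height; ty += TILE) {
        for (int tx = 0; tx < width; tx += TILE) {
            GLuint tex;
            read_block(tx, ty, TILE, TILE, block);
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TILE, TILE, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, block);
            /* terrain vertices inside this tile get their global (u,v)
               remapped into the tile's local 0..1 range:
               u_local = (u * width  - tx) / TILE
               v_local = (v * height - ty) / TILE */
        }
    }
}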

Gorg
08-25-2005, 01:12 PM
Originally posted by skynet:
I'm REALLY waiting for virtual graphics card memory, too.

I am really looking forward to it too. But I am a bit worried, because it is a significant change in paradigm and I suspect it will be very difficult to support both non-virtual and virtual cards.

Of course, if your software is custom and your customer(s) don't care about switching cards then it is great.

On the PC, game developers will have to suffer the duality. I suspect widespread usage of the technology would begin on a console first and then the PC gamers will have no choice but to upgrade to play the new games.

Korval
08-25-2005, 03:09 PM
That part was after his talk about texture virtualization, which means that instead of having texture objects, you simply have one texture memory space and the texture coordinates look into it.

Yeah, that's never going to happen. Videocard memory may eventually be virtualized into some kind of cache-like architecture, but you'll still have texture objects. IHVs aren't going to suddenly just give you a pointer to some memory and let you call it a texture. The driver will still manage the memory itself.

The advantages of virtual textures are the possibility of larger textures (the theoretical maximum is 8388608, or 2^23; this is due to 32-bit floating-point precision issues) and some possibility of greater performance. This does not mean suddenly giving fragment programs carte blanche to go rampaging through video memory reading any old block of data, nor does it make texture units in fragment programs just go away. The latter might be possible, but it would be too easy to screw up in platform-dependent ways (can you actually pass a texture object name as an integer argument? Texture objects are 32-bit ints, while integers in glslang are guaranteed only to be 16 bits plus a sign bit).
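For the curious, the 2^23 figure falls out of IEEE-754 single precision, which carries a 24-bit significand; here is a quick compilable check, with the halving to 2^23 being one plausible reading (keeping a spare bit of sub-texel precision for filtering):

#include <stdio.h>

int main(void)
{
    /* integers are exact in a 32-bit float only up to 2^24; past that,
       adjacent texel indices alias to the same value */
    float limit = 16777216.0f;   /* 2^24 */
    float next  = limit + 1.0f;  /* 2^24 + 1 rounds back to 2^24 */
    printf("%d\n", next == limit);  /* prints 1 */
    /* reserving one bit of sub-texel precision for filtering halves the
       usable range to 2^23 = 8388608, the figure quoted above */
    return 0;
}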

Gorg
08-25-2005, 07:07 PM
Originally posted by Korval:
Yeah, that's never going to happen. Videocard memory may eventually be virtualized into some kind of cache-like architecture, but you'll still have texture objects.

Texture objects would in concept become a form of pointer, but you are right, they would still be texture objects.

I was not saying we need direct access to the memory. You would still call glTexture* to create a texture and all the other functions to manipulate it.



This does not mean suddenly giving fragment programs carte blanche to go rampaging through video memory reading any old block of data, nor does it make texture units in fragment programs just go away

Well, if you add an extra step that transforms texture coordinates to point to a spot in the virtual memory space, then texture units are not needed anymore.


// create and upload the texture as usual
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, ...);

// set up the VBO of texture coordinates

// under this proposal, this call would now transform
// the coordinates to virtual addresses
glTexCoordPointer(...);

After the transformation, texture units are not required anymore.

Repeat and Clamp would still apply if the coordinates go overboard. This requires some work from the GPU to figure out the bounds of the current texture. CPUs can already do that for processes and complain if you try to access something outside your address space.

Something will always be hard to do for somebody. It is either going to be difficult for the hardware, or difficult for the software.

Virtualization makes it easier for the software. CPU makers figured it out, I am sure GPU makers can too.

knackered
08-25-2005, 10:47 PM
Surely there'd be cache issues - say you access a 64x64 subtexture that lies bang in the middle of a 32000x32000 texture. The texture cache manager doesn't know the subtexture's dimensions (or have any concept of a subtexture?), so it would only cache texels on the currently referenced row... as soon as the v coord increments it would have to completely repopulate its cache.
This sounds like mad talk to me.

Pentagram
08-26-2005, 01:50 AM
3Dlabs has had virtual texture memory for years; it basically works like normal virtual memory, but on textures...
So the texture is divided into 32x32 tiles and those tiles are paged in/out based on usage patterns. (Oh, coincidence: 32x32 RGBA is 4 KB, the page size a lot of normal systems use.) The tile indexing scheme may also use some clever space-filling curve to make access patterns better (like not having row-by-row indexing; a sketch of one such curve follows the list below).
Using them basically works like normal texturing; it doesn't even have to make the maximum texture size bigger. The advantages are:
* If you use only a sub part of your texture only the bounding 32x32 tiles will be loaded.
* If part of the texture is obscured by other geometry only the visible tiles will be loaded (think skyboxes...)
* If some object with a 4kx4k texture is far away only the lowest mip level tiles will be loaded.
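Here is the promised sketch of the kind of space-filling curve mentioned above, in C: Morton (Z-order) indexing interleaves the bits of the tile coordinates so tiles that are close in 2D stay close in memory. This is only an illustration; no claim that it is what 3Dlabs actually shipped.

#include <stdint.h>

/* spread the low 16 bits of x out to every other bit position */
uint32_t part1by1(uint32_t x)
{
    x &= 0x0000ffff;
    x = (x | (x << 8)) & 0x00ff00ff;
    x = (x | (x << 4)) & 0x0f0f0f0f;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

/* (tile_x, tile_y) -> Morton index: ...y1 x1 y0 x0 */
uint32_t morton_index(uint32_t tile_x, uint32_t tile_y)
{
    return part1by1(tile_x) | (part1by1(tile_y) << 1);
}

With this layout, stepping one tile in v no longer jumps a whole texture row ahead in memory, which speaks to the cache worry raised above.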

There is some presentation from 3D labs that explains it all in detail somewhere...

All of this basically helps to bring the texture memory requirement closer to the actual framebuffer size, as any invisible data is not loaded.
I'm not sure of the hardware cost to implement this, but bullet 2 seems to be the most difficult one, as it basically needs hardware that is able to suspend fragments and do some other useful stuff until that data appears.

ZbuffeR
08-26-2005, 03:07 AM
Pentagram, I suppose the presentation you refer to is :
http://www.graphicshardware.org/previous/www_1999/wkshp.html
search in page for "Virtual Textures - a true demand-paged texture memory management system in silicon"
http://www.graphicshardware.org/previous/www_1999/presentations/v-textures.pdf

Wow, supported since 1999 in Permedia 3 ... One can ask why gaming cards don't already do this.

Tzupy
08-26-2005, 05:50 AM
I'd like to make two remarks:
1) 32k x 32k RGBA = 4 GB. Not exactly usable with today's installed memory sizes (usually 1 GB, next year probably 2 GB). So it would need heavy compression to squeeze it down to, say, 512 MB; that would be acceptable. But more actual textures would have to be created (or maybe just updated?) each frame than in the case of a 'classical' approach, where the same texture can be used for many quads.
2) 512 x 64 = 32k. If the actual texture size is 512 x 512 (used for a local tile), then just 64 x 64 such textures would fill the whole 32k x 32k image. For small levels this may be acceptable, but for a large world like Morrowind (loved it) this would be unusable.

Robert Osfield
08-26-2005, 06:11 AM
Virtual texturing doesn't just require graphics memory and CPU memory; you also need to read from disk, as main memory is not going to be large enough to hold the kinds of textures people will want to use. 32k by 32k is actually rather piddling once you start thinking about whole-earth geospatial data.

SGI's clip mapping supports paging from disk to main memory and then subloading down to the graphics hardware. The hardware support is InfiniteReality-specific and doesn't yet exist on modern consumer GPUs.

I haven't personally tried implementing clip map emulation, but with shaders I would have thought you could come reasonably close to emulating it.

Another important attribute of any virtual texture support, besides the ability to read from disk (or over the network), is the ability to have non-uniform detail levels, where you have localised high-res inserts.

All this coupling of file formats, network/disk/CPU/main memory/GPU/GPU memory points to a higher-level API than just OpenGL; expecting OpenGL to provide all this is really asking a bit too much of it.

This doesn't stop anyone from writing a general purpose library that adds virtual texture support ontop of OpenGL :)

Robert.

Jan
08-26-2005, 09:19 AM
Nice paper!

When I read Carmack's speech, I was not convinced by his point. I don't think it is that important to have custom textures everywhere. I don't think landscapes will look better if their texture is different everywhere, because I cannot see a difference anyway. Not when I am busy slaying monsters ;-)

However, there certainly ARE applications where huge textures can be a great win, especially in CAD applications for designers.

Anyway, I didn't think it would be worth the trouble to put that into the hardware, but the fact that 3DLabs already put it into silicon in 1999 and claims that it IMPROVES efficiency and speed makes me wonder why ATI and nVidia haven't done anything about it yet.

But I honestly doubt that this situation will change in the next few gfx-card generations. We'll see.

Jan.

SirKnight
08-26-2005, 09:34 AM
I haven't personally tried implementing clip map emulation, but with shaders I would have thought you could come reasonably close to emulating it.
Yep, just see chapter 2 of GPU Gems 2. :)

You can d/l the .fx files that are a part of ch2 on nvidia's Gems 2 site. The stuff on the cd in the book is on their site for d/l.

-SirKnight

Korval
08-26-2005, 09:59 AM
Well, if you add an extra step that transforms texture coordinates to point to a spot in the virtual memory space, then texture units are not needed anymore.

Why would you want to get rid of exceedingly fast hardware that does precisely what we want it to 99.99% of the time and replace it with comparatively slow fragment program code? Unless fragment programs can start doing a full 2x2 bilinear blend in 1 cycle (not to mention clamping where needed, converting a floating-point texture coordinate into a memory address, etc., all in 1 cycle), why would you want to?

Pointers are bad. They create bugs. And bugs in a GPU that you can't really debug on anyway are never a good thing.


I'm not sure of the hardware cost to implement this, but bullet 2 seems to be the most difficult one, as it basically needs hardware that is able to suspend fragments and do some other useful stuff until that data appears.

The underlying problem with virtual texturing is that there's almost no way to prevent stalls due to paging in a texture piece.

If you have one pixel-quad of a triangle, and it needs to page in a piece of the texture, there's not much you can do instead of waiting for the data. You might try processing other quads in that same triangle, but likely those quads are just going to ask for more from that same texture. And since these quads are nearby (screen-space), you're probably already getting the data for them from the first fetch, so running these other quads in the triangle isn't too useful.

And you can't run quads from other triangles as this violates the GL spec all over the place. Triangles must be rendered in-order for any number of reasons. Of course, the other triangles are almost certainly also going to be accessing this texture (swapping textures creates a stall long enough that you probably won't be looking at a different shader/material set), so they could easily stall themselves.

Gorg
08-26-2005, 10:46 AM
Originally posted by Korval:
why would you want to?

Because it gets rid of the act of binding textures.

Filtering can still be done by the sampler.


Pointers are bad. They create bugs. And bugs in a GPU that you can't really debug on anyway are never a good thing.

You are not using the pointer. To the application, it is just a single texture. To keep going with analogies, a texture would be a process. The GPU would protect access to the other textures (processes) by clamping and repeating.

Korval
08-26-2005, 03:21 PM
Because it gets rid of the act of binding textures.

So, if I want to use a shader many times, each time with different textures, I have to either pass the texture object (which I've already stated is a U32, while integers in glslang are only S16's) as a uniform to the shader (thus killing batching anyway, and negating any usefulness of doing so), or I have to do some strange attribute work to tell it what texture to use, which will probably have the same problem in terms of creating stalls as binding the texture to begin with.

Also, if I recall correctly, nVidia fragment shaders actually compile uniforms into the shader itself, so changing a uniform is the equivalent of switching to a new shader (it must upload the program again).

tranders
08-27-2005, 10:49 AM
Originally posted by Tzupy:
32k x 32k RGBA = 4 GB.
I think the computation was 32 pixels x 32 pixels x 4 bytes per pixel:

32 x 32 x 4 = 4096 bytes

That would allow for very large disk-based bitmaps to be indexed down to 32x32 pixel blocks. A page request from the disk cache and a few pre-fetches for approaching neighbors would make this very responsive. This is not a new concept since the stereo mapping industry has been using similar techniques for nearly 20 years (if not longer).
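A sketch of that request pattern in C, assuming a hypothetical request_tile() disk-cache interface (the one-tile prefetch border is likewise just an assumption):

#define TILE 32

extern void request_tile(int tx, int ty);  /* hypothetical cache call */

/* page in every 32x32 tile touched by the pixel rect (x0,y0)-(x1,y1),
   plus a one-tile border as a prefetch for approaching neighbors */
void prefetch(int x0, int y0, int x1, int y1, int tiles_w, int tiles_h)
{
    int tx0 = x0 / TILE - 1, ty0 = y0 / TILE - 1;
    int tx1 = x1 / TILE + 1, ty1 = y1 / TILE + 1;

    for (int ty = ty0 < 0 ? 0 : ty0; ty <= ty1 && ty < tiles_h; ++ty)
        for (int tx = tx0 < 0 ? 0 : tx0; tx <= tx1 && tx < tiles_w; ++tx)
            request_tile(tx, ty);
}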

- tranders

Jan
08-27-2005, 03:01 PM
He wasn't referring to 3DLabs' implementation of 32x32 pixel blocks, but to John Carmack's claim to use a 32K x 32K texture on the terrain.

Jan.

tranders
08-28-2005, 02:15 PM
Originally posted by Jan:
He wasn't referring to 3DLabs' implementation of 32x32 pixel blocks, but to John Carmack's claim to use a 32K x 32K texture on the terrain.

Sorry about that confusion -- I assumed he was referring to the more recent post about the typical system page size, since he used RGBA in his calculation (the post about Carmack's presentation did not mention an image format - RGBA or otherwise). In any event, I highly doubt the entire 32Kx32K texture would ever be loaded into memory all at once. Tiled image formats are also old technology.

-- tranders

Gorg
08-28-2005, 04:14 PM
Originally posted by Korval:
So, if I want to use a shader many times, each time with different textures, I have to either pass the texture object

You would not need to. The texture coordinates would tell you. Unless, of course, you want to read directly from the texture yourself, in which case you would just bind it yourself like you do today.

To support that, there would have to be two types of samplers: the current samplers that use 0-1 ranges to look up into a texture, and a new type that can read straight from virtual addresses. Note that the standard sampler would actually transform the 0-1 address to a virtual address to read into the bound texture.

The only way you could have virtual addresses would be from a varying attribute. The compiler would have to enforce that, or if someone used the wrong sampler, the object would be strangely colored.

Pre-transforming a buffer of texture coordinates would not be costly if using VBO.

Of course, if dynamic texture coordinates are required, you can just revert back to normal binding.

Static coordinates are quite common in my experience, so pre-caching the coordinates in a VBO would remove lots of useless binding in the application and improve batching.

For example, if you have a shader that does the interaction of one light and a surface, you would only need to bind the light-specific textures. You could then batch all the surfaces that have static texture coordinates (but could have wildly different textures) and are touched by that light.
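A sketch of that pre-transform in C, with a hypothetical Region struct describing where a texture lives in the single texture space:

typedef struct { float u0, v0, du, dv; } Region;

/* rebase a surface's 0..1 coordinates into its texture's region of the
   shared space; done once, then the result is uploaded to a VBO */
void rebase_coords(float *uv, int vert_count, Region r)
{
    for (int i = 0; i < vert_count; ++i) {
        uv[2 * i + 0] = r.u0 + uv[2 * i + 0] * r.du;
        uv[2 * i + 1] = r.v0 + uv[2 * i + 1] * r.dv;
    }
}

After that, surfaces with different textures but the same shader could go into one draw call, which is the whole point.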

On the other hand, if GPUs ever become good at rendering small batches, then what I am talking about is not useful and virtual texture memory would only be used for large textures.

I would prefer this, but who knows where technology will go.



which will probably have the same problem in terms of creating stalls as binding the texture to begin with.
You get stalling when the texture needs to be loaded onto the card. If you use virtual texture memory, you read pages, so stalling occurs at the page level. The stalling problem is not removed or exacerbated; it is moved.

The point is to make it nicer for application developers.

Ysaneya
08-29-2005, 01:59 AM
I'm not impressed at all by everything I've read about it so far, and I think some of you are seeing complexity where it actually is pretty simple.

"Megatextures" sounds awfully like some marketing garbage to me. If you keep it technical, I think it's basically a terrain engine (I don't even think it's as advanced as clipmaps - CLOD or geomipmapping are more likely) with a huge texture mapped on it.

Now, a 32k x 32k texture certainly doesn't fit in memory. So what? You can easily cut the terrain into 32 x 32 seamless sections and apply a single 1k x 1k texture to each... on close sections. For farther sections, you can use lower-resolution textures with a cache, just like mipmaps (except that in OpenGL, the full mipmap hierarchy fits in memory). Add some streaming and blending to avoid popping, and you're done!
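A sketch of the per-section resolution pick under those assumptions, in C (the distance bands and the coarsest level are made-up numbers):

#include <math.h>

/* 0 = full 1k x 1k texture for the nearest sections; each further
   distance band halves the resolution, down to a coarsest cached level */
int texture_lod_for_section(float dx, float dz, float section_size)
{
    float d = sqrtf(dx * dx + dz * dz) / section_size;
    int lod = 0;
    while (d > 1.0f && lod < 5) {
        d *= 0.5f;
        ++lod;
    }
    return lod;
}

Stream the chosen level in the background and blend between levels to hide the popping, and that is essentially the whole scheme.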

All in all, that's probably no more than a few weeks of work for a good graphics programmer.

I'm not trying to minimize Carmack's work - what I'm saying is that it's not as innovative and new as some people would like you to believe.

Y.

Korval
08-29-2005, 09:56 AM
You would not need to. The texture coordinates would tell you.

That makes no sense.

Without the texture unit having a base pointer to a texture, it has no way of converting normalized texture coordinates (0 to 1) into an address in memory. Giving the system this pointer is the principal function of binding a texture.

Unless texture coordinates become virtual memory addresses (never going to happen), you still need to know which texture is being referred to.


I'm not trying to minimize Carmack's work - what I'm saying is that it's not as innovative and new as some people would like you to believe.

I'm sure people said the same thing about shaders: that if you need to do shader-type stuff, you can (in software, possibly by writing to a texture). It'd only take a few weeks' worth of programmer time too.

The point of it being in hardware is to make the process both automatic and more efficient. The algorithm you propose, for example, is not terribly efficient compared to a hardware-based one. The hardware algorithm (assuming the texture fits into main memory; having it go to disk murders performance) can dynamically load only the necessary chunks of the data into video memory. The unneeded bits don't even get touched. Your algorithm can't tell the difference, so it uploads mip levels and texture regions that may never be needed.

dorbie
08-29-2005, 10:27 AM
There are a few things you should know:

Firstly, there are many ways to do this; it has been done for years on PCs.

Paging on demand sucks because the data can be huge, bigger than system memory, and by the time texture requests demand the memory it's way too late.

Paging in anticipation of demand, load management and reasonable fallbacks when you don't have the best data are key.

One example of texture virtualization is to toroidally scroll a texture region and use vertex coordinate manipulation to adjust the virtual coordinates to the current toroidal window. The key is not to try to texture stuff outside your toroidal window at that resolution, and to leave an unused buffer region for active paging. Once you do this, other issues come to the fore, like MIP lod selection and management. That too has been solved creatively:

http://patft.uspto.gov/netacgi/nph-Parse...ie&RS=IN/dorbie (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/search-bool.html&r=2&f=G&l=50&co1=AND&d=ptxt&s1=dorbie.INZZ.&OS=IN/dorbie&RS=IN/dorbie)
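A sketch of the toroidal update in C/OpenGL, assuming a pager function fetch() that supplies texels for a world-space rectangle; only +x motion is shown, and a wrap across the window edge would need a second upload, elided here:

#include <GL/gl.h>

#define WIN 1024  /* resident window size in texels */

extern void fetch(int wx, int wy, int w, int h, unsigned char *rgba);

void scroll_window(GLuint tex, int old_x, int new_x)
{
    static unsigned char strip[WIN * WIN * 4];
    int shift = new_x - old_x;             /* columns newly exposed */
    if (shift <= 0 || shift >= WIN) return;

    int dst = old_x % WIN;                 /* wrapped destination column */
    fetch(old_x + WIN, 0, shift, WIN, strip);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, dst, 0, shift, WIN,
                    GL_RGBA, GL_UNSIGNED_BYTE, strip);
    /* vertex texture coordinates are then offset modulo WIN so they
       address the window toroidally */
}

Only the newly exposed strip crosses the bus; the rest of the resident window is untouched.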

Ever seen the Google Earth client? That's doing a primitive version over the *web*, for a single texture the size of the entire Earth at resolutions up to 3 inches, and it works pretty well considering.

There are other ways to virtualize textures:

http://patft.uspto.gov/netacgi/nph-Parse...ie&RS=IN/dorbie (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/search-bool.html&r=3&f=G&l=50&co1=AND&d=ptxt&s1=dorbie.INZZ.&OS=IN/dorbie&RS=IN/dorbie)

Gorg
08-29-2005, 03:04 PM
Originally posted by Korval:
Unless texture coordinates become virtual memory addresses
That is what I was talking about.

It can happen. You just need a transformation step when you upload the texture coordinates to a VBO.

lxnyce
08-29-2005, 03:29 PM
Dorbie responded before I could, but as he says, there is nothing new about this. There are quite a few GIS applications and algorithms which do the same exact thing, and they have existed for over 10 years now (my company's product is that old and the technology existed back then...on 8MB graphics cards). Dorbie isn't the only one with patents out there on this kind of tech as well, there are lots of them.

32K x 32K pixels in the GIS world is nothing to gloat over. I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant. Essentially every pixel is unique as well, and I can add decals, lighting, lightmaps, etc. to it just like you would in a regular gaming environment. Keep in mind that this is only the imagery; the terrain is of this high resolution as well. The toughest part isn't actually rendering the data, I would say, but overcoming the bandwidth issues, especially when it comes to streaming it over the internet (which you will find plenty of patents on as well).

You can go through some of the commercial packages found here to see some of this tech : http://www.vterrain.org/Packages/Com/index.html
Or as Dorbie pointed out, check out Keyhole/Google Earth, or even TerraExplorer (voxel based... lots of patents on their tech).

In short, all I am saying is that I don't want to see JC credited for this if it doesn't provide a new method for visualizing large terrain datasets. If it does come out and it proves to be something brand new, I will give him his due respect.

SirKnight
08-29-2005, 04:58 PM
I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant.
I've got to say this. Daaaaaaaaaaamn. :eek:

-SirKnight

Korval
08-29-2005, 05:36 PM
It can happen. You just need a transformation step when you upload the texture coordinates to a VBO.

No, it can't. Memory addresses cannot be linearly interpolated across the surface of a polygon; real texture coordinates can. Plus, during this interpolation, you might switch from one mipmap level to another, which would require a huge jump from one memory address to another. And heaven help you should you be doing anisotropic filtering on that texture, as that will literally do multiple texture accesses all across memory.

Plus, IHVs aren't going to give us addresses to virtual memory to begin with. It would expose us to a level of hardware that we have no business accessing. Things like texture swizzling and so forth, which are IHV dependent, would have to be defined for us in order to convert texture coordinates to virtual addresses.

So yeah. This is not a technique for being able to batch more; that's not what this is about.


In short, all I am saying is that I don't want to see JC credited for this if it doesn't provide a new method for visualizing large terrain datasets. If it does come out and it proves to be something brand new, I will give him his due respect.

I think there's kind of a misunderstanding here. Practical virtual texturing (hardware-based) isn't really for the purpose of visualizing large terrain. It exists to allow the user to more freely use more texture memory than the card normally has and to amortize much of the performance loss due to thrashing. Basically, it lets you use more/bigger textures without nearly as much penalty as normal.

That's its purpose, no matter what JC or anyone else wants to do with it.

lxnyce
08-29-2005, 05:58 PM
I apologize if there was a misunderstanding. From the initial post, I thought this is what it was being targeted at.

"where you’ve got a terrain model that has these enormous, like 32000 by 32000, textures going over them."

Brolingstanz
08-29-2005, 07:10 PM
Yeah, it seems like what you want here is a really big texture atlas, only without the problems associated with them. The out-of-core stuff should be handled in software; the hardware could handle largish atlas objects that could be allocated and mapped dynamically by the application. If asked, I plan to be vague and elusive on the implementation details.

Robert Osfield
08-30-2005, 01:39 AM
Originally posted by lxnyce:
You can go through some of the commercial packages found here to see some of this tech : http://www.vterrain.org/Packages/Com/index.html
Or as Dorbie pointed out, check out Keyhole/Google Earth, or even TerraeExplorer (voxel based...lots of patents on their tech).
There are open source solutions to large texture support/database paging too. Even a number of the commercial projects linked to above are based on open source database paging support underneath :-)

See http://www.openscenegraph.org; there's a little how-to guide on generating the database - just do a search for osgdem.

The support currently implemented is based on paging geometry and textures together. In the future I would like to decouple the texture and geometry paging, so that we have something close to virtual textures.

I have to say that I really don't think there is much value in using virtual textures as a texture atlas. Fetching the required data from main memory, let alone disk or over the web, incurs a big latency hit.

You have to hide this latency as much as possible by paging on the CPU in the background, incrementally downloading data to the GPU, and where possible using predictive knowledge about what data will be needed prior to its being used. All this requires high-level support for paging, not low-level OpenGL support.

Better hardware support for paging textures and geometry wouldn't go amiss, but it's never going to replace the high-level side of things; it'll have to work in unison with it. There also isn't any one specific bit of hardware you exercise, as paging stresses the whole CPU and even the network; it really isn't just a case of the next-gen GPU solving all that's required.

Things that would really help out on the GPU side would be decompression (JPEG2000 style) of imagery and geometry down on the GPU. I'll take this over any virtual texture support.

Robert.

Gorg
08-30-2005, 05:08 AM
No, it can't. Memory addresses cannot be linearly interpolated across the surface of a polygon; real texture coordinates can.

They can be interpolated easily if virtual coordinates are 2D and sequential.

The more difficult thing is the texture matrix and vector addition. That alone might make it not worth it, because you would have to transform the coordinates back to the 0-1 range and back again.



Plus, during this interpolation, you might switch from one mipmap level to another,
Again, that is just more work for the hardware to find the proper memory location.


And heaven help you should you be doing anisotropic filtering on that texture, as that will literally do multiple texture accesses all across memory.

I don't see your point here. That will happen even if you use a 0-1 texture range in a virtual texture memory.



Plus, IHVs aren't going to give us addresses to virtual memory to begin with. It would expose us to a level of hardware that we have no business accessing.

I don't actually care about the address. The hardware can keep it in any format it wants. It can simply return an error if we attempt to read the VBO of transformed texture coordinates.

Korval, I think we have gone over all the pros and cons of this technique. It has more cons (especially for the hardware), so unless it gives a substantial performance boost, I'd much prefer fast small batches.

So I will let it rest.

andras
08-30-2005, 07:35 AM
32K x 32K pixels in the GIS world is nothing to gloat over. I am looking at file dimensions of 259,823,625 x 129,926,368 pixels right this instant.

Mind if I ask what this dataset represents? Because even if you had the full Earth at 1 ft resolution(!!), it would still be a lot smaller than what you claim...

andras
08-30-2005, 07:41 AM
One example of texture virtualization is to toroidally scroll a texture region and use vertex coordinate manipulation to adjust the virtual coordinates to the current toroidal window. The key is not to try to texture stuff outside your toroidal window at that resolution, and to leave an unused buffer region for active paging. Once you do this, other issues come to the fore, like MIP lod selection and management. That too has been solved creatively:

Any papers about these techniques, or do we have to learn to read this patent cr@p written in lawyer English? ;)

skynet
08-30-2005, 09:47 AM
Please don't forget about other uses for virtual gfx card memory, too (huge geometry!).
I want to throw in that Carmack also stated that he wants to abandon texture tiling (in the long term) by basically texturing every polygon with a unique artist-painted texture. That would of course result in, say, one big texture atlas per "room". But since we only have 1-2 million pixels visible on screen, the actual _referenced_ texture memory stays rather constant (some multiple of the framebuffer size, I guess). Wouldn't a 256MB card be a rather sufficient "cache" for this type of usage pattern?
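A back-of-envelope in C for that question, with assumed numbers (1600x1200 screen, roughly four unique texels touched per pixel across two mip levels, 4 bytes per texel):

#include <stdio.h>

int main(void)
{
    long pixels = 1600L * 1200L;   /* ~1.92M visible pixels */
    long bytes  = pixels * 4 * 4;  /* texels/pixel * bytes/texel */
    printf("~%ld MB live texture\n", bytes / (1024 * 1024));  /* ~29 MB */
    return 0;
}

Even with generous margins for mip chains and paging borders, that lands far below 256 MB, which supports the "sufficient cache" intuition.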
The same goes for the geometry data. You can create a huge, detailed world and put it partly (a few big chunks) or as a whole into the gfx card memory.

VikingCoder
08-30-2005, 10:01 AM
259,823,625 x 129,926,368 pixels?

Even at 8-bits per pixel, that's 31,439,531 Gigabytes of data. (Yes, 31 Million Gigabytes.)

Pull the other one.

knackered
08-30-2005, 11:01 AM
he's likely using geometry clipmaps, with on-the-fly decompression - it's one of the big advantages of geometry clipmaps.
So the data is heavily compressed on disk.

andras
08-30-2005, 11:27 AM
he's likely using geometry clipmaps, with on-the-fly decompression - it's one of the big advantages of geometry clipmaps.
So the data is heavily compressed on disk.

Huh? At what compression ratio? 10000:1? :) I'm sure it's just a typo. Divide both dimensions by 1k, throw in 40:1 wavelet compression, and then it sounds reasonable.

lxnyce
08-30-2005, 02:51 PM
Andras, I believe they may be as low as 5 cm. I haven't checked what the desired resolution at the 1 ft range is, but it's probably close to that number. The scene is composed of various high-resolution rasters. The resulting raster takes on the highest resolution of all the other rasters when it's mosaicked.

This should also explain VikingCoder's question. It's damn near impossible to store all that data as one huge raster. Instead it's composed of various high-resolution raster sources which get composed into one enormous raster at runtime.

As far as the renderer is concerned, it only knows about that 1 huge raster though. The individual sizes range from 30k to 100k+, and there can be hundreds of them.

zed
08-30-2005, 05:16 PM
I was just writing about this today. Bigger textures aren't really what's needed; how's that gonna solve the problem that most (all) games suffer from, that the closer you get to a wall the worse it looks? I'm looking outside at the moment: the grass field I see doesn't need a 32k x 32k unique texture. Visually you can achieve a 99% similar result with 4 detail textures and a blend map, but with either method it's still gonna look crap up close.
It would be nice if we had some sort of lossy compression built into the cards; DXT3/5 give a 4:1 ratio but I want 40:1.

andras
08-30-2005, 06:32 PM
The scene is composed of various high-resolution rasters. The resulting raster takes on the highest resolution of all the other rasters when it's mosaicked.

This should also explain VikingCoder's question. It's damn near impossible to store all that data as one huge raster. Instead it's composed of various high-resolution raster sources which get composed into one enormous raster at runtime.

Yeah, so this is "logical" resolution. I also have a renderer that can render images at millimeter/pixel and kilometer/pixel resolution composed in the same scene. This doesn't make the runtime image resolution bazillion squared :)

Korval
08-30-2005, 11:43 PM
Bigger textures aren't really what's needed; how's that gonna solve the problem that most (all) games suffer from, that the closer you get to a wall the worse it looks?

Bigger textures are, in fact, how you solve that problem. Or you drop your screen resolution (or just stop increasing it to ridiculous levels like 16x12 and higher).

Compression has nothing to do with the visual artifacts you are referring to. It is due solely to bilinear filtering of a texture that is, relative to the screen resolution, too small. The screen-space triangle is trying to pick out pixels in the texture that just aren't there, so it uses bilinear filtering to make them up. And, while it's better than point sampling, it's not as good as having bigger textures.

If you increase the size (and the detail, assuming competent artists who know how to use a bigger texture), then you can add those details to the texture.

knackered
08-31-2005, 10:18 AM
Originally posted by andras:

he's likely using geometry clipmaps, with on-the-fly decompression - it's one of the big advantages of geometry clipmaps. So the data is heavily compressed on disk.

Huh? At what compression ratio? 10000:1? :) I'm sure it's just a typo. Divide both dimensions by 1k, throw in 40:1 wavelet compression, and then it sounds reasonable.

Thanks for doing the maths.
Well, with geoclipmaps you can procedurally add detail, so you could achieve these kinds of resolutions - although technically it's not real data, only an interpolation with added noise. ;)

zed
08-31-2005, 11:40 AM
Bigger textures are, in fact, how you solve that problem. Or you drop your screen resolution (or just stop increasing it to ridiculous levels like 16x12 and higher).

That doesn't solve it, as you know; it only reduces the distance at which it becomes apparent.
BTW, the lossy texture suggestion wasn't my solution but another thing I'd like added to graphics cards; imagine having the ability to have 4096x4096 textures everywhere.
What I'm talking about is very CPU-intensive: fractals etc. OK, it's easy to do rocks/plants etc., but I believe even stuff like humans is doable. What is an arm? A piece of skin with thousands of similar hairs and wrinkles on it; you don't need to model each hair, just the one, plus a range of distribution and a range of variability.

Korval
08-31-2005, 12:34 PM
That doesn't solve it, as you know; it only reduces the distance at which it becomes apparent.

And if you prevent the player from getting closer than this distance, the problem is solved.


What I'm talking about is very CPU-intensive: fractals etc. OK, it's easy to do rocks/plants etc., but I believe even stuff like humans is doable. What is an arm? A piece of skin with thousands of similar hairs and wrinkles on it; you don't need to model each hair, just the one, plus a range of distribution and a range of variability.

But that doesn't really solve photorealism, as by the time you could even consider noticing things like hairs on skin, you've got more important things to worry about. Like the fact that the rock's surface is perfectly flat, or (employing bump mapping or relief mapping) that its edges are flat. Or that they're polygonal.

By the time a large texture is no longer capable of providing the appropriate details on a surface without bilinear filtering, other issues with that surface become more apparent, to the point of wanting to employ displacement mapping.