Collision detection, and an octree. Sort of.

I was wondering what you all thought about this idea. I was working on octree-style collision detection: basically a binary <= / => check against big chunks of data, to eventually narrow down to a smaller chunk of the landscape. But I had another, similar idea, though it may take up TOO much memory.

I was simply going to break my ENTIRE map up into blocks of 10 or 20 feet, each block containing unsigned int indexes to the collision walls possible for that block. Then, each frame, check where the player is on XYZ and match that to one of the blocks. This makes finding where you are VERY VERY fast, and leaves VERY VERY few checks (since we are only checking a 10 ft area, not many walls are going to be there).
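Something like this is what I mean (just a sketch; the names and sizes are placeholders, and there is no bounds checking):

struct Block
{
    int numWalls;            // how many walls touch this block
    unsigned int *walls;     // indexes into the global wall list
};

// 5000 x 5000 x 20 ft map cut into 10 ft blocks = 500 x 500 x 2 grid.
const int BLOCK_SIZE = 10;
const int GRID_X = 500, GRID_Y = 500, GRID_Z = 2;
Block grid[GRID_X * GRID_Y * GRID_Z];

// O(1) lookup: integer division on the player's position, no recursion.
// Assumes the map starts at the origin.
Block *GetBlock(float x, float y, float z)
{
    int bx = (int)(x / BLOCK_SIZE);
    int by = (int)(y / BLOCK_SIZE);
    int bz = (int)(z / BLOCK_SIZE);
    return &grid[(bz * GRID_Y + by) * GRID_X + bx];
}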

But this does pose a size issue. Say I built a 5000 ft x 5000 ft x 20 ft high (2 levels) map. At 10 ft blocks, that is 500,000 blocks. Now say I had an average of 5 indexes per block; that is 2.5 million indexes, or about 10 megs of data with 4-byte ints. Is that too much, you think? (In RAM, not video memory, obviously.)
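Worked out (assuming 10 ft blocks and 4-byte unsigned int indexes):

blocks  = (5000 / 10) * (5000 / 10) * (20 / 10) = 500 * 500 * 2 = 500,000
indexes = 500,000 blocks * 5 per block = 2,500,000
memory  = 2,500,000 * 4 bytes = 10,000,000 bytes, roughly 10 MB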

Or should I go with the classic octree binary-division setup?

EDIT: Subnote: I could also build some of my other code areas into this design, thus adding more variables to the global mix. I guess the MAJOR question is: is it OK to use A LOT more space to gain A LOT more speed?

Yes, it will be faster than recursing through your octree, but the main reason quadtrees/octrees were invented is that the method you describe uses waaaayyy too much memory in even modest scenes.
Just make your octree's polygon threshold low enough and you'll still have bugger all triangles to test against. The recursion is not that expensive, because it won't have to recurse much in the general case.
Octree and quadtree collision detection is very fast when done correctly. I believe lots of commercial games use them - not surprising when you consider the memory available on the average console.
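The build step is just "split until a node holds few enough polygons". A rough sketch (all names invented, and the triangle-in-box test is deliberately crude):

#include <vector>

// Minimal invented types; a real engine has its own.
struct Triangle { float v[3][3]; };              // three XYZ vertices
struct AABB     { float min[3], max[3]; };

struct OctreeNode
{
    AABB bounds;
    OctreeNode *child[8];                        // all null at a leaf
    std::vector<const Triangle*> tris;           // filled only at leaves
    OctreeNode() { for (int i = 0; i < 8; ++i) child[i] = 0; }
};

// Crude overlap test: accept the triangle if any vertex is inside.
// (A real build would use a proper box/triangle intersection test.)
static bool Touches(const Triangle &t, const AABB &b)
{
    for (int i = 0; i < 3; ++i)
    {
        bool inside = true;
        for (int a = 0; a < 3; ++a)
            if (t.v[i][a] < b.min[a] || t.v[i][a] > b.max[a])
                inside = false;
        if (inside) return true;
    }
    return false;
}

const unsigned POLY_THRESHOLD = 32;              // leaf size target
const int      MAX_DEPTH      = 8;               // hard stop on recursion

void Build(OctreeNode *node, const std::vector<const Triangle*> &tris,
           int depth)
{
    if (tris.size() <= POLY_THRESHOLD || depth >= MAX_DEPTH)
    {
        node->tris = tris;                       // leaf: cheap to test
        return;
    }
    float mid[3];
    for (int a = 0; a < 3; ++a)
        mid[a] = 0.5f * (node->bounds.min[a] + node->bounds.max[a]);

    for (int i = 0; i < 8; ++i)                  // one child per octant
    {
        node->child[i] = new OctreeNode;
        for (int a = 0; a < 3; ++a)
        {
            bool high = (i >> a) & 1;            // bit a picks the half
            node->child[i]->bounds.min[a] = high ? mid[a] : node->bounds.min[a];
            node->child[i]->bounds.max[a] = high ? node->bounds.max[a] : mid[a];
        }
        std::vector<const Triangle*> sub;
        for (unsigned t = 0; t < tris.size(); ++t)
            if (Touches(*tris[t], node->child[i]->bounds))
                sub.push_back(tris[t]);
        Build(node->child[i], sub, depth + 1);
    }
}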

Just what I thought: A LOT of memory use. But I think today's systems can take it, don't you? I mean, if my entire collision code only takes something like 10 megs (max) on a system with even 256 megs (which is REALLY REALLY LOW), it should still have PLENTY of space for other stuff. I mean, I look at some apps that require 128 megs just to RUN, and I can't believe it.

I think I would like to use this method, since RAM is plentiful on most systems, and I won't be moving the data around a lot in RAM (the whole chunk will be loaded once and then only read from). This leaves A LOT of CPU power free to do other things.

256 MB of memory is not “REALLY REALLY LOW”. 128 MB is the standard expected by most games, which means most gamers probably have an average of 256 MB.

Also, if you're at all interested in rendering speed (which is more likely to slow you down than anything else), you should be using some form of on-card video memory for your vertex, and perhaps index, data. Investigate VAR or VAO, depending on your platform of choice.
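For the NV path, the setup looks roughly like this (a sketch from memory of the NV_vertex_array_range extension; the entry points have to be fetched with wglGetProcAddress and the declarations come from glext.h/wglext.h, so double-check the spec):

#include <windows.h>
#include <string.h>
#include <GL/gl.h>

void SetupVAR(const float *verts, int numFloats)
{
    GLsizei size = numFloats * sizeof(float);

    // readfreq/writefreq/priority hint at where the memory should live;
    // a priority near 1.0 tends to return video memory.
    void *fast = wglAllocateMemoryNV(size, 0.0f, 0.0f, 1.0f);
    if (!fast)
        return;                            // fall back to plain arrays

    memcpy(fast, verts, size);             // upload the vertex data once

    glVertexArrayRangeNV(size, fast);      // declare the range to GL
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    glVertexPointer(3, GL_FLOAT, 0, fast); // arrays now point into it
    glEnableClientState(GL_VERTEX_ARRAY);
}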

128 MB required for a game means the game has to fit in about 96 MB.

Collision detection speed is about as important as rendering speed for overall performance. Sometimes it is more important, sometimes less, depending on how much you stress either system.

In games and pretty much anything that is 3D, memory requirements don’t matter as much as speed.

By the time you finish your project, gamers will have 738MB as standard. You decide.

I spent some time optimizing my collision detection code, since it's a very expensive operation. Still not finished.

V-man

I use a hybrid approach: at the top level I have a grid, like you described, to perform really fast but coarse proximity queries. Inside each grid cell I have an AABB tree. I base the size of the grid cells on the total size of the scene and on the polygon density. I usually end up with 5-10K triangles per grid cell.
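Structurally, something like this (a sketch with invented names; the per-cell AABB tree is elided):

#include <vector>

struct AABBTree;                     // per-cell tree, details elided

struct GridCell
{
    AABBTree *tree;                  // built offline from the cell's tris
};

struct SceneGrid
{
    float cellSize;                  // derived from scene size / density
    int nx, ny, nz;                  // grid dimensions
    std::vector<GridCell> cells;     // nx * ny * nz entries

    // Coarse query: O(1) jump to the cell, then descend its AABB tree.
    GridCell *CellAt(float x, float y, float z)
    {
        int cx = (int)(x / cellSize);
        int cy = (int)(y / cellSize);
        int cz = (int)(z / cellSize);
        return &cells[(cz * ny + cy) * nx + cx];
    }
};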

– Tom

Eh? You do point-to-triangle/cylinder-to-triangle detection against 5 to 10 thousand triangles?

That brings up a good question: what is a good number of collision checks to perform each frame? I mean, we are attempting to cut as many as possible, but eventually we have to check some. What is a good number to stay below?

dabeav: that all depends on your individual app. For some it may be too slow to do more than 20; for others it may be okay to do thousands; but usually it will be in the hundreds.

knackered: I believe he means that he has 5-10K triangles in his AABB tree, and uses the AABB tree to reduce the checks.

In that case, I think I will go with my idea. I have worked it out: with a max map size of 10,000 x 10,000 x 200 (or so), broken up into chunks of 25 ft apiece, averaging 20 polys per chunk (a high estimate, I believe), it is about 25 million indexes, around 100 megs with 4-byte ints, but that is about ALL of the needed variables for my entire collision code. So I think I'm doing well. And also, I don't think I will EVER have a map that big.
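Worked out (assuming 4-byte unsigned int indexes):

blocks  = (10,000 / 25) * (10,000 / 25) * (200 / 25) = 400 * 400 * 8 = 1,280,000
indexes = 1,280,000 blocks * 20 per block = 25,600,000
memory  = 25,600,000 * 4 bytes = about 102 MB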