View Full Version : Doom3



Firestorm
09-25-2000, 10:30 AM
Ever since I listened to Carmack's speech at QuakeCon I've been wondering about something...

In the speech he said Doom3 would use .map files instead of .bsp files, and maybe a secondary binary file for edge coherency & t-junction data...
Later on he wrote somewhere (in his .plan, I think) that currently Doom3 is using .map files only and no binary file would be used...

This basically means that all VSD preprocessing etc. would be done at level load time...
And although it's possible to draw an entire Q3 level without VSD at 90fps on GeForce hardware (I've tried it ;o)
I doubt Carmack is planning to have no VSD ;)

Especially if you consider that he's going to implement realtime lighting using stencil buffers, which basically means virtually everything is redrawn at least once per light...
He was talking about 8 passes per surface!

And considering that you need at least some VSD in order to correctly draw multiple alpha-blended and other transparent surfaces...

Add to this that he was mysteriously talking about Doom3 being more dynamic than anything seen to date, and about brush connectivity etc.
It almost sounds like every brush would be dynamic and movable... although I doubt *that*.

It does sound like Doom3 is going to be a cool game/engine...

But it does leave me wondering what the hell he's doing, hehe.

So, my question is: do you guys have any idea what kind of VSD Carmack is doing?

Considering that the preprocessing will have to be lightning quick, since it'll be done at level load time...
It sure as hell won't be BSP... (at least, I don't think so)
Probably some rough portal rendering...
Just turn on r_portalonly (or something) in Q3A... it's just as fast as using the BSP tree (at least on my GeForce1).
And considering that GeForce1/2/NV20 will be the target systems for Doom3...

oh well.

Firestorm
09-25-2000, 10:32 AM
heh, yeah, I know, a bit off topic ;o)

pavlos
09-25-2000, 11:51 AM
There's a very interesting interview at VoodooExtreme. There, Carmack makes it (almost) clear that he's going to use a portal engine for Doom. I can't tell whether it's the usual "clip the view frustum to the portal" kind of engine, but I'm sure that there will be no VSD preprocessing.
Also, there is no reason to store the geometry in a BSP, but that was also the case with Quake 3, so here Carmack is unpredictable.

Firestorm
09-25-2000, 12:08 PM
Well, portal rendering itself needs to be preprocessed...
You don't magically *get* all of the portals and cells, after all; they need to be calculated somehow/somewhere...

As for "storing the geometry in a BSP"...
I'm talking about the BSP tree itself, basically... not about the geometry...
The geometry is quick & easy to calculate...
It's how to calculate the data you need to perform (rough) VSD quickly that's the mystery...

But thanks for the info about the article on VoodooExtreme, I'll be sure to check it out...

Gorg
09-25-2000, 01:42 PM
You can create some kind of "dynamic portal".

Brushes are used to define a region that is only visible from a certain spot. Then, if something moves into that brush, you add it to an "in brush" object list. So if you don't have that brush in view, you just don't render anything in it.
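
Something like this (completely made-up names, just to illustrate the idea, not anyone's actual engine code):

#include <vector>

struct Object;                                  // whatever the engine renders

struct RegionBrush {
    // ... the brush bounds would go here ...
    std::vector<Object*> contents;              // movable objects currently inside this brush
};

bool BrushInView(const RegionBrush& brush);     // placeholder visibility test
void DrawObject(Object& obj);                   // placeholder draw call

void DrawDynamicObjects(std::vector<RegionBrush>& brushes)
{
    for (RegionBrush& brush : brushes) {
        if (!BrushInView(brush))
            continue;                           // brush not in view: skip everything inside it
        for (Object* obj : brush.contents)
            DrawObject(*obj);
    }
}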

With that, you can even make terrain rendering faster, if the map makers are clever enough to place the brushes at the right spots.

With today's more powerful machines and with the size of the data, it's pretty "unclever" :) to use a perfect VSD algorithm. What you need is just very gross culling and scalable geometry.



[This message has been edited by Gorg (edited 09-25-2000).]

skw|d
09-25-2000, 08:30 PM
I am doing some research into a portal engine myself and I think I know what he is talking about.

With portal rendering, there is no need for preprocessing of map data because you can easily create the cells on the fly. The trick is to come up with an algorithm to minimize the number of cells used to define a room, this allows for fewer splits of the view frustum. Because you only render what is visible, you have extra processing time for other things like dynamic lighting.
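
In rough C++, the kind of recursive cell/portal traversal I mean looks something like this (just a sketch with placeholder helpers, not anybody's actual engine code):

#include <vector>

struct Frustum { /* clip planes */ };
struct Portal;

struct Cell {
    std::vector<Portal*> portals;               // openings into neighbouring cells
    // ... the cell's geometry lives here ...
};

struct Portal {
    Cell* target;                               // the cell on the other side of the opening
    // ... the portal polygon, used to shrink the frustum ...
};

// Placeholder helpers, assumed to exist elsewhere in the engine:
bool    PortalVisible(const Portal& p, const Frustum& f);          // does the opening intersect f?
Frustum ClipFrustumToPortal(const Frustum& f, const Portal& p);    // shrink f to the opening
void    DrawCellGeometry(const Cell& c, const Frustum& f);

// Draw the cell the camera is in, then recurse through every portal that is
// still visible, with the frustum narrowed to that portal. No precomputed PVS.
// (A real engine would also reject portals facing away from the viewer so the
// recursion can't bounce back and forth between two cells.)
void RenderCell(const Cell& cell, const Frustum& frustum)
{
    DrawCellGeometry(cell, frustum);
    for (Portal* p : cell.portals)
        if (PortalVisible(*p, frustum))
            RenderCell(*p->target, ClipFrustumToPortal(frustum, *p));
}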

And since the loading of the world data into cells happens in realtime, you can easily change the existing cells dynamically.

Though I think Carmack has something in mind to go with the already known portal rendering method. Doom3 is supposed to support more open areas, which have been shown to be slow with portal rendering. He might have a hybrid rendering scheme that mixes portal rendering with a form of terrain rendering. It would be quite easy, once you leave a cell through a portal, for the rendering system to jump to an entirely different scheme to render what is beyond the portal.

Another benefit is that you can design levels using the engine in realtime. This idea is similar to the DukeNukem3D map editor.

Nothing that Carmack has stated about his engine has surprised me; it makes sense, and I have already seen engines that show off some of the technology he speaks of. I hope to be able to come up with my own solutions as I develop my own engine.

/skw|d

Firestorm
09-26-2000, 12:12 PM
Well, building portal cells is fast, but I doubt it's fast enough to do in realtime...
At least not when you're doing rendering, AI, sound, physics and whatnot at the same time too...

As for zero overdraw, that's overrated...
It's probably faster to have a lot of overdraw than to split a lot of polygons to get zero overdraw (generating more vertices to send over to the 3D card, plus the CPU overhead of the splitting itself).
Fillrate is not the problem with rendering; bandwidth is.

These days it's usually more effective to just find a very rough estimate of what is visible and send it over to the card than to try to find out exactly what is visible and draw only that...
Especially if you have to use a less optimal format when sending the data to the card...
Better to try to keep everything as static as possible and use matrices to move/rotate stuff...

I mean, I think Carmack said something about having roughly 30x overdraw per pixel??
And I remember him saying something about having about 8 passes per surface on average...

I mean, wow... that's a lot more overdraw than I thought in the first place...

dorbie
09-26-2000, 10:03 PM
This suggests to me that there could be lots of dynamic data in the scene, highly curved surfaces, etc. Maybe it won't all be indoors.

Remember that the BSP structure was only a means of visibility processing, one that was very well matched to the database being used. That database consisted of a series of convex hulls with preprocessed intervisibility information. The BSP traversal determined which leaf the eye was in, in order to access a mask of which other leaves' geometry sets were also potentially visible.
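
For anyone who hasn't dug through the Quake source, the idea is roughly this (a simplified, made-up layout, not id's actual file format, and without the PVS compression the real thing uses):

#include <cstdint>
#include <vector>

struct Plane   { float nx, ny, nz, d; };
struct BspNode {
    Plane plane;
    int   children[2];                   // >= 0: child node index, < 0: leaf index encoded as -(leaf + 1)
};
struct World {
    std::vector<BspNode>      nodes;
    int                       numLeaves;
    std::vector<std::uint8_t> pvs;       // one bit row per leaf, numLeaves bits wide
};

// Walk down from the root: which side of each splitting plane the eye is on
// picks the child, until we fall out of the tree into a leaf.
int FindLeaf(const World& w, float x, float y, float z)
{
    int node = 0;
    while (node >= 0) {
        const BspNode& n = w.nodes[node];
        float dist = n.plane.nx * x + n.plane.ny * y + n.plane.nz * z - n.plane.d;
        node = n.children[dist >= 0.0f ? 0 : 1];
    }
    return -(node + 1);
}

// The preprocessed intervisibility data is then just a bit mask per leaf.
bool LeafPotentiallyVisible(const World& w, int eyeLeaf, int otherLeaf)
{
    int rowBytes = (w.numLeaves + 7) / 8;
    return (w.pvs[eyeLeaf * rowBytes + otherLeaf / 8] >> (otherLeaf % 8)) & 1;
}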

As the nature of the database, hardware and surface descriptions changes, alternative means of visibility processing make sense.

I don't think it's a given that there will be no preprocessing of the scene, but it could well be the case; CPUs are much faster, and the kind of processing required during the load may be offset by the time it takes to load things like shaders and geometry. It may also be a mistake to assume that the rendering will be as efficient as in previous engines. A portal engine seems extremely limited for anything but running around a cluttered series of rooms with narrow connections (and I mean more cluttered than previous shooters). It's way too fussy for anything similar to what's gone before.

The edge connection list & t-junctions are required for dynamically remeshing curved surfaces without cracked seams. If there were no curvature, this information and geometry would be fixed and part of the core description; tri-stripping on the fly isn't enough motivation for it. There may be much more payback from intense work on view-dependent tessellation algorithms for this type of database than from sophisticated model-space visibility culling. In addition, as the geometry gets arbitrarily complex and unpredictable, sorted eye-space occlusion culling with an emphasis on level-of-detail tessellation might make more sense for many types of scene, especially if you expect hardware to have some sort of coarse z-buffer testing... and everyone will. Heck, with coarse z-buffer testing you'll be focusing on geometry again rather than overdraw and fill, particularly with many textures in a single pass. You waste less bandwidth sending tris, and in the distance there are fewer of them.

Carmack's art has always been matching the engine to the technology. If you're trying to figure out what he's going to architect, you need to look at future card features (he runs on bargain-basement cards, but he targets the latest & greatest) and the type of database being drawn (look at the direction Quake3 moved the database in).

A coarse z-buffer means you need to sort for those cards anyway. It also means geometry, not fill, could become the issue.

Multiple textures & register combiners etc. mean single-pass shaders, again improving fill while reducing the number of times you send geometry.

The database mandates much more complex, curved, even animated surfaces, which in turn makes schemes like facet-aligned BSP trees impossible.

A BSP tree is also faster to generate if it only sorts larger groups of objects by their bounds instead of leaf-level data. If you want a reasonable BSP (or other structure) for quickly sorting big chunks of geometry, rather than getting down to a facet-level leaf to resolve visibility, it may make more sense to compute it at load time.

Firestorm
09-27-2000, 12:48 AM
Well, if you thought I thought that Doom3 would have no VSD, then you misunderstood me...
It HAS to have some VSD, and therefore it requires at least *some* preprocessing...
Only it's done at load time...
And Carmack mentioned something about the preprocessing not working the same as in previous id games, in the sense that Doom3 requires smarter designing from level designers, because the preprocessing won't be foolproof (or something, sorry, bad explanation).
It comes down to this: level designers will need to use more hint/clip etc. type brushes to get good VSD compared to older id games...
He said something about the technology moving away from letting one tool do all the VSD preprocessing for you, towards more feedback from the level designers...

So a lot of VSD preprocessing information can already be in the .map file...

He IS calculating the t-junction data etc. while loading the level, because I doubt that he'd store raw polygon & edge data in the Doom3 .map files...
Most likely he'll use some updated version of the old .map file format...

Some form of VSD is always necessary, even with z-buffers...
At least when you're planning on displaying transparent surfaces...

As for portal rendering, you don't have to use convex-cell portal rendering; you can use portal rendering in the form of areas connected to areas...
Just like id did in Q2 & Q3...

Actually, without some sort of portal rendering, combining one form of VSD with any other form of VSD (terrain rendering, say) will be very hard to do...

Although I doubt id will have terrain in Doom3...
Maybe I'm wrong, but it just doesn't seem to be Carmack's thang...

skw|d
09-27-2000, 03:01 AM
When I said on-the-fly, I meant at map load; I don't see how I implied that you process the portals/cells every frame. You can update them or insert/remove them dynamically, however.

And overdraw is very important; the limitation of the hardware is the amount of polys you process. If you render a poly with a texture, dynamic lighting, shader effects... and it's not even visible... that is a waste of a lot of processing time. A BSP can suffer from 200% overdraw, so 2x the number of polys are sent to the hardware and get processed. If you construct the PVS from the camera in real time, then you only send the visible polys to the hardware, because there is no overdraw.


as for portal rendering, you don't have to use convex cell portal rendering, you can use portal rendering in the form of areas
connected to areas.. Just like id did in q2 & q3..

That is not true; portal rendering requires convex hulls, and you adjoin two convex hulls to form complex shapes. This gives you a lot of portals, each of which will clip the view frustum, and a lot of splits are evil.

Quake2 only used BSP trees for the calculation of the PVS; Quake3 used a portal hack on top of a BSP tree for special effects like the teleporter destination view and rotating mirrors. Doom3 will be a portal-based engine, no more BSP. The main reason is to be able to increase the complexity of the levels and to remove the map preprocessing requirement.

/skw|d

LordKronos
09-27-2000, 05:57 AM
Not positive, but when Firestorm said you don't have to use convex hulls for portals, I think he MIGHT have meant you don't need convex hulls that match up with the room geometry. You can use the convex hulls as a sort of rough bounding volume, to create general areas of the map.

Say you are trying to model a 5-story building, with the stories connected by a staircase which sticks out of the side of the building (kinda like a fire escape). You can put a convex hull around each of the 5 stories, and a 6th convex hull around the staircase. Then a portal can be used to link each floor to the staircase. In this case, each floor of the building can be concave (hallways and rooms), but its bounding hull is convex.
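
To make that concrete, here's a toy setup of those 6 cells (hypothetical structures, just for illustration):

#include <vector>

struct AABB { float min[3], max[3]; };   // the convex bound around a (possibly concave) area

struct Cell {
    AABB             bound;              // convex hull used for culling
    std::vector<int> portals;            // indices of the cells reachable through a portal
};

int main()
{
    std::vector<Cell> cells(6);          // cells 0..4 = the five floors, cell 5 = the staircase
    // (the bounds would be filled in from the map data; omitted here)

    for (int floor = 0; floor < 5; ++floor) {
        cells[floor].portals.push_back(5);   // each floor opens onto the staircase...
        cells[5].portals.push_back(floor);   // ...and the staircase opens onto each floor
    }

    // Culling only ever asks "is this convex bound in view?"; the hallways and
    // rooms inside each bound are free to be as concave as they like.
    return 0;
}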

Firestorm
09-27-2000, 09:10 AM
Originally posted by skw|d:
That is not true, portal rendering requires convex hulls, you adjoin two convex hulls to form complex shapes. This gives you a lot of portals, and each one will clip the view frustum, and a lot of splits are evil.
A lot of splits are always evil (well, in principle at least, maybe not always).

Portal rendering doesn't have to be fully convex; you can have non-convex cells too...
You can think of the non-convex parts as 'details' (like detail brushes) or anti-portals or whatnot...

I've seen it work; it is possible.

pavlos
09-27-2000, 11:31 AM
The key is how Carmack will define a cell (convex or not).
Here is a part from the interview at VE:
“In any case, the gross culling in the new engine is completely different from previous engines. It does require the designers to manually placed portal brushes with some degree of intelligence, so it isn't completely automated, but I expect that for commercial grade levels, there will be less portal brushes than there currently are hint brushes. It doesn't have any significant pre-processing time, and it is an exact point-to-area, instead of cluster-to-cluster. There will probably also be an entity-state based pruning facility like areaportals, but I haven't coded it yet.”
So, I think it's clear that he's talking about relatively large (non-convex) areas connected by portals placed by the designer. That way he can get an estimate of what is visible and replace the precomputed PVS. But he must use a hierarchical space-partitioning algorithm inside each cell to reduce overdraw. I haven't read any comment from Carmack about that, but I think he will not use a BSP tree for it (that's what I was trying to say in my first post). The shortcomings of a BSP are well known, and you can observe them if you run a Quake 3 map with r_speeds enabled.

I think he may continue to keep each object in an AABB and put these bounding boxes inside an octree (or something like that) with relatively big octree cells (to avoid splits). This way he can get a rough estimate of what's visible, and the octree can also render outdoors efficiently (but I don't think DOOM 3 will have outdoors).
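
Roughly like this (made-up names; assumes the coarse octree nodes are already built up front):

#include <vector>

struct AABB { float min[3], max[3]; };

struct OctreeNode {
    AABB             bound;
    OctreeNode*      child[8];           // null where the node isn't split (the tree is built up front)
    std::vector<int> objects;            // indices of the objects whose AABB ended up here
};

bool Contains(const AABB& outer, const AABB& inner)
{
    for (int i = 0; i < 3; ++i)
        if (inner.min[i] < outer.min[i] || inner.max[i] > outer.max[i])
            return false;
    return true;
}

// Push each object's AABB down only while a child cell fully contains it.
// With big, shallow cells most objects stop high up, which is the point:
// cheap, rough culling rather than a perfect result.
void Insert(OctreeNode& node, int objectIndex, const AABB& box)
{
    for (int i = 0; i < 8; ++i)
        if (node.child[i] && Contains(node.child[i]->bound, box)) {
            Insert(*node.child[i], objectIndex, box);
            return;
        }
    node.objects.push_back(objectIndex);
}

// Gathering the potentially visible set is then just "collect every node whose
// bound touches the view frustum" (the frustum test itself is omitted here).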

Also, he stated at FiringSquad (almost a year back, after the Quake 3 release) that the new engine will have the ability to cut the geometry anywhere, something you can do relatively easily with an octree and an area connectivity graph (to add new portals when necessary).

And finally, note that in a Slashdot comment he made clear that the .map file will be loaded (and required) only for use with the built-in editor. The rendering engine will use pre-processed "compiled" geometry placed in a text file. The built-in editor is like the old editor, and it's integrated so it can share common code with the rendering engine.


[This message has been edited by pavlos (edited 09-27-2000).]

skw|d
09-27-2000, 04:09 PM
LordKronos: You are describing a culling technique that does work, but it cannot be used to get the pixel accuracy needed to render the 3D data. If you did, you would get tears in the walls and such.

Firestorm: You know something I don't; please explain to me how you can clip a view frustum with an arbitrary volume, because I would like to know.

LordKronos
09-27-2000, 05:35 PM
Originally posted by skw|d:
LordKronos: You are describing a culling technique that does work, but it cannot be use to get pixel accuracy needed to render the 3d data. If you did, you will get tears in the walls and such.


Not sure what you mean. It works fine for me.

pavlos
09-27-2000, 05:55 PM
Firestorm is right. You can use non-convex cells.
Using convex cells produces zero overdraw, but the overhead from the frustum clipping is enormous and does not make sense when using hardware acceleration.
There's a portal column at Flipcode that describes a portal engine. Here is a snippet:
“Another good way to get a portal engine up to speed is using concave sectors: Instead of using small sectors (or larger sectors with very little detail) we could also use larger sectors, if we would somehow find a way to handle the problems that this introduces...”
For the rest, go to http://www.flipcode.com/portal/

As for the clipping question, the answer is that you can't clip against an arbitrary volume. So, in my engine I clip against the 2D bounding box of the portal, always getting a 4-plane frustum. As you know, testing an AABB against a 4-plane frustum is only 4 dot products.
I know it's not perfect, but keep in mind that all the hardware accelerators perform guard-band clipping, so you only need a rough estimate of what's visible.
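
For reference, the test looks something like this (my own simplified version, one dot product per plane, with the plane normals pointing into the frustum):

struct Plane { float nx, ny, nz, d; };   // n.x*X + n.y*Y + n.z*Z + d = 0, normal pointing into the frustum
struct AABB  { float min[3], max[3]; };

// Take the box corner that lies furthest along the plane normal; if even that
// corner is behind the plane, the whole box is outside and can be culled.
bool BoxTouchesFrustum(const AABB& box, const Plane planes[4])
{
    for (int i = 0; i < 4; ++i) {
        const Plane& p = planes[i];
        float x = (p.nx >= 0.0f) ? box.max[0] : box.min[0];
        float y = (p.ny >= 0.0f) ? box.max[1] : box.min[1];
        float z = (p.nz >= 0.0f) ? box.max[2] : box.min[2];
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0.0f)
            return false;                // completely outside this plane: cull it
    }
    return true;                         // possibly visible (the test is conservative)
}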


[This message has been edited by pavlos (edited 09-27-2000).]

skw|d
09-28-2000, 07:45 AM
An axis-aligned bounding box is a convex volume, so you are still clipping against a convex cell. It is true that, regardless of what GEOMETRY is contained inside the cell, as long as the cell is convex it can be clipped, and the side effect will be overdraw.

But going back to the topic of this thread: With all the polygon processing that will be done to get dynamic shadows and special effects, overdraw is not an option. By removing unneeded polys from the pipeline, you save time for more complex geometry and dynamic shadows.

For my research I am taking a set of data and processing it into cells and portals. To minimize the splits, the algorithm adjusts the current cell against its neighbors to try to find a volume that is within a certain tolerance of its neighbor. My goal is to be able to load unprocessed data right into the engine for use, and still have all the benefits of zero overdraw.

/skw|d

Firestorm
09-28-2000, 10:46 AM
Originally posted by skw|d:
An Axis-Aligned Bounding Box is a convex volume, so you are still clipping against a convex cell.

Hahaha, let's not argue about semantics, that will get us nowhere ;o)
Yes, you need a convex cell for clipping.
But whoever said that you need to clip?


But going back to the topic of this thread: With all the polygon processing that will be done to get dynamic shadows and special effects, overdraw is not an option. By removing unneeded polys from the pipeline, you save time for more complex geometry and dynamic shadows.

True, but fillrate isn't the problem (although yes, geometry is).
(And by the way, that is one of the reasons why I think Doom3 HAS to have at least some fairly good VSD, which is kinda the opposite of quick 'pre'processing during loading, IMHO.)
But calculating zero overdraw takes way too much processing time, relatively speaking.
Carmack said 30x overdraw, with 8 passes per surface... that is something like 4 surfaces drawn on top of each other, if you count the 8 passes as 8x overdraw...
You know how to create shadows with the stencil buffer, right? It's pretty cool stuff...
The thing which costs the most calculations is computing the shadow silhouette, but once you have the silhouette, you don't have to care how complex the geometry is that you're casting your shadows onto...
(And silhouette calculations can be partly precalculated.)
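
The silhouette part is basically just this (a rough sketch with made-up structures; the edge connectivity is the part you can precalculate, the facing test runs per light):

#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { int v[3]; Vec3 normal; Vec3 center; };
struct Edge     { int v0, v1; int tri[2]; };   // the two triangles sharing this edge (precomputed connectivity)

bool FacesLight(const Triangle& t, const Vec3& lightPos)
{
    Vec3 toLight = { lightPos.x - t.center.x, lightPos.y - t.center.y, lightPos.z - t.center.z };
    return t.normal.x * toLight.x + t.normal.y * toLight.y + t.normal.z * toLight.z > 0.0f;
}

// A silhouette edge is one whose two triangles disagree about facing the light.
// Each silhouette edge then gets extruded away from the light to build the
// volume that is rendered into the stencil buffer.
std::vector<Edge> FindSilhouette(const std::vector<Triangle>& tris,
                                 const std::vector<Edge>& edges,
                                 const Vec3& lightPos)
{
    std::vector<Edge> silhouette;
    for (const Edge& e : edges)
        if (FacesLight(tris[e.tri[0]], lightPos) != FacesLight(tris[e.tri[1]], lightPos))
            silhouette.push_back(e);
    return silhouette;
}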


For my research I am taking a set of data and processing it into cells and portals. To minimize the splits, the algorithm adjusts the current cell with it's neighbors to try and find a volume that is within a certain tolerance of it's neighbor. My goal is to be able to load unprocessed data right into the engine for use, and still have all the benefits of zero overdraw.

Sounds pretty cool...
Some time ago I was, like you, looking for the perfect algorithms which could produce zero overdraw as quickly as possible...
But a while back I realized that the problem at the current level of technology isn't overdraw (fillrate) but throughput/bandwidth...
You're going to get a bigger performance gain by using display lists, using vertex buffers, decreasing the number of vertices (which means fewer splits; splits create more vertices) and sorting your textures...

Trust me, I've timed it all...
Fillrate, on GeForce-level hardware and up at least (and within a year that's going to be the low end), is basically infinite as far as you care... (at least for Q3A-level engines, probably not for Doom3-level engines).
It's the bandwidth which is going to eat your framerates away...
So 'zero overdraw' won't help you, because it will make it hard for you to send your data to the card in a compact form (vertex buffers/display lists etc.), it'll create more vertices through all the splitting which is necessary to get zero overdraw, and you have all the additional CPU cycles it takes...

Don't believe me?
Create a program which loads a Q3A map, creates all the polygons from the brushes, builds a display list and draws everything
(sort those textures!).

Then, split everything in such a way that you only have the outer shell of the level...
Optimize it any way you want.
Use a display list again...
And then just watch the enormous performance drop.

And with zero overdraw you won't even be able to use a display list...
Vertex buffers (which are slower) might work with it, but I'm not entirely sure...

whoa, long post ;)


[This message has been edited by Firestorm (edited 09-29-2000).]

skw|d
09-28-2000, 08:01 PM
You forgot to think about some things.

Zero overdraw means fewer polygons are being sent through the pipeline; overdraw hurts you in more places than just fillrate.

Think about this: you are sending polys to the hardware, asking it to do all the complex processing you talked about. All the polys must be textured, lit by dynamic lighting, dynamic shadow volumes must split some of them and shade them, and any other processing must be done to them... but they are never seen!

The whole point of VSD is to render only what the viewer needs to see. The reason is not just fillrate, but avoiding all the other processing that needs to be done to the polys.

The time it takes to render a scene of polys is not linear: adding 10 times the number of polys does not make it 10 times slower, it is far worse. So the effort you put into removing polys that never need to be processed will go a long way, and IMHO that is where one should focus their efforts when designing an engine.

I hope you can see what I am getting at.
/skw|d

Firestorm
09-28-2000, 09:52 PM
Originally posted by skw|d:
Think about this, you are sending polys to the hardware card asking it to do all the complex processing you talked about. All the polys must be textured, lit by dynamic lighting, dynamic shadow volumes must split some and shade them, and any other processing must be done to them... but they are never seen!
well "textured, lit by dynamic lighting" is fillrate and "dynamic shadow volumes must split some"??
as far as i know hardware doesn't have shadow volumes!?
The technique i was referring to when talking about with stencil buffers uses the zbuffer and does absolutely no clipping..
It's a pure fillrate thang..

And if you're referring to texture matrix calculations on the card etc.
Yes, those have impact on performance, but geometry has more effect on it than anything else...


The whole point of vsd is to render what the viewer needs to see. The reason is not just fillrate, but to avoid all the other processing that needs to be done to the polys.

And you're absolutely right!
But the point is that balance is more important than zero overdraw...
There are more factors than just 'drawing the polygons you see'.
Sure, the less you draw, the faster everything is going to be, that's obvious.
But it's faster to draw fewer polygons and less complex polygons (with fewer vertices), even if you have more overdraw...


The time it takes to render a scene of polys is not linear, adding 10 times the number of polys does not make it 10 times slower, it is far worse. So the more effort you put into removing polys that will never need to be processed will go a long way, and IMHO is where one should focus their efforts when designing an engine.
It's all about balance: yes, you need to remove as many polygons you don't see as you reasonably can (without doing so many calculations that you actually start slowing everything down).
But clipping polygons is a bad thing most of the time...
And the most important factor to take into consideration is that things like vertex buffers usually have a much bigger impact on performance (in this case in a positive way) than removing several polygons...

Look...
I was at SIGGRAPH 2000 this year, and in one of the courses they gave a very good piece of advice:
Theory is good, but it's worth crap if you don't actually verify that what you think is going on IS actually going on.
Check everything, try everything, never assume.
Verify, verify, verify...

I was working with zero-overdraw algorithms before I started testing everything... and because of those tests I changed my mind...

Please take my advice and do some tests. You'll discover (like I did) that splitting is very bad for performance, much more so than overdraw.

Of course, if you split a polygon and end up with exactly the same amount of vertices, sure, it'll be just as fast...
(But of course you'll still have the extra CPU overhead.)

It's probably more effective to split a level up into some sort of cells (not too small, not too big, non-convex),
put them in display lists, and determine which portals are visible and which display lists should be called...
The added advantage of display lists is that the geometric data should already be on the graphics card...
And when you do things like multipass rendering, you just call the list for every pass and you don't need to send it to the card again, because it's already resident.

But again, I'm talking about GeForce-level hardware...

I hope you realize how much faster vertex array ranges (NVIDIA-specific) are than display lists, how much faster display lists are than vertex buffers, and how much faster vertex buffers are than sending individual triangles, etc...
And the more static the data is, the faster it can basically be sent to the card...
Of course, when I say static, I mean from a geometric point of view... you can still rotate/translate the data with the help of matrices (and some extensions); you can even do the same for texture coordinates using texture matrices...
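
For example, something like this (a bare-bones OpenGL 1.x sketch; assumes you already have a GL context, and drawCellGeometry()/setupPassState() are placeholders for your own code):

#include <GL/gl.h>

void drawCellGeometry();                 // placeholder: issues the cell's triangles
void setupPassState(int pass);           // placeholder: binds the texture / blend mode for this pass

GLuint compileCell()
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);         // record the geometry once...
    drawCellGeometry();
    glEndList();
    return list;
}

void drawCellMultipass(GLuint list, int numPasses)
{
    for (int pass = 0; pass < numPasses; ++pass) {
        setupPassState(pass);
        glCallList(list);                // ...and replay it every pass without re-sending the vertices
    }
}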

[This message has been edited by Firestorm (edited 09-29-2000).]

skw|d
09-28-2000, 11:46 PM
You didn't bother to take any time to understand what I was saying, so I give up on trying to explain any further.

Roderic (Ingenu)
09-28-2000, 11:55 PM
Let the hardware do HSR for you.
I don't mean clipping or culling, that's our job, but zero overdraw is up to the hardware, not us.
Many cards are going to do that for us, so try to optimize for something other than zero overdraw.

PowerVR cards already do zero overdraw for you, and even sort the polys in the right order to take care of correct transparency effects.

I know that next-gen 3D cards will take care of this for us...
While we are waiting for them to become standard equipment, we should take a bit of care about overdraw.

(Don't be extreme; sending everything to the card with massive overdraw (more than 10x per pixel) will not help you...)

Firestorm
09-29-2000, 04:15 AM
Originally posted by skw|d:
You didn't bother to take any time to understand what I was saying so I give up on trying to explain any further.
I understand perfectly what you're saying, but understanding what you say doesn't necessarily mean that I have to agree with you!
And I can point a finger straight back at you and say you're not trying to understand my point.

arnero
10-01-2000, 01:11 AM
Maybe one could say that software culling moves up to higher and higher levels:

clip triangles = cull pixels
cull triangles
cull display lists (bunches of triangles)
cull even bigger display lists
The card is doing everything for us

Arne

Firestorm
10-01-2000, 04:46 AM
Yup, sure seems that way...
Unfortunately, the current PC architecture makes it hard for graphics cards to do more and more work for us, since bandwidth is becoming a bigger and bigger problem...
The best solution would be some sort of shared memory architecture, a bit like in the Xbox (although I'm not sure if it is one, or what would be the best implementation of a shared memory architecture; that's not really my expertise ;)

An added bonus of a shared memory architecture is that you'd basically be able to retrieve data from OpenGL with almost no performance penalty :)

The Scytheman
10-02-2000, 05:11 AM
It's always nice to find a board with quality threads like this one.

Crusader
10-02-2000, 12:55 PM
Hey people! Are you sure this is true and not only rumors? I think it sounds a bit "magic".

People, don't you think you need more than 20 seconds to compile a 100,000,000-poly map? And doing this at load time? Carmack, I don't trust you! Or it will be HELL slow...

Or MUST WE DO THE *VIS* PROCESSING WITH OUR HEADS!!! With brush visibility at map editing time????

Maybe some people are spending too much time in front of their PCs and should look outside to see what they're missing...

Roderic (Ingenu)
10-03-2000, 12:00 AM
You can compute a quadtree, octree or kd-tree at runtime; it doesn't take that long. However, it's not the fastest way to optimize your 3D VSD.

About the PC architecture: AMD is moving to the Alpha architecture while keeping x86 compatibility, and as you know the Alpha architecture is much better.

About 3D cards and bandwidth:
I recommend that you go and read http://www.beyond3d.com and check out the PowerVR explanation; see http://kyro.st.com and http://www.powervr.com

Those cards save a LOT of bandwidth and show hardware manufacturers the way to go.
I know that many big firms are following them in one form or another (not tiling and deferred rendering, but a full color buffer in on-chip memory...)

Unified video board memory is something nice; shared memory is not possible with the current data buses in our PCs.
(Damn too slow.)

Firestorm
10-04-2000, 02:44 AM
Originally posted by The Scytheman:
It's always nice to find a board with quality threads like this one.

Hahaha, you know, I can't tell if you're being sarcastic or if you really mean that ;)
I'll presume the latter ;o)


Originally posted by Crusader:
Hey people ! Are you sure this is true and not only rumors ? I think it sounds a bit "magic".
Yes, I'm sure it's true, because I was at QuakeCon when Carmack gave his speech.
And I really don't think Carmack would have lied about all this; that would be so out of character.


Originally posted by Crusader:
People, don't you think you need more than 20 secs to compile a 100'000'000 polys map ? And doing this at loadtime ? Carmack I don't trust you ! Or it will be HELL slow...
Well, I seriously doubt Doom3 maps will have 100 million polygons ;)
But still, yes, it is possible...
You have to realize that Doom3 will be built using different VSD techniques.
It won't have VIS like in Q3A (maybe something similar, but it'll be different), and it won't have to do any lighting precalculations, since all lighting will be done dynamically using the stencil buffer...
It's difficult, but not impossible.


Originally posted by Crusader:
Or, MUST WE DO THE *VIS* PROCESSING WITH OUR HEADS !!! with brush visibility at map editing time ????
Yes, partly.
Which isn't a big deal; anyone who's ever made a map for Q2 or Q3A knows that you have to use a lot of clip/skip etc. brushes...
That's basically "processing with our heads"...
I also suspect that some vis precalculations will be done while editing the map... in such a way that you don't actually notice it.


Originally posted by Crusader:
Maybe some people are staying too much in front of their pc and should look outside to see what they're passing by...

Maybe, but that's a completely different topic ;o)


Originally posted by Ingenu:
About 3D cards and Bandwidth:
I recommend that you go and read
<snip>
That's interesting, cool.


Originally posted by Ingenu:
Unified Video board memory is something nice, shared memory is not possible with the current databuses in our PCs.
(Damn too slow)

True, but the Xbox uses a different type of bus. I don't know the specifics, but I do know that the Xbox has a shared memory architecture (so does the PS2, btw).

XBCT
10-04-2000, 03:50 AM
Hi!
You're always talking about Doom3's lighting system using the stencil buffer... Can someone give me a hint how the stencil buffer is used for lighting, or a good link?

Thanx in advance, XBTC!

Gorg
10-04-2000, 04:39 AM
The stencil buffer is used for shadowing. Carmack said he wanted to do all the lighting with dot product bump mapping.

LordKronos
10-04-2000, 05:18 AM
There's plenty of documentation on the topic at http://www.nvidia.com/developer.

I also have some articles on it at my site: http://www.ronfrazier.net/apparition/research

To give you a quick rundown of the 3 common shadowing techniques:

1) Shadow volumes: uses the stencil buffer to determine which parts of the scene are in the shadow of another object. Probably the best choice for current hardware, as it is relatively fast, looks good, and is widely supported. (There's a rough sketch of this one after the list.)

2) Depth shadow maps: uses a dynamically created texture and the stencil buffer to determine if any object sits between each pixel and the light source. Pretty slow and typically not as good-looking on current hardware. As hardware becomes faster and memory increases, this technique should eventually become the best of the 3. I have actually heard that this is the technique used by advanced animation studios (like Pixar). It works for them because they have insanely fast hardware and don't have the realtime requirement most developers typically do.

3) Index shadow maps: very similar to depth mapping, but uses a polygon or object index instead of depth to make the comparisons. I don't have much to say about this one. I don't think it has a lot of usefulness (I'm thinking of the lyrics to the song "War"). It has the same problems as depth mapping, and then some.
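
Here is a bare-bones sketch of the stencil passes for technique 1 (the classic depth-pass variant; assumes a GL context with a stencil buffer, and drawScene()/drawShadowVolumes() are placeholders for your own code):

#include <GL/gl.h>

void drawScene(bool lit);                // placeholder: draws the scene, ambient-only or fully lit
void drawShadowVolumes();                // placeholder: draws the extruded silhouette volumes

void renderWithStencilShadows()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // Pass 1: lay down depth and the ambient color.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    drawScene(false);

    // Pass 2: rasterize the shadow volumes into the stencil buffer only.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glCullFace(GL_BACK);                 // front faces of the volumes increment...
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumes();
    glCullFace(GL_FRONT);                // ...back faces decrement
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    drawShadowVolumes();

    // Pass 3: add the light only where the stencil count is 0 (outside every volume).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glCullFace(GL_BACK);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);               // only re-shade surfaces that are actually visible
    drawScene(true);

    // put the state back
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}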

XBCT
10-04-2000, 06:07 AM
Thanx a lot, guys!
I think I'll look into that stuff (Carmack said that he started the Doom3 renderer on top of the Q3 renderer; perhaps I can do the same with my Q3 viewer ;) ).

P.S.: Cool site, FireStorm... Lots of interesting stuff.

Greets, XBTC!

Firestorm
10-04-2000, 09:17 AM
Originally posted by Gorg:
Stencil is used for shadowing. Carmack said he wanted to do all the lighting with Dot Product bump mapping.

Yes, you're right, I meant that, but I should have been more clear.


Originally posted by XBCT:
Thanx alot,guys!
I think I´ll look into that stuff(Carmack said that he started the doom3 render on top of the Q3-Renderer,perhaps I can do the same with my Q3-Viewer ;) ).
Well, I actually think he rewrote most of the rendering part of the engine...


Originally posted by XBCT:
P.s.:Cool site FireStorm....Lots of interesting stuff.
Now I'm confused... which site are you referring to?

XBCT
10-04-2000, 11:49 AM
>quote:
Originally posted by XBCT:
P.s.:Cool site FireStorm....Lots of interesting stuff.
Now i'm confused... which site are you refering to?<

Oops, sorry, I meant the site of Kronos...

Greets, XBTC!

Crusader
10-04-2000, 12:36 PM
People, especially Firestorm,

I don't want to contradict anyone, but do you really think that we will be able to have freedom with editing???

This is the most important thing in life and it is not often reached, and video games are made to provide it in some cases, so what's the point if you have an engine with no freedom!

Artists must NEVER have to handle these troubles, so programmers are, and will ALWAYS be, here for this reason nowadays (?). Surely many will think I'm wrong, but if we think wider, all this is nonsense...

Please, I really like to edit maps! And I don't want to have to follow CARMACK's new norms of designing!!! Please don't encourage this, or it is going to be like Microsoft! One leader.

I hope I made myself clear; if not, excuse me.

Roderic (Ingenu)
10-05-2000, 02:05 AM
There are many 3D engines out there; you don't have to use Mr. Carmack's if you dislike it.

I don't see your point here.

You support only what you want to.

Firestorm
10-06-2000, 08:14 AM
Originally posted by Crusader:
People, specially Firestorm,

I don't want to contradict anyone but, do you really thing that we will be able to catch Freedom with editing ???

I know what you mean, and deep down inside I feel the same way.
The problem is, however, that programming requires a lot of energy and time... even for things that seem to be absolutely trivial...
And when you only have one year to build your engine and game, you simply do not have time to add all the features you'd like.
Sure, you could hire more programmers, but that'll make the project more chaotic, harder to organize (trust me, having more than 7 programmers is a mistake) and more expensive (not all software houses make millions with each game they make; most can barely stay alive).

Yes, I know the tools that artists are given to make their stuff with leave a lot to be desired, but it's simply not realistic to create the super tools we'd all like to have...

Of course you could improve your tools with every new game you use your engine for...
But that would make it nearly impossible for you to develop drastically new technology... so you'd be kinda stuck with your old stuff...
And the game would suffer because of that...

So the only solution is to make life a little harder for the artists, but be able to create an awesome game.

Of course, another solution might be to use third-party tools like 3dsmax etc.
That way you'd have good tools with lots of documentation and flexibility, and you'd probably only need to write a plugin.
Although sometimes the extra functionality actually gets in the way (artists using functionality that isn't supported).
And most people wouldn't be able to afford 3dsmax, so you can forget about amateurs making mods etc.

But thankfully they're going to release a 'free' version of 3dsmax soon...
Though it'll cost money to release plugins for it...

bgl
10-07-2000, 10:24 AM
If you want super tools, you don't have to write them from scratch. Use Maya, or maybe something cheaper like 3dsmax, and write a converter or exporter which exports your level format from the model file.

Some games development companies do this, and they often rave about how productive their artists are because of it. It's all the people thinking that they can get this level of support for free who amaze me.

Firestorm
10-08-2000, 01:19 AM
Originally posted by bgl:
If you want super tools, you don't have to write them from scratch. Use Maya, or maybe something cheaper like 3dsmax, and write a converter or exporter which exports your level format from the model file.
Didn't I just say that in my last message?


Originally posted by bgl:
Some games development companies do this, and they often rave about how productive their artists are because of it. It's all the people thinking that they can get this level of support for free which amaze me.
Well, it's in the best interest of the game companies as well, since a game that has a lot of mods will sell better and live longer...
And a game that has no (decent) tools will be very hard to make mods for...