Moving back to software - when?



MickeyMouse
05-08-2005, 09:27 AM
Hi everyone,

Sorry for going slightly off-topic...
I'd like to hear your opinions on when (and why), if ever, we will move from hardware acceleration back to software renderers for things like games and other advanced visualization systems. For the last several years the speed ratio between software and hardware renderers has stayed more or less stable, so it's really a question of how you expect the future to unfold, and whether you have interesting pros or cons for either option.
Since CPUs get roughly 70% faster every year (I don't know the exact figure for GPUs, but I think it's even higher), someone has 'proven' that by 2020 we will reach the highest possible CPU speed because of physical construction constraints.

Thinking 'logically' one might say that since GPUs have been consistently faster than CPUs in recent years, they will stay faster for all the years to come. But look at Java vs. C++. The latter is still generally several times faster than the former, yet as time passes we're starting to expect successful Java games too, aren't we? (Or maybe there already are some.) The point is that few people would have expected that back in the days of software renderers. There are simply always multiple points of view, and I'm trying to consider them all.

Thanks for any input.

Zengar
05-08-2005, 11:25 AM
If you put it that way, your question becomes absurd. Where is the border between hardware and software rendering? Of course, if we had a 100-core CPU with low-latency memory access and high-performance math, we could throw our GPUs away. But don't you think that's essentially what we have now? IMHO, graphics demands grow much faster than CPU technology - so I guess there will always be highly parallelised units to process graphics.

JD
05-08-2005, 01:26 PM
Exactly - you're thinking of what's possible in gfx today and saying that in the future we'll be able to do this in sw at the same speed, while neglecting to realize that in the future we won't be doing the same things we do today. There is also the issue of 3rd parties saving us time that you have to think about. Most don't want to give up the black box and would rather do more important things with their time. Also, last time I checked we've been hovering around 2GHz CPUs for a while now - hence the whole move to multicore CPUs, which become reality this summer. Even if you had infinite CPU power you would use it up with infinite bloat :) Look at the large-capacity harddisks we have today. Back in '98 I think I saw a 1 gig HD on the shelf and thought that if we had 400 gig we wouldn't know what to fill it with. Well, software size has grown to where a game takes over 1 gig of space when installed. Unthinkable back in '98. That's how it goes.

dorbie
05-08-2005, 01:47 PM
Not in the foreseeable future.

Graphics hardware will always be faster than software on general purpose CPUs.

Graphics is a very specialized task that gains performance from application-specific architectural features like on-chip coarse Z and memory management. Graphics is also inherently scalable through parallelism. It's also nice to be able to perform 3D graphics in parallel with your application/game code instead of wasting a CPU on it. In fact good software on the CPU working in unison with good GFX hardware can enhance performance further.

MickeyMouse
05-08-2005, 02:13 PM
Originally posted by JD:
Exactly, you're thinking of what's possible in gfx today and saying that in future we're able to do this in sw at the same speed while neglecting to realize that in future we won't be doing the same things that we do today.
Umm, I think I was misunderstood. Blame me for my English, but I'm far from thinking the way you suggest I do.

What I want to hear is what you think the current direction of 3D graphics is. I also wanted to point out that it's very difficult to predict the future, at least in computing. Even so, maybe some of you have good guesses about what will change.

The point about parallelism is definitely a good one. But it's noticeable that there is usually a stage in the evolution of computing at which raw speed is sacrificed for convenience and simplicity of use. Real-time computer graphics is about as performance-critical as it gets, so this might not always apply, but who knows.

dorbie
05-08-2005, 03:13 PM
Well if you want flights of fancy imagine a system where the GPU is a coprocessor with close affinity with the CPU(s). Both share extremely fast memory systems and caches and neither are bogged down by the legacy x86 design.

The CPU can almost instantaneously access the GPU registers and cache and vice versa and some software code just compiles to vector like instructions some of which runs on the GPU effortlessly.

GPUs are heading more towards general purpose instruction architecture including parallel MIMD execution with memory access to make the rapid processing of memory coherent data efficient.

When will CPUs be replaced by the GPU?
Or when will they merge into a single system on chip design?

Some of this is more feasible than you think, but parts of it seem vanishingly unlikely unless you're talking about a complete design departure like a console design.

Zengar
05-08-2005, 04:41 PM
I was thinking about something similar...

zed
05-08-2005, 05:29 PM
that's been 'proven' by someone, that in 2020 we will reach the highest possible CPU speed
flat earth society member.

ultimately of course there will be just the one processor (technically prolly a million subprocessors running at 10GHz each or something), though the question is when - 2050? 2100?
cpus in partnership with gpus are gonna be around for a while yet

N64Marin
05-09-2005, 12:09 AM
If we had a lot of cores that could work simultaneously, could be programmed to do many floating-point operations per clock cycle, had plenty of hardware registers to avoid frequent memory accesses, and had a large memory bandwidth - then maybe we could move to software rendering.

dorbie
05-09-2005, 12:31 AM
No, there's more going on in a GPU than just a lot of fp parallelism and registers.

There are fundamental reasons an ASIC does a task better than a CPU. A GPU is an ASIC; it has optimizations that need hardware to be efficient, like *many* pipelined heterogeneous operations sequenced per clock, coarse Z, pipelining perfectly timed w.r.t. the memory fetches, on-chip FIFOs between stages, domain-specific cache architectures, and memory addressing specific to the types of memory access. It doesn't matter what you claim your CPU can do. GPUs will evolve too; you end up talking about firmware running on CPUs that have evolved to look like a GPU, and that makes no sense (it's less area-efficient) for the 'immediate' future. If flexibility and configurability were your main objective then maybe you'd have something, but it's a simple case of horses for courses and being economical.

Obli
05-09-2005, 02:18 AM
I'm pretty scared by the fact that CPUs could emulate GPUs. Well, there could be advantages - every portable PC would have "efficient" (in the sense of "working") glslang and so on - but I hardly believe it would be able to outperform GPUs.
I heard Intel already does this on integrated graphics (only vertex processing, however).

I actually think GPUs have grown so fast because of their "stream" programming model, which allows them to scale with parallelism and make different assumptions than CPUs. By contrast, CPUs are designed to handle "flexible code" that jumps here and there in memory.

I perfectly agree with what I read on the net some days ago... GPUs and CPUs will likely converge on a single feature set in the (surely-not-near) future, but there will be algorithms which run best on the other kind of processor.

Certainly the integrated-graphics vendors have every interest in melting the two chips into a single one, but I hardly believe CPUs will kill GPUs or vice versa.

bChambers
05-09-2005, 10:50 AM
Originally posted by Obli:
I'm pretty scared by the fact that CPUs could emulate GPUs.

What about the fact that GPUs could emulate CPUs? I started programming back when the GPU was little more than a gateway to the framebuffer, and all this functionality has since been taken over from the CPU. Why did they evolve like this? Because GPUs are optimized in ways that make them extremely fast for graphics programming, but extremely slow for everything else.

Originally posted by Obli:
I perfectly agree with what I read on the net some days ago... GPUs and CPUs will likely converge on a single feature set in the (surely-not-near) future, but there will be algorithms which run best on the other kind of processor.

This would be a step in the wrong direction. The very design decisions that make GPUs fast would make CPUs slow.

What I believe we'll see in the future is more dedicated cores - we've already seen dedicated sound processors, and now we're seeing the introduction of physics processors. I believe we'll see all these, and more, located on a single motherboard and connected via some flexible communication system (perhaps a HyperTransport or PCIe derivative). Several specialized cores, and a few general-purpose cores. Hints of this can already be seen in next-generation consoles and CPU roadmaps.

tamlin
05-13-2005, 06:14 AM
My view is that we will indeed move towards software, again.

Now, now, hold your horses. I didn't say software run on the CPU. :)

I hope the evolution we have seen with VP and FP is just the tip of an iceberg of the generalized programmability GPU's will provide in the future.

As memory requirements and consumption grow with more detailed... everything, I think we'll start to see much more "stuff" generated on the GPU - e.g. terrains and textures - from functions the programmer supplies to the GPU. GPU speeds have increased far more than on-card memory sizes, tipping the ever-present computation-vs-storage tradeoff towards (GPU-)generated content and lowering the relative storage requirements.

What I hope we will see more of in the future is smarter partitioning of memory between GPU and CPU, so that gfx cards would only need a baseline amount of on-card memory (plus caches, like any decent CPU), with the chipset able to decouple n bytes (say half a gig, if you want a number) from the CPU RAM bus and temporarily hand them to the GPU to play with. Or perhaps RAM will fall so far behind the processing speeds of both CPU and GPU (as if it weren't seriously behind already :) ) that dedicated packet-switched networks to RAM will be required, making such complex partitioning logic unnecessary.

The potential benefits of such a setup are too numerous to list. Suffice it to say all parts of a system, not to mention users of such a system, would benefit.

But whether we'll start to see this change in memory architecture in our active lifetime or not, I'm sure we'll see way more software solutions allowing way more flexibility for visualization than we have today - and more and more of it will be run on the GPU.

I hope future GPU's will become more of complete visualization systems where the CPU can e.g. upload a scene/frame description and let the GPU handle translucent-surface sorting, optimal sorting of state-changes, turning on-off states, FP's, VP's and all the other tedious, boring but unfortunately currently required micro-management we are forced to do on the CPU.

The only obstacle I currently see in this area is that I'm not aware of any open project (research or otherwise) seriously looking into extending OpenGL in these directions, be it under the OpenGL name or something else - at least not with an eye towards what servers (what most people today think of as GPUs) might be able to do to off-load the client (the host CPU).

Zengar
05-13-2005, 12:40 PM
tamlin

would you be so kind as to define what you mean by "software" and "hardware"? For me, software means general processing and hardware means hard-wired functionality. Software is slow but flexible, hardware is fast but, well, hard-wired :-)

A hardware unit will ALWAYS be faster at simple tasks than a software unit.

Digital processing evolves to gain both performance and flexibility. Flexibility: the ability to control the execution. Performance: implementing time-critical tasks in hardware. If we combine the two, we get a new solution which can't be considered either "software" or "hardware" based, in the established terminology. So for me, such discussions don't make any sense. We can make our prognoses about how future GPUs may look, but please don't use current terminology as it is! I'm sure it won't be compatible with future solutions.

michagl
05-13-2005, 05:38 PM
if you want my two cents, i can tell you exactly where hardware will likely go.

its very simple really.

take a curve. there are an infinite number of points on the curve. in a few years to a decade, polygons will be an ancient paradigm. a static polygon model is basically points on the curve precomputed to a preset resolution. for a mildly realistic scene which is more or less freely explorable, precomputing all the per-vertex attributes for the scene is pure insanity, to say nothing of managing the multi-resolution meshes which would be required for any reasonably realistic resolution.

therefore, as a pure matter of necessity, like it or not, future hardware will accept uploaded parametric geometry - nurbs or subdivision surfaces or whatever. from there you pretty much won't be able to touch the surfaces directly, because the hardware will have to manage the tessellation of the surface with respect to an evolving view frustum.

because parametric spaces are not uniform, it is likely that you would define a 2D mesh (yes, a 2D mesh in the parametric U and V coordinate system) to serve as the base geometry which the hardware will subdivide. (other subdivision processes could be used, but this is the most economical model.)
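to make that a bit more concrete, here's a toy sketch of the per-sample work the tessellator would be doing - evaluating a parametric patch at a (u,v) pair. i've used a bicubic bezier patch just because it's the simplest case; the names and layout are made up, purely illustrative:

// rough sketch: evaluate a bicubic bezier patch at parametric (u,v).
// control[4][4] is a 4x4 grid of control points; a tessellator would call
// this for every (u,v) sample the current view warrants.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// de casteljau evaluation of one cubic bezier span
static Vec3 evalCubic(const Vec3 p[4], float t)
{
    Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    Vec3 d = lerp(a, b, t), e = lerp(b, c, t);
    return lerp(d, e, t);
}

// evaluate the patch: first along u for each row of control points, then along v
static Vec3 evalPatch(const Vec3 control[4][4], float u, float v)
{
    Vec3 column[4];
    for (int row = 0; row < 4; ++row)
        column[row] = evalCubic(control[row], u);
    return evalCubic(column, v);
}

int main()
{
    Vec3 control[4][4];
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            control[i][j] = { (float)i, (float)j, (i - 1.5f) * (j - 1.5f) };

    Vec3 p = evalPatch(control, 0.5f, 0.5f);
    std::printf("patch(0.5, 0.5) = %f %f %f\n", p.x, p.y, p.z);
    return 0;
}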

i'm working on all of this, and for the last week have been working on a low-level API which should be immediately comfortable for opengl users. i've built the system and taken as much advantage of current hardware as is possible, and quite effectively as it turns out. the system itself, though, is completely pathological and would be better implemented entirely in hardware.

beyond this, for limited applications (especially hard science), i would predict a finite element physics processor tightly coupled with the graphics processor. i predict the definition of a 'video game' will converge on ultra-realistic virtual reality, and processors will be tailored specifically to this need. games which do not conform to the constraints of something similar to 'real' physics would have to take a back seat performance-wise and fall back heavily on software. but people with a taste for 'unrealistic' games probably wouldn't care. the heaviest puller for this technology will be garment simulation.

though i prefer not to talk about it, i've also done extensive work in the field of real-time anatomical simulation (down to the last pieces of bone and sinew), and i can tell you that a whole slew of specialized parallel asynchronous pipelined hardware will develop around heuristic models of anatomical simulation. of course other sorts of simulation will get a free ride, but anatomical simulation will drive everything, just as video games drive computer science.

EDIT: just to try to reconcile the last two paragraphs - trying to do anatomical simulation on a finite element processor is just insane power-wise for vr-type applications where people would be expected to play a major role and in vast quantities. the first games to see this technology will be one-on-one fighters with a lot of skin ('fist of the north star' would be an awesome candidate). as far as i know i'm the only person who has made real headway into this field, i.e. musculographics.com (not me)... anyone with money and an upstanding ethical history who wants to talk about it can just PM me.

just tryin to blow your mind!

sincerely,

michael

Topquark
05-14-2005, 09:19 AM
In addition to all that's been said, I think we're going to see more and more rendering algorithms that are now considered simple but slow and computationally expensive get implemented in hardware (or on programmable hardware) and run in real time - for example ray tracing + photon mapping.
Remember the Z-buffer: it was invented back in the mid-1970s as a simple visibility-determination algorithm, practically useless at the time because of its computational and memory needs, and now everyone has it in hardware...
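Just to show how simple the algorithm itself is (the cost back then was the per-pixel storage and compares, not the logic) - a toy sketch, nothing more:

// toy z-buffer sketch: per pixel, keep the fragment closest to the viewer.
// a real rasterizer interpolates depth across triangles; this only shows the
// core compare-and-store that hardware now does for free.
#include <cstdio>
#include <limits>
#include <vector>

int main()
{
    const int W = 4, H = 4;
    std::vector<float> depth(W * H, std::numeric_limits<float>::max());
    std::vector<int>   color(W * H, 0);

    struct Frag { int x, y; float z; int c; };
    Frag frags[] = { {1, 1, 0.8f, 1}, {1, 1, 0.3f, 2}, {2, 2, 0.5f, 3} };

    for (const Frag& f : frags) {
        int i = f.y * W + f.x;
        if (f.z < depth[i]) {      // closer than what is stored so far?
            depth[i] = f.z;        // keep the nearer depth
            color[i] = f.c;        // and its color
        }
    }

    std::printf("pixel (1,1) shows fragment %d\n", color[1 * W + 1]); // expect 2
    return 0;
}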

michagl
05-14-2005, 01:58 PM
honestly i imagine that in the not too far flung future there probably won't really be any such thing as the graphics programmer. everything will be done on dedicated hardware, and the basic workflow will pretty much become standardized.

the graphics programmer would probably have little more to do than pick from a catalog of programmable shaders and configure the hardware.

beyond that everything will probably just be event management, and i wouldn't categorize that as graphics programming.

such a future might appear void of creativity. but the truth of the matter is everything will converge on optimal solutions, and the solutions will be built into dedicated hardware.

the trick will probably only be a matter of society not forgetting how the hardware even works once evolution becomes only a matter of quantity rather than quality.

constrained AI / natural language processing and database management will probably take over as the curiosity of hobbyists. graphics and physical simulation will likely appear quite quaint and formalized in comparison before too long.

ZbuffeR
05-14-2005, 03:44 PM
So to sum up what was said, future of OpenGL graphics is basically 100% hardware Renderman (implicit surfaces done with micropolygons+complex shaders+some ray tracing capabilities) mixed with physics.

Ok, that is pretty obvious.

michagl
05-14-2005, 11:06 PM
Originally posted by ZbuffeR:
So to sum up what was said, future of OpenGL graphics is basically 100% hardware Renderman (implicit surfaces done with micropolygons+complex shaders+some ray tracing capabilities) mixed with physics.

Ok, that is pretty obvious.
if i post too much dorbie will jump my tail, but that is basically correct. the only thing i would dispute is micropolygons.

i would predict that fpo's relief mapping approach will become standardized. polygons will be treated as hulls and will span about 3 to 8 pixels. the pixels inside the polygon hulls will be displaced to achieve perfect curvature and detail mapping.

it doesn't make practical sense for polygons to average one pixel wide as long as you're rasterizing triangles and not raytracing. doing so pretty much defeats the purpose of using polygons, or at least makes it look pretty ironic and foolish.

the future of global raytracing for real-time apps is a bit more complicated though. i personally can't offer up any predictions. much more realistic than classical hardware raytracing is hardware photon mapping. hardware likes straightforward procedures.

3k0j
05-15-2005, 02:06 AM
I predict that soon there will be apocalyptic chemical/bio/nano/antimatter/nuclear war, or alien invasion, or asteroid/comet impact, or global warming caused weather disasters, or worldwide nazi/communist/fundamentalist revolution, or second Christ/Satan/Cthulhu coming.

Therefore, we should not be worried about having to change our profession 10 years before retirement because of hardware evolution making our graphics programming m4D $k|11z hopelessly obsolete.

tamlin
05-15-2005, 03:51 AM
Considering a game featuring Cthulhu was basically what got so called "consumer-level" hardware OpenGL support going (even if initially in form of "miniport drivers"), and it seems we've now reached a period of more-or-less maturity/stagnation when it comes to hardware progress, would the return of Cthulhu complete the circle or would the minions simply chant "death to polygons"? :)

michagl
05-15-2005, 10:15 AM
sorry if i suggested graphics programmers will be out of work in 10 years. but ever since the dawn of hardware graphics the domain of the graphics programmer has been gradually consumed by the hardware. there is no reason to believe this will not continue until there is nothing left. (especially for vr which mimics reality to the last detail and has no explicit rules such as 'life meters')

its just a matter of how exponential the assimilation process will be at its limits.

there is no reason to believe though that opengl will not continue to be backwards compatible at least to its dying days, heaven forbid, for anyone looking for nostalgia.

i would say that graphics programmers' skills will be marketable for at least the next 30 years. mind you, i say this with a sort of vain detachment, as i am no less exempt from the rule; graphics programming has consumed and will continue to consume a great deal of my hours, and in the end, at best, all i will probably have to say for it is that, if i'm lucky, 'i helped lay the foundation of hardware graphics'.

graphics programmers will still likely always have a place in building modeling environments. that will probably be their only saving grace and application for their knowledge.

schools will teach the history of graphics rather than applicable technique, and the number of people who really understand it at the lowest level will probably dwindle all the way down to a select few, a lot like the number of people who actually really know assembly language. if not for SSE and programmable gpus, assembly would probably be an ancient, unregarded art form save for compiler programmers.

dorbie
05-15-2005, 11:46 AM
I disagree, I've been programming graphics for at least the last 15 years. There is more varied and more interesting programming now than ever before. The fixed function pipeline has been replaced with programmable hardware.

If you arrived late and were using PCs exclusively during some of that time you might have missed what was going on and thought those crappy early software renderers were where the action was in graphics programming. You'd have been wrong.

If there's a trend it's the opposite of what has been claimed.

There's also a lot of software work required to design the hardware, for every *major* 3D engine written on a PC at any given time (say in the Quake era) there's probably 3 graphics hardware implementations being developed today (a wild guess). All of those efforts keep *teams* of graphics systems software engineers busy.

Improved performance also makes more algorithms feasible. We're now implementing rendering algorithms that weren't even attempted in the past because they were obviously impractical. That in conjunction with more programmable hardware means a lot of interesting specialized software development getting done.

Twixn
05-15-2005, 12:54 PM
Well, soon enough the graphics are going to be so good that they will seem perfectly real to us... and at that point, if you doubled the speed of the graphics hardware and (for the sake of argument) doubled the poly count, we wouldn't notice.

And then, when CPUs get fast enough for software rendering to be this fast, I think we will change back to software rendering.

It's just more flexible, more compatible, and programming your own software rasterizer separates the men from the boys :p Although it will be a fair few years away.

But hardware would probably focus towards some other technique, like Ray Tracing.

-Twixn-

MickeyMouse
05-15-2005, 01:41 PM
Thanks everyone for discussion,

I was wondering whether any standard will ever be established for _really_ high-level graphics, one which would let us forget about all the relatively 'low-level' stuff we do now. Well, maybe not quite forget, but at least not have to handle the OpenGL state machine, care about its optimizations, etc. There are so many different aspects of 3D graphics that it's hard to believe such a high-level API could happen. What kind of higher-level API than OpenGL would suit everyone and every possible use case? Hard to think of one, so I think we will stick with OpenGL-level APIs for a long time yet.

Concerning ray tracing and other expensive 3D graphics techniques - their biggest disadvantage is that they don't scale well. The quality of real-time ray tracers is always way behind what we can do with 'faked' polygonal OpenGL-style graphics. We still need A LOT more horsepower to handle ray tracers, not to mention global illumination algorithms. They're not quite in the same position today's 3D graphics was in 15 years ago: Z-buffering, even though impractical 15 years ago, was surely within reach in a few years because its cost is constant, while things like global illumination cost exponentially more. Fortunately, as said before, our CPUs' speed is progressing exponentially too.

Last thought, concerning perfectly real environments - what I would call a milestone in 3D graphics is when we are able to make humans so real that one can't tell whether they're fake or not.

michagl
05-15-2005, 04:08 PM
in defense of some stuff i said: the definition of what a graphics programmer 'does' will, as ever, continue to evolve.

as for programming everything in software - this will never be the way vr is done, because it is simply a supreme waste of energy... your machine would require much more energy and cooling and your utilities bill would go through the roof.

ideally you want to do everything possible in hardware. shaders are essentially software, if there is such a thing as software. but much of what a contemporary graphics card does is still extremely static stuff, as it should be.

even as for shaders, eventually probably about 3 shaders will emerge as the standard for totally realistic virtual reality and other popular shading models. these shaders would probably be mapped directly into hardware due to their extensive popularity.


Originally posted by MickeyMouse:
Thanks everyone for discussion,

Last thought, concerning perfectly real environments - what I would call a milestone in 3D graphics is when we are able to make humans so real that one can't tell whether they're fake or not.
the only thing which makes human models appear extremely fake at this point is the fact that the anatomy is not being simulated. the sooner we can distance ourselves from weighted-transform 'skinning', the sooner familiar anatomical models will begin to appear believable.

as a classically trained artist, computer graphics' lack of respect for anatomical form is my major disappointment with the field. as a result this happens to be my primary expertise. i could say a lot about it, but this is probably not the appropriate forum. anatomical simulation is actually the holy grail of computer science right now, due mostly to its potentially scary cybernetics applications... however, public research in the field appears to be dead. it was really strong in the 90s, but all of the heavily funded projects fell through and the field is dead now. keep in mind this is real simulation, not utter hacks like you see being touted in smoky scenes in ilm-type cgi offerings.

michagl
05-15-2005, 04:48 PM
Originally posted by Twixn:
Well, soon enough the graphics are going to be so good that they will seem perfectly real to us... and at that point, if you doubled the speed of the graphics hardware and (for the sake of argument) doubled the poly count, we wouldn't notice.
oh no, you'll notice for a long time. at some point it is true that the techniques will probably level off... but quantity will still be pushed for a long time, just like cpu technology hasn't really changed but the number of transistors doesn't stop growing. believe me, the amount of energy needed to spit out the number of data points in a typical forest with underbrush is just massive. the realism won't change, but the amount of data in the environment will continue to go up just like cpu speeds have.



And then, when CPUs get fast enough for software rendering to be this fast, I think we will change back to software rendering.
the freedom might be interesting, and emulation more feasible, but the mainstream will never go back to software. if you ever want to, say, watch a movie from the inside like a ghost and be free to move around in a photorealistic environment indiscernible from your own - that will almost definitely all be done entirely in hardware. software will be managing massive event scheduling and AI demands. executive AI is a software task; graphics is pure hardware to the bone at the end of the day.



It's just more flexible, more compatible, and programming your own software rasterizer separates the men from the boys :p Although it will be a fair few years away.
most people probably have better things to do than build software renderers. i've built the beginnings of a software renderer which is specially designed for drawing directly to and from locally partitioned disk space, and believe me it is a depressing, time-consuming, and boring chore even if you know exactly what you are doing.



But hardware would probably focus towards some other technique, like Ray Tracing.
high-level ray tracing is especially poorly fitted to hardware, especially for instance transmitted shadows. low-level ray tracing would probably require only hardware collision processing. photon mapping is much more straightforward - merely an extension of a collision system - and would allow for something very interesting which is missing from graphics, namely volumetric lighting. in the future, offline cgi rendering will probably be little more than setting the exposure level of your shot like shooting with a camera, then having hardware photons shot at it until you are happy with the photon spread of the shot.

finally, just a last tacked-on comment: there is probably room for hardware voxel systems as well. the medical industry is already pushing this, but it will probably go further into consumer hardware once people are able to get over polygons as the 'only way'. some awesome effects can be better modeled with voxels. it will be interesting as well to see if texture maps ever go voxel. think paint and grime that can actually be scratched off in 3D, or a layer of snow or mud to walk through which is actually just a dynamic voxel texture. voxels are basically all about 3D texture compression.
great long-winded thread.

sincerely,

michael

EDIT: there is this:

http://www.ageia.com/technology.html

first i've heard of it. the world just got crazier. i wonder if i can keep up.

will keep looking for architecture details. any ideas?

Twixn
05-15-2005, 08:15 PM
i've built the beginnings of a software renderer which is specially designed for drawing directly to and from locally partitioned disk space, and believe me it is a depressing, time-consuming, and boring chore even if you know exactly what you are doing.
I have also built my own software renderer; I found it interesting, and I think it's made me a better programmer... That's one reason why I like them so much, as I've made my own. I'm even porting my latest project (Subliminal, for those who helped with my ATI problems) to it for fun :p . That's why I joked about the manliness of it, as I have done it myself.

Maybe we will never go back to software, but you have to take into account that soon enough the CPU will be too powerful just to do AI. And for the end consumer it may be cheaper just to tone down the graphics and skip the graphics card altogether. But it's just a thought.


photon mapping is much more straightforward
Ray tracing was only an example; I wasn't being specific... When hardware is able to move on to more advanced techniques than rasterization, it will. And it doesn't matter how ill-suited a technique is, there will always be someone who will do it anyway.

I heard about the PPU a while ago (haven't looked at the link, but I'm assuming it's about AGEIA). As for the architecture... it will most likely be a GPU-like chip that outputs matrices, maybe even feeding them directly to the GPU in the form of transformation matrices, or even sitting on the same card for added fun :p .

-Twixn-

V-man
05-15-2005, 08:22 PM
PhysX was announced a couple of months ago but there isn't any detail other than the API it will use and that a couple of game companies support them.

For the near future, I think it will just be incremental steps: more fillrate, more RAM, more shader power, a couple more parts becoming programmable.
I think IHVs want to implement things that are straightforward enough to lead to a product within a year.

Someone once said that this PPU-based card idea is dumb, and that if the PPU does prove itself valuable, it belongs on the GPU.
It would be interesting if the PPU could compute transform matrices for objects and feed them directly to the GPU.
Or how about a PPU that can simulate a soft body like muscle and tissue: it computes the new polygons, writes them to a VBO, and the GPU picks them up and renders them immediately.
etc etc etc.
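Something like this on the GL side, perhaps - a rough sketch only, with simulateSoftBody() as a made-up stand-in for whatever the PPU would hand back each frame (plain GL 1.5 buffer object calls, assuming your headers/loader expose them and a context is current):

// sketch: stream freshly computed geometry into a VBO each frame and draw it.
// assumes a current GL context and GL 1.5 entry points (glBindBuffer etc.).
#include <GL/gl.h>
#include <vector>

// stand-in for the PPU result: x,y,z per vertex, here just one triangle
static std::vector<float> simulateSoftBody()
{
    float tri[] = { 0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f };
    return std::vector<float>(tri, tri + 9);
}

void uploadAndDraw(GLuint vbo)
{
    std::vector<float> verts = simulateSoftBody();

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // orphan the old contents, then stream in this frame's geometry
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), 0, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(float), &verts[0]);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);                       // offset 0 into the bound VBO
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}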

knackered
05-16-2005, 06:16 AM
removed bile for biles sake
Michagl, as dorbie has just pointed out, over the last 5 years the hardware vendors have invested huge amounts of money creating something called a 'programmable pipeline'.
They had two options: 1) continue extending the fixed functionality (i.e. hard-wired paths) to accommodate more elaborate and realistic shading models, or 2) create an open architecture whereby programmers (or 'the unemployed', according to you) could create their own shading models - more complex, less complex, more surreal, more stylistic, whatever. Heck, they could even implement their own ray tracers if they wanted.
What you're suggesting is that after all this effort, the vendors are going to close up shop again, hard-wire everything, and give the application programmer a fixed choice of shading models. Doubt it. There will be standardised shader configurations built on top of programmable hardware (like there are now), but there is no need to hard-wire it (and never will be), just as you can't buy a hard-wired database chip.
BTW, I think someone should break it to you at some point: you are not the only one to discover parametric surfaces, you're not the only one to realise their potential in saving bandwidth and storage, and you're not the only one to try to construct a standardised, opengl-like API for parametric surfaces - you just need to look further than the glu library, that's all.
You are not the saviour of CGI. We have not been awaiting your arrival.

zed
05-16-2005, 11:29 AM
weren't curves done in consumer hardware as far back as the gf3? (never really took off)
even with 3d modelling they're not used exclusively. in the near term (3-10 years) i believe models will be mostly polygons with *displacement mapping - far easier for the artists to control than working with minute curves to express miniature detail
*either true or some quasi method eg fpo's mentioned on these forums before

michagl
05-16-2005, 01:33 PM
briefly, i've been looking over the ageia Novodex SDK... it's all 'virtual' c++... hopefully an opengl-type API will be developed. and yeah, it would make more sense to put the ppu on the same board/chip with the gpu, but that will probably take at least a year to get started if the ppu proves popular. as for the architecture, there is no telling how it works just from looking at the API (or is it the SDK?). the SDK might be all software, but it is extremely complex. i wonder if the hardware is sophisticated enough to manage its own collision 'worlds' internally? there is probably some embedded software running on the hardware. i wouldn't know.

---------

knackered, why must you always get personal? yes, programmability will always be there. but if there is a single massive shader used 99.9% of the time then it should be hardwired. and yes, expensive hardware - SGI Infinity or something - does do NURBS models, but i believe they are done with brute-force per-screen tessellation. anyhow, i'm not advocating anything... just pointing out that polygons alone can never produce photo-realistic environments, and though doing the sampling work with the cpu is quite possible, it would make more sense to do it directly on the graphics hardware in parallel.

tamlin
05-16-2005, 02:46 PM
This post became way larger than I had hoped for (seems I have gift for that). I still wanted to share it, as I came to realize some of the ideas might actually lead to something.

Obviously vertices and polygons are here to stay. I think we all know that. Even if something else was/is added, they are still the most obvious, straight-forward and precise instrument to visualize some things.

What they are not, however, is a be-all end-all solution to all visualization - at least not for storage of representations, where there is an obvious impedance mismatch (to use an EE term).

Voxels (the real kind) have been mentioned, where polygons are a complete misfit but 3D textures are a perfect fit for uniformly distributed sample points, which in turn are a 1:1 match for some medical imaging. Due to current hardware restrictions, 3D textures, like much else, require artificial partitioning of data to fit the hardware. As partitioning has always been a problem in CS, for just about any field we can think of, I don't expect that to change any time soon - if ever.

Polygon-based models are often LODed when viewed from a distance. Obviously, since it makes no sense to render a 100k-poly model in all its glory to cover 4 screen pixels. But that also displays a class of problems where we both have a mismatch in representation and where the server could potentially have a bit more flexibility, to possibly do the LODing on the server.

Large terrains are also a class of problems, and one I myself have a soft spot for, where the polygon approach is merely an approximation - an approximation that in areas even today has reached its limit. Obviously using a simple 2D heightmap as has been suggested here, or a parametric only geometry, would be too limiting.

What could possibly work - and I'm going out on a limb here, so please don't flame me too badly if this is indeed insanity :) - could be something like a more generic program run once per frame. Perhaps we could have many of them and run them at different points for different purposes, but I'm thinking of a fairly infrequently-called kind of program. Perhaps callable like display lists, but in fact full server-side programs (in a language yet to be invented).

These programs could as input use ... just about anything. Say texture objects, VBO, more-or-less free-form data much as we can feed gl*Pointer some data with a stride, and what's in between the common data and the stride could be used here, or just about any free-form data the programmer on the client decides to send to the server program. Anyway, that's really an unimportant detail at this point.

What could be the important detail, is what this program could generate. What I'm thinking of could possibly do the same things as any program could do today using the OpenGL API, so long as it (obviously) only affected server-side state and data. It could also be that due to its infrequent calling, it could even be limited to essentially immediate gl calls. I haven't considered that part at all.

But let's explore what such an app could perhaps do, and how more visualization work could be shifted to the server.

Say we are to write a 3D medical imaging program and have voxels as input. Let's say we have 1536^3 sample points. We break them into 9 3D textures, which are uploaded to the server. As I have exactly zero experience in this field, I'm again going out on a limb here. Say we have the dumbest of dumb programs to visualize this, and we create cubes, one for each voxel, meaning roughly 3 billion cubes. No real program would do this, of course, but let's say we have a 3rd-grade VB programmer creating it. :)

Now, instead of uploading vertices, indices, normals and texture coordinates 'til sunday, we could upload a small program we have written to the server, to generate all this data right where it's used. Say we uploaded a small parameter buffer saying what texture objects to use, a near clip-plane, and some additional stuff like scaling, rotation and stuff, and that program generated those cubes all on the server.

Stupid example? OK, maybe it was. Say we have a heightmapped terrain. We have a bit of texture splatting, a parametric LOD, some terrain features that are to augment the heightmap, and so on. Upload the program, tell it what 2D texture(s) to generate geometry and other stuff from, plus an extra buffer containing data for e.g. splatting, and have it all run on the server. After that, more volatile objects could be added by the client, such as vegetation, CPU-calculated objects (e.g. physics-affected ones) and so on.
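To put a little meat on that terrain example: roughly the kind of logic such a server-side program would contain, written here as ordinary client-side C++ purely for illustration - height() stands in for a texture fetch, and the step parameter for whatever LOD logic would decide:

// illustrative only: the sort of work the proposed server-side program would do.
// walk a heightmap at an LOD-chosen step and emit a triangle list.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

static float height(int x, int z)            // placeholder for a 2D texture lookup
{
    return std::sin(x * 0.1f) * std::cos(z * 0.1f) * 4.0f;
}

static std::vector<Vertex> buildTerrainPatch(int size, int step)
{
    std::vector<Vertex> tris;
    for (int z = 0; z < size - step; z += step) {
        for (int x = 0; x < size - step; x += step) {
            Vertex a = { (float)x,          height(x,        z),        (float)z          };
            Vertex b = { (float)(x + step), height(x + step, z),        (float)z          };
            Vertex c = { (float)x,          height(x,        z + step), (float)(z + step) };
            Vertex d = { (float)(x + step), height(x + step, z + step), (float)(z + step) };
            // two triangles per grid cell
            tris.push_back(a); tris.push_back(b); tris.push_back(c);
            tris.push_back(b); tris.push_back(d); tris.push_back(c);
        }
    }
    return tris;
}

int main()
{
    std::vector<Vertex> tris = buildTerrainPatch(64, 4);
    std::printf("emitted %lu vertices\n", (unsigned long)tris.size());
    return 0;
}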

I expect the programs would have to have at least some sort of "scratch" area - memory to be used for temporary storage - but I also envision them being able to, on the server, modify already-uploaded VBOs, augment vertex blending, modify light params and... basically modify most server states and data I can modify from the client side today, in addition to "creating" geometry.

Currently I think something like this could be quite possible. There would be no undue restrictions on what can be done. No hard-defined "You use a 2D heightmap, period!", and also no "parametric-only surfaces". Geometry generation, incl. LODing, would be all up to the server-side program to decide. Input can be from any server accessible source, and output would be geometry with attached states and data.

I don't know if I managed to explain this idea very well, but I currently think it could fit very well into the development and evolution of gfx hardware. It would use existing inputs and generate a finite class of outputs.

If I somehow did manage to get the basic idea(s) through, what do you think? Insanity? Possible? Plausible? Perhaps even a good idea?

ZbuffeR
05-16-2005, 02:51 PM
>> a single massive shader used 99.9% of the time
You mean, within a single game like Doom3? Or for every new game? Nobody wants their game to look like all the other ones on the market.

Even on a standard outdoor scene, you will need special shaders and tricks for water, glass, rock, dust, grass, stained steel, wet wood, skin, fur, sky, sun, clouds ...

Look at CG packages like 3dsmax/maya etc. - there is a lot of work on materials, not just setting a specular exponent and diffuse texture for a Blinn shader.
This is the whole point of Renderman shaders: they can grow complex enough to need a graphic editor (http://www.astro.helsinki.fi/~hannu/renderman/itx_man/dirt.jpg) to create them. Mixing procedural work (the programmer's job) and textures (the artist's job) is important.

michagl
05-16-2005, 05:49 PM
Originally posted by ZbuffeR:
>> a single massive shader used 99.9% of the time
You mean, within a single game like Doom3? Or for every new game? Nobody wants their game to look like all the other ones on the market.

Even on a standard outdoor scene, you will need special shaders and tricks for water, glass, rock, dust, grass, stained steel, wet wood, skin, fur, sky, sun, clouds ...

Look at CG packages like 3dsmax/maya etc. - there is a lot of work on materials, not just setting a specular exponent and diffuse texture for a Blinn shader.
This is the whole point of Renderman shaders: they can grow complex enough to need a graphic editor (http://www.astro.helsinki.fi/~hannu/renderman/itx_man/dirt.jpg) to create them. Mixing procedural work (the programmer's job) and textures (the artist's job) is important.
of course no 'single' shader would do - i was just playing devil's advocate. but, for instance, total reality (as we know it) simulation would probably carve out the largest niche in the future of graphics... well, at least until people get bored with it.

the truth is there are about 3 major kinds of shaders for photo-realistic images (or will be in the future): the standard smooth-surface shader, the displaced-surface shader, and the transmissive volumetric shader. you can pretty much bet that within those major groups a few conditional branches would be about enough to accommodate most physical phenomena in a realistic scene.

at the end of the day though, for truly realistically lit scenes, i would bet that a real-time photon mapper will be used. basically it is nothing more than shooting energy into the scene and recording its collisions in a photon buffer, then rasterizing the buffer at the end. it's just so straightforward. you would tweak options like how many photons and how much residual energy to use to hide gaps in the fill due to limited photons. from there it's just a bunch of photon collider units running in parallel. light is awesome - it doesn't interact with itself, only with matter, which is to say there is zero interdependence. you could really optimize it by treating each photon as a little view frustum and tessellating the collision geometry based on what the photons see and don't see. no need to waste resources in dark regions - awesome for night-time city rendering.
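to sketch what i mean by 'shooting energy into the scene and recording its collisions' - pure toy code, a single point light over a ground plane, no bounces or gathering pass, just the shape of the loop:

// toy photon-shooting loop: fire photons from a point light and record where
// they land in a photon buffer. the 'scene' is only a ground plane at y = 0 so
// the intersection stays trivial; a real mapper traces against geometry,
// bounces photons (russian roulette) and later gathers them per pixel.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Photon { float x, y, z; float energy; };

static float rnd() { return std::rand() / (float)RAND_MAX; }

int main()
{
    const int   photonCount = 100000;
    const float lightPos[3] = { 0.0f, 10.0f, 0.0f };

    std::vector<Photon> photonBuffer;
    photonBuffer.reserve(photonCount);

    for (int i = 0; i < photonCount; ++i) {
        // random direction over the lower hemisphere
        float dx = rnd() * 2.0f - 1.0f;
        float dz = rnd() * 2.0f - 1.0f;
        float dy = -rnd();
        float len = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (len < 1e-6f) continue;
        dx /= len; dy /= len; dz /= len;
        if (dy > -0.05f) continue;                   // skip near-horizontal rays

        // intersect the ray with the ground plane y = 0
        float t = -lightPos[1] / dy;                 // dy < 0, so t > 0
        Photon p = { lightPos[0] + dx * t, 0.0f, lightPos[2] + dz * t,
                     1.0f / photonCount };           // each photon carries a slice of the light's energy
        photonBuffer.push_back(p);
    }

    std::printf("stored %lu photon hits\n", (unsigned long)photonBuffer.size());
    return 0;
}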

Chuck0
05-17-2005, 12:09 AM
Originally posted by tamlin:
This post became way larger than I had hoped for (seems I have gift for that)...
won't quote the whole thing :p
yes, what you are proposing here is imo certainly something the near future will bring, since it is a logical extension of today's (rather restrictively) programmable rendering pipeline. i think there are already discussions about adding programs that will be able to spawn vertices instead of just altering them, and in further enhancements those programs might even get as complex as you describe in your proposition, being able to access multiple types of data and even write it.

knackered
05-17-2005, 12:33 AM
Originally posted by michagl:
knackered, why must you always get personal?
Because you're a person, duh!

RigidBody
05-17-2005, 03:00 AM
i don't think that the PPU introduced by ageia will be a great success. of course game developers will love it, because ageia will provide a library with their PPU which allows a programmer to simulate real physics without knowing very much about it.

but the market for people who are willing to spend 50-100 dollars, euros or whatever on a card which gives their 2 or 3 favourite games a better performance doesn't seem very big to me.

V-man
05-17-2005, 04:04 AM
Originally posted by michagl:
i wonder if the hardware is sophisticated enough to manage its own collision 'worlds' internally? there is probably some embedded software running on the hardware. i wouldn't know.
I don't really know the details but I imagine the board will have lots of onboard RAM to store the world and to create a scratch memory area.

Since this chip is supposed to handle not only rigid bodies but also soft bodies, fluids, cloth and hair, perhaps there is a need to download the newly computed geometry.

I'm not sure how soft bodies and cloth will be handled. Cloth could be done with NURBS; hair could be done with offset values.

RigidBody
05-17-2005, 04:40 AM
the major problem with soft bodies is not how they are internally stored.

the behaviour of soft bodies can be simulated using the finite element method, for example: a complex structure is divided into triangles (or quads), for which a stiffness matrix can easily be calculated. this stiffness matrix can then be used to calculate the deformation which results when external forces are applied to the structure - usually using a time integration algorithm.

the problem is that you may have to use very small time steps to keep the time integration stable - maybe 1/1000th of a second or less. a PPU which is optimised for matrix operations could be much faster than a normal CPU for this special task.
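as a toy illustration of why the step size matters - a single mass on a stiff spring, integrated with semi-implicit euler. not FEM, and the numbers are made up, but it's the same kind of stability constraint:

// toy semi-implicit (symplectic) euler integration of one mass on a stiff spring.
// with dt = 1/60 the stiff system blows up; with dt = 1/1000 it stays bounded -
// the same reason FEM soft bodies need many small sub-steps per rendered frame.
#include <cmath>
#include <cstdio>

static double simulate(double dt, double seconds)
{
    const double k = 50000.0;   // stiffness
    const double m = 1.0;       // mass
    double x = 0.1, v = 0.0;    // start displaced 10 cm, at rest

    int steps = (int)(seconds / dt + 0.5);
    for (int i = 0; i < steps; ++i) {
        double a = -(k / m) * x;    // spring acceleration
        v += a * dt;                // update velocity first...
        x += v * dt;                // ...then position with the new velocity
    }
    return std::fabs(x);
}

int main()
{
    std::printf("|x| after 1s, dt=1/60   : %g\n", simulate(1.0 / 60.0, 1.0));
    std::printf("|x| after 1s, dt=1/1000 : %g\n", simulate(1.0 / 1000.0, 1.0));
    return 0;
}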

imho it does not make much sense to make the GPU do that work. if you want to display 30 frames/sec, you may have to compute 10-100 integration steps between each frame.

michagl
05-17-2005, 02:11 PM
Originally posted by RigidBody:

imho it does not make much sense to make the GPU do that work. if you want to display 30 frames/sec, you may have to compute 10-100 integration steps between each frame.
the gpu couldn't do it. the general need, though, is for the ppu and gpu to have access to the same memory, so you don't have to download your finite element mesh or whatever from the ppu to system memory and then back up to the gpu. i figure if the ppu is moderately successful, hardware manufacturers will either conceive some sort of ppu>>gpu link or start putting them on the same card somehow. the ppu would have its own memory, but it would be able to dma its final results per frame out to gpu memory.

any idea where the ppu will go? will there be boards at the proposed december launch with 3 or 4 pcie16 ports? sometimes their docs talk as if there will be multi-processor mainboards with ppus right on the board, presumably next to the cpu.

RigidBody
05-17-2005, 10:06 PM
i don't think memory transfer will be a big problem, because the PPU will compute only a few hundred trias/quads.

although i'm probably not going to buy one, i'm curious to see how it will turn out. 10 years ago there was a 3D accelerator card by 3dfx which was an add-on to an existing graphics board. as i remember, i did not know many people who had one, and i don't think there will be many people who buy an extra PPU card. if they have some bright heads at ageia, they will try to find a mainboard manufacturer who will integrate their chip onto a mainboard (if that's possible).

bChambers
05-18-2005, 10:18 AM
Originally posted by RigidBody:
i don't think memory transfer will be a big problem, because the PPU will compute only a few hundred trias/quads.

although i'm probably not going to buy one, i'm curious to see how it will turn out. 10 years ago there was a 3D accelerator card by 3dfx which was an add-on to an existing graphics board. as i remember, i did not know many people who had one, and i don't think there will be many people who buy an extra PPU card. if they have some bright heads at ageia, they will try to find a mainboard manufacturer who will integrate their chip onto a mainboard (if that's possible).
Well, I bought one of the original Monster3D cards, and I'd love to buy a physics board. Of course, my budget right now is a bit stretched, so that probably won't be possible...

As for simulating only a few hundred quads... IIRC, they mentioned 50k discrete particles running at 200 fps on the board, but I'll have to look that up to be sure I'm remembering it correctly. I remember thinking at the time, "50k particles isn't enough to accurately simulate *anything*!"

Chuck0
05-18-2005, 11:01 AM
don't know if it's of interest, but it seems that asus will be the first to start integrating this ppu:
http://www.tomshardware.com/hardnews/20050517_202925.html
what i really wonder is whether this additional processing unit will really be that useful, since now everybody is jumping on the multicore cpu train...

RigidBody
05-18-2005, 12:11 PM
bChambers, the world should be full of people like you :D always buy the new stuff, keep the world turning...

i'm not too familiar with particle systems, i was thinking of something like this:

http://www.ageia.com/img/novodex/novodex_center.jpg

(as you see, the image comes directly from www.ageia.com (http://www.ageia.com) )

there are many racing games, and their developers would be keen on a PPU which can simulate a crash like the one shown in the image above. but for a crash like this i think you'll need at least 2000-5000 trias (not counting the flat tyres :) )

Chuck0, thanks for the link. it confirms my opinion that a PPU will only sell if it is integrated on a mainboard - just like a 3d accelerator chip could not be sold by itself, but only integrated on a regular graphics board. asus will first build an additional board with the PPU and thus get experience with the chip itself, and maybe later on they will put it on one of their mainboards, as a feature like sound, s-ata or lan.

michagl
05-18-2005, 12:32 PM
that image is totally unfounded hype, like 90% of the material in ageia docs and internet resources. notice the 'getty images'... that is just stock footage of a real car, probably after some kind of natural disaster, if it isn't obvious. the current-generation ageia ppu can't get close to that and probably won't be able to even in a decade's time.

Chuck0
05-18-2005, 01:10 PM
well, voodoo2 and especially voodoo1 cards were extremely successful mostly because they were head and shoulders above everything else available at the time... if this ppu were able to accomplish the same, then i guess it would have a chance even as an expansion card. but i doubt that, since i'm quite sure that one core of a modern dual-core processor can already be utilized as a physics (and ai) coprocessor, and even if it's not as highly specialized for the task, i guess it could perform quite well in comparison to this ppu.

michagl
05-18-2005, 02:17 PM
it's difficult to take ageia seriously if they won't release internal architecture details, but apparently they have industry partners backing the endeavor, so presumably those partners have some inside information or at least some company shares... still, looking at the SDK API, if someone doesn't produce an opengl-style wrapper for this i would either be turned off or decide to do it myself. apparently the SDK is built on DirectX conventions, but i wouldn't know, not being a microsoft champion myself.

RigidBody
05-18-2005, 10:18 PM
well, i think that a single-core cpu together with a ppu will have higher performance than a dual-core cpu without a ppu. i think the instruction set will be very small compared to a normal cpu and thus highly accelerated - i read that the ageia ppu has about 125 million transistors.

the interesting part is their physics api - i wonder if they will provide a non-accelerated version. if they do not, it will be hard to find a software developer willing to spend a lot of time and money creating a game which can only be sold to someone who has a ppu - a big financial risk.

but if they do provide a software-only api, they run the risk that someone will compare the performance of a ppu system with a non-ppu system and find out that a dual-core system without a ppu could be a better buy.

T101
05-18-2005, 10:29 PM
It's supposed to accelerate Novodex physics. So there's your software physics engine.

But it would be nice if they opened up an API for others to use.

Adrian
05-18-2005, 11:31 PM
Originally posted by Chuck0:
i'm quite sure that one core of a modern dual-core processor can already be utilized as a physics (and ai) coprocessor, and even if it's not as highly specialized for the task, i guess it could perform quite well in comparison to this ppu.
But will developers want to use an entire CPU for physics? Personally I'm hoping that the next-gen graphics cards are dual-core GPUs, so we can have two graphics contexts and make full use of dual-core CPUs for graphics. It might go some way to explaining why we're having to wait a relatively long time for this next generation of cards. It also ties in with reports that the g70 is twice as fast as the gf6.

V-man
05-19-2005, 10:05 AM
Originally posted by T101:
It's supposed to accelerate Novodex physics. So there's your software physics engine.

But it would be nice if they opened up an API for others to use.
The PPU is designed around Novodex, so no other API! It is pretty stable, fast and feature-rich.
And yes, it's supposed to be GL-like: no hw drivers means soft emu, else use the ICD.

bChambers
05-20-2005, 08:40 AM
Originally posted by Chuck0:
don't know if it's of interest, but it seems that asus will be the first to start integrating this ppu:
http://www.tomshardware.com/hardnews/20050517_202925.html
what i really wonder is whether this additional processing unit will really be that useful, since now everybody is jumping on the multicore cpu train...
Of course it will sell. It doesn't have to be faster than a dual-core machine; it just has to take some work away from the CPU, allowing the CPU to do other things (like edge detection, AI, sound mixing, etc).

As for Novodex, you can get the SDK by registering with Ageia (it's free). The SDK comes with a number of fun demos; what I'd like to see, though, is a video comparison of hardware-accelerated vs. software-only runs. The demos all ran great on my system (Athlon XP-1700), so the question is: how much faster will they run with the hardware?

The only thing that really bugs me, though, is that to use Novodex you have to pay for a licence, which is something I (as an independent wannabe) will never do. If they really want people to accept it, they should let ANY developer use it for free, and then make their money off of having a kick-a** chip.

knackered
05-20-2005, 09:38 AM
You're talking a hard-wired rigid body integrator here, aren't you?
So, how many stacked boxes is it going to take to impress joe public enough to buy a dedicated add-on card?
It's just not necessary, is it?
Half Life 2 had a fully fledged physics engine, but how much of the gameplay actually used it? Not much as I remember, see-saws and a crane was just about it. This is because there's a limit to how much gameplay *needs* realistic physics. Valve had a sandbox, played in it for literally years, and came up with some see-saws and a crane. Doesn't that tell us something?

michagl
05-20-2005, 12:26 PM
Originally posted by knackered:
You're talking a hard-wired rigid body integrator here, aren't you?
So, how many stacked boxes is it going to take to impress joe puplic enough to buy a dedicated add on card for?
It's just not neccessary, is it?
Half Life 2 had a fully fledged physics engine, but how much of the gameplay actually used it? Not much as I remember, see-saws and a crane was just about it. This is because there's a limit to how much gameplay *needs* realistic physics. Valve had a sandbox, played in it for literally years, and came up with some see-saws and a crane. Doesn't that tell us something?they probably didn't use much physics because it would subtract from other needs of the cpu. i'm sure anyone can think of a billion ways to use hardwired physics that are much more subtle than boxes. as far as i can tell, the chip right now only does rigid body collision detection and response, constrained joints, and simple spring dynamics. right now people's minds are probably just a tad narrowed by the fact that physics comes at a cost to the cpu.

as for the sdk demos, i believe the frame rates they show are projected frame rates as if the hardware were installed... if you hit the zero key in the viewer it will display performance stats and considerably lower frame rates.
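just to pin down what i mean by 'simple spring dynamics': roughly the kind of per-particle update such a chip (or a cpu fallback) would be iterating. this is only my own sketch, assuming plain semi-implicit euler integration and made-up constants, not anything from the novodex sdk:

struct Particle { float pos[3]; float vel[3]; float mass; };

// one step of a damped spring pulling a particle toward an anchor point
void springStep(Particle &p, const float anchor[3],
                float stiffness, float damping, float dt)
{
    for (int i = 0; i < 3; ++i)
    {
        float stretch = p.pos[i] - anchor[i];              // displacement from the rest point
        float force   = -stiffness * stretch - damping * p.vel[i];
        p.vel[i] += (force / p.mass) * dt;                  // integrate velocity first...
        p.pos[i] += p.vel[i] * dt;                          // ...then position (semi-implicit euler)
    }
}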

dorbie
05-20-2005, 02:10 PM
The core problem is one of content, unbounded complexity and game design.

Even the most flexible games rely on serious limitations to where you can go and what you can do with few exceptions.

It's inherent to the robustness of the game and the production time involved in creating it.

If you have real physics you can break the gameplay, and it's not that you couldn't have different gameplay, you could but it would seriously affect the game design.

You also don't have the data there nor do I think it's feasible to have the data in the near term. If you cut a car in half you need to see the inside of the car, if you blow a building up it's a lot more complex to make it real or even approximately real.

Half Life could crunch the numbers for moving stuff around, but the game design didn't use it; even where it was used it was horribly contrived. There's a lot you could have done in a real-world scenario that couldn't be done even when physics was used. Cut a rope and the barrel falls and squishes a guy, OK, but launch an RPG at a plank of wood and nothing. It's ridiculous and it's very intentionally ridiculous because of the limits of game logic and game design.

It's not enough to naively say you can do arbitrary stuff or arbitrary complexity. Not every game can be FarCry w.r.t. where you can wander (that had serious limitations too), and it affects the gameplay in ways game designers don't want it affected.

Moreover, how often do you use physics in the sense a game engine does? Apart from your own body motion, driving a car, or throwing or catching something, physics doesn't really get much of a look-in driving everyday events, beyond stuff staying where it is most of the time and beyond the stuff games do today anyway. It would be a very contrived and strange game where you had to use physics to solve problems. It'd be like lemmings but even more quirky.

So what does physics really get you? The power to destroy everything you can see and have it break up in a realistic way, so you can consequently go anywhere. Maybe someone will write a game like that now, and it's not entirely impractical to do a lot of that now, but it would be pretty bad if every game let you do that with impunity, and you'd quickly run out of original data. Just imagine a game where you could go anywhere in a city block, never mind a city, and destroy or cut open or break open everything, and the complexity required, and think of the boredom. I don't mean in a rigged contrived fashion, rigged is easy, that's why all you get is rigged stuff. I mean go-anywhere, do-anything, see-anything physics & data.

michagl
05-20-2005, 05:24 PM
personally i can't get into a game without 'go anywhere' mechanics. when i play a game i want to feel like i'm really in it... it's not worth playing if it can't be that convincing, and there are enough games that deliver that to keep anyone who can balance their game life and real life busy.

as for do-anything physics, i guess i'm not really crazy about games that would give you a rocket launcher and unlimited ammo. i also don't see the fun in just going deviant like in the GTA games people can't get enough of (which i've never played).

anyhow, i actually suggested subtle effects. walking on grass, wind effects, just general reactions, that sort of stuff.

as for solid stone and metal structures, i don't think it is asking too much to not be able to raze them to the ground. but i wouldn't think cutting and burning wood structures and trees would be too much to ask of a hollywood-budget game in the future.

i'm not really thinking in terms of destruction though. think unconstrained animation, garments, things like jewelry, hair, subtle side effects and that sort of stuff that really draws you into the illusion.

personally i would rather watch a movie or read a book if the game can't draw you into its world. i go for chess every now and then, but games for games sake are not really my bag. if you are drawn in though you are in a different world where you can forget that you are really wasting your time in the real world. it doesn't have to seem real so much as anything that makes the world seem fake pulls you out.

in the end though i think it's best to focus all the physics on the characters. fluids would be nice, and a good wind hack is always cool.

michagl
05-20-2005, 06:20 PM
The only thing that really bugs me, though, is that to use Novodex you have to pay for a licence, which is something I (as an independant wanna-be) will never do. If they really want people to accept it, they should let ANY developer use it for free, and then make their money off of having a kick-a** chip.well, if i for one ever use it, i will build an opengl state-machine-type wrapper for it. i can't imagine why you couldn't simply make such a wrapper agnostic and link it against whatever ppu drivers are out there. i figure this will happen anyhow if the ppu catches on, and the drivers will not require a license, similar to opengl. for what it's worth, the novodex docs are advocating dynamic linkage, and the api, though structured, does appear quite amenable to an opengl wrapper.
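to show what i mean by an agnostic wrapper, here is a rough sketch of resolving whatever driver happens to be installed at run-time instead of linking one vendor's lib (posix dlopen shown; win32 would be LoadLibrary/GetProcAddress). the library and symbol names here are invented purely for illustration:

#include <dlfcn.h>
#include <stdio.h>

typedef void (*plStepFn)(float dt);   // hypothetical wrapper entry point

int main()
{
    // hypothetical driver name - whichever ppu (or software) backend is present
    void *driver = dlopen("libplphysics.so", RTLD_NOW);
    if (!driver) { fprintf(stderr, "no physics driver: %s\n", dlerror()); return 1; }

    plStepFn plStep = (plStepFn)dlsym(driver, "plStep");   // hypothetical symbol
    if (plStep)
        plStep(1.0f / 60.0f);          // run one 60 Hz physics step on whichever backend loaded

    dlclose(driver);
    return 0;
}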

off topic, but i'm curious how many 'independent' developers there are out there. that is, not people who would like to be part of the publisher/studio system but have yet to be indoctrinated, but people who really don't like the model and would prefer something different and probably more scalable. is there any place where these sorts would gather? i'm always walking at least three different lines as a developer, but my dream masterpiece is a free run-time configurable/programmable, completely abstracted virtual reality environment, a sort of game 'interpreter'. it's something i'm always working toward in the back of my mind. i can produce specs for something very close to the optimal platform for anyone who is interested. my implementation is probably about 50% complete. the idea is to allow a single person to rapidly realize their personal vision and render game design about as demanding as writing a book or a song. this can be made possible by users sharing their artistic data and virtual device drivers in a shared central repository. the basic model is give a little, take a lot! everyone wins. zero design constraints.

'personal message' me if interested.

T101
05-20-2005, 09:53 PM
W.r.t. physics stuff:
I seem to remember reading about a supposed demo of either Doom3 or Duke Nukem Forever (I suppose that referred to how long we were supposed to wait for it by the way) where you were supposed to be able to blow holes in walls and individual bricks would be flying.

I suppose that's the sort of stuff you could use it for. In theory.

Then again, flying bricks isn't your primary concern when you're busy blasting the opposition in a first-person shooter.

Maybe in a car-racing game you could have some more realism (twisting of the chassis).

But I have to say, I think it's with more serious applications that something like this would be worthwhile to have.

Michagl:

the idea is to allow a single person to rapidly realize their personal vision and render game design about as demanding as writing a book or a song. What? And put all of us out of business? ;)

michagl
05-20-2005, 11:06 PM
What? And put all of us out of business? ;) no, you just go into business for yourself like an author... or if you want to do something more complicated, do something more like a rock band. smaller teams basically, where everyone might have a slight expertise.

i did some scouting around tonight though. it looks like none of the independent game communities are interested in developing tools (engines)... according to them, they only want to use pre-built platforms, which i figure probably really hurts their cause. they say however that popular indie platforms like torque are still more complicated than what an indie movement really needs to work.

if nothing comes up i figure i can deliver a competitive platform in about 3 more years, but i worry about slow adoption due to the platform being spec'd and implemented by a single individual rather than a community. still, there is nowhere to go to openly discuss the matter, it would seem.
it has to be so much more solid and flexible than a disposable platform.
if there are independently minded people here willing to develop platforms who would like to help out, the gesture could probably go a long way.

bChambers
05-21-2005, 11:53 AM
Personally, I'd love to use something like you're describing. I'd also love to help develop it, but I'm rather short on time... take a look at the webpage for a game I'm making (www.geocities.com/bdchambers79/kimball) and you'll see a couple months' worth of posts saying "I haven't had time to get the next release out!"

I even decided to scale back the features, so I could focus on gameplay instead of graphics. And I *still* haven't had time to finish the next release :(

michagl
05-21-2005, 03:59 PM
Originally posted by bChambers:
Personally, I'd love to use something like you're describing. I'd also love to help develop it, but I'm rather short on time... take a look at the webpage for a game I'm making(www.geocities.com/bdchambers79/kimball) and you'll see a couple months worth of posts saying "I haven't had time to get the next release out!"well, for what it's worth, i've started a discussion here:

http://www.garagegames.com/mg/forums/result.thread.php?qt=30358

if the url doesn't get you to the right place, look for a thread called 'a new model' in the 'industry' forum.

i'm not sure where it will go, but the more people interested in independent platforms who can get there, the better.

JD
05-25-2005, 01:23 AM
I can think of two types of physics right now. One for gfx stuff like particles, and one for game-related stuff that will affect the player and his outcome in a game. The second is more fun: say you see a barrel and foes ahead. You get inside the barrel and then roll down the hill hitting the baddies. Then, while still inside the barrel, you hit the shore and float away to a nearby island. Or pull a rope and have tons of bricks come down on foes like in that Twins movie with Arnold and DeVito. Physics that would add to the gameplay rather than be there for the visuals.

I think they still have problems with their sw, and the speed is only about 4x that of the cpu. They claim to be much faster after bugfixes. I'm playing with ODE and it's interesting. In a game, I would have dedicated blow-up spots like in the duke nukem game. You see a crack, you can fire into it, that type of deal. That way you know what you can hit and what you can't. Say, a non-breakable light would be enclosed in a metal grid mesh and a breakable one wouldn't. Maybe this can't be done uniformly but every bit helps.
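For anyone curious what playing with ODE looks like, here's a minimal sketch from memory (a single box in free fall, no collision since that needs a dSpace and a near-callback; assumes a reasonably recent ODE build):

#include <ode/ode.h>
#include <stdio.h>

int main()
{
    dInitODE();                              // newer ODE builds want this init/cleanup pair
    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0, -9.81, 0);

    dBodyID box = dBodyCreate(world);
    dMass m;
    dMassSetBox(&m, 1.0, 0.5, 0.5, 0.5);     // density, then box dimensions
    dBodySetMass(box, &m);
    dBodySetPosition(box, 0, 10, 0);

    for (int i = 0; i < 100; ++i)            // one second at 100 Hz
        dWorldStep(world, 0.01);

    const dReal *p = dBodyGetPosition(box);
    printf("height after 1s: %f\n", (double)p[1]);   // roughly 10 - 0.5*9.81*1^2
    dWorldDestroy(world);
    dCloseODE();
    return 0;
}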

Rob The Bloke
05-30-2005, 06:25 AM
Well, i for one will be buying the first physics chip out - but the first thing i'll be doing is bolting it into Maya and Endorphine. I see the first market being in content creation - since anything that helps speed that process up will ultimately lead to better games, even if you don't support the chip in the final product.

Which also brings me onto another point, every advance in graphics hardware requires vastly more content to be produced. So whilst the hardware may be improving, i think the current standard of games is dropping (in terms of gameplay and originality). In some ways i'd like a 2 year armistice from new advances in hardware so that we all have a bit of time to catch up..... ;)

I very much doubt we will be going back to software renderers - it's too much hassle to implement given the current expected standard of graphics - as an industry, we simply can't afford the expense of doing a U-turn on that - no developer is going to be able to justify the expense.

oh yeah, keep Nurbs away from Hardware!! glu and nv evaluators may be convenient, but efficient they are not - i'd go so far as to say that a HW nurbs implementation will never outperform a SW one - one of the few occasions where that is true. I doubt Nurbs will ever be popular, since the Art department will normally throw a tantrum if you suggest it - they are just too annoying for artists to work with. You may also notice that Nvidia removed NV-evaluator support from the drivers some time ago.

bChambers
05-30-2005, 07:14 AM
Originally posted by Rob The Bloke:
oh yeah, keep Nurbs away from Hardware!! glu and nv evaluators may be convienient, but efficient they are not - i'd go so far as to say that a HW nurbs implimentation will never out perform a SW one - one of the few occasions where that is true. I doubt Nurbs will ever be popular, since the Art department will normally throw a tantrum if you suggest it - they are just too annoying for artists to work with. You may also notice that Nvidia removed NV-evaluator support from the drivers some time ago.You're kidding me. What does the Art department want to use instead, POLYGONS?!?!?

The only way I will ever accept the use of straight edges for approximating curves is if the edges are less than 1 pixel wide. I abhor polygonal approximation as much as I abhor aliasing artifacts.

dorbie
05-30-2005, 04:30 PM
Subdivision surfaces.

zed
05-30-2005, 05:34 PM
yeah i was thinking that as well, but then i thought: what about polygon edges? the driver would have to take the surrounding polygons into consideration as well.
then again, IIRC didn't ati have some extension that dealt with this?

bChambers
05-31-2005, 08:08 AM
Originally posted by dorbie:
Subdivision surfaces.Umm... that's what NURBS are. Or rather, NURBS are a specific type of subdivision surface, being the most useful Quad type.

Yes, ATI did have support in their cards (around Radeon 8 I think?) to subdivide triangles automatically. I don't even remember the name of it, it was ignored so much.

dorbie
05-31-2005, 08:53 AM
Originally posted by bChambers:

Originally posted by dorbie:
Subdivision surfaces.Umm... that's what NURBS are.BZZZZZZZZT

http://grail.cs.washington.edu/projects/subdivision/

michagl
06-06-2005, 01:27 AM
i don't know what you are implying with that last one dorbie, but subdivision surfaces are nurbs if you understand the underlying relationships.

subdivision surfaces are really just hierarchical nurbs modeling. currently subdivision surface implementations typically only utilize bezier curves, but this is too limiting for surface continuity within many different types of subdivision surface junctures.

as far as hardware implementing nurbs goes, subdivision surfaces are just the next thing up the ladder. easier to implement in hardware than subdivision surfaces is just to have an artist manually select surface parameters and connect them together to form an arbitrary mesh. then hardware can subdivide the resulting mesh more easily than a subdivision mesh.

they are not pretty in hardware, but people won't be willing to settle for polygons forever. the trick is not manually forcing the hardware to resolve the mesh every frame, but rather giving it the surface parameterization and allowing it to dynamically tessellate the mesh itself internally via random-access sampling of the surface with respect to the view frustum.

this may sound like a total u-turn away from opengl mechanics, but a physics processor requires just this. you upload the geometry once, and it solves subsequent frames itself. once people get comfortable with this workflow, assuming the physics hardware takes off, then giving the graphics hardware an implicit mesh in this fashion will seem less alien after a couple of years. even now there are fluid simulation apis which solve implicit meshes in this fashion... why is it any different to let graphics hardware do this?

you can do the same thing on the cpu, and you must, presuming you can't compute an infinity of surface samplings offline and ship them on your game disc. letting the gpu do it is more practical though. imagine a first-person game that lets the player put their nose on a toilet bowl seat. currently the camera can't get that close because the polygons would show through. it's more practical to build a simple nurbs parameterization of the toilet bowl seat which can be sampled at run-time as needed. not allowing the camera this sort of freedom subtracts from immersion.
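to make the run-time sampling idea concrete, here is a rough cpu-side sketch (plain c++, nothing to do with novodex) of evaluating a bicubic bezier patch on demand, with the grid density picked from camera distance. the density heuristic is made up, it's just to show the shape of the idea:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3 &a, const Vec3 &b, float t)
{ return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t }; }

// de casteljau evaluation of one cubic row or column
static Vec3 cubic(const Vec3 p[4], float t)
{
    Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// evaluate a 4x4 control-point patch at parameter (u,v)
static Vec3 patchPoint(const Vec3 cp[4][4], float u, float v)
{
    Vec3 col[4];
    for (int i = 0; i < 4; ++i) col[i] = cubic(cp[i], u);
    return cubic(col, v);
}

// nearer camera -> denser sampling grid; returns (n+1)*(n+1) surface points
std::vector<Vec3> tessellate(const Vec3 cp[4][4], const Vec3 &cam)
{
    Vec3 c = patchPoint(cp, 0.5f, 0.5f);
    float d = std::sqrt((c.x - cam.x) * (c.x - cam.x) +
                        (c.y - cam.y) * (c.y - cam.y) +
                        (c.z - cam.z) * (c.z - cam.z));
    int n = std::max(4, (int)(64.0f / (1.0f + d)));   // made-up density heuristic
    std::vector<Vec3> pts;
    for (int j = 0; j <= n; ++j)
        for (int i = 0; i <= n; ++i)
            pts.push_back(patchPoint(cp, i / (float)n, j / (float)n));
    return pts;
}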

anyhow... the reason i'm posting is that for the last week i've been wrapping the novodex structured API with a flat opengl-style API. i'm calling it OpenPL for internal purposes. i don't believe it is good practice to mix structured code. if people are interested i can share it. as soon as i can get an example off my development machine i will post a more thorough glimpse at the API i've devised, if desired. i'm sort of on vacation in town, otherwise i would have more freedom to share directly from my source. in the meantime i would need to find a blank optical disc.

here is a quick example of some of the divergent stuff i've added. all the standard opengl stuff is there where applicable. i have to key this in manually:


plBindActor(PL_ACTOR_3D,<actor>);

plActorfv(PL_PIVOT,PL_POSITION,<0,0,0,1>);

plActorfv(PL_PIVOT,PL_QUATERNION,<0,0,1,0>);

plActorfv(PL_BODY,PL_KINEMATICS,PL_DYNAMIC);

plActorfv(PL_BODY,PL_POSITION,<0,0,0,1>); //manual center of mass

plActorfv(PL_BODY,PL_INERTIA_TENSOR,<1,2,1>); //manual inertia

plNewFrame(PL_PIVOT,PL_COMPILE);

plMaterialfv(PL_FRONT,PL_STATIC_FRICTION_2D,<1,0.5>);

plMaterialfv(PL_FRONT,PL_STATE,PL_SOLID);

plBegin(PL_SPHERES);

plQuatro4f(); plBounds1f(2); plVertex3f(1,2,3);
plQuatro4f(); plBounds1f(1); plVertex3f(4,5,6);

plEnd();

plBegin(PL_BOXES);

plQuatro4f(); plBounds3f(2,3,2); plVertex3f(7,8,9);

plEnd();

plBegin(PL_TRIANGLE_STRIP);

plVertex(); plVertex(); plVertex();
plVertex(); plVertex(); plVertex();
plVertex(); plVertex(); plVertex();

plEnd();

plVertexPointer(); plEnableClientState();

plDrawElements(PL_TRIANGLES,....);

plEndFrame(PL_PIVOT);

some notes:

the manual stuff could alternatively be computed by setting material density and calling some function. a lot of the parameter values i've chosen are defaults and hence redundant.

frames are bound to actors and are not manually named, to avoid excessive naming. they are named like opengl lights, i.e. 0~N. each actor has a pivot and a body frame. these are coordinate reference frames. the pivot frame is about the center of the actor, and the body frame is about the center of mass and its inertia tensor. also, actors have a potentially infinite number of auxiliary frames which can be accessed as PL_FRAME0, PL_FRAME1, ... etc, or alternatively PL_FRAME0+5... also PL_FRAME is an alias for PL_FRAME0. you would generally use a frame when you want a way to access a set of primitives. you can define the local position and orientation of auxiliary frames just as with the pivot and body frames. the PL_WORLD frame is also used as a proxy for the top-level reference frame.

scaling is typically not allowed by physics solvers, apparently. at least novodex does not allow it by rule. vertices are transformed by the modelview matrix, but 'offsets' are not. for instance the center of a sphere can be scaled by the modelview matrix, but the radius of the sphere is not. alternatively, plBounds1f() is used for the radius of a sphere, plBounds3f() for the dimensions of a box, and so on. plane primitives are defined by a plNormal() subsequent to plVertex(). conveniently, plQuatro4f() is used to define the 'posture' of the primitive. the quaternion axis is transformed by the current modelview matrix but is not scaled. alternatively plOrient3f() can be used to define three axis-aligned angles for transformation rather than a quaternion. tricks are used throughout the api to ensure that rotation matrices do not get scaled.

oh and another plBegin primitive is PL_FORCES, also PL_CAPSULES and PL_PLANES as already more or less stated, and probably more to come... a cylinder shape is noticeably missing from novodex.

the idea is to try to keep globally named objects to a minimum. right now it looks like Actors, Joints, Events and Groups will be the only named objects. Groups are collision groups. i imagine eventually Materials would also become nameable objects, because they generally have more potential parameters than opengl materials and will probably eventually be referenced by texture-mapped per-pixel indexed material objects, and may already be indexed on a per-face basis by novodex.

another interesting piece of functionality i had to add was a pair of functions that look like plEnterFrame(PL_FRAME0,PL_LOCAL_SPACE) and plLeaveFrame(PL_FRAME0,PL_LOCAL_SPACE)... basically these two allow for shifting the modelview matrix across global and local spaces in an intuitive fashion. they are a little tricky to grasp at first, but very worthwhile.

also necessary was a function that looks like:

plCopyFrame(PL_FRAME0,PL_COPY_SRC,PL_DUPLICATE);
plCopyFrame(PL_FRAME1,PL_COPY_DST,PL_MIRROR_X);

these are necessary because of tight instancing constraints, and also because scaling is not generally allowed; this provides a way to have mirrored instanced meshes that the system can verify internally to be locally instanceable. mirrored meshes are basically negatively scaled.

calling CopyFrame with PL_COPY_DST is equivalent to a NewFrame/EndFrame pair. you can think of this like NewList/EndList for display lists.

anyhow, this is a lot more than i intended to lay out (especially in this thread), so i'd better leave it here for now. if interested i can give a link to the complete list of constants and functions i have so far.

i'm not suggesting an official OpenPL effort... but if people like it then maybe that could happen. personally i think the way i'm going with it is probably as good as could be expected as far as applying opengl constructs to physics.

i won't get into why i'm much more comfortable with an opengl-style API here, but there are so many reasons. nevertheless i think the novodex API is really great, minimalist, and ideal for wrapping from whatever angle you please.

sincerely,

michael

dorbie
06-06-2005, 01:32 AM
I'm not going to get into a semantic debate with you. Take a moment to follow the link and read it:

"We believe that subdivision surfaces are a likely candidate to replace NURBS in future graphics and CAD/CAM applications."

michagl
06-06-2005, 01:41 AM
Originally posted by dorbie:
I'm not going to get into a semantic debate with you. Take a moment to follow the link and read it:

"We believe that subdivision surfaces are a likely candidate to replace NURBS in future graphics and CAD/CAM applications."sure but if you read around in technical docs just like someone else stated subdivision surfaces are really just an application of nurbs. in other words subdivision surfaces require a nurbs library to pull off even if the end user (modeler) doesn't realize it. all the same math is there.

sorry, i mean yeah, for the artist, they may not realize they are working with nurbs, but hardware subdivision surface processing would necessarily be at least as expensive as nurbs and would naturally facilitate both modeling 'paradigms' rather than just one or the other.

dorbie
06-06-2005, 01:45 AM
I disagree especially w.r.t. the original context of my post, and if you read the link instead of posting you'll see exactly why I disagree and why subdivision surfaces are explicitly defined as not being NURBS.

michagl
06-06-2005, 02:05 AM
Originally posted by dorbie:
I disagree especially w.r.t. the original context of my post, and if you read the link instead of posting you'll see exactly why I disagree and why subdivision surfaces are explicitly defined as not being NURBS.sorry, but i did read the link prior to posting, for what it's worth. you can call it whatever you want, but really a subdivision surface is just a collection of nurbs patches where each polygon is a nurbs patch.

dorbie
06-06-2005, 02:32 AM
Sigh, you're persistent in the face of evidence you're wrong and you ignore the context of the original comments like this rubbish:

"Umm... that's what NURBS are. Or rather, NURBS are a specific type of subdivision surface, being the most useful Quad type."

These words are used to describe something specific.

Now if someone suggested that someone should use bump mapping in preference to Phong shading and someone else said "Umm..... that's what Phong shading is. Or rather, Phong is the most useful type of bump mapping being the smooth type without bumps", they'd be dead wrong and so would the fool who later persisted in blurring the distinction to make a point by saying you interpolate vectors and do lighting per pixel.

So go on keep posting nonsense, I encourage anyone who is in any doubt to follow the link I posted & get the facts instead of your misleading remarks. The distinction is absolutely clear.

michagl
06-06-2005, 02:49 AM
Umm... that's what NURBS are. Or rather, NURBS are a specific type of subdivision surface, being the most useful Quad type.
sorry, but i read this as saying that a subdivision surface which uses quads rather than triangles would be equivalent to the iconic 2-dimensional nurbs patch model.

but for what it's worth there are barycentric (triangular) nurbs models, and nurbs can even be applied to polygonal barycentric coordinates.

i don't know why you insist on dragging this sort of stuff on.

no, of course subdivision surfaces are something beyond traditional N.U.R.B.S. modeling. but i believe you implied that subdivision surfaces are not internally mathematically identical to nurbs, and this is just not true.

a nurbs curve is nurbs, a nurbs patch is nurbs, a nurbs surface is nurbs, a nurbs mesh is nurbs, so in this sense there is no reason why subdivision surfaces are not nurbs... and this is the context in which you originally replied.

no one actually believes that subdivision surfaces and traditional nurbs modeling are the same identical concept at every level.

spasi
06-06-2005, 04:02 AM
Originally posted by michagl:
i don't know why you insist on dragging this sort of stuff on.A piece of friendly advice: you should show more respect to people who are much older and much more experienced than you.


Originally posted by michagl:
a nurbs curve is nurbs, a nurbs patch is nurbs, a nurbs surface is nurbs, a nurbs mesh is nurbs, so in this sense there is no reason why subdivision surfaces are not nurbs...The subdivision surface scheme that has the closest resemblance to nurbs is the Catmull-Clark one, which is actually based on the mathematics of uniform bicubic B-splines (the non-rational special case of nurbs). They're still not mathematically equivalent, but even if you don't accept this, there are other schemes too: Loop, Doo-Sabin and surely various custom implementations that support edge/vertex weighting, etc.
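To make the B-spline connection concrete, here's my own small sketch (not taken from any of the schemes' papers) of one refinement round for the curve case, which is what Catmull-Clark reduces to on a regular quad region, one parameter direction at a time. Repeated refinement converges to the uniform cubic B-spline of the original control points; it's the extra rules at extraordinary vertices that stop the surfaces from being describable as nurbs patches:

#include <vector>

struct Pt { double x, y; };

// one round of uniform cubic B-spline refinement on a closed control polygon
std::vector<Pt> refineClosed(const std::vector<Pt> &p)
{
    const size_t n = p.size();
    std::vector<Pt> out;
    out.reserve(2 * n);
    for (size_t i = 0; i < n; ++i)
    {
        const Pt &prev = p[(i + n - 1) % n];
        const Pt &cur  = p[i];
        const Pt &next = p[(i + 1) % n];
        // "vertex point": (prev + 6*cur + next) / 8
        out.push_back({ (prev.x + 6 * cur.x + next.x) / 8.0,
                        (prev.y + 6 * cur.y + next.y) / 8.0 });
        // "edge point": midpoint of the edge
        out.push_back({ (cur.x + next.x) / 2.0,
                        (cur.y + next.y) / 2.0 });
    }
    return out;   // apply repeatedly; the polygon converges to the B-spline curve
}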

^Fishman
06-06-2005, 04:04 AM
Dorbie, please do yourself a favor and ignore michagl's post. You presented the facts already, I'm sure pretty much everybody here except michagl understands there is a clear difference.

bChambers
06-06-2005, 10:03 AM
Originally posted by dorbie:

Originally posted by bChambers:

Originally posted by dorbie:
Subdivision surfaces.Umm... that's what NURBS are.BZZZZZZZZT

http://grail.cs.washington.edu/projects/subdivision/ My bad, I assumed that by "Subdivision Surfaces" you meant surfaces that are defined as curves which may be subdivided indefinitely for whatever granularity / quality level you desire. I didn't realize you were talking about a specific implementation.

As the term "Subdivision surface" is quite general (and that's an extreme understatement), you might in the future refer to the actual name of the implementation.

dorbie
06-06-2005, 11:08 AM
I was referring to subdivision surfaces as the term is correctly applied as a field of computer graphics. I will use the term in future in the generic sense and it won't mean NURBS; it's not as general as you imply. I'd hoped this thing had died until michagl's shenanigans. Your minor faux pas certainly doesn't merit the attention it's getting, so sorry about that, but stuff needed to get corrected, and will continue to be corrected.

michagl
06-06-2005, 06:07 PM
Post edited by dorbie:

michagl, those kinds of attacks are not acceptable and literally make you sound like gollum. Tempted as I am to leave it up here just for comic value, it's gone and this thread is over.

Take a breather, smile, go for a walk in the sunshine.

brinck
06-07-2005, 12:23 AM
Originally posted by michagl:
as well only the true nurbs based models show any promise of being able to provide continuous derivitives at arbitrarily configured joints.
http://www.smlib.com/Manual/SubSrf.html

'One thing to note is that the subdivision surface math automatically makes smooth blends joining together the different tubes of this pipe the blend is not a separate operation that must be carefully computed, as it is with NURBS.'

'subdivision surfaces give you a surface that is guaranteed to be G1, or tangent-plane smooth over its entirety'

dorbie
06-07-2005, 12:32 AM
OK you posted while I was editing and deleting some unbelievable stuff, NOW the thread is over.