sorry, but i felt like i had to add some more.
i’m personally very excited about this business. i can see a sizable chunk of the future of computer graphics in it.
we can’t throw triangles at the rasterizer that are no bigger than a pixel… that defeats the purpose of using triangles in the first place. but on the other hand we need believable silhouettes right down to the pixel.
i believe the future is in environments so large in scope and scalability that it is preposterous to precompute everything and store it on disk… that is, every vertex should be sampled at run-time, most likely from some infinitely scalable parametric base geometry (a nurbs control mesh) plus a combination of various sorts of map-encoded data.
i’ve done a lot of work managing clod systems, and my focus has changed in the meantime from trying to beat static precomputed algorithms to simply managing an environment where all data is sampled at run-time. that is where the real bottleneck is. it’s not necessarily about efficiently displaying the data so much as it is about retrieving it.
this texture space displacement algorithm allows the need for high resolution detail in the geometry to be pushed back even further, meaning that run-time sampling can be relaxed because you can rely on the fragment shader to pick up the slack in the geometry department (which is especially helpful once you get into deformable run-time sampled geometry).
this fragment geometry is awesome really, it’s like a little barycentric lattice deformer. who knows how long we’ll have to wait for hardware to support robust vertex lattice deformations, but for fragment shaders we have them right here.
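to spell out what i mean by that, here is a minimal sketch (in python rather than shader code, with made-up 3d offsets): the rasterizer hands every fragment a set of barycentric weights, so per-vertex displacements interpolate across the triangle for free.

```python
# barycentric blend of three per-vertex displacement vectors, i.e.
# the interpolation a fragment shader receives automatically.

def deform(w, d0, d1, d2):
    # w: (w0, w1, w2) barycentric weights, summing to 1
    # d0..d2: per-vertex displacement vectors as (x, y, z) tuples
    return tuple(w[0] * a + w[1] * b + w[2] * c
                 for a, b, c in zip(d0, d1, d2))

# a fragment halfway toward vertex 0 gets a blend of all three offsets
print(deform((0.5, 0.25, 0.25), (0, 0, 0), (4, 0, 0), (0, 4, 0)))
```

that weighted blend is the whole “lattice” part; the displacement itself still comes from whatever map you sample per fragment.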
i just think this is awesome.
i think this linear/binary sampling should be handled in a single instruction entirely on hardware.
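for reference, the linear-then-binary sampling i mean looks roughly like this (a minimal sketch in python rather than shader code; the heightfield, the ray, and the step counts are all made up for illustration):

```python
# linear search coarsely marches the ray until it first dips below the
# heightfield, then a binary search refines the hit point. this is the
# two-phase sampling used in relief-mapping style shaders.

def height(u):
    # hypothetical heightfield: a single bump centered at u = 0.5
    return 0.25 * max(0.0, 1.0 - abs(u - 0.5) * 4.0)

def ray_h(t):
    # hypothetical view ray: enters at height 0.3, exits at height 0.0
    return 0.3 * (1.0 - t)

def trace(linear_steps=16, binary_steps=8):
    # linear phase: find the first sample at or below the surface
    prev_t = 0.0
    t = 0.0
    for i in range(1, linear_steps + 1):
        t = i / linear_steps
        if ray_h(t) <= height(t):
            break
        prev_t = t
    # binary phase: bisect between last-above and first-below samples
    lo, hi = prev_t, t
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        if ray_h(mid) <= height(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

in a real shader both loops hit the same texture over and over, which is why a fused hardware lookup for it would be such a win.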
i would really like to talk about how this silhouette capable curvature based shader differs from, say, the shader outlined in the ATI paper i referenced earlier.
i’m assuming the ray being cast is parabolic rather than linear. the curvature ‘bends’ the shape of the ray.
i would like to know if the curvature can be sampled from a nurbs surface straightforwardly via a derived surface.
the ATI presentation says that the u and v vectors of tangent space are derived from ‘b’ and ‘t’ basis vectors… does some relationship between these vectors encode the curvature of the surface?
any other ideas?
i fully intend to do whatever investigation i can in my free time. i will have to drag out some books and hit the internet i guess. i wish i could give this investigation a higher priority. that is why i’m hoping for some leads here.
sincerely,
michael