Fragment Program and Pixel Locations

Hello all,

I know that it’s theoretically not possible for a fragment shader to affect pixel location. But does anyone know any hacks or work-arounds for this?

Can fragment shaders write to arbitrary locations in texture memory?

Basically what I’m asking is, if we think of the output of the rendering pass as a 2D array, is it possible for a fragment shader to write to any arbitrary location in that array?

Absolutely no way.
Imagine a super-duper chip which hardwires one pipe per pixel on your screen :cool:.
As soon as you are in that pipe, you’re locked to that pixel’s final position.

The workaround would be to render a grid of GL_POINTS carefully adjusted to pixel centers (no FSAA!) and program the vertex pipeline to do the offset math.
If you have a texture-dependent displacement in mind, you’d need a GeForce 6 using vertex textures.
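
Roughly, the vertex program for such a point grid could look like the GLSL sketch below; a minimal sketch assuming the per-pixel offsets (in pixels) sit in a floating-point texture, with the texture name and viewport uniform made up for illustration:

// Sketch: vertex shader for a grid of GL_POINTS, one point per pixel.
// Reads a 2D offset (in pixels) from a floating-point texture via vertex
// texture fetch (GeForce 6 class hardware) and shifts the point in
// window space. "offsetTex" and "viewportSize" are assumed names.
uniform sampler2D offsetTex;
uniform vec2 viewportSize;

void main()
{
    // gl_Vertex.xy holds this point's pixel-centre position (0.5, 1.5, ...)
    vec2 texCoord = gl_Vertex.xy / viewportSize;
    vec2 offset   = texture2DLod(offsetTex, texCoord, 0.0).xy;

    vec2 displaced = gl_Vertex.xy + offset;   // new pixel position
    gl_Position    = vec4(displaced / viewportSize * 2.0 - 1.0, 0.0, 1.0);

    gl_TexCoord[0] = gl_MultiTexCoord0;       // pass the source coords through
}

The point size stays at one pixel, so each displaced point still covers exactly one pixel.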

Or, if your algorithm is able to store a unique inverse offset per pixel (“From where should I read?”) inside a floating-point texture, this could be done with a dependent texture lookup in a second pass of a fragment program.
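
A minimal sketch of that second, gathering pass, assuming the inverse offsets are stored in texels in a floating-point texture (the names and units here are assumptions):

// Sketch of the gathering pass: each pixel asks "from where should I read?"
// "sceneTex" is the first-pass result, "inverseOffsetTex" stores the
// per-pixel inverse offset in texels; both names and the units are assumed.
uniform sampler2D sceneTex;
uniform sampler2D inverseOffsetTex;
uniform vec2 texelSize;   // 1.0 / texture resolution

void main()
{
    vec2 uv      = gl_TexCoord[0].xy;
    vec2 offset  = texture2D(inverseOffsetTex, uv).xy;            // in texels
    gl_FragColor = texture2D(sceneTex, uv + offset * texelSize);  // dependent read
}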

Well, some limited fake-displacement seems to be possible.
On the NVIDIA developer site http://developer.nvidia.com they’re giving away some free chapters from the latest GPU Gems book. There’s a chapter which explains per-fragment displacement mapping.
Well, it’s not really as flexible as you may need, but maybe you should take a look.

That is a displaced texture-dependent read, not moving the pixel. You can attempt to move texture fetches, but ultimately the fragments you get are those of the primitive being rasterized. The net result may be a pretty good approximation of what you would have done with displaced pixels.

Thanks all, that’s pretty much what I’ve been trying, but, as you all point out, those methods are messy and not very robust.

What about vertex programs, can they write to arbitrary locations, in texture memory or otherwise? I suspect that they can, but I’m really just getting started with shaders.

Carmack talked about screen-space displacement mapping at the last QuakeCon.
You could project the offset vector from (steep) parallax or relief mapping into screen space. Then you’ve got the vector by which the current pixel should be moved. But how can you get the inverse offset (“From where should I read?”)?
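
For the forward half at least, a rough GLSL sketch could transform the tangent-space offset through the surface frame and project it, writing the result out as an offset vector map. The TBN varyings, matrix and texture names, and the plain parallax term are all assumptions, not a statement of how Carmack or anyone else does it:

// Sketch: turn a tangent-space parallax offset into a screen-space offset
// and write it out as an "offset vector map" (assumes a float render target).
varying vec3 tangentWS;     // per-fragment tangent frame, world space
varying vec3 bitangentWS;
varying vec3 normalWS;
varying vec4 positionWS;    // world-space surface position
varying vec3 viewDirTS;     // tangent-space view direction (into the surface)

uniform sampler2D heightTex;
uniform mat4  viewProjMatrix;
uniform vec2  viewportSize;
uniform float depthScale;

vec2 screenSpaceOffset(vec3 offsetTS)
{
    // tangent space -> world space
    vec3 offsetWS = tangentWS   * offsetTS.x
                  + bitangentWS * offsetTS.y
                  + normalWS    * offsetTS.z;

    // project the original and the displaced position
    vec4 p0 = viewProjMatrix * positionWS;
    vec4 p1 = viewProjMatrix * (positionWS + vec4(offsetWS, 0.0));

    vec2 ndcDelta = p1.xy / p1.w - p0.xy / p0.w;
    return 0.5 * ndcDelta * viewportSize;       // offset in pixels
}

void main()
{
    // a plain parallax-style offset, just to have something to project
    float h = texture2D(heightTex, gl_TexCoord[0].xy).r * depthScale;
    vec3  v = normalize(viewDirTS);
    vec3  offsetTS = vec3(v.xy / max(-v.z, 0.0001) * h, -h);

    gl_FragColor = vec4(screenSpaceOffset(offsetTS), 0.0, 1.0);
}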

there is a boiling desert heat type effect that the playstation 2 seems to do very easily, i.e. the scene is swimming globally.

i figure for non-ps2 hardware it would be achieved by rendering to a texture slightly larger than the screen and then, after you are through, writing the texture to the frame buffer while modulating the fragment reads (sketch below).

for the ps2 though it seems to be a built in effect.
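
something like this minimal full-screen pass is what i have in mind, assuming the scene was already rendered into a slightly oversized texture (the texture name, the time uniform and the wobble constants are just guesses):

// Minimal sketch of the "swimming scene": the scene has already been
// rendered into a texture slightly larger than the screen, and this
// full-screen pass reads it back with a wobbling offset.
uniform sampler2D sceneTex;
uniform float time;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    // small sinusoidal perturbation of the read position
    uv.x += 0.004 * sin(uv.y * 40.0 + time * 3.0);
    uv.y += 0.004 * sin(uv.x * 40.0 + time * 2.0);
    gl_FragColor = texture2D(sceneTex, uv);
}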

would it be worthwhile to have programmable frame buffer swapping units?

sincerely,

michael

Originally posted by LarsMiddendorf:
Carmack talked about screen-space displacement mapping at the last QuakeCon.
You could project the offset vector from (steep) parallax or relief mapping into screen space. Then you’ve got the vector by which the current pixel should be moved. But how can you get the inverse offset (“From where should I read?”)?

are you sure??? wouldn’t this leave all sorts of holes? i couldn’t imagine what kind of technique you could use to ensure that no holes would be created. and how do you assign proper depth values? or are they talking about doing this before the fragment shader, entirely in hardware?

I have some interesting thoughts for being able to do sort of a screen space displacement mapping, where we render different offsets into the screen and then go back and render the scene, warping your things as necessary for that, which would solve the T-junction cracking problem that you get when using real displacement mapping across surfaces where the edges don’t necessarily line up.
http://www.gamedev.net/community/forums/topic.asp?topic_id=266373

This is an interesting idea. But how do you get the correct offset if you only know how far the current pixel should be moved? Perhaps some kind of filtering or flow simulation?

It initially seems possible with an image based approach, but the question of where you read the image from is an embodiment of a complex problem, though perhaps not for the reasons assumed. You have an offset vector that could approximate the destination, depending on how the assets are represented (you can choose to have the polygon hull enclose an entirely negatively offset displacement to the true surface, for example). The real issue is resolution of offset fragment occlusion. Each sample gets a single read on the subsequent pass and a single offset value. Cracks are a non-issue due to texture filtering; every fragment gets hit. The silhouette is described by alpha: the rgb image is actually an RGBA, with the alpha term determining the final displaced silhouette.

So making this work is actually a question of rendering your offset vector map in a way that resolves the occluded offset issue. That may be possible when rendering the offset vectors to screen space.

Without resolving occlusion in the offset vector map, I think you could fix this with a depth map and multiple iterative image fetches, almost like a search; it’s no worse than the iteration through the map now for accurate tracing. It just does it in screen space.
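
A very rough sketch of that search, treating it as a fixed point iteration on a forward offset map; the texture names, iteration count and the assumption that the iteration converges are mine, not a worked out method:

// Rough sketch: each destination pixel searches for the source pixel whose
// stored forward offset would land on it, i.e. solve src + offset(src) = dest
// by fixed point iteration.
uniform sampler2D sceneTex;    // first-pass colour (RGBA, alpha = silhouette)
uniform sampler2D offsetTex;   // per-pixel forward offset, in uv units

void main()
{
    vec2 dest = gl_TexCoord[0].xy;   // where we are writing
    vec2 src  = dest;                // initial guess: no displacement

    for (int i = 0; i < 8; ++i)
    {
        vec2 fwd = texture2D(offsetTex, src).xy;
        src = dest - fwd;
    }

    gl_FragColor = texture2D(sceneTex, src);
}

Where the forward map is not one to one (the occluded offset case above), the iteration just settles on one candidate, so by itself it does not resolve that problem.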

Definitely doable, I’d say. Worth it? I dunno… One problem would be discovered fragments on the same model, for example a body displacing inward to reveal a hidden limb. Perhaps that could be fixed by a hull with all positive displacements, but it’s a second-order effect. The magnitude of the displacement and required search is a performance-limiting factor, and a positively displaced hull has no starting vector for the displacement search.

i lost track of this topic… not that it went too far.

anyhow, here is an answer to my own question that i ran into today:

http://www.ati.com/developer/gdc/Tatarchuk-ParallaxOcclusionMapping-FINAL_Print.pdf

i wouldn’t call this ‘screen space’ displacement mapping (which would be awesome and probably voxel related). i guess it is texture space displacement mapping. the silhouettes always reduce this sort of stuff to little more than a visual hack for me. bump mapping at the end of the day is really only good for speckle type surfaces, and faces facing the camera.

edit: traditional silhouetteless displacement bump mapping also works well with inlays such as on jewelry where the silhouette can easily be hidden… but let’s talk about silhouette bump mapping please!

i dunno, all the images in that paper were clipped at the silhouettes, but since it uses negative displacement mapping, if the outer geometry completely encompassed the inner geometry (a hull, as dorbie suggested)… it seems to me like it might be possible to use this texel space raytracing approach to achieve proper silhouettes by assigning alpha values to silhouette rays which escape the surface… you might even be able to do some sort of antialiasing on the silhouettes.
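
something like this hedged sketch is what i’m picturing; the texture names, step count, depth scale and the single non-tiling [0,1] patch are all my assumptions layered on the paper’s approach, not code from it:

// Sketch of the "escaping ray gets alpha" idea on a parallax-occlusion
// style march: the height field stores depth below the hull, and a ray
// that leaves the patch without ever entering the surface is written out
// transparent, so blending/alpha test cuts the silhouette.
uniform sampler2D heightTex;   // depth below the hull surface, in [0,1]
uniform sampler2D colourTex;
const int   numSteps   = 16;
const float depthScale = 0.05;

varying vec3 viewDirTS;        // tangent-space view direction (into the surface)

void main()
{
    vec3  v      = normalize(viewDirTS);
    vec2  stepUV = (v.xy / max(-v.z, 0.0001)) * depthScale / float(numSteps);
    float stepD  = 1.0 / float(numSteps);

    vec2  uv      = gl_TexCoord[0].xy;
    float depth   = 0.0;
    bool  hit     = false;
    bool  escaped = false;

    for (int i = 0; i < numSteps; ++i)
    {
        if (!hit && !escaped)
        {
            if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
                escaped = true;                            // ray left the patch
            else if (depth >= texture2D(heightTex, uv).r)
                hit = true;                                // ray entered the surface
            else
            {
                uv    += stepUV;
                depth += stepD;
            }
        }
    }

    // escaped (or never-hit) rays become transparent silhouette pixels
    gl_FragColor = vec4(texture2D(colourTex, uv).rgb, (hit ? 1.0 : 0.0));
}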

maybe i should give that paper another look over.

sincerely,

michael

edit: sorry dorbie about not noticing your suggestion to use alpha fragments on the silhouettes. (admittedly in a hurry i sort of skimmed your post because it looked wordy and complicated) maybe someone here might find that paper interesting. i’m considering looking into doing this… i think it would work very well with the system i’m building right now… i need some way to sop up gpu cycles as it is.

oooo, i find this an exciting prospect.

anyone think it might be possible to use a thin filtered 3d texture (say 512x512x8) with this sort of technique to lay down a ‘pock-marked’ surface?

about the most complicated microscopic terrain i can think of is soft soil/mud rutted up by horses… or maybe a loose gravel road.

so might it be possible to capture a loose gravel road in a 4~8 pixel deep filtered 3D texture with this technique?

could even do a scaly type surface maybe.

You could possibly get correct silhouettes by transforming the three edges of the triangle into texture space and killing those pixels that lie outside of the triangle. That would be three DP3 instructions and a KIL.
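
In GLSL terms (with discard standing in for KIL) a sketch might look like this; the edge uniforms and the offset texture that produces the displaced coordinate are assumed:

uniform sampler2D colourTex;
uniform sampler2D offsetTex;   // whatever produced the displaced lookup (assumed)
uniform vec3 edge0;            // texture-space edge line equations (a, b, c)
uniform vec3 edge1;
uniform vec3 edge2;

void main()
{
    // displaced texture coordinate from some earlier parallax/offset step
    vec2 uvDisplaced = gl_TexCoord[0].xy + texture2D(offsetTex, gl_TexCoord[0].xy).xy;

    // one dot product per edge: the "three DP3 and a KIL"
    vec3 p = vec3(uvDisplaced, 1.0);
    if (dot(edge0, p) < 0.0 || dot(edge1, p) < 0.0 || dot(edge2, p) < 0.0)
        discard;

    gl_FragColor = texture2D(colourTex, uvDisplaced);
}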

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012842

Originally posted by LarsMiddendorf:
You could possibly get correct silhouettes by transforming the three edges of the triangle into texture space and killing those pixels that lie outside of the triangle. That would be three DP3 instructions and a KIL.
i was thinking yesterday after i logged off that there is probably no straightforward way to determine silhouette pixels after displacement. i could be very wrong… but really tangent space is flat locally i believe… so there seems to be no way of determining in the fragment shader alone if a ray cast escapes tangent space.

btw, what is a ‘kill’ and how do you do it?

Originally posted by knackered:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012842
wow! this is simply awesome… i didn’t realize so much could be achieved so flawlessly…

i definitely want to do this. the latest system i’m working on (which i’ve been most vocal about in this forum) is the perfect platform for this technology. i think i will use a multi-pass shader with mipmapping to vary the effects and sampling frequency as suggested in the paper i posted.

did fpo never release any high-level shader code for this effect?

it would be very cool if fpo would chime in here.

i have a real-time clod nurbs system with geometric displacement mapping running for macroscopic displacement. tangent space on the fly is practically a necessity for normal generation anyhow. the density of the screen space vertices can be leveraged to ensure that the triangles requiring the deepest paths / higher sampling frequency can be allotted fewer pixels.

anyhow, i would be very interested in either some high-level cg/glsl code, or a fairly detailed breakdown of the method.

sincerely,

michael

potmat, might i ask what specifically you had in mind for this, or was this sort of a generic kind of inquiry?

personally i don’t think the texture kludge is all that ugly, but this clearly depends on the objective, and of this you would certainly know better than i. since hacks are very task specific, some task specifics might be helpful. by that i mean the higher level objective beyond the lower level implementation of “moving pixels”, if such a distinction can be made.

respectfully,
bonehead

as far as potmat is concerned, there is no literal way to move fragments with a frag shader… these are viable alternatives though. remember a forum isn’t here just for potmat or any individual; it is for everyone, and for positive discourse.

bbs forums bother me a bit i must admit. every time i see a new response to a thread i’m happy to see discussion moving (wherever it goes) but i still can’t resist crossing my fingers and praying that the new posts are positive.

opengl isn’t the product of any particular society for any particular society. just loosen up and enjoy yourself. why would you want to force everyone to act like you, as long as the intended direction is positive?

if potmat wanted something more specific geared to their application, then, just like everyone else, they are free to chime in.

FPO’s work is very exciting in my opinion. if he is not going to grace us with the promised paper and demo, then at the least i think he owes us an explanation of his disposition.

i have a vague idea of what is going on with his routine, but it would save me and probably a lot of others some heartache if fpo would be more giving.

that is all i’m waiting for.

if people like, i would be quite happy to start a discussion of the ins and outs of this approach.

i’m especially interested in how fpo’s curvature metric is derived. is it literal gaussian curvature or what, and how is it approximated and utilized?

i can make guesses, but i also believe that such forums should be used to make people’s lives easier for the betterment of the scientific knowledge of the human race in general, and not just as a last resort after the internet and pricey textbooks and commercial books have been exhausted… i mean it’s not as if this forum is so flooded with discussion that it can’t be managed. it’s pretty dry actually, in my opinion.

sincerely,

michael

sorry, but i felt like i had to add some more.

i’m personally very excited about this business. i can see a sizable chunk of the future of computer graphics in it.

we can’t throw triangles at the rasterizer that are no bigger than a pixel… that defeats the purpose of using triangles in the first place. but on the other hand we need believable silhouettes right down to the pixel.

i believe the future is in environments that are so large in scope and scalability that it is preposterous to precompute everything and store it on disk… that is, every vertex should be sampled at run-time, most likely from some infinitely scalable parametric base geometry (nurbs control mesh) and a combination of various sorts of map-encoded data.

i’ve done a lot of work to manage clod systems, and my focus has changed in the meantime from trying to beat static precomputed algorithms to simply managing an environment where all data is sampled at run-time. that is where the real bottleneck is. it’s not necessarily about efficiently displaying data as much as it is about retrieving the data.

this texture space displacement algorithm allows the need for high resolution detail in the geometry to be pushed back even further, meaning that run-time sampling can be relaxed because you can rely on the fragment shader to pick up the slack in the geometry department (which is especially helpful once you get into deformable run-time sampled geometry).

this fragment geometry is awesome really; it’s like a little barycentric lattice deformer. how long will we have to wait until hardware supports robust vertex lattice deformations? but for fragment shaders we have them right here.

i just think this is awesome.

i think this linear/binary sampling should be handled in a single instruction entirely on hardware.
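
for reference, the kind of linear-then-binary search i mean looks roughly like this in GLSL; the step counts, names and depth-below-surface parametrization are assumptions in the spirit of relief mapping / the ATI paper, not code copied from either:

// Sketch of the linear + binary height field search: a coarse linear march
// brackets the intersection, then a short binary search refines it.
uniform sampler2D heightTex;   // depth below the surface, in [0,1]
uniform sampler2D colourTex;

varying vec3 viewDirTS;        // tangent-space view direction (into the surface)

const float depthScale = 0.05;

vec2 findIntersection(vec2 uvStart, vec2 uvSpan)   // uvSpan = full parallax offset
{
    const int linearSteps = 16;
    const int binarySteps = 5;

    // linear search: march until the ray drops below the height field
    float stepD = 1.0 / float(linearSteps);
    float depth = 0.0;
    for (int i = 0; i < linearSteps; ++i)
    {
        if (depth < texture2D(heightTex, uvStart + uvSpan * depth).r)
            depth += stepD;
    }

    // binary search inside the bracketed interval [depth - stepD, depth]
    float lo = max(depth - stepD, 0.0);
    float hi = depth;
    for (int i = 0; i < binarySteps; ++i)
    {
        float mid = 0.5 * (lo + hi);
        if (mid < texture2D(heightTex, uvStart + uvSpan * mid).r)
            lo = mid;
        else
            hi = mid;
    }

    return uvStart + uvSpan * (0.5 * (lo + hi));
}

void main()
{
    vec3 v    = normalize(viewDirTS);
    vec2 span = (v.xy / max(-v.z, 0.0001)) * depthScale;
    gl_FragColor = texture2D(colourTex, findIntersection(gl_TexCoord[0].xy, span));
}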

i would really like to talk about how this silhouette-capable curvature-based shader differs from, say, the shader outlined in the ATI paper i referenced earlier.

i’m assuming the ray being cast is parabolic rather than linear. the curvature ‘bends’ the shape of the ray.

i would like to know if the curvature can be sampled from a nurbs surface straightforwardly via a derived surface.

the ATI presentation says that the u and v vectors of tangent space are derived from ‘b’ and ‘t’ basis vectors… does some relationship between these vectors have something to do with the curvature of the surface?

any other ideas?

i fully intend to do whatever investigation i can in my free time. i will have to drag out some books and hit the internet i guess. i wish i could give this investigation a higher priority. that is why i’m hoping for some leads here.

sincerely,

michael