I want to implement a smudge effect like Photoshop's (I can't paste a link here, but a quick YouTube search will turn up examples).
I think the ideas outlined in this article point me in the right direction, but I want to be sure before digging into the problem (again, I can't paste a link; just google "how to implement smudge and stamp tools" on lostingfight).
What would be the right approach to implementing this in OpenGL?
I know I would have to make a "stamp texture" by reading the pixels around the finger-tap location. This can be done by changing the viewport and projection matrix so that only the pixels around the tap location are drawn, and rendering them into a separate buffer.
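To make the capture step concrete, here is a sketch of the clamping math I have in mind, in plain C (the struct and function names are my own, not from any API). The resulting rectangle would feed something like glCopyTexSubImage2D() to grab the pixels under the brush into the stamp texture:

```c
#include <assert.h>

/* Hypothetical helper: clamp a brush-sized square centered on the tap
 * point to the canvas bounds, so the stamp copy never reads pixels
 * outside the framebuffer. Names (StampRect, stamp_rect_around) are
 * my own invention for this sketch. */
typedef struct { int x, y, w, h; } StampRect;

static StampRect stamp_rect_around(int tap_x, int tap_y, int brush,
                                   int canvas_w, int canvas_h) {
    StampRect r;
    r.x = tap_x - brush / 2;
    r.y = tap_y - brush / 2;
    r.w = brush;
    r.h = brush;
    /* Clamp the left/bottom edges, shrinking the rect accordingly. */
    if (r.x < 0) { r.w += r.x; r.x = 0; }
    if (r.y < 0) { r.h += r.y; r.y = 0; }
    /* Clamp the right/top edges. */
    if (r.x + r.w > canvas_w) r.w = canvas_w - r.x;
    if (r.y + r.h > canvas_h) r.h = canvas_h - r.y;
    return r;
}
```

With the canvas bound as the read framebuffer and the stamp texture bound, the copy itself would then be along the lines of `glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, r.x, r.y, r.w, r.h)` (which OpenGL ES 2.0 does support), if I'm understanding that call correctly.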
But where do I go from here? How can I composite the original image with the smudge stamps I generate while dragging? Does it make any sense to do the composition in the fragment shader? I don't think so. How about compositing the image on the CPU and then re-uploading the result to the GPU via glTexSubImage2D()?
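For what it's worth, my current thinking is to keep everything on the GPU: render the stamp quad into the canvas texture through an FBO, and let a fragment shader do the blend, avoiding the CPU round trip. Per pixel, the blend I have in mind is an ordinary lerp weighted by the stamp's alpha and a smudge strength, sketched here in plain C (the shader would presumably use `mix()`; the function name and parameters are mine):

```c
#include <assert.h>

/* Hypothetical per-channel blend: what a fragment shader's
 * mix(canvas, stamp, stamp_alpha * strength) would compute.
 * 'strength' plays the role of the smudge pressure/opacity. */
static float smudge_blend(float canvas, float stamp,
                          float stamp_alpha, float strength) {
    float t = stamp_alpha * strength;
    return canvas * (1.0f - t) + stamp * t;
}
```

At strength 0 the canvas is untouched; at strength 1 with an opaque stamp, the stamp pixel fully replaces the canvas pixel, which matches my intuition of how the smudge should behave at the extremes.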
What happens when I finish one smudge stroke and want to draw another? It seems I should use the result of the first as input to the second, right?
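If that's right, the usual ping-pong arrangement seems to apply: two canvas textures, each attached to its own FBO, where every pass samples the previous result and renders into the other, then the roles swap. A minimal sketch of the bookkeeping I imagine (the GL object IDs are placeholders; the struct and function names are my own):

```c
#include <assert.h>

/* Hypothetical ping-pong pair: 'src_tex' is the texture sampled by the
 * shader, 'dst_tex' is the one attached to the currently bound FBO.
 * After each pass the roles swap, so the next stroke reads what the
 * last one wrote. */
typedef struct { unsigned src_tex, dst_tex; } PingPong;

static void ping_pong_swap(PingPong *p) {
    unsigned tmp = p->src_tex;
    p->src_tex = p->dst_tex;
    p->dst_tex = tmp;
}
```

In GL terms, I assume the swap would be followed by re-binding the FBO whose color attachment is the new `dst_tex` before drawing the next stamp.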
Can anyone point me in the right direction?
By the way, I'm targeting an iPad Air running iOS 7, with OpenGL ES 2.0.