I’m developing a 2D HTML5 game that includes a water rippling effect as a background (so it does not directly interact with game objects). I was able to successfully implement and tweak the effect in JavaScript, but the performance was much too slow to be acceptable. As this effect works by performing calculations per individual pixel in the background, I rewrote the effect to work as a series of fragment shaders. It does basically what it’s supposed to do, but with some unexpected differences from the original effect. My hypothesis is that the reason must lie in something I don’t understand about the way my shaders are executed (and this is the first work I’ve done with GL, so I expect there is a lot I don’t understand).
I don’t believe excerpts of my code should be necessary to answer my question, but if anyone wants to see it, I’d be happy to post the relevant bits as well as the source material explaining the algorithm that I use. In case it’s relevant, I am using WebGL, not standard OpenGL.
In broad terms, the algorithm works by assigning a height value for each pixel of the image meant to represent the surface of the water and storing the value in a matrix the same size as the image. (As I’m using fragment shaders, this matrix is actually a GL texture with grayscale values representing the data.) Each frame, the updated value for the height of a given pixel is calculated using the average of the heights of the pixels around it. The waves themselves are an emergent property that arise once you create a disturbance somewhere in the height values. There is more to the effect, but this is the piece that is not acting quite as it should.
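To make that concrete without posting my real code, here is a simplified CPU sketch of the update step as I originally wrote it in JavaScript (the names and the edge-clamping are illustrative, not taken from my actual implementation):

```javascript
// Simplified sketch of the per-pixel update (illustrative, not my real code).
// `heights` is a width*height Float32Array of current water-surface heights.
// Each pixel's new height is the average of its four orthogonal neighbors,
// with coordinates clamped at the image edges.
function updateHeights(heights, width, height) {
  const next = new Float32Array(heights.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const left  = heights[y * width + Math.max(x - 1, 0)];
      const right = heights[y * width + Math.min(x + 1, width - 1)];
      const up    = heights[Math.max(y - 1, 0) * width + x];
      const down  = heights[Math.min(y + 1, height - 1) * width + x];
      next[y * width + x] = (left + right + up + down) / 4;
    }
  }
  return next; // the new heights replace the old ones at the end of the frame
}
```

A single disturbance (one raised pixel) spreads into its neighbors on the next frame, which is where the emergent wave behavior comes from. The fragment-shader version does the same thing, except the neighbor reads are `texture2D` lookups into the grayscale height texture.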
It almost works as it should. There are a few subtle things that don't look exactly right, but the most telling is the appearance of waves that, rather than expanding outward and diminishing, slowly collapse inward on themselves. Instead of the waves exerting force on the undisturbed water around them, the undisturbed water presses in on the waves from the outside until they are smothered out; the behavior is essentially inside out. These pockets of inside-out waves also tend to drift very slightly as a whole, as though they were solid objects.
I'm not aware of anything that should cause the algorithm to behave differently when executed in a GL context. As I understand it, fragment shaders perform their per-fragment calculations all at once, rather than one at a time as a CPU would, but they all read from the same data, which is not changed until the end of the frame. Tweaking values has not changed the unwanted behavior. Is there something relevant about the GL pipeline that I'm missing here?
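To illustrate the distinction I mean: if the old height values were somehow being overwritten mid-pass, a sequential in-place update would give different results than one that reads only last frame's data. A toy 1D sketch of the two cases (hypothetical helpers, not my actual code):

```javascript
// Toy 1D illustration (hypothetical, not my real code): each interior cell
// becomes the average of its two neighbors.

// Double-buffered: every read sees only the OLD values.
function stepBuffered(src) {
  const dst = src.slice();
  for (let i = 1; i < src.length - 1; i++) {
    dst[i] = (src[i - 1] + src[i + 1]) / 2;
  }
  return dst;
}

// In-place: arr[i - 1] may already hold a NEW value from this same pass,
// so later cells see a mix of old and new data.
function stepInPlace(arr) {
  for (let i = 1; i < arr.length - 1; i++) {
    arr[i] = (arr[i - 1] + arr[i + 1]) / 2;
  }
  return arr;
}
```

Starting from `[0, 4, 0, 0, 0]`, the buffered step spreads the disturbance rightward into `[0, 0, 2, 0, 0]`, while the in-place step flattens it to all zeros in a single pass. My assumption is that the shader version behaves like the buffered case, since the texture I sample from shouldn't change until the frame is done, but I may be wrong about that.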
Thanks!