This topic was inspired by the ‘approximating gaussian blur’ thread… but I thought it better to post it independently. The basic question is the ins and outs of sampling a very large number of times in a fragment shader.
I’ve been trying to get around to doing a radial disk fog effect with oscillating colours.
Edit: the colours don’t actually move like a spinning disk – though that would probably be a much cooler use of the effect, and would come virtually for free. I just meant that the fog would oscillate between light and dark regions, i.e. regions of greater and lesser atmospheric scattering. Another cool use for the effect might be to sample across a projected cloud shadow or something. Realistically there is less fog under heavy cloud cover, because fog is just light reflecting off atmospheric particles, so where there is less light there should be less fog.
My idea so far has been to set up four intersection tests between the eye-to-pixel line segment and 2D planes, then lerp between the intersection points and sum the outputs to get the total amount of fog along the segment from the eye to the pixel.
But with all this talk of sampling so intensely, I have to ask whether it might be just as good to simply sample along the line segment many times, based on its length, against a colour-wheel type texture and sum those samples to get the final fog value.
I guess I will try both… but could anyone give me a guess as to which would be the more fruitful approach? I figure the sampling would introduce some dither-type noise, which might actually prove more visually appealing than a perfect analytic result, or might be totally distracting.
I’m just trying to get a handle on whether all of this ‘super’ sampling is productive enough to apply to, say, every rasterized fragment.
Edit: it also just occurred to me that the colour-wheel approach would give much greater control over the actual colour of the fog along the radial axis, for a more variable effect. I’m also curious whether mipmapping could somehow be leveraged so that texels sampled further from the eye use coarser levels. Must fragment shaders do mipmapping manually, or is the fragment’s depth used for mip selection under special circumstances? How do you do mipmapping manually in a shader?