Hi everyone.
I’m fairly new to OpenGL/GLES and I’m taking on an unusual project: a realtime Mandelbox explorer on the iPad 2. I’m using shaders pulled from Rrrola’s “boxplorer” project.
His technique uses ray casting and puts all the computation in the fragment shader. That’s reasonable on modern desktops and laptops, but it can’t achieve a usable frame rate on iOS devices. I’d like to implement screen-door-style partial rendering: shade every Nth fragment, then on subsequent frames shade another Nth offset by one. Think even pixels, then odd pixels, then even again, and so on. For illustration I’ll assume just two passes. The fragment shader would look something like this:
uniform int pass; // driving app flips this on each frame: 0, 1, 0, 1, ...

void main() {
    // GLSL ES 1.00 has no integer % operator, so use float mod().
    // gl_FragCoord carries a half-pixel offset, hence the floor().
    float parity = mod(floor(gl_FragCoord.x) + floor(gl_FragCoord.y) + float(pass), 2.0);
    if (parity < 0.5) {
        discard;
    }
    // ... expensive ray-casting render for this pixel
}
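As a quick sanity check on the interleave (not shader code, just the same arithmetic in Python), the two-pass checkerboard should touch every pixel exactly once:

```python
# Sanity check: over two passes, the even/odd checkerboard
# should shade every pixel exactly once, and no pixel twice.

WIDTH, HEIGHT = 8, 6  # tiny stand-in framebuffer

def shaded(x, y, pass_index):
    # Mirrors the shader test: render when (x + y + pass) is odd.
    return (x + y + pass_index) % 2 == 1

coverage = [[0] * WIDTH for _ in range(HEIGHT)]
for pass_index in (0, 1):
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if shaded(x, y, pass_index):
                coverage[y][x] += 1

# Every pixel is shaded exactly once across the two passes.
assert all(c == 1 for row in coverage for c in row)
print("full coverage in 2 passes")
```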
I’ve run tests, and discard cuts my workload roughly in half, despite the general advice that it’s expensive. I suspect that advice is hardware-specific, which I’m fine with for now.
To do what I want, though, I need to draw on top of the last frame rather than clear the buffers. The OpenGL ES Analyzer tells me that if I don’t clear the framebuffer, the GPU has to load the previous frame’s contents back in from system memory, which is an expensive operation on this tile-based hardware. Would it be better to do all my drawing into a persistent offscreen buffer (a texture attached to an FBO?) and then draw that to the screen in a second pass?
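If the offscreen route works out, the on-screen pass could be trivial: a full-screen quad that samples the accumulation texture. A rough sketch of that composite fragment shader, where the uniform and varying names are my own placeholders:

```glsl
// Composite pass: draw a full-screen quad textured with the
// persistent offscreen color buffer. That buffer is never cleared
// between frames; each checkerboard pass overwrites only its half
// of the pixels, so the quad always shows the latest combined image.
precision mediump float;
uniform sampler2D accumTex;  // color texture attached to the offscreen FBO
varying vec2 vTexCoord;      // passed through from the quad's vertex shader

void main() {
    gl_FragColor = texture2D(accumTex, vTexCoord);
}
```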
The bigger metaquestion is how can I make this shader faster to increase interactivity on slower devices. If anyone has thoughts on that, I’d appreciate the help as well.
Charlie.