While thinking of some optimizations that I can make on my fragment program, I began to wonder how expensive texture2D() sampler calls are. I have to make 4 sampler calls (with GL_NEAREST) per fragment.
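For reference, the reads in question look roughly like this (the texture names are just placeholders for my actual ones):

```glsl
// Fragment shader: four GL_NEAREST lookups, all at the same coordinate.
uniform sampler2D positionTex;
uniform sampler2D normalTex;
uniform sampler2D albedoTex;
uniform sampler2D depthTex;
varying vec2 texCoord;

void main()
{
    vec4 position = texture2D(positionTex, texCoord);
    vec4 normal   = texture2D(normalTex,   texCoord);
    vec4 albedo   = texture2D(albedoTex,   texCoord);
    vec4 depth    = texture2D(depthTex,    texCoord);

    gl_FragColor = albedo; // actual shading math omitted
}
```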
What if I could read and write the data in a huge array and access that instead of textures?
The question is: Would GLSL allow me to declare arrays of incredibly large size? The GLSL Language Spec doesn’t mention a limit.
Would it be one of those “driver dependent” limitations?
I am not sure whether the array would even need to be a uniform; my application code has no need to read or write it. I would, however, need one vert+frag program set to write to the array while a second set of vert+frag programs reads from it… I don’t know if that is possible.
I guess I am just wishing for something that has a better fit for full screen deferred rendering.
As it is now, I have to write to several different textures in order to store all the data I need. That means multiple passes over the scene’s geometry during writing, and multiple non-interpolated sampler calls (all at the same coordinates) during reading.
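Each of those write passes ends up being a trivial shader of its own, something like this sketch (one shader per attribute, with the whole scene re-rendered for each):

```glsl
// Pass N: re-render the entire scene just to store one attribute,
// here the surface normal packed into the [0,1] color range.
varying vec3 worldNormal;

void main()
{
    gl_FragColor = vec4(normalize(worldNormal) * 0.5 + 0.5, 1.0);
}
```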
It just seems like a different system would fit better… Maybe a future OpenGL development could be: Textures with an arbitrary number of data channels per pixel.