Array size limits

While thinking of some optimizations that I can make on my fragment program, I began to wonder how expensive texture2D() sampler calls are. I have to make 4 sampler calls (with GL_NEAREST) per fragment.
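
Something along these lines, just for concreteness (sampler and varying names invented):

```glsl
// Hypothetical sketch: four GL_NEAREST lookups at the same coordinate,
// one per data texture. Names are made up for illustration.
uniform sampler2D positionTex;
uniform sampler2D normalTex;
uniform sampler2D diffuseTex;
uniform sampler2D specularTex;

varying vec2 texCoord;

void main()
{
    vec4 pos  = texture2D(positionTex, texCoord);
    vec4 nrm  = texture2D(normalTex,   texCoord);
    vec4 diff = texture2D(diffuseTex,  texCoord);
    vec4 spec = texture2D(specularTex, texCoord);
    gl_FragColor = diff; // ...shading math using pos/nrm/spec would go here
}
```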

What if I could read and write the data in a huge array and access that instead of textures?

The question is: Would GLSL allow me to declare arrays of incredibly large size? The GLSL Language Spec doesn’t mention a limit.

Would it be one of those “driver dependent” limitations?

Use textures :)

If you use uniforms, they consume “constant registers”, a limited pool of memory on the GPU; a given GPU might have 256 or 512 of them, and the exact count varies by hardware.
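
To make that concrete, a declaration like this (name and size invented) can compile fine but fail to link once it exceeds the driver’s limit:

```glsl
// A large uniform array eats into the constant-register pool. The driver
// enforces GL_MAX_FRAGMENT_UNIFORM_COMPONENTS even though the GLSL spec
// itself names no fixed array-size limit.
uniform vec4 lookupTable[1024]; // 4096 float components; may exceed the limit
```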

Do what he said.

There is also the new extension EXT_texture_buffer_object. It lets you create huge (up to 2^27 texels) one-dimensional textures directly from buffer objects.
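
Shader-side it would look something like this (assuming EXT_gpu_shader4 for the fetch function; names invented, and the application must bind a buffer object to the buffer texture):

```glsl
#extension GL_EXT_gpu_shader4 : require

uniform samplerBuffer dataBuffer;

void main()
{
    // Assumes a 1024-pixel-wide target, purely for the indexing example.
    int index = int(gl_FragCoord.y) * 1024 + int(gl_FragCoord.x);
    vec4 value = texelFetchBuffer(dataBuffer, index);
    gl_FragColor = value;
}
```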

I am not sure the arrays would need to be uniforms. My application code has no need to read or write the array, though I would need one vert+frag program set to write to it while a second vert+frag program set reads from it… I don’t know if that is possible.
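
Render-to-texture already gives you roughly that split: one program writes into a texture through an FBO color attachment, and a second program samples it in a later pass. A rough sketch of the two fragment shaders (the FBO setup lives in application code and is not shown):

```glsl
// --- Pass 1 fragment shader: write data into a texture via an FBO ---
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // whatever data you want to store
}

// --- Pass 2 fragment shader (a separate program): read the data back ---
uniform sampler2D dataTex; // the texture filled by pass 1
varying vec2 texCoord;

void main()
{
    gl_FragColor = texture2D(dataTex, texCoord);
}
```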

I guess I am just wishing for something that is a better fit for full-screen deferred rendering.

As it is now, I have to write to several different textures to store all the data I need, which means multiple passes over the scene’s geometry when writing and multiple non-interpolated sampler calls (at the same coordinates) when reading.

It just seems like a different system would fit better… Maybe a future OpenGL development could be textures with an arbitrary number of data channels per pixel.

What about multiple render targets (MRT)?
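
With MRT you attach several textures to one FBO, select them with glDrawBuffers, and a single fragment shader fills all of them in one geometry pass, roughly like this (attachment layout invented):

```glsl
varying vec3 worldPos;
varying vec3 worldNormal;

void main()
{
    // Requires an FBO with three color attachments enabled via glDrawBuffers.
    gl_FragData[0] = vec4(worldPos, 1.0);    // e.g. position
    gl_FragData[1] = vec4(worldNormal, 0.0); // e.g. normal
    gl_FragData[2] = vec4(1.0);              // e.g. material color (placeholder)
}
```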

Really? I hadn’t heard of that. Thanks, I will look into it.
