Programmability in the Pixel Transfer Pipeline?

Is there any kind of programmability in the Pixel Transfer Pipeline in either OpenGL 3.x or 4.x, or in any new extension?

I’m asking because I want to implement ARB_imaging in hardware (I have old source code that used ARB_imaging heavily and ran hardware-accelerated on SGIs years ago).

If it’s not possible, I guess the only way is to get the programmability through textures, but that complicates the implementation (deeper modification of the original programs, and the need to use temporary textures, which must be power-of-two sized).

Thanks a lot for any suggestions.

Is there any kind of programmability in the Pixel Transfer Pipeline in either OpenGL 3.x or 4.x, or in any new extension?

Not really.

and the need to use temporal textures which must be power of 2

If you’re using GL 3 or greater, textures don’t have to be power-of-two. Technically, that restriction was removed in GL 2.0, but some hardware that exposes 2.0 couldn’t really handle it for all textures. All hardware that exposes GL 3.0 or greater can handle it.
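So the padding dance is only a fallback for pre-3.0 hardware. If you do end up needing it, the usual trick is to round each dimension up to the next power of two and allocate the texture at that size; a minimal sketch (the function name is mine, not from any GL header):

```c
#include <stdint.h>

/* Round a texture dimension up to the next power of two.
 * Only needed as a fallback on pre-GL-3.0 hardware; on GL 3.0+
 * you can create the texture at its exact size. */
static uint32_t next_pow2(uint32_t v)
{
    if (v == 0)
        return 1;
    v--;             /* handle exact powers of two */
    v |= v >> 1;     /* smear the highest set bit downward... */
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;    /* ...then step up to the next power of two */
}
```

For example, a 640×480 image would land in a 1024×512 texture, with the real image occupying a sub-rectangle addressed by scaled texture coordinates.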

Just found that it’s possible to share a renderbuffer between OpenGL and OpenCL. Reading the docs, it seems to be possible with GL 3.x and CL 1.0, although I haven’t tried it yet.

If this works, I believe it would be the best route for my custom hardware implementation of ARB_imaging, because I’d just need to write custom myReadPixels(), myDrawPixels(), myCopyPixels(), and so on, which internally share the renderbuffer with OpenCL and then do all the processing there.
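To make the idea concrete, here is a CPU reference for one ARB_imaging pixel-transfer stage (scale, bias, then a 256-entry color-table lookup per 8-bit channel). This is my own sketch, not code from the spec; a hypothetical myDrawPixels() would run the same per-pixel math in an OpenCL kernel over the shared renderbuffer instead:

```c
#include <stddef.h>
#include <stdint.h>

/* CPU reference for one ARB_imaging pixel-transfer stage:
 * scale, bias, clamp, then a 256-entry color-table lookup,
 * applied to one 8-bit channel. */
static void transfer_channel(const uint8_t *src, uint8_t *dst, size_t n,
                             float scale, float bias,
                             const uint8_t table[256])
{
    for (size_t i = 0; i < n; i++) {
        float v = src[i] / 255.0f * scale + bias; /* scale and bias */
        if (v < 0.0f) v = 0.0f;                   /* clamp to [0, 1] */
        if (v > 1.0f) v = 1.0f;
        dst[i] = table[(int)(v * 255.0f + 0.5f)]; /* color-table lookup */
    }
}
```

Porting this to an OpenCL kernel is close to mechanical: one work-item per pixel, with the table in constant memory.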

Also, I believe histograms are easier to implement in OpenCL than in GLSL.