7-component color

My current project requires 7-component color. That is to say, there are 7 independent color components wherever a color is used. Each component, or channel, behaves independently, just as they do in 4-component (RGBA) colors.

The current hack I am using is to add two vertex attributes that together contain the 7 components, then render into two separate 4-component (RGBA) color buffers that together hold the full 7-component pixel. These are merged outside GL by the application.
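For reference, the merge step on the application side looks roughly like this (just a sketch of the FBO flavour of it, assuming two equally sized RGBA8 color attachments; the function name and the tightly packed layout are my own):

#include <GL/glew.h>   /* assumes a current GL context with the needed entry points loaded */
#include <stdlib.h>

/* Read back the two RGBA8 color attachments of the currently bound FBO
 * and interleave them into a tightly packed 7-channels-per-pixel buffer. */
unsigned char *merge_seven_channels(int width, int height)
{
    size_t n = (size_t)width * (size_t)height;
    unsigned char *a   = malloc(n * 4);   /* channels 0-3 */
    unsigned char *b   = malloc(n * 4);   /* channels 4-6 plus one pad byte */
    unsigned char *out = malloc(n * 7);

    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, a);
    glReadBuffer(GL_COLOR_ATTACHMENT1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, b);

    for (size_t i = 0; i < n; ++i) {
        out[i * 7 + 0] = a[i * 4 + 0];
        out[i * 7 + 1] = a[i * 4 + 1];
        out[i * 7 + 2] = a[i * 4 + 2];
        out[i * 7 + 3] = a[i * 4 + 3];
        out[i * 7 + 4] = b[i * 4 + 0];
        out[i * 7 + 5] = b[i * 4 + 1];
        out[i * 7 + 6] = b[i * 4 + 2];   /* the second buffer's alpha is unused padding */
    }

    free(a);
    free(b);
    return out;
}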

Was this the best way? What alternatives could I have tried?

No, that is pretty much the only way.
The only thing you could really change is how you render them: using FBOs (sketched below), p-buffers, or perhaps rendering them side by side.
Unfortunately there is no 6- or 8-component texture format, which is odd, since that would enable a lot of cooler stuff like individual color alpha.

Something for the ARB to consider I think.
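To expand a bit on the FBO route: you can attach two RGBA textures to a single FBO, select both with glDrawBuffers, and write gl_FragData[0] and gl_FragData[1] from the fragment shader, so all 7 channels come out of one pass. A rough sketch, with error checking omitted (GL 3.0-style names; the EXT_framebuffer_object variants work the same way):

#include <GL/glew.h>   /* assumes a current context with FBO and MRT support */

/* Create an FBO with two RGBA8 color attachments so that a single pass
 * can output all 7 channels (sizes and formats are just an example). */
GLuint make_seven_channel_fbo(int width, int height, GLuint tex[2])
{
    GLuint fbo;
    static const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };

    glGenTextures(2, tex);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    }

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex[0], 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex[1], 0);
    glDrawBuffers(2, bufs);   /* the fragment shader writes gl_FragData[0] and [1] */

    return fbo;
}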

While individual color alpha could be cool, it wasn't something that was needed when OpenGL was created. Heck, back at that time it was a small miracle just to have a hardware-rendered pyramid with color interpolation. Besides that, is there really any use for it?

That OpenGL has come this far without substantial redesign says a thing or two about how well designed it was (me, stabbing MS and DX/D3D in the back? Naaahh :wink: ). Either that, or the designers had such unbelievably good luck in hitting the right spot that it's almost statistically impossible.

Anyway, OpenGL is for visualization. As such, we really do need just RGB. Combine them additively, and you can express the spectrum we can see on any kind of display device I know of.

Alpha is required for blending, but it’s never “seen”.

The same goes for multi-spectral data, e.g. scientific images from satellites, which might have 10 or even 20 floating-point channels in a single image.

But you do raise what I consider a valid question here: while the output format is limited to RGB, why can't I source, and use as an intermediate (including as an intermediate output format, to some kind of "über-texture"), images with any number of channels? Why can't you use e.g. texel.chan[0] instead of texel.r, and, with such an extension, e.g. texel.chan[7] for an 8-or-more-channel texture?

I think this is a valid question, and I think the ARB should have a look at it. After all, we already have something similar for vertices, where we can add all kinds of data to them and process that in vertex programs.

All vertex attributes are limited to vec4, too. For multiple color outputs we have MRT (multiple render targets).
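So the closest you get to a 7-component vertex color is packing it into a vec4 plus a vec3 attribute. On the C side that could look something like this (sketch only; the attribute names and the tightly packed float layout are made up):

#include <GL/glew.h>   /* assumes a current GL 2.0+ context */

/* Feed 7 color components per vertex as two generic attributes:
 * "colorA" (a vec4, channels 0-3) and "colorB" (a vec3, channels 4-6). */
void set_seven_component_color(GLuint program, const GLfloat *colors /* 7 floats per vertex */)
{
    GLint locA = glGetAttribLocation(program, "colorA");
    GLint locB = glGetAttribLocation(program, "colorB");

    glEnableVertexAttribArray((GLuint)locA);
    glVertexAttribPointer((GLuint)locA, 4, GL_FLOAT, GL_FALSE,
                          7 * sizeof(GLfloat), colors);

    glEnableVertexAttribArray((GLuint)locB);
    glVertexAttribPointer((GLuint)locB, 3, GL_FLOAT, GL_FALSE,
                          7 * sizeof(GLfloat), colors + 4);
}

The fragment shader then routes the first four interpolated components to gl_FragData[0] and the remaining three to gl_FragData[1].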

Originally posted by tamlin:
But you do raise what I consider a valid question here: while the output format is limited to RGB, why can't I source, and use as an intermediate (including as an intermediate output format, to some kind of "über-texture"), images with any number of channels? Why can't you use e.g. texel.chan[0] instead of texel.r, and, with such an extension, e.g. texel.chan[7] for an 8-or-more-channel texture?

I think this is a valid question, and I think the ARB should have a look at it. After all, we already have something similar for vertices, where we can add all kinds of data to them and process that in vertex programs.
This is a valid point that I've mentioned here before; I would love GL to have it.

You can always use an array of textures :wink:

Of course not more than you have texture image units, but that’s merely a hardware limitation, and can be raised without changing the API.

For writing it’s the same. At the moment you’re limited to 4 RGBA render targets, giving you a maximum of 16 channels, but that’s just a hardware limitation, too…

And for vertex attributes, uniforms or interpolants, you can use float arrays instead of vectors for as many components as you want (again only limited by hardware capabilities), or vec4 arrays for possibly a bit better performance.
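For instance, a fragment shader along these lines reads 7 channels from two RGBA textures, scales them with a uniform float array, and writes them back out to two render targets (a sketch only; the names and the per-channel gain are made up, and it needs MRT-capable hardware):

static const char *frag_src =
    "uniform sampler2D layer[2];     /* the 'array of textures'           */\n"
    "uniform float gain[7];          /* a per-channel uniform float array */\n"
    "varying vec2 uv;\n"
    "void main()\n"
    "{\n"
    "    vec4 a = texture2D(layer[0], uv);   /* channels 0-3 */\n"
    "    vec4 b = texture2D(layer[1], uv);   /* channels 4-6 */\n"
    "    gl_FragData[0] = a * vec4(gain[0], gain[1], gain[2], gain[3]);\n"
    "    gl_FragData[1] = vec4(b.rgb * vec3(gain[4], gain[5], gain[6]), 1.0);\n"
    "}\n";

Each element of the sampler array still has to be pointed at its own texture image unit with glUniform1i (on the locations of "layer[0]" and "layer[1]"), which is where the texture image unit limit mentioned above comes in.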