why is secondary color restricted to 3 components instead of 3 or 4?

what's the reasoning behind this?
e.g. glColor lets me specify either rgb or rgba, yet secondary color only allows rgb and sets a to 1.0 (that's as of gl 2.0; in earlier gl versions it was 0.0)

i'm wanting access to the secondary color in some of my shaders and it would be useful to be able to change all 4 components.
is this just a legacy from the fixed pipeline?

ta zed

When it was introduced, the hardware was limited.
I think the extension doc says an extension may be added in the future to indicate support for RGBA.

Starting from 1.4, A = 1.0

If you want to completely detach from the fixed-function pipeline, then you need generic attributes.
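To make that concrete, here's a minimal sketch of the generic-attribute route in GLSL 1.10-era syntax. The attribute name `myColor2` is made up for illustration; you'd bind it yourself on the C side (e.g. with glBindAttribLocation before linking) and feed it per vertex with glVertexAttrib4f or a vertex array, so all four components reach the shader.

```glsl
// vertex shader: a generic attribute carrying full RGBA,
// unlike gl_SecondaryColor's effective RGB
attribute vec4 myColor2;   // hypothetical name, bound by the application
varying vec4 color2;

void main()
{
    color2      = myColor2; // all 4 components pass through intact
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The fragment shader then reads the `color2` varying and has the alpha available, with no dependence on what fixed-function does with secondary color.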

wasn’t it something to do with the secondary colour being added on after the texture stage (separate specular), so it didn’t make sense to have an alpha?
Can’t remember now…

I think the original idea was that graphics cards kept passing the fog color via the secondary color’s alpha, so they had to reserve the alpha.

It could also be that, since the original purpose of the secondary color was to allow for adding the specular component, it was decided that there was no need for the alpha. Adding alphas together, outside of HDR, doesn't really make much sense.

If you want to completely detach from the fixed-function pipeline, then you need generic attributes.
true, but it's nice to use the state-machine part of gl: set and forget.

i take it then there's no way for me to manually set the A of the secondary color? seems like a waste of a perfectly good float

I think I remember reading (perhaps in an Nvidia doc) that if the secondary color is written by a vertex program, the alpha is available for use in a fragment program.
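If that recollection is right, the sketch below is roughly what it would look like using the GLSL built-ins. Whether the alpha actually survives to the fragment stage is implementation-dependent, so treat this as an experiment, not guaranteed behavior.

```glsl
// vertex shader: write a secondary color including an alpha
void main()
{
    gl_FrontSecondaryColor = vec4(1.0, 0.0, 0.0, 0.5); // alpha written too
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: on some hardware gl_SecondaryColor.a may carry
// the value written above rather than the fixed-function 1.0
void main()
{
    gl_FragColor = vec4(gl_SecondaryColor.rgb, gl_SecondaryColor.a);
}
```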

Originally posted by Korval:
I think the original idea was that graphics cards kept passing the fog color via the secondary color’s alpha, so they had to reserve the alpha.

That’s an interesting explanation, but what is the fog doing there?
Shouldn’t there be a stage that computes fog and adds to the fragment?
What you are saying sounds more like per vertex fog, instead of per fragment.

Originally posted by V-man:
Originally posted by Korval:
I think the original idea was that graphics cards kept passing the fog color via the secondary color’s alpha, so they had to reserve the alpha.

That’s an interesting explanation, but what is the fog doing there?
Shouldn’t there be a stage that computes fog and adds to the fragment?
What you are saying sounds more like per vertex fog, instead of per fragment.
IIRC, fixed-function fog is per-vertex.

…Chambers

Depends on the fog hint…fastest, nicest etc.
It’s up to the vendor to decide what ‘nicest’ means.
Of course with linear fog it's going to be per-vertex no matter what…pointless doing it per-pixel. But exp/exp2 is another matter.

i take it then there's no way for me to manually set the A of the secondary color? seems like a waste of a perfectly good float
It’s not a waste of anything, because it’s not there. All cards that I know of from before the programmable-pipeline era implement secondary color as 3 values. The 4th “value” is universally used as the per-vertex fog weighting factor. That’s why you have some quirky extensions like ATIX_texture_env_route that allow you to use GL_SECONDARY_COLOR as an RGB source for texture blending and GL_FOG as an alpha source.

As another poster pointed out, the way the secondary color was originally conceived, there was no reason to expose an alpha value even if there was one.