Shaders

So I was working on my shader language, taking Q3A's shader language a bit as an example, although I wasn't really following it that closely…
And I implemented all kinds of neat stuff, like being able to apply a translation, then a rotation, then yet another translation, and so on, and even translations, shears and scales for either x, y or both…
And I gave all of it color functionality that could do all of that too…
And I was feeling pretty good about myself…
And then I wanted to do multitexturing…
umpf
I had read some stuff about multitexturing before, so I didn't think it'd be that hard…
Unfortunately, I didn't realize that all the examples I had seen used glVertex instead of vertex buffers…
So it took me some time to figure out that I should use glClientActiveTextureARB… and since I didn't have internet all weekend (damn provider), I couldn't look that kind of info up on the net.
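For anyone hitting the same wall, the call sequence looks roughly like this. This is a non-runnable sketch (it assumes a working GL context, the ARB_multitexture extension already checked for, and made-up names `texcoords0`, `texcoords1`, `vertices`, `tex0`, `tex1`); the key point is that glActiveTextureARB selects the unit for texture state, while glClientActiveTextureARB selects which unit glTexCoordPointer applies to:

```c
/* Per-unit texcoord arrays: glClientActiveTextureARB picks which
   unit the following glTexCoordPointer call targets. */
glClientActiveTextureARB(GL_TEXTURE0_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords0);

glClientActiveTextureARB(GL_TEXTURE1_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords1);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

/* Server-side state (bound texture, env mode) goes through
   glActiveTextureARB instead. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex0);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```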
Oh well.
So I was feeling kind of happy, having finally figured out how multitexturing with vertex buffers works, and then I realized something…
Blending works completely differently for two textures drawn in a single pass using multitexturing than for two textures drawn on top of each other using framebuffer blending…

Now, on its own this isn't a problem…
But consider that I'm working on a shader language which I want to work on cards that:

  1. have no multitexturing (unlikely, but still)
  2. can draw 2 textures at the same time
  3. can draw 3 textures at the same time
  4. future cards that can draw x textures at the same time.

so I can't write my shaders to simply target a specific number of texture units…
Which means I have to write the code in such a way that, no matter how many texture units your card has, it'll always try to draw with the smallest number of passes…
This also means I'll probably need to rewrite the entire semantics of my shader language (boohoo)
(not that it's that complex, so I won't lose any sleep over it)
so I'll have to define a lot of stuff in a more abstract way…
(instead of using shader commands like “blendfunc one one”)

Soooo…
The thing is, I'm not that familiar with multitexturing and the various ways textures can be combined…
So I was wondering: does anyone have any suggestions for my shader language, or know where I can find good resources on the topic?
Also, if you have good ideas or comments related to shader languages, please share.

thanks…

Hehe, good luck. This is not an easy task, especially with regard to number 4. I've been working on my extension of the Q3 shader language for many months now, and one of the things that often throws a monkey wrench in the gears of optimizing the passes is the use of different depth and alpha tests, stencil tests, and different rgb and alpha generation for each pass of a given shader. On my TNT, this ends up making most fancy shaders render using multiple passes. I've even traced Q3's GL calls and verified how it optimized (or failed to optimize) various shaders. Now, if I included a special shader parser that took into account the presence of the GL_NV_texture_env_combine4 extension, I could get many of the shaders optimized a bit better, and even more if the NVIDIA register combiners extension is found. And once NVIDIA makes their NV_vertex_program extension available, there is no practical limit to the types of shaders that can be optimized (at least on NVIDIA hardware). However, all those that use different depth and alpha tests, or any stencil testing, cannot be optimized as far as I can tell. Have you found a way to deal with those issues?

[This message has been edited by DFrey (edited 09-25-2000).]

Well, I've only just started trying to solve these problems…
But I'm thinking… what if the entire problem is viewed as an arithmetic problem?
Like:
result = textureA * textureB - textureC
That way an execution tree could be built, and it could be optimized as if it were a mathematical calculation…
And then it could be split into parts matching specific hardware capabilities…
How does that sound?
It wouldn't help in a lot of cases, but it might be good enough for most…

Just curious (not an important question):
I too am using the Q3 models with shaders, and what I've noticed (assumed) is something like this:
there's a model, e.g. the torso model, which has
mesh hand, mesh chest, mesh stomach, mesh neck, etc.
Now say one of the shaders just adds some specular to the hands, and over the other meshes of the model it adds nothing.
Yet the whole model will get drawn again, not just the hands.
Isn't this a lot of extra drawing, or is there something somewhere that says "only draw this shader on this mesh"?

I'm not entirely sure what you mean, but yes, there is one shader per model…
although you can have multiple models in one .md3 file…
so you could have a separate model for the hands if you want…

Also, you must remember that though it is typical for each mesh within the md3 to use only one shader (not necessarily the same one), the file format does allow for the possibility of multiple shaders per mesh. That is something Quake3: Arena appears not to have used; at least I have yet to come across such a model.

[This message has been edited by DFrey (edited 09-29-2000).]

Really?? That's odd… I reverse-engineered the md3 file format a looong time ago, and it didn't appear to support multiple shaders per mesh…
It wouldn't make much sense either, because you wouldn't be able to use vertex buffers as easily if you could…
So how can you do that?