Default vertex/fragment shaders.

I am learning how to use vertex and fragment shaders. In a program I’m writing now, I want to have all the default functionality except I want to modify one small thing in the vertex and fragment shaders.

Unfortunately, of course, using a vertex and fragment shader means you have to reimplement everything the graphics card normally does.

So… does anybody know where I can find vertex and fragment shaders that do everything the graphics card normally does? It’s frustrating because, for example, I still want lighting to work as normal in my application, and I am not interested in learning about the intricacies of lighting and material calculations right now – and what I’m doing has absolutely nothing to do with lighting or materials. However, I need it to work. Same with textures (esp. multitexturing), fog, all that stuff. I don’t want to sit here and attempt to reimplement everything that, say, GL_ARB_multitexture already does – and then also have to test to make sure it’s identical to the original functionality (heck, I’m having a hard enough time just finding out what the original functionality actually is, let alone implementing it in GLSL, which I just started learning about 4 hours ago).

Thanks,
Jason

For this there was a tool called GLSL ShaderGen from 3Dlabs, but apparently they removed it from their website.

It lets you select your fixed-function settings, generate the GLSL code that emulates them, and see the result. You can even edit the GLSL code and test your changes. It is said to be “educational” and not meant for performance, though.

As the license allows its redistribution in unmodified binary form, I will try to make it available.
I don’t have the original installer, but it should be fine.

Here it is :
http://dl.free.fr/onatV6lt8/GLSLShaderGen.zip

Hi,

The author can be found at http://www.prideout.net/index.html .
Try emailing him about the 3D programs he wrote for 3DLabs.

Ido

Thanks for the link ZbuffeR. I am playing around with that program now and I am noticing two things:

  1. You have to have a different set of shaders for every combination of glEnable/glDisable/etc. features you use…? Is this always the case? For example, changing the texgen mode isn’t supported in the shaders it generates – you have to pick the mode you are going to use and then regenerate the GLSL code that emulates the options you set. So if I wanted to use some shaders in an application that uses two different texgen modes, would I have to have two separate shaders, one for each mode, and maintain two copies of my own modifications…?

  2. Lighting doesn’t seem to work quite right in the generated shaders. Specifically, only light 0 seems to work, and it’s a little brighter than in fixed-function mode.

There has got to be an easier way to deal with this stuff. The reason I assume there must be an easier way is that there are virtually no Google results when searching for default shader implementations or related information, very few forum posts on the subject, and the GLSL ShaderGen program was removed from the 3DLabs web site. If this were a common problem, you’d expect many solutions to exist, and if ShaderGen were a commonly used tool, you’d expect it to still be there and maintained. I feel like I am missing something – it seems like nobody has this problem except me and maybe a very small handful of others. How do other people manage to write small shader programs for use in applications that use a lot of OpenGL features? Does everybody somehow just know exactly how to reimplement everything the card already does for them? This doesn’t make any sense to me…

I’ve contacted the author of ShaderGen about the issues with that program at least, if I get some more info I’ll post it here.

Thanks,
Jason

  1. That is almost the way it worked with the old fixed-function GL: you enabled/disabled some hardware paths. Now you select the shader you use. Another way is to have an “uber shader” that computes everything, but where some parts have zero contribution depending on what you “enable” (see the rough sketch at the end of this post). Real dynamic branching is only effective on the latest hardware, and only for quite large chunks of code, which is why it is often preferable to switch shaders. You can also decompose a shader into common and specific parts and assemble the different combinations at runtime; that is easier to maintain.

Another thing is that most of the time you will only need a small subset of the complete fixed GL spec.

And something even more important: a lot of the work will now be done in the pixel shader, and very differently. You can now do fresnel shading, refraction, parallax bump maps, procedural shading, etc.

  2. For me (Win2000, GeForce 6800, Forceware 163.44) all lights (and everything else, in fact) work and produce perfectly equivalent results compared to the fixed-function path, at least within 1% precision, tested with my bare eyes and this line at the end of the fragment shader:
    gl_FragColor = color+0.01; // I see this difference

Don’t forget to “build” when you change the GL settings, and when you customize the shader, use “compile” then “link” to see the result.
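
Here is the rough “uber shader” idea I mentioned in point 1. It is only an illustration, not the ShaderGen output: the textureEnabled/fogEnabled uniforms are names I made up, and your application would set them instead of calling glEnable/glDisable.

    uniform bool textureEnabled;   // made-up toggle, set instead of glEnable(GL_TEXTURE_2D)
    uniform bool fogEnabled;       // made-up toggle, set instead of glEnable(GL_FOG)
    uniform sampler2D tex0;        // reads texture unit 0

    void main()
    {
        vec4 color = gl_Color;                               // per-vertex color (lit or unlit) from the vertex stage
        if (textureEnabled)
            color *= texture2D(tex0, gl_TexCoord[0].st);     // the classic GL_MODULATE
        if (fogEnabled)
        {
            float f = clamp((gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale, 0.0, 1.0);
            color.rgb = mix(gl_Fog.color.rgb, color.rgb, f); // GL_LINEAR fog blend
        }
        gl_FragColor = color;
    }

Everything is computed, but the disabled parts contribute nothing; on older hardware the driver may still pay for both paths, which is exactly why switching shaders is often the preferred approach.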

  1. That seems ridiculous. So you are saying that if I have an application (perhaps a 3D model editor where a user can enable/disable lighting, two-sided lighting, culling, and texturing per object on the screen, or whatever features the shaders take care of), I have to either:

A) Prepare 2*2*2*2 = 16 (or whatever, 2^number_of_things_i_enable) separate programs, one for every possible combination of things I want to do, and switch based on the current settings rather than simply using glEnable/glDisable as I do now – so as I render objects in my scene, I continuously check large combinations of settings and switch to the appropriate program; or

B) Define each “feature” as a separate program, losing any possible optimizations I could make across features (since they’re all implemented as modular, separate programs), and link them together on the fly as I render (so relinking an entire fragment program rather than simply calling glDisable(GL_LIGHTING)). I suspect glDisable(GL_TEXTURE_2D) is slightly faster than relinking and switching fragment programs, but I may be wrong.

Those are my only options? And that’s not just difficult for the example I gave. Let’s say I’m trying to write a nice, reusable library, perhaps a depth peeling library that uses fragment shaders, and I want to be able to use it in any situation. That is impossible unless I come up with a separate shader program for every single one of the thousands of possible combinations of GL state that any application using my library is likely to use?

Furthermore, do shaders -not- give access to current glEnable/glDisable states? Does that mean that I -have- to use, say, my own uniform variables to “enable” and “disable” things, and so if I have an existing application with a lot of rendering stuff going on that I want to add some shader functionality to, I have to then go back and rework the entire application to use my own custom enable/disable things?

If I write my own fragment shader can I no longer use OpenGL’s texturing API, like glTexImage2D? Do I now have to reimplement that all myself with some kind of uniform sampler variables or something?

Another question I have is: have you ever been able to reuse a shader program in two different applications? It’s looking like every time I want to do a small thing, I have to code up a new GLSL program specifically tailored to that application. Do I want lighting support? Do I want multitexturing (and goodbye, ARB_multitexture convenience)?

Were ARB_vertex_shader and ARB_fragment_shader designed only with large teams of programmers, with the resources to work on large applications, in mind? Is it intentional that it takes 60 hours of coding and research and a thousand lines of shader code for one person to implement even the most trivial new feature (like emulating the sadly forgotten EXT_paletted_texture or something)?

  2. I did “build”; the lights do not work. This is WinXP with an ATI Mobility Radeon X1400. I’m not sure which driver version (I’m not at that machine right now) – it’s the latest version I got via Windows Update, but I’m sure it’s newer than the last revision of this software (which was around Dec. 2005).

In general: is it possible to use small vertex and fragment programs conveniently? Or will using them always add a week’s worth of work, testing, and general hassle to any application’s development? Every site with examples of some interesting thing you can do with shaders shows a small bit of example code, but unfortunately leaves out the 200,000 lines of code you need just to bring the rest of the functionality back up to ground level.

Thanks…
Jason

P.S. When new extensions come out do you start seeing forum posts with people scrambling for GLSL code, like “does anybody have a shader program to reimplement the possibly proprietary and highly-complex, extensively researched EXT_some_great_texturing_extension (or whatever) that just came out?” Are people relying on shader programs for certain features unable to use certain new built-in GL features that may come out without reimplementing them all on their own?

First, calm down a bit.
Then, take some time to read “OpenGL Shading Language (2nd Edition)” aka “The Orange Book” http://www.3dshaders.com/ to get more of the big picture about shaders.
For example you will see that multitexture with GLSL is very simple. And I am glad I went directly from single texture to GLSL, skipping the whole clunky register combiners/crossbar etc :smiley:

I am pretty sure it is not possible to get proper video drivers via Windows Update; instead, try the ATI/AMD driver directly from their website. Mobile cards are even more of a pain, but something like this may help:
http://game.amd.com/us-en/drivers_catalyst.aspx?p=xp/mobility-xp

Edit: and on 3dshaders.com you can get the GLSL ShaderGen source code; this may help you.

Edit2: re your PS: you have it backwards. New GL extensions either extend GLSL or are orthogonal to it (like floating-point textures).
The only case of a “GLSL-implemented replacement” I am aware of is supporting PCF shadows on ATI, whereas it is an automatic feature on Nvidia.
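
And that replacement is nothing fancy, by the way. A rough sketch only: I assume here that the shadow coordinate arrives on texture unit 1 via texgen, and that “texelSize” is a uniform you fill with 1.0 / shadow-map-resolution.

    uniform sampler2DShadow shadowMap;  // depth texture with compare mode enabled
    uniform vec2 texelSize;             // made up: 1.0 / shadow map resolution

    void main()
    {
        vec3 sc = gl_TexCoord[1].xyz / gl_TexCoord[1].w;  // projective divide done once
        float lit = 0.0;
        lit += shadow2D(shadowMap, sc + vec3(-texelSize.x, -texelSize.y, 0.0)).r;
        lit += shadow2D(shadowMap, sc + vec3( texelSize.x, -texelSize.y, 0.0)).r;
        lit += shadow2D(shadowMap, sc + vec3(-texelSize.x,  texelSize.y, 0.0)).r;
        lit += shadow2D(shadowMap, sc + vec3( texelSize.x,  texelSize.y, 0.0)).r;
        lit *= 0.25;                                      // average of the 4 depth comparisons
        gl_FragColor = vec4(gl_Color.rgb * lit, gl_Color.a);
    }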

No. I will not calm down. I hate new things. :mad: :wink: In fact, I strongly believe that vertex_shader and fragment_shader should be moved to a new extension category, WTF, creating WTF_vertex_shader and WTF_fragment_shader.

Then, take some time to read “OpenGL Shading Language (2nd Edition)” aka “The Orange Book” http://www.3dshaders.com/ to get more of the big picture about shaders.

I think this stuff is way more difficult than it needs to be, but I will keep an open mind and work on that book over the next few days. Thanks for that link; I was actually just wondering why none of the 3DLabs developer material was around anymore, and it’s nice to see that it’s still alive somewhere.

I am pretty sure it is not possible to get proper video drivers via Windows Update; instead, try the ATI/AMD driver directly from their website. Mobile cards are even more of a pain, but something like this may help:
http://game.amd.com/us-en/drivers_catalyst.aspx?p=xp/mobility-xp

Thanks once again for a helpful link. I do have the latest version of the drivers, but I’m still having a problem with ShaderGen. Maybe you can figure out what is going on here. So here are the steps to reproduce the problem I’m having:

  1. Start up ShaderGen.
  2. In the “Light” tab, disable L0 and L2, so only L1 is enabled. Leave all other settings at initial defaults.
  3. “Build”.

When I do that, fixed-functionality mode shows the default red L1, but equivalent-shader mode uses the diffuse color and light position (and other light parameters) of L0 instead of L1 (producing a yellow light, at the wrong location, for L1). I get the same results when I use L2. However, I am able to “fix” this by editing the generated vertex shader – in the pointLight() function, if I replace all occurrences of [i] with [1] (or [2] for L2), it works! For example, changing the Diffuse calculation to:

Diffuse += gl_LightSource[1].diffuse * nDotVP * attenuation;

correctly uses L1’s diffuse color rather than L0’s. This seems strange to me because it looks like pointLight() is indeed being passed the correct index down in flight():

pointLight(1, normal, eye, ecPosition3);

But it gets weirder. I did some further investigation. Using L1 and the default generated vertex program, I added this debugging code to the very end of pointLight() as a sanity check:

if (i==0)
  Diffuse = vec4(1,0,0,1);
else if (i==1)
  Diffuse = vec4(0,1,0,1);

So if i is 0 it makes the object red, and if it’s 1 it makes the object green. Since it seems to be using index 0 regardless of the value of i passed to the function, I expected the output to be red. Well, when pointLight(1, …) is called, lo and behold, it’s green. So it knows i is 1… yet it still uses [0] when i is used to index the lights array! WTF[_vertex_shader] is going on?

It’s really weird. I also tried renaming “i” to something else (“iii” in my case) just for grins, but that didn’t help. Is there some strange syntax problem with the generated vertex program?

Anyways, I got the source code; maybe I will try to make a Linux port some day, too. (I dual-boot Linux/Windows on my laptop; I’m developing an application on Linux, but all the useful tools are for Windows. I can’t count how many times I’ve had to reboot in the last two days messing around with all this stuff.)

Jason

Well strange indeed.

About the tools running only on Windows: you can try running the binary directly through Wine; it works surprisingly well.

I think you’re missing the obvious: it isn’t a common problem because few people want to do this.

Most people use shaders to implement their own material/lighting model. This flexibility is what shaders are all about. They’re not interested in OpenGL’s inflexible and dated fixed function lighting and texturing.

That is the first thing I tried :slight_smile: I’ve always had problems running OpenGL applications through wine for some reason. And yes, the problem is weird – and unsolved, so I’m still stuck with no solution to my original problem other than trial and error and porting my entire application to use my own GLSL code instead of the OpenGL API it was already using. Fun, fun, fun!

I guess the root of that is how much OpenGL’s advancement seems to be geared towards writing games with the latest and greatest visual effects, which is understandable. Personally, I don’t find, say, volumetric fog to be particularly useful for a medical imaging application. I don’t want or need to reimplement the default lighting math; I shouldn’t be spending more time working on graphics than I am on core functionality.

It will be a huge pain, for me at least, if OpenGL ends up so geared towards creating cheesy first-person shooters with bump mapping and skinned monsters that the “inflexible” and “dated” API becomes deprecated and removed. (I know it’s about more than just these games; the point is that there are plenty of simpler applications where ease of development is more important than cutting-edge visual effects. I’m not forgetting the value of shaders for doing things besides graphics on GPUs, but that is a very rare case and not what I see the OpenGL API itself being tailored to.) I don’t write games, and I don’t want to write GLSL code for texture math every time I want to put a texture on a surface, when all I need to do now is 2 or 3 simple API calls. I hope that isn’t where the API is headed. (I mentioned this above, but one thing that bummed me out was when EXT_paletted_texture disappeared in favor of fragment shaders, and you have to do the same thing plus reimplement all the default functionality just to get this simple feature – from the standpoint of an application that needs nothing more than fast paletted textures, nothing was gained but huge amounts of convenience were lost – that is not a good property of an API.)

How many applications do you think exist right now that have the exact same portion of fragment shader code that does basic texture operations that were already possible with the fixed functionality (for example)? The same tedious, basic texture operations in what, hundreds, thousands of applications? If you have to type the exact same GLSL code to get basic functionality every single time you want to use a shader for something else, that’s a sign that, say, “applytextures()” might be a good function to add in the next version of GLSL…

It seems highly unreasonable to me that the GLSL specification does not supply built in functions that replicate existing fixed functionality. Such a simple addition would have made GLSL far more accessible to a non-game developer. I wonder what their rationale for that was.

So do you have any advice for my situation, Xmas… here is my problem, now: I have an entire application that makes extensive use of the OpenGL texturing API. I want to add the ability to do slightly more advanced per-fragment depth testing, for depth peeling (just a simple addition, emulating dual depth buffers). How would you go about doing this? Does this small, tiny, simple, clearly defined trivial feature that I want to add really require me to rework the -entire- application to use my own personal “texturing API” since I am now no longer able to use OpenGL’s built-in, convenient, perfectly functional, useful, and adequate texturing calls? If so, does that really seem reasonable? That suggests, to me, that something may have been overlooked by that big group of ARB_fragment_shader contributors. Do my expectations seem unreasonable?

I don’t think having a more flexible pipeline equates to the API being more geared toward games. I use GL for visualization purposes, and many of the things I have done would not be possible (or would be very complex) if it were not for shaders.

There will not be an “applytextures()” function in GLSL. The whole point is to be able to perform lookups (into many sampler types) and do whatever you want with the data, hence the flexibility. This “default lighting math” you speak of is per-vertex, which is very limited in itself.

IMO, the GLSL multitexturing way is much cleaner and much more elegant than the fixed pipeline way. You declare your samplers as uniforms, bind your textures, and access the textures however you want.
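
To be concrete, the whole of a plain two-unit modulate cascade boils down to something like this (tex0/tex1 are just names I picked; you point each sampler at its texture unit with glUniform1i):

    uniform sampler2D tex0;   // set to 0 with glUniform1i, so it reads texture unit 0
    uniform sampler2D tex1;   // set to 1, so it reads texture unit 1

    void main()
    {
        vec4 base   = texture2D(tex0, gl_TexCoord[0].st);
        vec4 detail = texture2D(tex1, gl_TexCoord[1].st);
        gl_FragColor = gl_Color * base * detail;   // the old GL_MODULATE / GL_MODULATE cascade
    }

You still create and upload the textures with glTexImage2D and bind them per unit exactly as before; nothing about texture specification changes, only the combination step moves into the shader.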

I am willing to bet that fixed-function blending and maybe even the stencil and depth test will be the next things to be moved into a shader. For example, you would be given a variable which holds the current framebuffer color, and you could take that into account in any arbitrary computation you wanted.

It’s true: you need to master all the lighting yourself when using shaders. In fact, that’s what shading is all about, right? If you didn’t care about this in the past, it will take some effort. I’d also recommend reading the Orange Book first and then trying again. It’s really not that hard!

But switching an old application over to a shader-based approach can admittedly be harder.

I also can assure you that those gaming effects can be very useful in “serious” applications, too.

CatDog

The key point, though, and the source of a lot of my frustration, is that it’s only beneficial if you actually want to perform arbitrary computations. As for multitexturing, right now -all- I do is glActiveTexture and glBindTexture as usual, plus glMultiTexCoord, and texture blending happens.

On the other hand, with GLSL, if I want to continue using multitexturing with no arbitrary, advanced functionality:

  1. I end up with virtually the same interface, but with different names. My own “texturing API” is not simplified: I’m still enabling and binding my multitextures and setting texture coordinates, just through whatever uniform variable names I pick instead of the built-in API calls. Same thing, different function names. Nothing is gained there. True, nothing is lost either, however…
  2. Now I have to rework every existing texturing API call in my application to use my new ones. This work comes with no benefit.
  3. I have to develop and test the GLSL code that reimplements this. While it may be trivial to implement, I still need to implement it, and I still need to test it. This work, again, comes with no benefit, as I just want to use the fragment shader for something completely unrelated and am not interested in adding to the existing, adequate multitexturing functionality.

That same thing applies to other things besides multitexturing, such as lighting, or fog, or whatever. And it will apply to depth, stencil, etc. testing as well if those are ever moved into shaders. If a new extension moves depth and stencil testing into a shader, then every time you add new stencil functionality you reimplement the built-in depth testing (trivial, yes [unless you want the power to switch between many different depth-testing functions, in which case good luck I guess, from what I hear that means 8 different GLSL programs you have to write if you want to avoid expensive branches, but I digress], but just one more thing you have to remember to put in your GLSL program).

Jason

I do see where you are coming from. When I was first moving to shaders, it seemed like a lot of work trying to get back the fixed functionality I was used to, but one huge benefit is that when I needed to make things more complex, I knew exactly where to go. Now, I’m implementing shading models I never would’ve dreamed of doing without shaders - and the best part is, it is not for a game. :smiley:

Like the others have said, the Orange Book is a good place to start. You will definitely need to invest some time into learning this, though.

This is not really correct. The idea is that the hardware has become very flexible and the API has to reflect that (or it misses the point).

Your argument, which is quite valid, does not really apply to a graphics API like OpenGL. The goal of such an API is to allow access to hardware features. Any redundancy (= more than one standard way to do the same thing) results in a bad design.

Basically, you are interested in a “higher-level” library than a core 3D API, one that would provide the level of abstraction you wish. The classical abstraction of sequential texture application just is not true anymore, so there is no point in including it in a core 3D API.

From this point of view, the designers of OpenGL made a correct decision: they let you use the old abstraction and introduce new abstractions independently; you may choose which one you want to use. Trying to combine these abstractions would be really redundant, error-prone and overly complicated.

Depth peeling is not a part of the old OpenGL; if you want to have it, you need to do it yourself. But of course, then you will have to do everything else yourself as well. Transitions like this are always painful, but they cannot be avoided.

The goal of such an API is to allow access to hardware features. Any redundancy (= more than one standard way to do the same thing) results in a bad design.

Absolutely, but while having > 1 standard way to do the same thing is a problem, so is having no standard way to do something. If the entire OpenGL [multi]texturing API were removed, and it became required that anybody wanting multitexturing implemented it in GLSL (which is effectively what happens when you want to use a fragment shader for any reason), then the number of standard ways of performing multitexturing is reduced back to 0 – which is a consequence of having a flexible language like GLSL but no built-in, standard GLSL function to perform a common task such as multitexturing.

From this point of view, the designers of OpenGL made a correct decision: they let you use the old abstraction and introduce new abstractions independently; you may choose which one you want to use. Trying to combine these abstractions would be really redundant, error-prone and overly complicated.

What isn’t redundant, error-prone and overly complicated about having to reimplement all the fixed functionality in a vertex/fragment shader when your real goal is to do something unrelated? It does not seem complicated to me for GLSL to provide a standard “applydefaultlighting()” function, for example, which you could call from your vertex shader if you weren’t interested in modifying the default behavior, rather than reimplementing your own non-standard, error-prone lighting every time. The addition of a simple GLSL function like that would give you all the power of GLSL with the convenience and standardization of the fixed API. That is what I want.
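
Just to show the kind of boilerplate I mean, here is my rough stab at a cut-down version of it: a single directional light, per vertex, no attenuation, no spotlights, no separate specular. I can’t even promise it matches the fixed pipeline exactly, which is sort of the point.

    // Vertex shader: a stripped-down imitation of fixed-function lighting
    void main()
    {
        vec3 normal   = normalize(gl_NormalMatrix * gl_Normal);
        vec3 lightDir = normalize(gl_LightSource[0].position.xyz);  // directional light: position.w == 0
        float nDotL   = max(dot(normal, lightDir), 0.0);

        vec4 ambient = gl_FrontLightModelProduct.sceneColor + gl_FrontLightProduct[0].ambient;
        vec4 diffuse = gl_FrontLightProduct[0].diffuse * nDotL;

        gl_FrontColor  = ambient + diffuse;                 // specular, fog coord, etc. all omitted
        gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
        gl_Position    = ftransform();
    }

Now multiply that by eight lights, point and spot variants, attenuation, two-sided lighting, color material, and so on, and that is what every application apparently gets to re-type and re-test.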

Related to this, earlier on:

I think this is reversed. I see accessibility as a major problem with GLSL, because there can be no “simple” GLSL program. It’s all or nothing: do everything, or don’t do it at all. As a consequence, I sincerely believe that many casual programmers who might benefit greatly from the flexibility of GLSL instead choose not to learn it, and stick with the fixed functionality, because of how intimidating it is and how much knowledge you need to use GLSL for anything beyond cute examples downloaded off of NeHe. It’s perfectly reasonable to say “well, since GLSL gives you lower-level access to flexible hardware, intimate knowledge of the hardware is required”, but in this case I think that is an unreasonable claim: I can conceive of just a few simple things (such as an “applydefaultlighting()” function) that would immediately open up the world of GLSL to a whole slew of people who otherwise would avoid it like the plague, while adding no redundancy and taking nothing away from GLSL’s power.

Anyways, off to continue reading the Orange Book. This depth peeling thing has set me a week behind schedule so far. But like Zengar said:

Transitions like this are always painful, but they cannot be avoided.

So what can you do…

Jason

No,

This is not a correct conclusion, and I explained why. There is no such thing as “multitexturing” today. The graphics hardware can sample different textures at different coordinates and perform arbitrary math on the results. GL-style multitexturing follows an old abstraction that is no longer valid. Can you make a proposal for how to incorporate it into GLSL? Something like “gl_Multitexture(vec4 texcoords)” that returns the result of the classic texture combination? Ok, this could be done (as a matter of fact, you can easily do it yourself by generating the appropriate code automatically), but it is still a redundant and not really useful feature (as it somewhat defeats the purpose of shaders).

This is really a matter of perspective. I find it much more convenient to write a simple GLSL shader than to write the up to 16 or so glTexEnv calls that may be required to set up a GL_COMBINE texture environment.

(I mentioned this above, but one thing that bummed me out was when EXT_paletted_texture disappeared in favor of fragment shaders, and you have to do the same thing plus reimplement all the default functionality just to get this simple feature – from the standpoint of an application that needs nothing more than fast paletted textures, nothing was gained but huge amounts of convenience were lost – that is not a good property of an API.)

EXT_paletted_texture wasn’t dropped “in favor of fragment shaders”. It was dropped by IHVs because paletted textures are expensive to implement in hardware. It’s also an extension for a reason - if you want it to work everywhere, you better have a fallback solution ready.

The fastest way to “support” paletted textures is usually to expand them to RGBA - a very simple operation. You don’t need shaders for that.

It seems highly unreasonable to me that the GLSL specification does not supply built in functions that replicate existing fixed functionality. Such a simple addition would have made GLSL far more accessible to a non-game developer. I wonder what their rationale for that was.

Not wanting to sound arrogant, but it’s very obvious that you never tried to specify or implement something like this. It really isn’t “a simple addition”.

So do you have any advice for my situation, Xmas… […] Do my expectations seem unreasonable?

I don’t think they are unreasonable from your perspective, but you have to understand the ARB perspective as well. You’re unfortunately part of a minority in need of a feature which the ARB doesn’t see much use or future in.

The best advice I can give you is to analyze your application for the different permutations of texture state you use in any draw call. You may find that the number is actually quite low – in that case you can write the required fragment shaders manually. Otherwise you have to write or find a shader generator, which in this case doesn’t have to be very complex.

Also note that you don’t need to write a vertex shader if you don’t want to change the fixed-function vertex processing – a program with just a fragment shader should work fine.
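
To sketch the idea (the uniform names are made up, and I haven’t tested this): assuming you copy the depth of the previously peeled layer into a depth texture covering the viewport, each peeling pass only needs one extra comparison on top of whatever texturing the surface already does.

    uniform sampler2D colorTex;     // whatever texture the object already uses
    uniform sampler2D prevDepth;    // depth of the previously peeled layer (compare mode off)
    uniform vec2 viewportSize;      // to turn gl_FragCoord.xy into a texture coordinate

    void main()
    {
        vec2 uv = gl_FragCoord.xy / viewportSize;
        float front = texture2D(prevDepth, uv).r;

        // Peel away everything at or in front of the last layer; the regular
        // GL_LESS depth test then keeps the nearest of whatever remains.
        if (gl_FragCoord.z <= front + 0.000001)
            discard;

        gl_FragColor = gl_Color * texture2D(colorTex, gl_TexCoord[0].st);
    }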

I haven’t followed the whole thread, but judging from the last few comments you might want to read http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf unless someone has beaten me to it :slight_smile:

It contains a lot of other stuff, but since their shaders are complex, they talk about how they managed this (starting at page 40). You can’t take the exact same approach in GLSL, but it should perhaps give you some ideas.

Of course, if I’m just talking nonsense here, please ignore me :slight_smile: