A couple of questions

Hi. I’ve done some C programming before and I’m starting to tinker around with OpenGL (mostly for fun), and I’ve got a few questions.

  1. Most of the things I’ve read about blending say you have to turn off depth testing to get it to work properly – otherwise, the polygons behind will disappear rather than blend. But if you have to do this, how do blended objects get depth tested against OTHER objects? For example, if I have a particle system that uses blending, it shows up in front of everything no matter how many polygons are between me and it (or it always appears behind other objects, no matter how far in front of them it is). Do I have to keep track of the location of all my objects and depth test them manually? This seems very complicated – do real OpenGL apps just not use blending at all?

  2. I have a simple model that I loaded from a file, and I then added lighting to my scene. For some reason, the same side of the model is lit no matter where the model is with respect to the light. It is always lit on the negative-z side of the model, even if that is the side facing away from the light, or even if it is sideways from the light. I think this is because I’m doing something wrong, but I can’t figure out what it is.

Thanks if you can answer my questions.

  1. blending is a difficult and still unresolved problem in real time computer graphics. most applications render in two separate passes: first they render the opaque objects like normal, then they render the transparent objects in a second pass. in the transparent pass, depth writing is turned off but depth testing is left on, so if there’s an opaque object in front of a transparent object, the transparent object still gets hidden correctly. the catch is that, to get correct results, you need to draw the transparent objects from back to front, so you have to sort them by depth relative to the eye (the straight-line distance to the eye position is a good enough approximation). once you do that, you can render with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and get good results. but then there are more problems: if your transparent objects intersect, you’ll need to break them up into separate pieces to get a correct back-to-front order. it gets to be a real headache to be honest. i’ve put a rough sketch of the pass ordering at the end of this post.

there are some methods for doing order independent transparency. these methods work such that you can render your transparent objects in any order you want and the blending works out perfectly. one such method (the only one i know of) is called depth peeling. it’s very expensive though. it requires a rendering pass over your scene for each “layer” of transparency. nvidia has a demo on their site.

  2. are you specifying the light position after setting up the viewing transformation? if not, try doing so and see if that helps.
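here’s the rough pass-ordering sketch i mentioned in #1. it’s just illustrative – draw_opaque_objects(), sort_transparent_back_to_front() and draw_transparent_objects() are placeholders for whatever your app actually does:

```c
#include <GL/gl.h>

/* placeholder hooks for the app's own drawing/sorting code */
extern void draw_opaque_objects(void);
extern void sort_transparent_back_to_front(void); /* sort by distance to the eye */
extern void draw_transparent_objects(void);       /* draw in the sorted order    */

void render_scene(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* pass 1: opaque objects with normal depth testing and depth writing */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    draw_opaque_objects();

    /* pass 2: transparent objects, back to front.
       depth TEST stays on so opaque geometry still hides them,
       but depth WRITES are off so they don't hide each other. */
    sort_transparent_back_to_front();
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    draw_transparent_objects();

    glDepthMask(GL_TRUE); /* restore depth writes for the next frame */
}
```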

-steve.

  1. Ah, works like a charm! I wasn’t even sure what the difference was between depth writing and depth testing until now. The blending I’m using right now is really simple – all of the polygons vary only in position along the y-axis – so it seems the depth sorting isn’t necessary, but it’s good to know. One would think there would be a simpler way to do this.

  2. I’ve tried moving the light declaration to several different points in my code and it hasn’t seemed to do much. I’m beginning to think that I loaded the normals from the file wrong – although it seems rather strange that one side is always lit and the other is dark, rather than the erratic lighting I would expect from bad normals. I’ll have to experiment with it I guess.
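In case it does turn out to be the normals, this is roughly the check I’m planning to write – recompute each face normal from its vertex positions and compare it to the normal the loader read. The Vec3 struct and the counter-clockwise winding are just assumptions about my own file format:

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Recompute a unit face normal from three vertices given in
   counter-clockwise order, to compare against the loaded normal. */
static Vec3 face_normal(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
    Vec3 n = { u.y * v.z - u.z * v.y,   /* cross product u x v */
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) {
        n.x /= len;
        n.y /= len;
        n.z /= len;
    }
    return n;
}
```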

Try setting the position of the light after you’ve done any “camera” transformations, but before you do any transformations for your model. The position of the light is affected by the current modelview matrix, so if you are setting the position of the light after you’ve done the transformations for your model, it will appear to move with your model, keeping the same side always lit.
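Here’s a minimal sketch of that ordering with fixed-function lighting (it assumes GL_LIGHTING and GL_LIGHT0 are already enabled elsewhere; the camera position, the model transform values, and draw_model() are just placeholders for whatever your code actually does):

```c
#include <GL/gl.h>
#include <GL/glu.h>

extern void draw_model(void); /* placeholder for your model-drawing code */

void display_frame(void)
{
    GLfloat light_pos[] = { 10.0f, 10.0f, 10.0f, 1.0f }; /* w = 1 -> positional light */

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* 1. camera ("viewing") transform first */
    gluLookAt(0.0, 2.0, 8.0,   /* eye      */
              0.0, 0.0, 0.0,   /* look at  */
              0.0, 1.0, 0.0);  /* up       */

    /* 2. set the light position now: it is transformed by the camera only,
          so it stays fixed in the world instead of following the model */
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

    /* 3. model transforms last; the light does not move with these */
    glPushMatrix();
    glTranslatef(1.0f, 0.0f, -3.0f);
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);
    draw_model();
    glPopMatrix();
}
```

If you ever want the opposite – a “headlight” that moves with the camera – set the light position right after glLoadIdentity(), before the camera transform.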