Stencil INCR's

hello, I'm trying to write a stencil shadowing program. I draw all the shadow volumes' front geometry with GL_INCR, and a very odd thing happens if any shadows overlap: when the stencil value is incremented above 1 it becomes 0, and if it's incremented again it goes back to 1. I don't understand why it would do this; I'm not doing any other stencil actions. Right after this I draw the lit geometry with stenciling (GL_EQUAL 0,1), which leaves the shadow volumes pure black like I want, but wherever they overlap the value just keeps going back to 0.

I hope you were able to understand my problem. Does anyone know why it would increment from 1 to 0? thx
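Roughly what I'm doing, in case that helps (a trimmed-down sketch; the draw*() calls and the color/depth mask lines are just stand-ins for my own code):

    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: front geometry of the shadow volumes, incrementing the
       stencil wherever they cover the scene (no color or depth writes). */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    drawShadowVolumeFrontGeo();        /* stand-in for my shadow volume pass */

    /* Pass 2: the lit geometry, only where the stencil is still 0;
       this is the (GL_EQUAL 0,1) I mentioned. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawLitGeo();                      /* stand-in for my scene pass */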

Maybe you just have a 1 bit stencil buffer.
You should check your window creation, where you set your pixel format. You normally need a 32 bit color buffer with a 24 bit depth buffer to get an 8 bit stencil.
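For example, with a Win32/WGL window the relevant bit of the setup looks roughly like this (just a sketch; hDC stands for your window's device context):

    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize        = sizeof(pfd);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = 32;   /* 32 bit color buffer              */
    pfd.cDepthBits   = 24;   /* 24 bit depth buffer...           */
    pfd.cStencilBits = 8;    /* ...which leaves an 8 bit stencil */

    int pf = ChoosePixelFormat(hDC, &pfd);
    SetPixelFormat(hDC, pf, &pfd);

    /* and once the context is current you can check what you really got: */
    GLint stencilBits = 0;
    glGetIntegerv(GL_STENCIL_BITS, &stencilBits);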

Lars

Actually, to tell you the truth, I do have it at 8 bits, but it makes no difference compared to running it at 1 bit. I also checked to make sure it was working correctly by changing it to 0, and it did what it was supposed to, so it's not the stencil bits. Any other ideas about what it could be?

Ok, well, here is a most interesting update on my problem. I decided I would go ahead and finish it, so I added the code where it draws the reverse geometry set to DECR, and it was not any different. So then I started messing with different things to see how they would affect it, and I had almost given up hope when I changed all of the masks in glStencilFunc to 3, and now it looks perfect, no problems at all. Could anyone explain to me how this would do anything to fix the problem?

If by (GL_EQUAL 0,1) you mean that you call glStencilFunc(GL_EQUAL, 0, 1), then that can be the problem. The last parameter is the stencil mask; it defines which bits the stencil operations are done on.
If you set it to 1, all stencil operations will only affect the first bit.
To have it affect all the bits, you need to call glStencilFunc(GL_EQUAL, 0, 0xff).
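In other words (sketch, with the problem call next to the fix):

    glStencilFunc(GL_EQUAL, 0, 1);      /* mask 00000001: only bit 0 takes part   */
    glStencilFunc(GL_EQUAL, 0, 0xff);   /* mask 11111111: all 8 stencil bits do   */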

Lars

edit: corrected 0xffffffff to 0xff

[This message has been edited by Lars (edited 09-08-2002).]

Make sure your stencil mask (the third parameter in glStencilFunc) is 0xff, and not 0x1.

If you have it set to 0x1 (or any other value where the second bit is 0), then incrementing 1 will always wrap to 0.

D’oh – apparently Lars beat me to it.

I (almost) always use a mask of (~0). That way, if you've got more than 8 bits, you can use more than 8 bits. :slight_smile:
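So the earlier call would become, for example:

    glStencilFunc(GL_EQUAL, 0, ~0);   /* every mask bit set, however deep the stencil buffer is */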

Cass

…mmmh, are there some new pixel formats with a bigger stencil on the way?
Maybe 64 bit color, 32 bit Z and 32 bit stencil?

Would be nice :slight_smile:

Lars

I do the same thing as Cass. This way, no matter what bit depth your stencil buffer is, the complement of 0 (~0) will always fill it with all 1’s for you.

-SirKnight

-1 would also work. Depends on what you find most intuitive.

Yes, -1 also works, because ~0 == -1. I like ~0 because it looks cool to me.

-SirKnight

Great, that helps a lot, thanks everyone. Problem solved, and now I know why.

It comes out equal because you're comparing the results of bitwise operators applied to a signed int. You're hoping -1 is converted to 0xFFFFFFFF when the type conversion to unsigned happens.

This is very bad code IMHO; how can you find this intuitive, or expect someone else to?

NOT 0 ends up as the full 32 bits set to 1 anyway, 0xFFFFFFFF. What’s really happening here is that it always sends 0xFFFFFFFF, because the function takes a full unsigned int.

While we're here, how about ~0x0? But maybe I'm being too pedantic. I mean, it's a whole extra two characters to type.

[This message has been edited by dorbie (edited 09-10-2002).]

As far as I know, -1 is defined as all bits set to one… even for unsigned… that's at least what I learned through some C++ discussions. But it doesn't matter; it works, and that's what counts 99% of the time…

There is nothing wrong with using -1. The computer stores numbers using two's complement arithmetic. So let's see here… ~0 is just the bitwise NOT of 0: take all eight 0's and invert them, which gives you 11111111 (eight 1's). Now, since the computer uses two's complement to store signed numbers, we can easily see why -1 gives the same thing. To store -1 you take 1 = 00000001, invert the bits to get 11111110, and add one, which gives 11111111 again. Going the other way, the MSB (left most bit) is the sign bit: 1 means negative and 0 means positive. To find what 11111111 represents, you do a two's complement on it (invert the bits and add one), which gives 00000001, i.e. 1, and since the sign bit was set the value is -1. This is why -1 works and why there is no problem using it.
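Written out on eight bits (a tiny sketch, using unsigned char just to keep it to one byte):

    #include <assert.h>

    int main(void)
    {
        /* ~0:  invert 00000000 -> 11111111                                */
        /* -1:  take 1 = 00000001, invert -> 11111110, add one -> 11111111 */
        unsigned char a = (unsigned char)~0;   /* bitwise NOT of zero      */
        unsigned char b = (unsigned char)-1;   /* -1 in two's complement   */
        assert(a == 0xFF && b == 0xFF);        /* same bit pattern         */
        return 0;
    }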

-SirKnight

[This message has been edited by SirKnight (edited 09-10-2002).]

Fine, -1 sets all the bits to one. No disagreement there.

But are you sure it won't change when your compiler casts it to an unsigned? Will all compilers do so? If you can quote the relevant portion of the C/C++ spec that says so, I'll believe that it's safe code (for compliant compilers).

But having to depend on compiler compliance for the -1 trick makes it a little scary to me. I'll just use ~0, thank you.

Ok, now that I am at home, I have been able to test some things. It looks to me like VC++, at least, will convert it correctly. When I made an unsigned int and put a -1 or a ~0 (same thing) into it, it gave me a large number that, converted to binary, was all 1's. Now, I don't think passing -1 or ~0 makes a difference in any compiler, because ~0 is still a -1 no matter how you look at it. I'm not sure how compilers do this, but what I'm betting is that when they see the ~0 they go ahead and fold it to -1 at compile time, say in the function parameter, and continue compiling. I wouldn't think it makes the CPU do the NOT of 0 at run-time; that would be kind of silly really, just extra calculations that don't need to be done. ~0 will always be the same no matter how many times you calculate it.
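Something like this little test, if anyone wants to try it themselves (sketch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int a = -1;   /* -1 converted to unsigned */
        unsigned int b = ~0;   /* bitwise NOT of zero      */

        printf("%u\n", a);     /* 4294967295 with a 32 bit int */
        printf("%u\n", b);     /* same value                   */
        return 0;
    }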

-SirKnight

[This message has been edited by SirKnight (edited 09-10-2002).]

One more thing I forgot to mention. Some compilers may throw a warning or an error if they see a negative number being assigned to an unsigned variable. That would be crappy. But I would think that if a compiler did that, it should also complain when ~0 was being assigned to an unsigned var. Every compiler knows how to handle the ~ operator; they all know it's the bitwise NOT. If not, then it's a ****ty compiler and you shouldn't be using it anyway.

Now, like I said before, when I made a test app and put a ~0 (or -1) in an unsigned int var, it gave me a number whose bits are all 1's (since int is 32 bits, that's 32 1's, which printed to the console in base 10 as 4294967295, but anyway). So I'm thinking what happened was it allocated the memory to hold this 32 bit integer, did the ~0 calculation, and stored the result into that memory space. The signedness isn't stored with the number; it's just part of the variable's type that the compiler knows about, so because the variable was declared unsigned, the bits are read back as a plain binary number instead of being interpreted through two's complement (the MSB isn't treated as a sign bit). This has to be how it happened, because it works and makes sense.

-SirKnight

[This message has been edited by SirKnight (edited 09-10-2002).]

NO, it's the code that's crappy. Step back and look at the big picture: code that raises these questions is bad code.

[This message has been edited by dorbie (edited 09-11-2002).]