View Full Version : Regarding Near plane clipping



myk45
07-28-2012, 01:43 PM
Hello All!

Well, I was playing a game yesterday and was reminded of the artifacts caused by near-plane clipping. I have attached an image showing them.

Can anyone explain why these artifacts occur as a result of near-plane clipping, and whether there are ways to make it look better?
[attached screenshot]

Thanks in advance!

Dark Photon
07-28-2012, 05:09 PM
I was playing a game yesterday and was reminded of the artifacts caused by near-plane clipping. ... Can anyone explain why these artifacts occur as a result of near-plane clipping, and whether there are ways to make it look better?
Are you also a graphics developer, or just a user of graphics in games?

Assuming the latter, these artifacts typically happen because the software developer didn't set their depth bounds properly. They could have handled this (in a number of ways), but they didn't. They let objects get closer to the camera than the near clip plane of their nearest render pass.
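As a rough sketch of what "letting objects get closer than the near clip plane" means (all numbers here are made up for illustration, not from any actual game):

```python
# Sketch: an object whose bounding sphere crosses the near plane
# gets visibly sliced open by near-plane clipping.

def cut_by_near_plane(center_distance, radius, near):
    """True if the near plane slices into the object's bounding sphere."""
    return center_distance - radius < near

near = 0.5                                     # near clip distance (say, metres)
assert cut_by_near_plane(0.6, 0.3, near)       # camera too close: artifact
assert not cut_by_near_plane(2.0, 0.3, near)   # safely in front: no artifact
```

This is the kind of check a developer could use (e.g. in camera or physics code) to keep the camera from ever getting that close.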

myk45
07-28-2012, 09:33 PM
Are you also a graphics developer, or just a user of graphics in games?

Assuming the latter, these artifacts typically happen because the software developer didn't set their depth bounds properly. They could have handled this (in a number of ways), but they didn't. They let objects get closer to the camera than the near clip plane of their nearest render pass.

Thanks for the reply, Dark Photon. I am a student. I have seen these artifacts in many applications, so I was curious what the problem really is and how it can be solved.

Could you please provide some more technical details, or perhaps some links on this?

menzel
07-29-2012, 01:07 AM
Hi,

Simply put: for each pixel on the screen you store the distance to the nearest object drawn so far. When you render a new object (say, the car on top of the street), you check whether it is closer and draw it, or discard it if it is farther away (e.g. in case you draw the car first and then the street). This is Z-testing using a Z-buffer (a.k.a. depth buffer), and it means the order in which you render your stuff doesn't matter. This distance, or depth, is usually stored in a 24-bit buffer (other precisions are possible). So you can store 2^24 distinct depth values (interpreted as a range from 0..1 or -1..1, but that doesn't change the fact that your resolution is only 24 bits).
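The depth-test idea can be sketched in a few lines (a toy one-pixel "screen"; the two example "objects" are made up):

```python
# Minimal sketch of Z-buffered rendering for a single pixel.

def render(fragments):
    """fragments: list of (depth, color) draws for one pixel, in submission order."""
    depth_buffer = float("inf")   # start "infinitely far away"
    color_buffer = None
    for depth, color in fragments:
        if depth < depth_buffer:  # depth test: keep only the closest fragment
            depth_buffer = depth
            color_buffer = color
    return color_buffer

street = (10.0, "grey")
car = (5.0, "red")

# Draw order doesn't matter: the closer object (the car) wins either way.
assert render([street, car]) == render([car, street]) == "red"
```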

So there is a distance beyond which you can no longer decide which object is closer, because your depth buffer has reached its resolution limit (the far plane), and you also have to decide where to start counting distances in front of your eye (the near plane). For mathematical reasons this can't be the eye itself; it has to be some distance in front of it (in your screenshot it may be about 1 m). Look up projection matrices, where you plug in values for the near and far plane: you will see that you would have to divide by zero if you wanted a near plane at distance 0...

So why not set the near plane 1 mm away and the far plane 100 km away, avoiding those clipping artefacts? Because the limited 24-bit resolution, which is not even uniformly distributed over the distance from near to far plane, would produce other artefacts: objects close together would blend into each other, because their calculated depths end up being the same (google depth fighting or Z-fighting).
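You can demonstrate that blending numerically: quantize depth to 24 bits with exactly that "1 mm to 100 km" setup and two surfaces a full metre apart land in the same depth bucket (the distances are made-up example values):

```python
# Sketch: 24-bit depth quantization causing Z-fighting at a distance.

def ndc_depth(d, near, far):
    return (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)

def depth24(d, near, far):
    window = (ndc_depth(d, near, far) + 1.0) / 2.0   # map [-1, 1] -> [0, 1]
    return round(window * (2**24 - 1))               # 24-bit integer depth

n, f = 0.001, 100_000.0          # "1 mm to 100 km" from the text above
a = depth24(5000.0, n, f)        # two surfaces 1 m apart, 5 km away...
b = depth24(5001.0, n, f)
assert a == b                    # ...get the same depth value: Z-fighting
```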

So in the end you want the near plane as far away as possible and the far plane as near as possible to prevent Z-fighting, but still near/far enough that they don't intersect any objects (e.g. your physics code could prevent you from getting close enough to an object to see the problem). Clearly, this tradeoff did not work out well in the posted screenshot...
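To see why pushing the near plane out helps so much, compare how much of the depth range the first metre consumes for two near-plane choices (far plane fixed at 1 km; the numbers are made up):

```python
# Sketch: fraction of the [0,1] depth range spent on the first metre.

def window_depth(d, near, far):
    ndc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)
    return (ndc + 1.0) / 2.0     # window-space depth in [0, 1]

far = 1000.0
frac_tiny_near = window_depth(1.0, 0.001, far)  # near = 1 mm
frac_sane_near = window_depth(1.0, 0.1, far)    # near = 10 cm

# With a 1 mm near plane, over 99% of all depth values are spent on the
# first metre, leaving almost no precision for the remaining 999 m.
assert frac_tiny_near > 0.99
assert frac_sane_near < frac_tiny_near
```

This is why even a modest increase of the near-plane distance buys a large amount of depth precision in the rest of the scene.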

To fully understand all of this, you might want to look up the Z-buffer, projection, and the rendering pipeline. If you are interested in graphics, you might want to get a copy of "Real-Time Rendering" by Akenine-Möller et al. (maybe your local library has it).

myk45
07-29-2012, 08:32 AM
simply said, you store for each pixel on the screen the distance to the nearest object ... [snip]

Thanks a lot for the reply, menzel! I will look up those terms and also check out the book you've suggested.