Hi Guys,
I am trying to implement a depth buffer, but without using OpenGL. I understand the theory behind it, but I have no idea how to actually program it. Has anyone ever tried it (other than the people who implemented it in OpenGL)?
I am trying to calculate the depth buffer of a single triangle.
The vertices are at (0, 10000, -100.01), (-10000, -10000, -100.01), (10000, -10000, -100.01).
near = 100, far = 100000, left = -100, right = 100, top = 100, bottom = -100.
The camera is at (0,0,-1000).
I really need help on this if anybody can offer it. Feel free to ask any questions; I'll try to clarify as much as I can.
The second paragraph sounds like you also have a problem with 3D-to-2D projection…
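For the projection part, here is a minimal sketch using the frustum numbers from the question. It assumes a glFrustum-style perspective projection (which may not be exactly what you have in mind); the z value after the perspective divide is what ends up in the depth buffer, after a final remap from [-1, 1] to the depth range.

```python
def project(x, y, z, near=100.0, far=100000.0,
            left=-100.0, right=100.0, bottom=-100.0, top=100.0):
    """Perspective-project an eye-space point (z < 0 is in front of the
    camera) to normalized device coordinates, glFrustum-style."""
    # Clip-space coordinates (same math as OpenGL's glFrustum matrix).
    xc = (2 * near * x + (right + left) * z) / (right - left)
    yc = (2 * near * y + (top + bottom) * z) / (top - bottom)
    zc = (-(far + near) * z - 2 * far * near) / (far - near)
    wc = -z
    # Perspective divide gives NDC in [-1, 1] on each axis.
    return xc / wc, yc / wc, zc / wc

# A point on the near plane maps to NDC depth -1, the far plane to +1.
ndc_near = project(0.0, 0.0, -100.0)[2]    # -> -1.0
ndc_far = project(0.0, 0.0, -100000.0)[2]  # -> +1.0
```

Note the mapping is non-linear in eye-space z, so most of the depth precision sits near the near plane.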
Back to the depth buffer: it works like a triangle rasterizer, interpolating depth along the edges and then across each span, much like Gouraud shading interpolates color. If you need performance, you can do it with Bresenham's line algorithm, but stepping depth instead of the y axis.
Original vector triangle:
+
/ \
/ \
+_ \
\__ \
\__+
Rasterised spans (-), with interpolated endpoints (marked *) whose depth is interpolated between the edge's two vertices (+):
+
*-*
*---*
+-----*
*----*
*--+
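The edge interpolation above can be sketched like this (a toy example with made-up names, not anyone's actual renderer): walk one triangle edge scanline by scanline and step depth linearly between its two endpoints, exactly as Gouraud interpolation steps a color.

```python
def interpolate_edge(y0, z0, y1, z1):
    """Yield (y, z) for each integer scanline along a triangle edge,
    linearly interpolating depth z between the two endpoints."""
    if y1 < y0:  # walk top-down regardless of input order
        y0, z0, y1, z1 = y1, z1, y0, z0
    if y1 == y0:  # horizontal edge: a single scanline
        yield y0, z0
        return
    dz = (z1 - z0) / (y1 - y0)  # depth increment per scanline
    z = z0
    for y in range(y0, y1 + 1):
        yield y, z
        z += dz

# Edge from (y=0, z=10.0) to (y=4, z=30.0): depth steps by 5 per line.
edge = list(interpolate_edge(0, 10.0, 4, 30.0))
# edge == [(0, 10.0), (1, 15.0), (2, 20.0), (3, 25.0), (4, 30.0)]
```

Running this for the left and right edges of the triangle gives you, per scanline, the two span endpoints (*) with their depths; the same stepping then runs horizontally across the span.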
Then each rasterized pixel stores its own depth in a buffer (named the depth buffer, surprise).
Of course, each rasterized pixel should first compare its depth against the value already in the buffer, and only be drawn if the test passes.
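Putting the store-and-compare step together, a minimal sketch (all names hypothetical; assumes depth increases with distance, so a smaller value wins the test):

```python
def draw_span(depth_buffer, frame_buffer, y, x0, z0, x1, z1, color):
    """Rasterize one horizontal span, interpolating depth across it and
    writing a pixel only where it is nearer than the stored depth."""
    if x1 < x0:  # walk left to right
        x0, z0, x1, z1 = x1, z1, x0, z0
    dz = (z1 - z0) / (x1 - x0) if x1 != x0 else 0.0
    z = z0
    for x in range(x0, x1 + 1):
        if z < depth_buffer[y][x]:   # depth test: nearer wins
            depth_buffer[y][x] = z   # store the new depth
            frame_buffer[y][x] = color
        z += dz

W, H = 8, 4
depth = [[float("inf")] * W for _ in range(H)]  # cleared to "infinitely far"
frame = [[0] * W for _ in range(H)]
draw_span(depth, frame, 1, 1, 50.0, 6, 50.0, color=1)  # far span
draw_span(depth, frame, 1, 2, 10.0, 5, 10.0, color=2)  # nearer span wins
# frame[1] == [0, 1, 2, 2, 2, 2, 1, 0]
```

Clearing the buffer to "infinitely far" each frame is what lets the first pixel drawn at any location always pass the test.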