scissor region from bounding sphere

I use this function to get the scissor region of a bounding box. I can then use the region like this:

glScissor( kScissor.m_iLeft, kScissor.m_iBottom, kScissor.m_iRight-kScissor.m_iLeft, kScissor.m_iTop-kScissor.m_iBottom );

It works pretty well. Now I'd like to do the same for bounding spheres… I could of course create a bounding box from the sphere and then use the same function, but I'm sure there are much faster ways… I read the section in Eric's article on stencil shadows about this issue, but I have to admit that I don't understand all those formulas… and I'd like to understand what I'm doing. So how do I have to modify my function to support spheres? Thanks in advance :slight_smile:

struct WindowCoordinates
{
	int x, y;
};

struct ScissorRegion
{
	int m_iLeft, m_iBottom, m_iRight, m_iTop;
};

ScissorRegion BoundingBox::GetScissorRegion()
{
	// get maxima/minima of bounding box
	Vector3D kMinimum = m_kMinimum+m_kTranslation;
	Vector3D kMaximum = m_kMaximum+m_kTranslation;

	// get corners of bounding box
	vector<Vector3D> vkCorners;
	vkCorners.reserve( 8 );
	vkCorners.push_back( Vector3D(kMinimum.x, kMinimum.y, kMinimum.z) );
	vkCorners.push_back( Vector3D(kMaximum.x, kMinimum.y, kMinimum.z) );
	vkCorners.push_back( Vector3D(kMaximum.x, kMinimum.y, kMaximum.z) );
	vkCorners.push_back( Vector3D(kMinimum.x, kMinimum.y, kMaximum.z) );
	vkCorners.push_back( Vector3D(kMinimum.x, kMaximum.y, kMinimum.z) );
	vkCorners.push_back( Vector3D(kMinimum.x, kMaximum.y, kMaximum.z) );
	vkCorners.push_back( Vector3D(kMaximum.x, kMaximum.y, kMaximum.z) );
	vkCorners.push_back( Vector3D(kMaximum.x, kMaximum.y, kMinimum.z) );
	
	// set initial values for scissor box
	ScissorRegion kBox;
	kBox.m_iLeft = Camera::GetActiveCamera()->GetViewport().m_uiWidth;
	kBox.m_iBottom = Camera::GetActiveCamera()->GetViewport().m_uiHeight;
	kBox.m_iRight = Camera::GetActiveCamera()->GetViewport().m_uiX;
	kBox.m_iTop = Camera::GetActiveCamera()->GetViewport().m_uiY;

	for( unsigned int i = 0; i < vkCorners.size(); i++ )
	{
		// get window coordinates of transformed point ( similar to gluProject() )
		WindowCoordinates kCoordinates = Core::Get()->GetRenderer()->ProjectToScreen( vkCorners[i] );

		// compare with the current extremes; a corner may need to update both
		// the minimum and the maximum, so don't use "else if" here
		if( kCoordinates.x < kBox.m_iLeft )
			kBox.m_iLeft = kCoordinates.x;
		if( kCoordinates.x > kBox.m_iRight )
			kBox.m_iRight = kCoordinates.x;
		if( kCoordinates.y < kBox.m_iBottom )
			kBox.m_iBottom = kCoordinates.y;
		if( kCoordinates.y > kBox.m_iTop )
			kBox.m_iTop = kCoordinates.y;
	}

	vkCorners.clear();

	return kBox;
}

PS: Can I only have one scissor region per frame, or could I also do something like this:

ScissorRegion kScissor;
glEnable( GL_SCISSOR_TEST );
kScissor = kMesh1->GetBoundingBox().GetScissorRegion();
glScissor( kScissor.m_iLeft, kScissor.m_iBottom, kScissor.m_iRight-kScissor.m_iLeft, kScissor.m_iTop-kScissor.m_iBottom );
kMesh1->Render();
kScissor = kMesh2->GetBoundingBox().GetScissorRegion();
glScissor( kScissor.m_iLeft, kScissor.m_iBottom, kScissor.m_iRight-kScissor.m_iLeft, kScissor.m_iTop-kScissor.m_iBottom );
kMesh2->Render();
...
glDisable( GL_SCISSOR_TEST );
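For reference, the brute-force fallback mentioned above - wrapping the sphere in an axis-aligned box and reusing GetScissorRegion() - could look roughly like this. This is only a sketch: the BoundingBox constructor taking minimum/maximum corners and the Vector3D arithmetic are assumptions, not taken from the actual classes in this thread.

ScissorRegion GetSphereScissorRegion( const Vector3D& kCenter, float fRadius )
{
	// assumed: Vector3D supports component-wise +/- and BoundingBox can be
	// built directly from its minimum and maximum corners
	Vector3D kExtent( fRadius, fRadius, fRadius );
	BoundingBox kBox( kCenter - kExtent, kCenter + kExtent );

	// the resulting region always encloses the sphere, but is generally larger
	// than the tight rectangle computed by the methods discussed below
	return kBox.GetScissorRegion();
}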

With spheres you need 2 points - the center and the point on the sphere that appears farthest from the center when seen from the camera position. I will probably describe it better when I'm at home this evening.

As for your second question - glScissor is no different than glViewport, glRotate or glEnable - it affects the current OpenGL state, and whatever you render is rendered according to that state. So you can change the scissor region as often as you want.

I will probably describe it better when I’m at home this evening.
Yeah, I'd greatly appreciate that, thanks :slight_smile:

I read the section in Eric's article on stencil shadows about this issue, but I have to admit that I don't understand all those formulas… and I'd like to understand what I'm doing.
What exactly don't you understand? And, perhaps more importantly, what the heck are you talking about?

P.S. Sounds like someone didn’t do their math homework :wink:

Several years ago I successfully implemented Eric's paper - it all works well.
(Here is my old code below.)

  
// Transforms p_point (camera space) by p_m using only columns 0, 1 and 3
// (x, y and w), then divides by w to get normalized device coordinates in [-1,1].
void i_project_point(float_4 p_projected[2],float_4 p_point[3],const math::matrix_44& p_m)
{
    float_4 onNearPlane[3] = 
    {
      p_point[0]*p_m.e[0][0]+p_point[1]*p_m.e[1][0]+p_point[2]*p_m.e[2][0] + p_m.e[3][0]   // x
    , p_point[0]*p_m.e[0][1]+p_point[1]*p_m.e[1][1]+p_point[2]*p_m.e[2][1] + p_m.e[3][1]   // y
    , p_point[0]*p_m.e[0][3]+p_point[1]*p_m.e[1][3]+p_point[2]*p_m.e[2][3] + p_m.e[3][3]   // w
    };

    float_4 rcpW = 1.f / onNearPlane[2];

    p_projected[0] = onNearPlane[0]*rcpW;
    p_projected[1] = onNearPlane[1]*rcpW;
}
//////////////////////////////////////////////////////////////////////////
/*
 *  Sphere's scissor box
 */

bool local_light::eval_scissor_sphere
    ( const transform_matrix& p_to_camera_space_transform
    , const math::matrix_44 & p_projection_transform      
    , count_4 p_screen_width
    , count_4 p_screen_height
    , count_4 *p_right
    , count_4 *p_left
    , count_4 *p_up
    , count_4 *p_down
    )
{

    using math::pow2;
//1. Light position in camera space
    coord3_xyz cameraSpaceLightPos;
    math::mul(&cameraSpaceLightPos,position,p_to_camera_space_transform);
    float_4 L[3] = {math::x(cameraSpaceLightPos),math::y(cameraSpaceLightPos),math::z(cameraSpaceLightPos)};

//2. Find the planes through the camera (the origin) that are tangent to the sphere:
//   Tx = <Nx,0,Nz,0>  (bounds left/right)      Ty = <0,Ny,Nz,0>  (bounds top/bottom)
//   L - light position, R - radius
//   For the Tx planes solve:
//   a) T.L = R             (plane is tangent to the sphere)
//   b) Nx*Nx + Nz*Nz = 1   (unit-length normal)
    float_4 R = attenuation_radius(); 
    float_4 D;
    D = 4.f * (pow2(R)*pow2(L[0]) - (pow2(L[0])+pow2(L[2]))*(pow2(R)-pow2(L[2])));
    if( D <= 0 ) 
    {  // no solution
        return false;       
    }
    // solve equation
    float_4 Nx1 = (R*L[0] + sqrt(D/4.f))/ (pow2(L[0])+pow2(L[2]));
    float_4 Nx2 = (R*L[0] - sqrt(D/4.f))/ (pow2(L[0])+pow2(L[2]));
    float_4 Nz1 = (R - Nx1*L[0]) / L[2];
    float_4 Nz2 = (R - Nx2*L[0]) / L[2];
    // T1x = <Nx1,0,Nz1,0>          T2x = <Nx2,0,Nz2,0>

// P1 , P2
    float_4 Pz[2] =  
    {
      (pow2(L[0]) + pow2(L[2]) - pow2(R)) / ( L[2] - Nz1/Nx1*L[0] )
    , (pow2(L[0]) + pow2(L[2]) - pow2(R)) / ( L[2] - Nz2/Nx2*L[0] )
    };

    float_4 Px[2] =  
    {
      - (Pz[0] * Nz1 ) / Nx1
    , - (Pz[1] * Nz2 ) / Nx2
    };
    
    
    float_4 pointXPlane1[3] = {Px[0],0,Pz[0]};
    float_4 pointXPlane2[3] = {Px[1],0,Pz[1]};    

    // projects points to screen. [-1,1];
    float_4 pX1[2], pX2[2];
    i_project_point(pX1,pointXPlane1,p_projection_transform);
    i_project_point(pX2,pointXPlane2,p_projection_transform);    
    
    
    float_4 right,left;
    bool leftValid,rightValid;

    if( pX1[0] > pX2[0] ) 
    {
        right = pX1[0], left = pX2[0]; 
        rightValid = (Pz[0] < 0.f );
        leftValid =  (Pz[1] < 0.f );
    } 
    else
    {
        right = pX2[0], left = pX1[0]; 
        rightValid = (Pz[1] < 0.f );
        leftValid =  (Pz[0] < 0.f );
    }    

//   Y: the same equations with L.y in place of L.x. A negative radicand makes
//   D NaN here; the NaN then propagates and is handled by the validity/_isnan
//   fall-backs below, which end up covering the full vertical extent.
    D = sqrt( pow2(R)*pow2(L[1]) - (pow2(L[1])+pow2(L[2]))*(pow2(R)-pow2(L[2])) );
    if( D < 0 ) return false; 
 
    float_4 Ny1 = ( R*L[1] + D) / (pow2(L[1]) + pow2(L[2]));
    float_4 Ny2 = ( R*L[1] - D) / (pow2(L[1]) + pow2(L[2]));  
 
    Nz1 =  (R - Ny1*L[1]) / L[2];
    Nz2 =  (R - Ny2*L[1]) / L[2];

    Pz[0] = (pow2(L[1]) + pow2(L[2]) - pow2(R)) / ( L[2] - Nz1/Ny1*L[1] );
    Pz[1] = (pow2(L[1]) + pow2(L[2]) - pow2(R)) / ( L[2] - Nz2/Ny2*L[1] );
    float_4 Py[2] = 
    {
      - (Pz[0] * Nz1 ) / Ny1
    , - (Pz[1] * Nz2 ) / Ny2
    };

    float_4 pointYPlane1[3] = {0.f,Py[0],Pz[0]};
    float_4 pointYPlane2[3] = {0.f,Py[1],Pz[1]};

    float_4 pY1[2],pY2[2];
    i_project_point(pY1,pointYPlane1,p_projection_transform);
    i_project_point(pY2,pointYPlane2,p_projection_transform);

    float_4 up,down;
    bool upValid,downValid;
    if( pY1[1] > pY2[1] ) 
    {
        up = pY1[1], down = pY2[1]; 
        upValid = Pz[0] < 0.f;
        downValid = Pz[1] < 0.f;
    }
    else
    {
        up = pY2[1], down = pY1[1];
        upValid = Pz[1] < 0.f;
        downValid = Pz[0] < 0.f;
    }    
    
    if( !leftValid)     left = right,   right = 1.f;
    if( !rightValid)    right = left,   left = -1.f;
    if( !upValid )      up = down,      down = -1.f;
    if( !downValid )    down = up,      up = 1.f;

    if( _isnan( right ) ) right = 1.f;
    else if( right > 1.f ) right = 1.f;

    if( _isnan( left ) ) left = -1.f;
    else if( left < -1.f ) left = -1.f;

    if( _isnan( up ) ) up = 1.f;
    else if( up > 1.f ) up = 1.f;        

    if( _isnan( down ) ) down = -1.f;
    else if( down < -1.f ) down = -1.f;


    *p_left   = (count_4)((left * .5f + .5f) * (float_4)p_screen_width);
    *p_right  = (count_4)((right * .5f + .5f) * (float_4)p_screen_width);
    *p_up     = (count_4)((up * .5f + .5f) * (float_4)p_screen_height);    
    *p_down   = (count_4)((down * .5f + .5f) * (float_4)p_screen_height);    

    return true;    
} 
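A rough sketch of how the function above might be driven - the light object, the matrices and the screen-size variables are placeholders, not part of the original code:

count_4 right, left, up, down;
if( light.eval_scissor_sphere( toCameraSpace, projection,
                               screenWidth, screenHeight,
                               &right, &left, &up, &down ) )
{
    glEnable( GL_SCISSOR_TEST );
    // left/down form the lower-left corner in pixels, right/up the upper-right
    glScissor( left, down, right - left, up - down );
    // ... render the light's contribution here ...
    glDisable( GL_SCISSOR_TEST );
}
else
{
    // no tight rectangle could be computed - fall back to the full viewport
    glDisable( GL_SCISSOR_TEST );
}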

Here's the code that I actually use in the C4 Engine. Compared to the Gamasutra article, it's simplified a bit and slightly more robust.

enum ProjectionResult
{
    kProjectionEmpty,
    kProjectionPartial,
    kProjectionFull
};

ProjectionResult Camera::ProjectSphere(const Point3D& center, float radius, ProjectionRect *rect) const
{
	float cx = center.x;
	float cy = center.y;
	float cz = center.z;
	float r2 = radius * radius;
	
	float cx2 = cx * cx;
	float cy2 = cy * cy;
	float cz2 = cz * cz;
	float cxz2 = cx2 + cz2;
	if (cxz2 + cy2 > r2)
	{
		float left = -1.0F;
		float right = 1.0F;
		float bottom = -1.0F;
		float top = 1.0F;
		
		float rcz = 1.0F / cz;
		
		float dx = r2 * cx2 - cxz2 * (r2 - cz2);
		if (dx > 0.0F)
		{
			dx = Sqrt(dx);
			float ax = 1.0F / cxz2;
			float bx = radius * cx;
			
			float nx1 = (bx + dx) * ax;
			float nx2 = (bx - dx) * ax;
			
			float nz1 = (radius - nx1 * cx) * rcz;
			float nz2 = (radius - nx2 * cx) * rcz;
			
			float pz1 = cz - radius * nz1;
			float pz2 = cz - radius * nz2;
			
			if (pz1 < 0.0F)
			{
				float x = nz1 * focalLength / nx1;
				if (nx1 > 0.0F) left = Fmax(left, x);
				else right = Fmin(right, x);
			}
			
			if (pz2 < 0.0F)
			{
				float x = nz2 * focalLength / nx2;
				if (nx2 > 0.0F) left = Fmax(left, x);
				else right = Fmin(right, x);
			}
		}
		
		float cyz2 = cy2 + cz2;
		float dy = r2 * cy2 - cyz2 * (r2 - cz2);
		if (dy > 0.0F)
		{
			dy = Sqrt(dy);
			float ay = 1.0F / cyz2;
			float by = radius * cy;
			
			float ny1 = (by + dy) * ay;
			float ny2 = (by - dy) * ay;
			
			float nz1 = (radius - ny1 * cy) * rcz;
			float nz2 = (radius - ny2 * cy) * rcz;
			
			float pz1 = cz - radius * nz1;
			float pz2 = cz - radius * nz2;
			
			if (pz1 < 0.0F)
			{
				float y = nz1 * focalLength / (ny1 * aspectRatio);
				if (ny1 > 0.0F) bottom = Fmax(bottom, y);
				else top = Fmin(top, y);
			}
			
			if (pz2 < 0.0F)
			{
				float y = nz2 * focalLength / (ny2 * aspectRatio);
				if (ny2 > 0.0F) bottom = Fmax(bottom, y);
				else top = Fmin(top, y);
			}
		}
		
		if ((!(left < right)) || (!(bottom < top))) return (kProjectionEmpty);
		
		rect->left = left;
		rect->right = right;
		rect->bottom = bottom;
		rect->top = top;
		
		return (kProjectionPartial);
	}
	
	return (kProjectionFull);
}

If the sphere fills the whole screen (because the camera’s inside it), then the return value is kProjectionFull. If the sphere can’t be seen, the return value is kProjectionEmpty. Otherwise, the return value is kProjectionPartial, and the rect parameter is filled in with the min and max x/y values. These are in screen-normalized coordinates, so [-1,1] in both directions represents the whole viewport.

The variables focalLength and aspectRatio represent properties of the camera. The focal length is given by 1/tan(fov/2), and the aspect ratio is the height of the viewport divided by its width.
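As a minimal sketch (an assumption about a typical setup, not code taken from the engine), the two values could be derived from the field of view and the viewport size like this:

// assumed: fov is the camera's horizontal field of view in radians and the
// viewport size is given in pixels
float focalLength = 1.0F / tanf( fov * 0.5F );                          // 1 / tan(fov/2)
float aspectRatio = (float) viewportHeight / (float) viewportWidth;     // height / width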

OK, people have pasted some code here, but you mentioned you want to understand it better. However, I will describe a slightly different approach.
Here is a simple case [diagram: A is the sphere's center, C is the camera, and B and D lie on a line from the camera that is tangent to the sphere]:

The trick is to find point B or D.
We know the radius of the sphere (AB) and we can easily compute the distance from the camera to the sphere's center (AC). The angle at B is 90 degrees, so we could easily compute the distance from C to B, but we don't need it - we just need any point on the line passing through B, C and D, so we really only need its direction. Since one of the angles is 90 degrees, this can be computed easily.
If you compute the direction to the sphere's center in camera space (a vertical angle and a horizontal angle - both are 0 if the camera looks straight at the sphere's center), then you can add/subtract the angle between CA and CB to/from these two angles, which gives you the angles to the top, bottom, left and right edges of the sphere.
Once you have these angles in camera space, you can easily find the pixels on screen that correspond to those directions using just the tan() function.
The parameters passed to glFrustum give you the left, right, top and bottom of the screen at the near plane - divide them by zNear and you get the tangents of the angles at the edges of the screen.
This approach is a bit different but may actually prove faster (and perhaps simpler to understand). I can't guarantee that everything I've written here is correct, since I came up with this solution on the fly while writing this post, but I believe it's more or less right.

Wow, thank you for all your help! :slight_smile: Eric, I couldn't implement your function successfully, though - it always returns kProjectionFull. What confuses me is that neither the view nor the projection matrix is involved… How do I have to pass the center and the radius? Does the center already have to be multiplied by the view matrix, or is it expected to be already projected to the screen…? Thank you for your explanation, k_szczech, I'll try to adapt it, but as you've all probably realized, I suck at maths ^^

Yes, the center of the sphere should already be in eye-space coordinates, that is, transformed by the model-view matrix. The projection matrix isn’t involved because the function assumes that your projection matrix would be constructed using the focalLength and aspectRatio parameters of the camera.

In k_szczech’s explanation, the hard part is actually calculating the point B. But you don’t really need to worry about angles or trig functions. The right vector math will get you the answer, and that’s what the ProjectSphere() function uses.
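To tie this together, here is a hedged sketch of how ProjectSphere() might be called - TransformPoint(), the model-view matrix and the viewport variables are placeholders, while the ProjectionRect fields match the code posted above:

// bring the sphere's center into eye space first (see the previous reply)
Point3D eyeCenter = TransformPoint( modelViewMatrix, worldCenter );
ProjectionRect rect;

switch( camera->ProjectSphere( eyeCenter, radius, &rect ) )
{
	case kProjectionPartial:
	{
		// map the normalized [-1,1] rectangle to viewport pixels
		int x = (int) ((rect.left * 0.5F + 0.5F) * viewportWidth);
		int y = (int) ((rect.bottom * 0.5F + 0.5F) * viewportHeight);
		int w = (int) ((rect.right - rect.left) * 0.5F * viewportWidth);
		int h = (int) ((rect.top - rect.bottom) * 0.5F * viewportHeight);
		glScissor( x, y, w, h );
		break;
	}

	case kProjectionFull:
		// camera is inside the sphere - scissor to the whole viewport
		glScissor( 0, 0, viewportWidth, viewportHeight );
		break;

	case kProjectionEmpty:
		// sphere is not visible - skip rendering it entirely
		break;
}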

In k_szczech’s explanation, the hard part is actually calculating the point B
As I mentioned - we do not need point B, just the angle between CA and CB, which is simply
arcsin(AB / AC), where AB = radius and AC = distance to the sphere's center - we know both.

The general idea is as follows (a rough code sketch follows below):

  1. create a vector pointing up and a vector pointing right and rotate them with the camera (this is done once per frame) - let's call them the camera vectors - these must be normalized
  2. compute the vector from the camera to the sphere's center (AC) - remember its length and normalize it
  3. compute the dot products of that vector with the camera vectors
  4. compute the arcsin of these two dot products - now we have the horizontal and vertical angles at which the sphere's center lies when seen from the camera (let's call them AH and AV)
  5. compute the angle between CA and CB, which is arcsin(AB / AC) - let's call it AR
  6. now compute 4 angles: AH - AR, AH + AR, AV - AR, AV + AR - these are the angles to the edges of the sphere
  7. project these angles to screen, for example:
    xLeft = tan(AH - AR) * zNear
    the equation above tells us at which x coordinate the ray to the sphere's left edge crosses the zNear plane. If we transform it like this:
    xLeft = (xLeft - frustumLeft) / (frustumRight - frustumLeft)
    then we get a screen coordinate in the range <0, 1>, so now just multiply it by the viewport size:
    xLeft = xLeft * viewportWidth

Note that for every sphere we need:
2x dot3
3x arcsin
4x tan
1x vector normalization
and a few multiply/add/subtract
but we don’t need any projectToScreen function.
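Here is a rough, untested sketch of those steps in code - all names (camera axes, frustum values, viewport size, helper functions) are placeholders, and as noted the recipe itself is only claimed to be more or less correct:

// camRight/camUp are the normalized camera axes, frustumLeft/Right/Bottom/Top
// and zNear are the values passed to glFrustum; Length() and Dot() are assumed helpers
Vector3D toCenter = sphereCenter - cameraPosition;        // step 2
float distance = Length( toCenter );
toCenter = toCenter / distance;                           // normalize

float AH = asinf( Dot( toCenter, camRight ) );            // steps 3+4: horizontal angle
float AV = asinf( Dot( toCenter, camUp ) );               //            vertical angle
float AR = asinf( radius / distance );                    // step 5: angular radius

// step 6+7: angles to the sphere's edges, intersected with the zNear plane
float xLeft   = tanf( AH - AR ) * zNear;
float xRight  = tanf( AH + AR ) * zNear;
float yBottom = tanf( AV - AR ) * zNear;
float yTop    = tanf( AV + AR ) * zNear;

// map near-plane coordinates into <0,1> and then into viewport pixels
int iLeft   = (int) ((xLeft   - frustumLeft)   / (frustumRight - frustumLeft)   * viewportWidth);
int iRight  = (int) ((xRight  - frustumLeft)   / (frustumRight - frustumLeft)   * viewportWidth);
int iBottom = (int) ((yBottom - frustumBottom) / (frustumTop   - frustumBottom) * viewportHeight);
int iTop    = (int) ((yTop    - frustumBottom) / (frustumTop   - frustumBottom) * viewportHeight);

glScissor( iLeft, iBottom, iRight - iLeft, iTop - iBottom );   // note: no clamping to the viewport is done here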

Too bad SSE does not support arcsin and tan - that would probably allow implementing this in 30-40 instructions. You could use a vertex shader for it combined with feedback mode - vertex shaders can do trigonometry on 4-component vectors.

OK, I finally got it working using a combination of your suggestions and my own code (posted at the top). Thanks for all your patience! :slight_smile: The performance boost for lights with small radii is impressive - I got +30% for a small scene with multiple shadow-casting lights :smiley: