glClipPlane sliding around. I'm baffled!

I’ve used glClipPlane for rendering reflections in the past. The usual deal: flipping the scene, setting a clipping plane, rendering to an FBO, etc. – it worked fine.

Now, I’m refactoring how I do this to make it more generic, and have encountered a baffling problem. The clip plane “slides” with the camera! It’s as if I’m putting it into the projection matrix ( or something to that effect, I’m totally confused ).

It can be best described by this video I recorded:
ClippingPlaneWoes.mov

Here’s my relevant code:


void World::displayForCamera( const CameraRef &camera, bool shadows )
{
    LightRef noLight;

    /*
        Set camera
    */
    
    glError();
    
    
    Frustum frustum = camera->frustum();
    
    if ( _renderingReflection )
    {
        _activeReflectionCamera->setFarPlane( _farPlane );
        _activeReflectionCamera->set();
        
        plane reflectionPlane = _activeReflectionCamera->reflectionPlane();
        mat4 householderTransform = _activeReflectionCamera->householderTransform();
        vec3 refPoint = vec3( 0,0,0 ) + ( vec3( reflectionPlane.a, reflectionPlane.b, reflectionPlane.c ) * reflectionPlane.d );

        glError();

        GLdouble clipPlane[4] = { reflectionPlane.a, reflectionPlane.b, reflectionPlane.c, reflectionPlane.d };
        glEnable( GL_CLIP_PLANE0 );
        glClipPlane( GL_CLIP_PLANE0, clipPlane );   


        glPushMatrix();
        glTranslatef( refPoint.x, refPoint.y, refPoint.z );
        glMultMatrixf( householderTransform );
        glTranslatef( -refPoint.x, -refPoint.y, -refPoint.z );

        glError();
        
        frustum = _activeReflectionCamera->reflectedFrustum();
    }
    else
    {
        camera->setFarPlane( _farPlane );
        camera->set();      
    }
    
    // render the world
    
    if ( _renderingReflection )
    {
        glDisable( GL_CLIP_PLANE0 );
        glPopMatrix();
    }
}

And here are the relevant parts of the method Camera::set():


void Camera::set( void )
{
    glMatrixMode( GL_PROJECTION );
    glLoadMatrixf( _projection );
    
    glMatrixMode( GL_MODELVIEW );
    glLoadMatrixf( _modelview );
    
    _frustum.set( projectionWithFarPlane(), _modelview, _position );    
}


Finally, I have a macro ‘glError’ sprinkled around to dump GL errors at runtime ( it’s only enabled for debug builds ), and it usually catches mistakes like matrix stack underflow from too much popping and other problems. But I’m not seeing any errors…

Watching the clip plane slide around, you’ll notice it is always directly in line with the camera – i.e., you’re looking straight down the clipping plane. This implies to me that it’s not being multiplied by the current modelview. That’s the best explanation I can come up with.
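
For reference, this is what I’d expect the fixed-function pipeline to do with the plane ( just a minimal sketch – viewMatrix and the plane values are made up, not my actual code ):


// Sketch only: the coefficients passed to glClipPlane are interpreted in
// object coordinates and are transformed by the inverse of the modelview
// matrix that is current at the time of the call, then stored in eye
// coordinates.
glMatrixMode( GL_MODELVIEW );
glLoadMatrixf( viewMatrix );                    // hypothetical camera modelview

GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };     // keep the y >= 0 half-space
glClipPlane( GL_CLIP_PLANE0, plane );           // stored as plane * inverse( modelview )
glEnable( GL_CLIP_PLANE0 );

// Changing the modelview after this point does not move the stored plane;
// it should stay put in the world while the camera moves.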

http://opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=231923#Post231923

Wow, I have never heard of gl_ClipVertex… I should dive back into the Orange Book for a while.

I’m guessing, then, that since I’m not writing to gl_ClipVertex in my vertex shaders, the clipping is screwed up?

Were you able to solve your problem? And if so, what were you doing incorrectly? I’m pretty certain that I’m submitting my clipping planes at the right time.

As an addendum: I added the code from your linked post


gl_ClipVertex = gl_ModelViewMatrix*gl_Vertex;

to my vertex shaders, and by god, it worked! Thank you very much!

Note that many Radeons do not support gl_ClipVertex and fall back to software mode when you use it. On Radeons, you don’t have to use gl_ClipVertex to make clip planes work.
I use:


#ifdef __GLSL_CG_DATA_TYPES
gl_ClipVertex = gl_ModelViewMatrix*gl_Vertex;
#endif

That’s interesting. My ATI X1600 ( on Mac OS X 10.5 ) falls back to software when I write to gl_ClipVertex.

Unfortunately, without that line I get the behavior which brought me here in the first place. Damned if I do, damned if I don’t.

Surely you can do the per-fragment test yourself in the fragment shader and trigger a discard if the fragment is on the other side of the plane.

IMO, performing the clipping at the fragment level will most likely result in a performance hit compared to vertex-level clipping…

Most likely, but it’s certainly going to be much faster than if the card reverts to a software driver.

The test isn’t slow ( well, I suppose it does have a conditional :( )

if ( dot( vert, plane_normal ) < plane_dist )
    discard;

Note that many Radeons do not support gl_ClipVertex and fall back to software mode when you use it. On Radeons, you don’t have to use gl_ClipVertex to make clip planes work.

The official OpenGL 2.1 spec wording on this is in chapter 2.12 page 53:

“When a vertex shader is active, the vector ( x_e y_e z_e w_e )^T is no longer computed. Instead, the value of the gl_ClipVertex built-in variable is used in its place. If gl_ClipVertex is not written by the vertex shader, its value is undefined, which implies that the results of clipping to any client-defined clip planes are also undefined.
The user must ensure that the clip vertex and client-defined clip planes are defined in the same coordinate space.”

This is the official behavior. ATI states (in its programming guide) that when ftransform is used on ATI hardware, this undefined behavior is equivalent to fixed-function clipping.
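
To illustrate the “same coordinate space” requirement ( just a sketch, with made-up plane values ): if the vertex shader writes gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex, i.e. an eye-space position, the client-defined plane must also be stored in eye space. One way is to specify it while the modelview is identity, so the implicit transform by the inverse modelview is a no-op:


glMatrixMode( GL_MODELVIEW );
glPushMatrix();
glLoadIdentity();                                     // inverse-modelview transform becomes a no-op

GLdouble eyeSpacePlane[4] = { 0.0, 1.0, 0.0, 0.0 };   // hypothetical plane, already in eye coordinates
glClipPlane( GL_CLIP_PLANE0, eyeSpacePlane );         // stored verbatim in eye space

glPopMatrix();
glEnable( GL_CLIP_PLANE0 );


Specifying the plane in world coordinates while only the camera’s view matrix is on the modelview stack ( as in displayForCamera above ) also lands it in eye space, so either way it matches gl_ClipVertex written as above.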

As a follow-up, I got it working using oblique frustum clipping, so glClipPlane is out for me.
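
For anyone who lands here later, the trick is roughly the following ( a minimal sketch of Eric Lengyel’s modified projection matrix; the function name, the raw column-major float[16] layout, and the convention that the clip plane is given in eye space with its normal facing the viewer are my assumptions, not code from this thread ):


// Oblique near-plane clipping: rewrite the projection matrix so its near
// plane coincides with an arbitrary eye-space clip plane.
static float sgn( float a )
{
    if ( a > 0.0f ) return  1.0f;
    if ( a < 0.0f ) return -1.0f;
    return 0.0f;
}

void modifyProjectionForClipPlane( float matrix[16], const float clipPlane[4] )
{
    // Corner point of the far plane opposite the clip plane, in camera space.
    float q[4];
    q[0] = ( sgn( clipPlane[0] ) + matrix[8] ) / matrix[0];
    q[1] = ( sgn( clipPlane[1] ) + matrix[9] ) / matrix[5];
    q[2] = -1.0f;
    q[3] = ( 1.0f + matrix[10] ) / matrix[14];

    // Scale the plane so it maps exactly onto the new near plane.
    float scale = 2.0f / ( clipPlane[0]*q[0] + clipPlane[1]*q[1] +
                           clipPlane[2]*q[2] + clipPlane[3]*q[3] );

    // Replace the third row of the (column-major) projection matrix.
    matrix[2]  = clipPlane[0] * scale;
    matrix[6]  = clipPlane[1] * scale;
    matrix[10] = clipPlane[2] * scale + 1.0f;
    matrix[14] = clipPlane[3] * scale;
}


You load the result into GL_PROJECTION in place of the normal projection matrix; the near plane becomes the clip plane and the far plane ends up skewed, which is usually acceptable for a reflection pass.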