ATI, GLSL and custom clipping planes

Hi

I’ve made a simple test program demonstrating that custom clipping planes do not work on ATI (I’ve tested on a 1900XT and a 4800HD, with Catalyst 9.5 and 9.6).
The link to the program is here: http://mxadd.org/bin/ATIClip.zip
In the vertex shader I’m outputting gl_ClipVertex, and in code I’m enabling the clip plane and setting its equation - and it all works perfectly on any NVIDIA card … but not on ATI :stuck_out_tongue:
Am I doing something totally wrong, or does ATI just not support clipping planes in OpenGL?
(Under DX9 / DX10 they work perfectly, with no performance drop, so the hardware is clearly capable.)

I sent a bug report to ATI devsupport some weeks ago - but well - they didn’t bother to answer ;|

AFAIK it does. On ATI cards you don’t need the gl_ClipVertex output; on NVidia you do, which is not programmer-friendly.

The shader code would look like this (check whether the card is NVidia or ATI and provide a #define accordingly):

#ifdef NVIDIA
gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
#endif
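
On the application side you can pick the define by checking GL_VENDOR and prepending it to the shader source. A minimal sketch (assuming a GL 2.0 function loader such as GLEW is already in place; load_shader_source is a made-up helper name):

#include <string.h>
#include <GL/glew.h>

/* Attach 'source' to an existing shader object, prepending a vendor
   define. Note: if the source begins with a #version directive, the
   define has to be spliced in after that line instead. */
static void load_shader_source(GLuint shader, const char *source)
{
    const char *vendor = (const char *) glGetString(GL_VENDOR);
    const char *define = strstr(vendor, "NVIDIA") ? "#define NVIDIA\n" : "";
    const char *parts[2] = { define, source };

    glShaderSource(shader, 2, parts, NULL);
    glCompileShader(shader);
}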

Well - programmer-friendly or not, it does not work (and the specification states that gl_ClipVertex should be used).

Are you suggesting that commenting out the gl_ClipVertex line will make the simple app work on ATI? I’ll check it tomorrow on my desktop (my laptop has an NVIDIA card).

But outputting gl_ClipVertex should not break the custom planes on ATI, IMHO - so something is wrong here :expressionless:

I use clip planes all the time with ATI. You don’t have to do anything. Use this:

#ifdef __GLSL_CG_DATA_TYPES
	gl_ClipVertex = modelvertex;
#endif

This define is only present on NVidia hardware, so the assignment is skipped everywhere else.

Without the use of gl_ClipVertex, how are clip planes supposed to work if I don’t use the “global” matrices like GL_MODELVIEW and GL_PROJECTION? The clip planes must be provided in clip space then, right?

Getting your shaders to do clipping in such a way that both ATI and nVidia work is a hassle:

nVidia wants you to use gl_ClipVertex (a vec4), à la GL 2.x

ATI, I think, wants you to use gl_ClipDistance[] (an array of floats), à la GL 3.x

gl_ClipDistance gives the “signed distance” between the vertex and the clipping plane; the equations of the clip planes set via glClipPlane are stored in gl_ClipPlane[], which is marked as deprecated in GLSL 1.30. The relation to keep in mind is:

gl_ClipDistance[i] = dot( gl_ClipVertex, gl_ClipPlane[i] )

The thing is that gl_ClipDistance[] gives more flexibility. AFAIK, gl_ClipDistance[] is not supported on nVidia cards (someone correct me if I am wrong), so your shaders will have this kind of beauty in them:

#ifdef USE_CLIP_VERTEX
    // vertex.xyzw must be in the same coordinate system as the
    // planes that were specified via glClipPlane (i.e. eye space)
    gl_ClipVertex = vertex;
#else
    for (int i = 0; i < number_clip_planes; ++i)
        gl_ClipDistance[i] = dot(vertex, plane_equation[i]);
#endif

On the surface, so far, it looks like gl_ClipVertex is better, but actually it is not as flexible. With gl_ClipDistance[] you can have clipping determined by something other than just a plane equation; the sky is the limit (make sure you know its interpolation rules!). With gl_ClipDistance[] you can choose from the shader how clipping is done, i.e. the “clip planes” can be made to vary from vertex to vertex! (But be aware of the interpolation rules; it is safer to let them vary only from primitive to primitive.)
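
For example, clipping against a sphere rather than a plane is just a different distance function. A vertex shader sketch (u_center and u_radius are made-up uniform names; it assumes a GLSL version / driver combination where gl_ClipDistance[] is actually available alongside the compatibility built-ins, e.g. #version 130):

uniform vec3  u_center;  // sphere center in eye space
uniform float u_radius;

void main()
{
    vec4 eye_vertex = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * eye_vertex;

    // positive inside the sphere (kept), negative outside (clipped);
    // the distance is interpolated linearly across each primitive,
    // so the cut edge only approximates the sphere
    gl_ClipDistance[0] = u_radius - distance(eye_vertex.xyz, u_center);
}

You still have to glEnable(GL_CLIP_PLANE0) on the host side (aliased as GL_CLIP_DISTANCE0 in GL 3.x).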

Additionally, if you do not specify gl_ClipVertex, then GL does:
gl_ClipVertex= gl_Position

which most often you won’t like, as gl_Position is in clip coordinates, not eye coordinates. You can cheese-muffin the entire issue by doing GL 2.x, not writing to gl_ClipVertex/gl_ClipDistance[] at all, AND making sure you specify the clip planes in cough clip coordinates, which are given by:

Plane_ClipCoords.xyzw = transpose(inverse(ProjectionMatrix)) * Plane_EyeCoords.xyzw
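
In code, something like this sketch (mat4_inverse_transpose and mat4_transform_vec4 are hypothetical helpers - any 4x4 matrix library has equivalents - and projection_matrix is your projection matrix as a double[16]):

/* plane in eye space: points with ax + by + cz + d >= 0 are kept */
double plane_eye[4] = { 0.0, 1.0, 0.0, 0.0 };
double plane_clip[4];
double proj_inv_t[16];

/* hypothetical helpers -- substitute your own matrix routines */
mat4_inverse_transpose(projection_matrix, proj_inv_t);
mat4_transform_vec4(proj_inv_t, plane_eye, plane_clip);

/* glClipPlane transforms the plane it is given by the inverse of the
   current modelview matrix, so load identity to store it verbatim */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glClipPlane(GL_CLIP_PLANE0, plane_clip);
glPopMatrix();
glEnable(GL_CLIP_PLANE0);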

Ewww… a while back on nVidia hardware I tried to use gl_ClipDistance[], but the link stage always failed. I have not tried ATI hardware with gl_ClipDistance[] - anyone have experience?

No direct experience with gl_ClipDistance, but as you say it looks like it’s just a per-vertex value, linearly interpolated across the primitive; clipping occurs where the value is < 0.

Another way to clip is to use a projected 2D texture. The texture can be 2x1: one pixel with alpha=255 and the other with alpha=0.
In the fragment shader, you use the discard keyword.

vec4 texel = texture2D(thing, texcoord);
if (texel.a == 0.0)
    discard;
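
Fleshed out, the fragment shader might look like this (a sketch; clip_mask and clip_texcoord are placeholder names for the sampler and the projective coordinate written by the vertex shader):

uniform sampler2D clip_mask;  // 2x1 texture: one texel alpha=1, one alpha=0
varying vec4 clip_texcoord;   // projective texture coordinate

void main()
{
    vec4 texel = texture2DProj(clip_mask, clip_texcoord);
    if (texel.a == 0.0)
        discard;              // fragment lies on the clipped side
    gl_FragColor = gl_Color;
}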

For using discard to do clipping: I have a memory of considering this on these boards, even, but the punchline, as someone told me, is that it is bad because discard does awful things to early z-fail. Also, the fragment shader will then get run across the entire primitive, whereas clipping cuts the primitive before rasterization. At any rate, there is no need for a texture anyway; you could just do this:

vertex shader:

out vec4 my_clip_thingy; // holds 4 clip distances

// write my_clip_thingy.x in place of gl_ClipDistance[0], etc.

fragment shader:

if (my_clip_thingy.x < 0.0 || my_clip_thingy.y < 0.0 || …)
    discard;

But that is still pants, because the entire primitive still gets rasterized, and it still does bad things to early z-fail.

I can’t recall if gl_ClipDistance[] gets interpolated perspective-correctly or not… anyone know?

There is a technique which can achieve the same effect as a single user clip plane without actually using any clip planes at all. It’s called oblique frustum clipping, and it involves manipulating the projection matrix.
Unfortunately, I haven’t got a complete, tested example of setting up such a matrix to post here.
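
The canonical write-up is Eric Lengyel’s “Oblique View Frustum Depth Projection and Clipping”, though; a sketch along the lines of his published code (the clip plane is given in camera space, with the camera on its negative side) would look roughly like this:

#include <GL/gl.h>

static float sgn(float a)
{
    if (a > 0.0f) return 1.0f;
    if (a < 0.0f) return -1.0f;
    return 0.0f;
}

/* Rewrite the third row of the current projection matrix so that the
   near plane coincides with clipPlane (camera-space plane equation). */
static void modify_projection_oblique(const float clipPlane[4])
{
    float matrix[16];
    float q[4];
    float c;

    glGetFloatv(GL_PROJECTION_MATRIX, matrix);

    /* clip-space corner point opposite the clipping plane */
    q[0] = (sgn(clipPlane[0]) + matrix[8]) / matrix[0];
    q[1] = (sgn(clipPlane[1]) + matrix[9]) / matrix[5];
    q[2] = -1.0f;
    q[3] = (1.0f + matrix[10]) / matrix[14];

    /* scale the plane so that it maps onto q ... */
    c = 2.0f / (clipPlane[0] * q[0] + clipPlane[1] * q[1] +
                clipPlane[2] * q[2] + clipPlane[3] * q[3]);

    /* ... and replace the third row of the projection matrix */
    matrix[2]  = clipPlane[0] * c;
    matrix[6]  = clipPlane[1] * c;
    matrix[10] = clipPlane[2] * c + 1.0f;
    matrix[14] = clipPlane[3] * c;

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(matrix);
}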

Has anyone else used this oblique matrix technique? It’s of general interest because it’s vendor-independent, and it would mean we don’t have this mess of nVidia/ATI vertex shader issues.

If I recall correctly, it changes z-buffer values, making it almost impossible to use more than one clipping scheme per frame without correcting the z values.

It seems the way to go is to use the clip planes in clip-coordinates, which is how I would do it in D3D.

Agreed… distance or vertex. Quite natural to output clip coords/dists from the geo stage to boot.