Anyone have issues with gl_ClipDistance[] on NVIDIA?

Hi, I am seeing an issue with using gl_ClipDistance[] on NVIDIA hardware. The application is the painter-cell demos from the intel/fastuidraw project on GitHub (disclaimer: I am the primary author). In a nutshell:

[ul]
[li] using discard instead of clip planes gives correct results [/li][li] on NVIDIA hardware, using gl_ClipDistance (I only use [0], [1], [2] and [3]) gives flickering; geometry stays inside the clipping region, BUT large parts of the triangles are not rendered. It looks as if the clipper is going haywire, producing an incorrect triangle fan from the single clipped triangle. [/li][/ul]
The issue is present with every NVIDIA card/driver I have tried, from a very old GeForce 8700M to a GeForce Titan. The demo works fine on Intel hardware both on MS-Windows (under the Intel closed source driver) and on Linux with Mesa (the open source driver). The shader code for clipping does this:

[ul]
[li] has an attribute which is an index into a texture buffer object [/li][li] reads 12 numbers from the texture buffer object (4 clip planes with 3 numbers per clip plane) [/li][li] essentially does gl_ClipDistance[X] = dot(p, clipX) where both terms are 3-vecs (see the sketch after this list) [/li][/ul]
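
For reference, here is a minimal sketch of what the application-side setup for such a texture buffer object could look like. This is not the actual fastuidraw code; the function name, the one-RGB32F-texel-per-plane packing and the use of libepoxy are assumptions made purely for illustration (GL_RGB32F as a buffer texture format needs GL 4.0 or ARB_texture_buffer_object_rgb32; the real code may pad the planes to vec4 instead).


/* Sketch: pack 4 clip planes (3 floats each) into a buffer and expose it
   to the vertex shader as a texture buffer object, one RGB32F texel per
   plane. Hypothetical example, not the fastuidraw code. */
#include <epoxy/gl.h>  /* or whichever GL loader the project uses */

GLuint create_clip_plane_tbo(const float planes[12], GLuint *texture_out)
{
    GLuint buffer, texture;

    glGenBuffers(1, &buffer);
    glBindBuffer(GL_TEXTURE_BUFFER, buffer);
    glBufferData(GL_TEXTURE_BUFFER, 12 * sizeof(float), planes, GL_STATIC_DRAW);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_BUFFER, texture);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGB32F, buffer); /* one texel per plane */

    *texture_out = texture;
    return buffer;
}
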
Oddly, the same issue also occurs with the open source nouveau driver.

[quote]Oddly, the same issue also occurs with the open source nouveau driver.[/quote]

That strongly suggests that you’re doing something wrong and Intel’s driver is letting you get away with it. We won’t be able to tell without seeing your code.

The two drivers from Intel have -zero- code in common and are utterly different; as I stated, one is Mesa and the other is the closed source driver. They have very different behaviours, different extensions and so on.

I did not paste the code because it is a REALLY long machine-generated shader. However, the gist is this:

C++ code:


for(int i = 0; i < 4; ++i) /* enable the four clip distances the shader writes */
{
    glEnable(GL_CLIP_DISTANCE0 + i);
}
for(int i = 4; i < m_number_clips; ++i) /* disable any remaining clip distances */
{
    glDisable(GL_CLIP_DISTANCE0 + i);
}
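
As an aside, below is a hedged sketch of a sanity check one could wrap around that loop. GL_MAX_CLIP_DISTANCES is required to be at least 8 in core GL; max_clip_distances is just a name I made up for illustration:


GLint max_clip_distances = 0;
glGetIntegerv(GL_MAX_CLIP_DISTANCES, &max_clip_distances); /* required to be >= 8 */

/* enable only as many clip distances as the implementation exposes */
for(int i = 0; i < 4 && i < max_clip_distances; ++i)
{
    glEnable(GL_CLIP_DISTANCE0 + i);
}
for(int i = 4; i < max_clip_distances; ++i)
{
    glDisable(GL_CLIP_DISTANCE0 + i);
}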

GLSL vertex shader code:


#ifdef USE_HW_CLIP_PLANE
#define clip0 gl_ClipDistance[0]
#define clip1 gl_ClipDistance[1]
#define clip2 gl_ClipDistance[2]
#define clip3 gl_ClipDistance[3]
#else
out vec4 cp;
#define clip0 cp.x
#define clip1 cp.y
#define clip2 cp.z
#define clip3 cp.w
#endif

/* plane0 .. plane3 are vec3 values fetched earlier in the shader
   from the texture buffer object (3 floats per plane) */
void
apply_clipping(in vec3 p)
{
   clip0 = dot(p, plane0);
   clip1 = dot(p, plane1);
   clip2 = dot(p, plane2);
   clip3 = dot(p, plane3);
}

and the GLSL frag shader code:


#ifdef USE_HW_CLIP_PLANE
void apply_clipping(void) {}
#else
/* fallback path: clip by discarding fragments whose interpolated
   clip distances are negative */
in vec4 cp;
void apply_clipping(void)
{
  if(cp.x < 0.0 || cp.y < 0.0 || cp.z < 0.0 || cp.w < 0.0)
    discard;
}
#endif

I know that the fetching of the values for plane0, plane1, plane2 and plane3 (which are vec3’s) is correct, because the discard fallback works. The issue is that I get wrong render results when USE_HW_CLIP_PLANE is defined.
