disable clipping

I was thinking about the clipping stage in the pipeline.

Is performing the actual clipping an expensive operation? I know tricks are used to help out here (the guard band, I guess it's called), but GL could have a flag for turning the hardware's clipping feature on and off. Why doesn't it have one?

Also, doesn't the use of the guard band consume a whole lot of RAM? A WHOLE lot!
How big is it on a GeForce 3, a Radeon 9000, or a later generation?

The only mechanism that can be used is

glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT, GL_DONT_CARE); // disable
glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT, GL_NICEST);    // enable

which is only a hint btw…
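
For reference, here's roughly how you'd use it in C. This is just a sketch, assuming the usual extension-string check; the 0x80F0 value comes from the EXT_clip_volume_hint spec and glext.h normally defines it already:

```c
/* Sketch only: check for EXT_clip_volume_hint and set the hint. */
#include <GL/gl.h>
#include <string.h>

#ifndef GL_CLIP_VOLUME_CLIPPING_HINT_EXT
#define GL_CLIP_VOLUME_CLIPPING_HINT_EXT 0x80F0
#endif

static int has_clip_volume_hint(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_EXT_clip_volume_hint") != NULL;
}

void request_clipping(int enable)
{
    if (!has_clip_volume_hint())
        return; /* extension missing: clipping simply stays on */

    /* Only a hint: GL_DONT_CARE lets the driver skip clip-volume clipping,
       GL_NICEST asks it to clip as usual. The driver may ignore either. */
    glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT,
           enable ? GL_NICEST : GL_DONT_CARE);
}
```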

Anyhow, as far as I know it has been supported in ATI's drivers for a while (not in NVIDIA's), but I haven't noticed any performance impact when wrapping primitives with it. :wink:

That looks like an old extension.
Have you tried to apply the hint and render something that isn’t entirely in the viewport?

You may have to render a really long polygon and see if anything weird happens. That would indicate clipping is not being applied.
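
Something like this would do as a quick test (hypothetical coordinates, plain old immediate mode):

```c
/* Ask the driver to skip clipping, then draw a triangle that extends far
   outside the view volume. If the hint really disables clipping and there
   is no (or only a small) guard band, artifacts should show up at the
   screen edges. */
glHint(GL_CLIP_VOLUME_CLIPPING_HINT_EXT, GL_DONT_CARE);

glBegin(GL_TRIANGLES);
    glVertex3f(-50000.0f, -1.0f, -5.0f);  /* way off to the left  */
    glVertex3f( 50000.0f, -1.0f, -5.0f);  /* way off to the right */
    glVertex3f(     0.0f,  1.0f, -5.0f);  /* this corner is on screen */
glEnd();
```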

Geforce 2/MX and up (or maybe even Geforce 1?) use infinite guardband clipping. There’s no performance impact, and no memory overhead, as it’s done in the trisetup stage. Fragments outside the clip space will simply not be fed to the pixel pipes.
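
Conceptually it amounts to something like this software-rasterizer sketch (just an illustration of the idea, not the actual hardware):

```c
#include <math.h>

typedef struct { float x, y; } Vec2;

/* Instead of geometrically clipping the triangle, the setup/raster stage
   just clamps the region it scans to the render target, so fragments
   outside it are never generated in the first place. */
void scan_triangle(Vec2 a, Vec2 b, Vec2 c, int width, int height)
{
    int x0 = (int)floorf(fminf(fminf(a.x, b.x), c.x));
    int y0 = (int)floorf(fminf(fminf(a.y, b.y), c.y));
    int x1 = (int)ceilf (fmaxf(fmaxf(a.x, b.x), c.x));
    int y1 = (int)ceilf (fmaxf(fmaxf(a.y, b.y), c.y));

    /* The "guard band": the bounding box may lie far off-screen,
       but the scanned region is clamped to the visible area. */
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 > width  - 1) x1 = width  - 1;
    if (y1 > height - 1) y1 = height - 1;

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            /* edge tests and shading would go here */
        }
}
```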

R300 implements a limited guardband (-960 to 2880 IIRC), but it doesn’t seem to work at all in OpenGL.

The performance of ATI's 'real' clipping engine is just horrible, something around 3 MTris/s clipped. That's IMO the same unit they've dragged along since the R100; an R300 performs equal to it, clock for clock.

At least it just works ™, meaning that it’s smart enough to not hinder performance if a triangle is entirely inside clip space.

Originally posted by zeckensack:
Geforce 2/MX and up (or maybe even Geforce 1?) use infinite guardband clipping. There’s no performance impact, and no memory overhead, as it’s done in the trisetup stage. Fragments outside the clip space will simply not be fed to the pixel pipes.

Clipping per fragment? I just don't understand these guys. How about some documents that give implementation details for each GPU, plus why it is designed like that? It would be fun to know what's up under the hood.

I think view clipping and scissor clipping could conceivably be implemented using the same circuitry. Just make the scissor never be bigger than the viewport.
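
A sketch of that idea in GL terms, intersecting a requested scissor rectangle with the current viewport (the function name is made up):

```c
#include <GL/gl.h>

/* Clamp a requested scissor rectangle to the current viewport before
   passing it on, so the effective scissor is never larger than the view. */
void set_clamped_scissor(int sx, int sy, int sw, int sh)
{
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp); /* x, y, width, height */

    int x0 = sx > vp[0] ? sx : vp[0];
    int y0 = sy > vp[1] ? sy : vp[1];
    int x1 = sx + sw < vp[0] + vp[2] ? sx + sw : vp[0] + vp[2];
    int y1 = sy + sh < vp[1] + vp[3] ? sy + sh : vp[1] + vp[3];

    glEnable(GL_SCISSOR_TEST);
    glScissor(x0, y0, x1 > x0 ? x1 - x0 : 0, y1 > y0 ? y1 - y0 : 0);
}
```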

I’d be surprised if the R300 didn’t have efficient clipping and scissoring.