
Hidden Surface Removal



Rizo
02-09-2001, 01:21 PM
Hi,

This topic may not belong to this discussion, but I thought since you guys already discussed Nvidia drivers, maybe I could ask this:

There has been a lot of talk whether the next Nvidia drivers will or will not support HSR. I am sort of new to OpenGL, but is it possible to have OpenGL support HSR (much like the way it supports z-buffer)?

Also, do you guys know if the drivers will include HSR or not?

Thanks,
Rizo

Humus
02-09-2001, 02:12 PM
HSR implementations should be transparent to the application: the driver can activate the HSR functionality whenever the depth test is enabled and deactivate it when the depth test is off. OpenGL itself doesn't need any changes.

To be 100% accurate: Z-buffering is itself an HSR technique, since it removes hidden surfaces. But there are of course better ways of doing it. NV20 is rumoured to include some sort of HSR technology, but I don't think they plan on doing a software HSR implementation for their current generation of cards.
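To see why Z-buffering counts as hidden surface removal, here is a minimal CPU-side sketch of what the depth test does for one pixel (the struct and function names are illustrative, not from any real API):

```c
#include <assert.h>
#include <float.h>

/* Hypothetical single-pixel depth buffer entry. */
typedef struct {
    float depth;  /* stored z value for this pixel */
    int   color;  /* stored color for this pixel   */
} Pixel;

/* GL_LESS-style depth test: keep the incoming fragment only if it is
   closer than what is already stored.  Returns 1 if the fragment was
   written, 0 if it was rejected as hidden. */
int depth_test_write(Pixel *p, float frag_z, int frag_color)
{
    if (frag_z < p->depth) {
        p->depth = frag_z;      /* depth write  */
        p->color = frag_color;  /* color write  */
        return 1;
    }
    return 0;                   /* hidden surface: discarded */
}
```

Initialize each pixel to `{FLT_MAX, 0}` (i.e., "infinitely far away"); then fragments can arrive in any order and the closest one always ends up stored, which is exactly the hidden-surface decision.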

mcraighead
02-09-2001, 03:10 PM
HSR is one of those words that has been thrown around so much that it has lost its meaning.

The correct meaning of HSR is any algorithm that is used to compute which surface at any given point is the one closest to the viewer. Z-buffering won the HSR algorithm competition for real-time graphics some time ago.

To call, for example, HyperZ an "HSR" algorithm is a ridiculous misuse of terminology. It is simply a HW Z-buffering optimization, and HW optimizations are by definition transparent to software.

- Matt

Rizo
02-10-2001, 01:26 PM
Hi,

Pardon me for misusing the HSR term. Now that I've read the responses, I actually have another question. The Z-buffer removes the objects that should not be seen (i.e., that are behind other objects). Does that add overhead? That is, do the vertices/faces actually get processed and then discarded, or do they just not get processed at all?

3dfx, when they existed, implemented some form of HSR in their latest drivers, which made games run much faster because hidden surfaces were not processed.

If the Z-buffer could avoid processing the shapes that are not seen, it should improve performance.

I was thinking about how I could avoid processing the shapes that won't be drawn and thereby improve performance. Basically, I'd need an axis that always points toward the screen, figure out where on that axis an object lies, and then decide whether to draw it or not. But isn't that the same as processing the object? (I mean, I'd have to process it at least once just to figure out its position.) Any ideas on how to draw only the front objects with minimum overhead?
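The "axis pointing toward the screen" idea above is essentially what standard back-face culling does: triangles facing away from the viewer are skipped before rasterization, at the cost of one cheap cross-product per triangle (in OpenGL this is built in via `glEnable(GL_CULL_FACE)`). A minimal sketch of the underlying test, with illustrative names:

```c
#include <assert.h>

/* 2D vertex position after projection to the screen plane. */
typedef struct { float x, y; } Vec2;

/* Twice the signed area of triangle (a, b, c).  With a counter-clockwise
   winding convention, a positive value means the triangle faces the
   viewer; a negative value means it is a back face and can be culled
   without ever being rasterized. */
int is_front_facing(Vec2 a, Vec2 b, Vec2 c)
{
    float area2 = (b.x - a.x) * (c.y - a.y)
                - (b.y - a.y) * (c.x - a.x);
    return area2 > 0.0f;
}
```

This only rejects individual back-facing triangles; it cannot reject a whole object hidden behind another one, which is why it complements rather than replaces the Z-buffer.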

Rizo

PS: I'm at work writing this and have to be quick, so I may not have made much sense. :)

mcraighead
02-10-2001, 05:28 PM
The Z-buffer is definitely costly in terms of memory traffic. Even if something is not visible, you still need to read the value in the Z-buffer to determine that.
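The memory-traffic point can be made concrete with a toy counter on the single-pixel sketch above (names are illustrative): every fragment pays for a depth read, but only passing fragments add the writes, so occluded fragments are rejected yet still cost bandwidth.

```c
#include <assert.h>
#include <float.h>

/* Hypothetical pixel with counters for depth-buffer memory accesses. */
typedef struct {
    float depth;
    long  reads, writes;
} DepthPixel;

void process_fragment(DepthPixel *p, float frag_z)
{
    p->reads++;                 /* must read the stored depth to compare */
    if (frag_z < p->depth) {
        p->depth = frag_z;
        p->writes++;            /* write happens only on a pass */
    }
    /* a rejected fragment has still paid for the read */
}
```

Hardware schemes like hierarchical or compressed Z aim to cut exactly this read traffic, which is why they are Z-buffer optimizations rather than a different visibility algorithm.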

I recently learned how the "HSR drivers" worked. Let's just say that whatever it is they did, it is _not_ HSR in any way, shape, or form.

- Matt