raga34

02-24-2014, 04:02 PM

Hello.

I'm working on a small project where I need to calculate the screen-space bounds of a rendered object in order to provide a region for glScissor. To my surprise, this has turned out to be more difficult than I expected. I'm doing the following:

1. I have the eight object-local vertices that make up a bounding box for the object.

2. I transform each of these vertices by the current modelview and projection matrices (on the CPU).

3. I perform the perspective division for each of these vertices, again on the CPU.

4. I iterate over the eight (now normalized-device-space) vertices, calculating the minimum and maximum of each of the x, y, z components.

5. I transform the resulting "minimum" and "maximum" vertices to screen space by applying the standard viewport transform (again, on the CPU).

6. I manually handle the case where all of the clip-space w components are negative (the object is entirely behind the observer, so there is no scissor region to set). If I move the observer inside the object, the result isn't exactly accurate, but that isn't much of a problem as I can handle that case separately.
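Here's roughly what the above looks like in code (a sketch; `Vec4`, `mat4_mul` and `scissor_from_corners` are stand-ins for my actual math code, not any real library):

```c
#include <float.h>

/* Minimal vector type; matrices are column-major, as OpenGL uses. */
typedef struct { float x, y, z, w; } Vec4;

static Vec4 mat4_mul(const float m[16], Vec4 v)
{
    Vec4 r;
    r.x = m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w;
    r.y = m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w;
    r.z = m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w;
    r.w = m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w;
    return r;
}

/* mvp = projection * modelview. Returns 0 when every corner is
 * behind the observer (all clip-space w negative), 1 otherwise,
 * writing x, y, width, height into out[]. */
static int scissor_from_corners(const float mvp[16], const Vec4 corners[8],
                                int vp_x, int vp_y, int vp_w, int vp_h,
                                int out[4])
{
    float min_x = FLT_MAX, min_y = FLT_MAX;
    float max_x = -FLT_MAX, max_y = -FLT_MAX;
    int behind = 0;

    for (int i = 0; i < 8; i++) {
        Vec4 c = mat4_mul(mvp, corners[i]);   /* to clip space */
        if (c.w < 0.0f)
            behind++;
        float nx = c.x / c.w;                 /* perspective divide     */
        float ny = c.y / c.w;                 /* (z handled similarly)  */
        if (nx < min_x) min_x = nx;
        if (nx > max_x) max_x = nx;
        if (ny < min_y) min_y = ny;
        if (ny > max_y) max_y = ny;
    }
    if (behind == 8)
        return 0;                             /* box entirely behind us */

    /* Standard viewport transform from NDC [-1,1] to window coords. */
    float x0 = vp_x + (min_x * 0.5f + 0.5f) * (float)vp_w;
    float y0 = vp_y + (min_y * 0.5f + 0.5f) * (float)vp_h;
    float x1 = vp_x + (max_x * 0.5f + 0.5f) * (float)vp_w;
    float y1 = vp_y + (max_y * 0.5f + 0.5f) * (float)vp_h;

    out[0] = (int)x0;
    out[1] = (int)y0;
    out[2] = (int)(x1 - x0);
    out[3] = (int)(y1 - y0);
    return 1;
}
```

I then pass the resulting rectangle straight to glScissor(out[0], out[1], out[2], out[3]).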

At this point, what I have seems to *mostly* work. As a debugging aid, I set the scissor region using the screen-space coordinates calculated above and clear the scissored region to a bright magenta colour so that I can see which region is being calculated.

Unfortunately, in many cases where some vertices are on screen and some are off, the result is wildly incorrect. Assume the object is at (0,0,0) and the observer is at (0,0,5) looking down the negative Z axis. If I turn the observer to face (5,0,5), so that the viewing direction is roughly perpendicular to the direction of the object, the above will often calculate a scissor region that covers the entire screen for a very small range of viewing angles. It seems to occur when some of the clip-space w coordinates are negative and some are positive, but not always.
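To make that suspicion concrete for myself: dividing by a negative w reflects the point through the NDC origin, so a corner behind the eye lands on the *opposite* side of the screen from where its edge actually is. A tiny hand-rolled check (no real API involved, just the arithmetic):

```c
/* Perspective divide for one component. When clip-space w is
 * negative (the point is behind the eye), the divide flips the
 * sign of the result, reflecting the point in NDC. */
static float perspective_divide(float clip, float w)
{
    return clip / w;
}

/* Given clip-space x and w for the two corners of one edge, return
 * the NDC extent that a naive min/max over both would produce. */
static float naive_extent(float x0, float w0, float x1, float w1)
{
    float a = perspective_divide(x0, w0);
    float b = perspective_divide(x1, w1);
    return (a > b) ? (a - b) : (b - a);
}
```

For an edge entirely to the +x side of the view direction, with one corner in front of the eye (x=2, w=4) and one behind it (x=2, w=-4), the divides give +0.5 and -0.5, so the min/max rectangle spans half the screen even though the geometry is all on one side. Is this the effect I'm seeing?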

Is there something obvious I'm missing here? Some easily-detectable edge case I need to handle?
