Hiding labels

Hi All,

Suppose you have a rectangular face and two points, one in front of it and one behind. The two points are labeled with 2D text saying "front" and "back". You always see both labels because they are drawn using glDrawPixels().

We want to avoid drawing the label for the hidden point.

The first approach suggested on this forum was to draw the points only, read back the fragment depths with glReadPixels(), then draw the complete scene and read the fragment depths again. By comparing the former depths with the latter you can easily sort out which labels should be drawn.
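
A minimal sketch of that comparison, assuming the anchor point has already been projected to window coordinates winX/winY/winZ (e.g. with gluProject(); those names are just placeholders):

// after pass 1 (points only) has been rendered:
GLfloat pointDepth;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &pointDepth);

// after pass 2 (the complete scene) has been rendered:
GLfloat sceneDepth;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &sceneDepth);

// the label is drawn only if its point survives the comparison
bool labelVisible = (pointDepth <= sceneDepth);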

This is fairly easy to understand and implement, but it fails in our case because the points lie on the model faces, so in many cases visible labels flicker due to the depth values being almost identical.

How can we fix this?

Thanks,

Alberto

Give 'em a normal pointing away from the quad and enable backface culling?

Hi Dark Photon,

I really don’t understand your tip. By the way, during 3D navigation the labels have to be hidden/shown depending on the visibility of the anchor point, which in some cases is occluded by the QUAD rectangle.

Thanks,

Alberto

This is fairly easy to understand and implement, but it fails in our case because the points lie on the model faces, so in many cases visible labels flicker due to the depth values being almost identical.

Are you still drawing the labels with glDrawPixels()? In that case no depth comparison is performed, and the drawn labels should not flicker because of depth fighting.

If you are rendering the labels on a quad, disable the depth comparison or, for example, try to push the labels along the face normal.
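
A rough sketch of that push, assuming a single usable face normal is at hand; anchor, normal, epsilon and the vec3 type are placeholders, and epsilon has to be tuned to the scene scale:

// hypothetical: move the label anchor slightly off the surface along the face normal
float epsilon = 0.01f;                      // tune to your scene / depth range
vec3 pushed;
pushed.x = anchor.x + normal.x * epsilon;
pushed.y = anchor.y + normal.y * epsilon;
pushed.z = anchor.z + normal.z * epsilon;
// use 'pushed' instead of 'anchor' for the depth readback / comparison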

Hi dletozeun,

In the real case many faces are connected to the same model vertex, so it is not easy to find the right face normal to push the label along.

I am starting to think that the only solution is to convert the non-linear depth coordinates to linear ones and compare the depth values: if they are close enough the label is visible, otherwise we hide it.

What do you think?

Thanks,

Alberto

I am starting to think that the only solution is to convert the non-linear depth coordinates to linear ones and compare the depth values: if they are close enough the label is visible, otherwise we hide it.

What do you think?

Yes, it is worth a try… but that does not answer my question: how do you actually draw the labels? The ReadPixels part troubled me, and it seems strange to resort to that function to draw labels on faces.

We are using glDrawPixels() to draw a semi-transparent bitmap containing a text string.

As I mentioned above, we want to hide the labels when their anchor points move behind polygons.

Do you know where I can find the formula to get linear depth values?

Thanks,

Alberto

occlusion query might help

edit:
glDrawPixels: poor performance on Intel onboard cards.

NK47,

What is an occlusion query?

Thanks,

Alberto

Occlusion query:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt

but I don’t know how it would help in your case…
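
For reference, a minimal sketch of the basic query mechanism against an already rendered scene, using the core GL 1.5 entry points (px/py/pz stand for the anchor point coordinates):

GLuint query;
glGenQueries(1, &query);

// the scene (occluder) is already in the depth buffer; draw the point with the
// depth test enabled but without touching the color or depth buffers
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQuery(GL_SAMPLES_PASSED, query);
glBegin(GL_POINTS);
glVertex3f(px, py, pz);
glEnd();
glEndQuery(GL_SAMPLES_PASSED);

glDepthMask(GL_TRUE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // blocks until the result is available
bool labelVisible = (samples > 0);

glDeleteQueries(1, &query);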

I am afraid that you have to find a way to draw the points a bit further from the faces to prevent flickering.
How are you determining the position of these points?

Can you post some screenshots of what you actually have now and what you expect to see?

Also, do not use glDrawPixels for text rendering… it’s slow.
Better to rasterize the font glyphs into a texture and then draw small texture-mapped quads. With proper glyph handling you can get nice kerning and spacing.
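
A minimal sketch of the texture-mapped quad idea for a single glyph, assuming the glyph atlas has already been uploaded as fontTex and that (u0,v0)-(u1,v1) and (x,y,w,h) are that glyph’s texture and screen rectangles (all placeholder names):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, fontTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // keeps the text semi-transparent

glBegin(GL_QUADS);
glTexCoord2f(u0, v0); glVertex2f(x,     y);
glTexCoord2f(u1, v0); glVertex2f(x + w, y);
glTexCoord2f(u1, v1); glVertex2f(x + w, y + h);
glTexCoord2f(u0, v1); glVertex2f(x,     y + h);
glEnd();

glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);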

Hi dletozeun,

The labels’ anchor points are 3D geometry vertices. The problem is that both the vertex and the label reference point are drawn to the same fragment on screen…

Thanks,

Alberto

yooyo,

Here is the screen shot: http://www.devdept.com/TankNodes.jpg

We need to avoid drawing the node labels behind the tank.

We are drawing small images; it’s not that slow.

Thanks,

Alberto

Something like this?

Yes, how did you hide labels behind the body?

Thanks,

Alberto

Some pointers:

  • Use polygon offset when drawing the points (before reading back the depth); this will fix the flicker, due to z-fighting, on polygons coplanar with the view plane (near plane/far plane).

  • Do it off-screen in 2 passes, one to determine occlusion and one for display:

  1. draw the points (GL_POINTS, glPointSize 2 or 3 to avoid rasterization/gluProject accuracy problems)
  2. read back entire depth buffer
  3. gluProject points from 3d coordinates to screen coordinates
  4. using those 2d coordinates, get the depth from depth buffer
  5. render the occluder (visual geometry) with polygon offset to push it back from the eye position (to avoid z-fighting on the points)
  6. read back z-buffer again, for each point read back the depth and compare
  7. don’t call swapbuffers
  8. clear the frame buffer, render everything as usual, with labels. Use the same gluProjected coordinates for placement.
  • You can try occlusion queries as an alternative to depth buffer readback; if you draw large dots, you can use the occlusion result pixel count to fade out the labels smoothly.

  • Keep in mind that the values stored in the depth buffer are distributed non-linearly (see the sketch below for converting them back to a linear eye-space distance).
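
A small sketch of that conversion, assuming a standard perspective projection (glFrustum/gluPerspective) with plane distances zNear and zFar (placeholder names); the depth read back with glReadPixels is in the non-linear [0, 1] window-space range:

// convert a window-space depth value (0..1, non-linear) into a linear eye-space distance
float LinearizeDepth(float winZ, float zNear, float zFar)
{
	float ndcZ = winZ * 2.0f - 1.0f; // back to normalized device coordinates [-1, 1]
	return (2.0f * zNear * zFar) / (zFar + zNear - ndcZ * (zFar - zNear));
}

Two depth values converted this way can be compared with a tolerance expressed in world units, which is easier to reason about than a raw depth-buffer epsilon.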

I’ll explain the slowest codepath… later you can optimize depending on the underlying hardware.
Assume you have already rendered the geometry.

  1. Get modelview matrix
  2. Get projection matrix
  3. Get viewport
  4. calc the matrix composition… mvp = projection * modelview (this order matches the mvp * pos multiplication in the code below);
  5. Now, project all your vertices on screen

static int Project(const vec4& pos, const mat4& mvp, const int viewport[4], vec4& result)
{
	// transform to clip space
	vec4 pp = mvp * pos;
	if (pp.w == 0.0f) return 0;

	// perspective divide -> normalized device coordinates in [-1, 1]
	float c = 1.0f / pp.w;
	pp.x *= c;
	pp.y *= c;
	pp.z *= c;

	// remap from [-1, 1] to [0, 1]
	pp.x = pp.x * 0.5f + 0.5f;
	pp.y = pp.y * 0.5f + 0.5f;
	pp.z = pp.z * 0.5f + 0.5f;

	// viewport transform to window coordinates
	pp.x = pp.x * viewport[2] + viewport[0];
	pp.y = pp.y * viewport[3] + viewport[1];

	result.x = pp.x; // screen X
	result.y = pp.y; // screen Y (bottom -> top)
	result.z = pp.z; // window-space depth in [0, 1]

	return 1;
}

  6. Now, for all projected vertices, read back the depth and compare

float screen_depth;
// read the scene depth at the point's window position (integer pixel coordinates)
glReadPixels((GLint)projected_point[i].p.x, (GLint)projected_point[i].p.y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &screen_depth);
// pull the point depth slightly towards the viewer to avoid flickering
float pt_depth = projected_point[i].p.z * 0.9999f;
// the point (and its label) is visible if it is not behind the scene depth
projected_point[i].bVisible = (pt_depth <= screen_depth);


  7. Now you have all your points projected (2D screen coordinates) plus flags that determine visibility. Loop through the projected points and draw the labels (a sketch of that loop follows below).

Note that drawing small spheres as points can affect the depth buffer, so turn off depth writes while you are drawing your red dots (green on my screenshot).
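
A minimal sketch of that final pass, assuming projected_point[] has been filled in as above and that numPoints, labelW, labelH and labelImage are placeholders for the point count and the label bitmaps; glWindowPos2i (GL 1.4) places the raster position directly in window coordinates:

glDepthMask(GL_FALSE);    // don't let the dots/labels write into the depth buffer
glDisable(GL_DEPTH_TEST); // the labels are a pure 2D overlay at this point

for (int i = 0; i < numPoints; ++i)
{
	if (!projected_point[i].bVisible)
		continue;

	glWindowPos2i((GLint)projected_point[i].p.x, (GLint)projected_point[i].p.y);
	glDrawPixels(labelW, labelH, GL_RGBA, GL_UNSIGNED_BYTE, labelImage[i]);
}

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);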

remdul, yooyo,

Your approaches look very similar; I was wondering what the pros/cons of the two are.

Is polygon offset (remdul’s step 5) more robust than projected_point[i].p.z * 0.9999f (yooyo’s step 6)? In my opinion, considering that depth is not linear, multiplying p.z by 0.9999f could lead to some accuracy issues.

Thanks,

Alberto

Yes… that’s correct, but it also depends on your camera settings. If your near and far planes are extreme, then you can expect some depth issues.
The code above is executed on the CPU in 32-bit precision. Almost the same code is executed on the GPU during projection. The only difference is the storage… the CPU variable is a 32-bit float, while the GPU depth buffer is typically 24-bit, which leads to a loss of precision. This introduces the flickering artifact. The only way to get rid of those artifacts is to scale the point depth by 0.9999f, or to decrease the point depth by, for example, 0.001.
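
For comparison, a rough sketch of the polygon offset variant from remdul’s step 5: render the occluder pushed slightly away in depth before the second readback, so that points lying on a surface win the comparison without scaling their depth (drawOccluderGeometry() is a placeholder for your own scene rendering):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f); // a positive offset pushes filled polygons away from the eye
drawOccluderGeometry();      // placeholder: render the visual geometry / occluder
glDisable(GL_POLYGON_OFFSET_FILL);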

I want to thank you all, guys; the labels now work perfectly.

Thanks again,

Alberto