Text scaling to match GDI pt size?

When doing 2D OpenGL rendering and showing text (via wglUseFontOutlines and then calling the display lists), I can use glScale to control the text size. My question is, what value should I pass to glScale to get text rendered at a particular point size? For example, say I have a 14pt font rendered using GDI; I want to get the same result using 2D OpenGL rendering instead.

So I want to pick a scaling value that results in the text rendering at the same size as it would if I were doing GDI rendering with the 14pt font. Details are below.


I’m using Windows XP with Visual Studio 2005 and C++ developing a native Windows app.

I’m doing some OpenGL 2D rendering on top of the 3D rendered scene. So after doing the 3D rendering but before calling SwapBuffers, I set things up for 2D rendering so that the coordinate system lines up with the screen and the units are in pixels, using the code below (pCanvasRect is set via GetClientRect for my window).

LONG rWidth = abs(pCanvasRect->right - pCanvasRect->left);
LONG rHeight = abs(pCanvasRect->bottom - pCanvasRect->top);

glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

// Viewport covering the window's client area
glViewport(pCanvasRect->left, pCanvasRect->top, rWidth, rHeight);

// Orthographic projection in pixel units: origin at the top left, Y increasing downward
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, rWidth - 1, rHeight - 1, 0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

The result of the above is that I can use the same coordinate system I would normally use with GDI rendering: origin at the top left, positive X to the right, and positive Y down. I can render a 2D box in OpenGL which is 100 units wide and it renders 100 pixels wide on the screen. One note is that I have to use a negative Y scale when drawing the text so it renders right-side up (due to the Y axis reversal).

Anyway, if I’m trying to render text which looks identical to 14pt GDI rendering using the same font, how do I figure out the correct scaling?

The examples I’ve dug through on the net all seem to just apply arbitrary scaling, probably arrived at by trial and error; I’d like to pick the correct scaling to match a given point size.

Any ideas?

If you find out how, please let me know. :)

wglUseFontOutlines() takes a pointer to an array of GLYPHMETRICSFLOAT structures, so you might want to look at that.
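For reference, this is what it fills in for each glyph (from wingdi.h):

typedef struct _GLYPHMETRICSFLOAT {
    FLOAT       gmfBlackBoxX;      // width of the smallest rectangle enclosing the glyph
    FLOAT       gmfBlackBoxY;      // height of that rectangle
    POINTFLOAT  gmfptGlyphOrigin;  // upper-left corner of that rectangle
    FLOAT       gmfCellIncX;       // horizontal advance to the next glyph cell
    FLOAT       gmfCellIncY;       // vertical advance to the next glyph cell
} GLYPHMETRICSFLOAT;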

wglUseFontOutlines() completely ignores the LOGFONT lfHeight parameter, so the fonts have to be scaled to whatever size you want.

You may find some particular scaling value that works well with a certain font at a certain size, but as soon as you switch to another font, that scaling value probably won’t work so well anymore. So you really have to get into the technical details of how the font is created, I suppose.

If you dig through MSDN you’ll find info on GetDeviceCaps and a host of parameters (HORZRES, LOGPIXELSX, etc.) which will give you what you need to convert between points, pixels, inches, and whatever else you need.
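For example, a minimal sketch of the usual points-to-pixels conversion (PointsToPixels is just a name I’m using here; hdc should be a DC for the display you’re measuring):

#include <windows.h>

// 1 point = 1/72 inch; LOGPIXELSY is the display's pixels per logical inch.
int PointsToPixels(HDC hdc, int points)
{
    return MulDiv(points, GetDeviceCaps(hdc, LOGPIXELSY), 72);
}

A 14pt font on a standard 96 dpi display works out to about 19 pixels, which is also the value conventionally passed (negated) as LOGFONT lfHeight.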

I think I figured it out!

To determine the scale factors for a given glyph, you get the black box size (size of the smallest rectangle which encloses the rendered pixels) using both GDI and OpenGL. You then just divide the GDI black box size by the OpenGL black box size to get scaling in X and Y.

So, with the GDI font selected in a DC, you call GetGlyphOutline using GGO_METRICS, which will tell you the GDI black box size.

When you create the display list for the glyph using wglUseFontOutlines, this will tell you the OpenGL black box size.
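As a rough sketch, building the lists looks something like this, assuming the GDI font is already selected into hdc (listBase and gmf are just the names I’m using):

GLYPHMETRICSFLOAT gmf[256];            // receives the OpenGL black box / advance values
GLuint listBase = glGenLists(256);

wglUseFontOutlines(hdc,
                   0, 256,             // first character and character count
                   listBase,           // base display list id
                   0.0f,               // max chordal deviation from the true outline
                   0.0f,               // extrusion depth (0 = flat glyphs for 2D)
                   WGL_FONT_POLYGONS,  // filled polygons rather than line outlines
                   gmf);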

Now, just divide the GDI size by the OpenGL size and you have the necessary scaling.

A couple notes:

When calling GetGlyphOutline, you must pass in a valid transform (MAT2). Just zero it out and set the diagonal (eM11 and eM22) to 1 so it holds the identity matrix, and it should work. I found the MSDN docs a little confusing here, but this is what you need to do.

You need to do this for each glyph you render, so if you’re rendering a string 20 chars long, you’ll want to scale differently for each char using the above method.

When you use wglUseFontOutlines it creates a display list which contains the calls to render the glyph, and it also contains a translation. Since you want to scale each letter, you’ll want to push and pop the matrix around the display list call (push, scale, call list, pop). This means you’ll lose the translation provided by the display list, so you need to translate yourself after the pop. The correct translation is the cell increment values taken from the GDI GLYPHMETRICS structure (obtained with GetGlyphOutline).
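Here’s a rough sketch of the per-glyph loop I’m describing. DrawStringLikeGDI is just a name for illustration; it assumes the listBase and gmf array from wglUseFontOutlines (for characters 0 through 255), the same GDI font still selected into hdc, and the 2D pixel-space projection from my first post:

#include <windows.h>
#include <GL/gl.h>

void DrawStringLikeGDI(HDC hdc, GLuint listBase, const GLYPHMETRICSFLOAT* gmf,
                       const char* text, float x, float y)
{
    MAT2 identity = {};            // GetGlyphOutline requires a valid transform:
    identity.eM11.value = 1;       // zero everything, then set the diagonal of the
    identity.eM22.value = 1;       // FIXED 2x2 matrix to 1 (the identity)

    glPushMatrix();
    glTranslatef(x, y, 0.0f);      // pen position in pixel coordinates

    for (const unsigned char* p = (const unsigned char*)text; *p; ++p)
    {
        GLYPHMETRICS gm;
        if (GetGlyphOutline(hdc, *p, GGO_METRICS, &gm, 0, NULL, &identity) == GDI_ERROR)
            continue;

        const GLYPHMETRICSFLOAT& g = gmf[*p];

        // GDI black box (pixels at the selected font size) divided by the GL black
        // box (normalized units) gives the per-glyph scale factors.
        float sx = (g.gmfBlackBoxX > 0.0f) ? gm.gmBlackBoxX / g.gmfBlackBoxX : 1.0f;
        float sy = (g.gmfBlackBoxY > 0.0f) ? gm.gmBlackBoxY / g.gmfBlackBoxY : 1.0f;

        glPushMatrix();
        glScalef(sx, -sy, 1.0f);   // negative Y so the glyph is right-side up in the
        glCallList(listBase + *p); // Y-down projection; the list's own translation
        glPopMatrix();             // is discarded by the pop...

        // ...so advance the pen with the GDI cell increments instead
        // (gmCellIncY is normally 0 for horizontal text).
        glTranslatef((float)gm.gmCellIncX, (float)gm.gmCellIncY, 0.0f);
    }

    glPopMatrix();
}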

Note that, although the above will get the GL text rendering to match GDI in terms of size, one difference you’ll immediately notice is that GDI text rendering provides anti-aliasing for you without you having to do anything extra. The GL text rendering does not and you’ll have to do some extra work to get anti-aliasing.

You should read this about fonts, anti-aliasing, Windows, OS X, Linux, Adobe, etc.:
http://antigrain.com/research/font_rasterization/

“To determine the scale factors for a given glyph, you get the black box size (size of the smallest rectangle which encloses the rendered pixels) using both GDI and OpenGL.”

Is this correct?

Because the docs say (not pixels):
“The values of GLYPHMETRICSFLOAT are specified as notional units.”

I haven’t looked at this in a while, but I would assume the black box values from both GDI and OpenGL would be identical. After all, that’s where OpenGL gets them in the first place.

Actually, if you check out the wine source for wglUseFontOutlines(), you’ll see they come from GetGlyphOutline:
http://wine.sourcearchive.com/documentation/1.0.0/wgl_8c-source.html


needed = GetGlyphOutlineA(hdc, glyph, GGO_NATIVE, &gm, 0, NULL, &identity);

And I think maybe the only difference is that they normalize all the values by em_size, or 1024.

And if you draw the text per character, how do you handle proper placement and proper spacing?

The black box size values in GLYPHMETRICSFLOAT are not necessarily pixels. I believe they’re scaled to be between 0 and 1 for the entire font when the display lists are generated. It just so happens that in my case the units come out as pixels because of how I’ve set things up for 2D rendering (see my original post for specifics on how I set up GL), but in general this wouldn’t be true.

I took a look at one of the letters, ‘W’, and confirmed that the black box sizes returned in GLYPHMETRICSFLOAT were not pixels. For example, let’s say the black box height was 0.8. I was able to apply a Y scaling of 100 and verify that the letter drew 80 pixels tall on the screen.

The GDI and OpenGL black box sizes aren’t identical because OpenGL scales the sizes to be between 0 and 1, while GDI gives the size which actually matches the font size you’re rendering with. If I create a 14pt GDI font, select it into a DC, and then call GetGlyphOutline, it returns info for that specific size, whereas GL returns a size between 0 and 1.

I can’t speak to how wine does it, but this seems to be the way Windows does it. I also haven’t tested my code on wine as I don’t use it, but that might be an interesting exercise.

Anyway, for a given font, the technique I posted before does seem to work. Dividing the black box size in GDI by the black box size in GL gives a scaling factor for each letter which results in the GL rendered text matching the GDI text size. The only thing missing appears to be the anti-aliasing which GDI does for you, so if you wanted that you’d need to do it yourself in GL.

As for the proper placement and spacing, see my earlier post which explains this. You undo the built-in translation provided by the display list and you then apply translation using the GDI GLYPHMETRICS info.

Using this technique I can render 2 lines of text, one with GL and the other with GDI and they’re sized/spaced/placed identically. The only difference is the anti-aliasing which I mentioned.
