Need suggestions... (might be slightly OT)

The following code is what I use to “print” text to the screen in my OpenGL programs (or what I will be using, since I’m still working on it).

Anyone have any suggestions on how to improve the OpenGL part of this? I don’t want to preallocate the texture memory for it and then subload the texture, because that would limit my string size (and I don’t want to have a screen-sized texture).

Thanks.

HDC hFontDC=NULL;
HBITMAP hFontBitmap=NULL;
HFONT hFont=NULL;

void Font_Print(float x, float y, char *String, ...)
{
	char *Text=NULL, *Ptr=NULL;
	BITMAPINFO BitmapInfo;
	RECT Rect;
	SIZE Size;
	unsigned char *Bitmap;
	int i;
	va_list ap;

	/* Size the buffer from the actual formatted length (_vscprintf is in the
	   MSVC CRT); the old strlen(String)*2 could overflow once the arguments
	   were expanded */
	va_start(ap, String);
	i=_vscprintf(String, ap);
	va_end(ap);

	Text=(char *)malloc(i+1);

	if(Text==NULL)
		return;

	va_start(ap, String);
	vsprintf(Text, String, ap);
	va_end(ap);

	hFontDC=CreateCompatibleDC(NULL);

	hFont=CreateFont(32, 0, 0, 0, FW_BLACK, FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY, DEFAULT_PITCH, NULL);
	SelectObject(hFontDC, hFont);

	memset(&Size, 0, sizeof(SIZE));

	/* Measure line by line: the widest line becomes the texture width,
	   the line heights accumulate into the texture height */
	for(Ptr=Text, i=0;;Ptr++, i++)
	{
		if(*Ptr=='\n'||*Ptr=='\0')
		{
			SIZE StringSize;

			GetTextExtentPoint32(hFontDC, Ptr-i, i, &StringSize);

			if(StringSize.cx>Size.cx)
				Size.cx=StringSize.cx;

			Size.cy+=StringSize.cy;

			if(*Ptr=='\0')
				break;

			i=-1;	/* the loop increment brings this back to 0 on the next char */
		}
	}

	memset(&BitmapInfo, 0, sizeof(BITMAPINFO));
	BitmapInfo.bmiHeader.biSize=sizeof(BITMAPINFOHEADER);
	BitmapInfo.bmiHeader.biWidth=Size.cx;
	BitmapInfo.bmiHeader.biHeight=Size.cy;
	BitmapInfo.bmiHeader.biBitCount=8;
	BitmapInfo.bmiHeader.biPlanes=1;

	/* 8-bit DIB; only the raw index values matter (0=background, 255=text) */
	hFontBitmap=CreateDIBSection(hFontDC, &BitmapInfo, DIB_PAL_COLORS, (void **)&Bitmap, NULL, 0);
	SelectObject(hFontDC, hFontBitmap);

	SetBkColor(hFontDC, DIBINDEX(0));
	SetTextColor(hFontDC, DIBINDEX(255));

	SetRect(&Rect, 0, 0, Size.cx, Size.cy);
	DrawText(hFontDC, Text, -1, &Rect, DT_NOCLIP);

	// No texture object, just texture OpenGL 1.0 style!
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP);
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP);
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_ALPHA8, Size.cx, Size.cy, 0, GL_ALPHA, GL_UNSIGNED_BYTE, Bitmap);
	glEnable(GL_TEXTURE_RECTANGLE_EXT);

	/* Texture rectangle coords are in pixels, not [0,1] */
	glBegin(GL_TRIANGLE_STRIP);
	glTexCoord2f(0.0f, (float)Size.cy);
	glVertex2f(x, y+(float)Size.cy);
	glTexCoord2f(0.0f, 0.0f);
	glVertex2f(x, y);
	glTexCoord2f((float)Size.cx, (float)Size.cy);
	glVertex2f(x+(float)Size.cx, y+(float)Size.cy);
	glTexCoord2f((float)Size.cx, 0.0f);
	glVertex2f(x+(float)Size.cx, y);
	glEnd();
	glDisable(GL_TEXTURE_RECTANGLE_EXT);

	DeleteObject(hFont);
	DeleteObject(hFontBitmap);
	DeleteDC(hFontDC);

	free(Text);
}
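
For reference, I call it with pixel coordinates under an ortho projection, roughly like this (the 800x600 and the frameTime variable are just examples; the blend is there because the texture is alpha-only):

	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(0.0, 800.0, 0.0, 600.0, -1.0, 1.0);	/* match the window, in pixels */
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	glColor3f(1.0f, 1.0f, 1.0f);	/* text color (GL_MODULATE uses it with the alpha texture) */
	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

	Font_Print(10.0f, 10.0f, "Frame time: %.2fms", frameTime);

	glDisable(GL_BLEND);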

Edit:
BTW, feel free to use this code if you want! I know some might find it useful (make a Unicode version and display Chinese, Japanese, etc.).

Edit again:
Forgot to fix a few things in the code…

[This message has been edited by NitroGL (edited 06-20-2003).]

I thought I’d throw this code into my project to replace my old font-rendering code, but I’m having problems compiling because of the GL_TEXTURE_RECTANGLE_EXT tokens in the glTexParameteri() calls. The newest version of glext.h I found on the SGI site didn’t define them, and I googled to no avail. Where do I get the definition for this identifier?

By the way, thanks a lot, Nitro, for releasing this code. I’d seen people on this board mention creating a separate DC and using GDI calls for font rendering, but was less than eager to learn all the necessary GDI font functions. I can’t express the depths of my appreciation for saving me countless hours slogging through MSDN.

What? You call that thing every frame for every bit of text?

The code has bugs. You need to deselect the font and bitmap before deleting them. Then you delete the DC.

I would not call something like this once per frame.

I know it isn’t done the best way (was really only a test to begin with), but it seems fast enough (600+ FPS with a simple program on my 9700).

Why would I need to deselect the objects if the DC is getting destroyed?

Originally posted by scurvyman:
I thought I’d throw this code into my project to replace my old font-rendering code, but I’m having problems compiling because of the GL_TEXTURE_RECTANGLE_EXT tokens in the glTexParameteri() calls. The newest version of glext.h I found on the SGI site didn’t define them, and I googled to no avail. Where do I get the definition for this identifier?

The enums are the same as the NV version of the extension, so you can just use those. I just use the EXT name and check for both the EXT and NV extension strings (in my own program, that is).
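
If your glext.h doesn’t have it, you can define the token yourself; the value is the same one the NV extension uses:

	#ifndef GL_TEXTURE_RECTANGLE_EXT
	#define GL_TEXTURE_RECTANGLE_EXT 0x84F5	/* same value as GL_TEXTURE_RECTANGLE_NV */
	#endif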

Another approach is to take a texture with the ASCII character set in it, build a textured quad for each letter, and store each one in a display list. Then you can call it anywhere with something like this:

/* Position The Text (0,0 - Bottom Left) */
glTranslated( x, y, 0 );

/* Choose The Font Set (0 or 1) */
glListBase( base - 32 + ( 128 * set ) );

sprintf( string, "write out x %i", x );
/* Write The Text To The Screen */
glCallLists( strlen( string ), GL_BYTE, string );
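
In case it isn’t obvious how base gets built, here’s a rough sketch (it assumes a 256x256 font texture laid out as a 16x16 grid of 16-pixel glyph cells, NeHe-style, with cell 0 holding the glyph for space, which is what the -32 in glListBase compensates for; fontTexture is whatever texture you loaded):

	GLuint base;	/* first display list id, used by glListBase above */

	void BuildFont(GLuint fontTexture)
	{
		int i;

		base=glGenLists(256);	/* one list per character cell */
		glBindTexture(GL_TEXTURE_2D, fontTexture);

		for(i=0; i<256; i++)
		{
			float cx=(float)(i%16)/16.0f;	/* glyph column in the texture */
			float cy=(float)(i/16)/16.0f;	/* glyph row in the texture */

			glNewList(base+i, GL_COMPILE);
			glBegin(GL_QUADS);
			glTexCoord2f(cx, 1.0f-cy-0.0625f);		glVertex2i(0, 0);
			glTexCoord2f(cx+0.0625f, 1.0f-cy-0.0625f);	glVertex2i(16, 0);
			glTexCoord2f(cx+0.0625f, 1.0f-cy);		glVertex2i(16, 16);
			glTexCoord2f(cx, 1.0f-cy);			glVertex2i(0, 16);
			glEnd();
			glTranslated(10.0, 0.0, 0.0);	/* fixed advance to the next character */
			glEndList();
		}
	}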

Originally posted by NitroGL:
I know it isn’t done the best way (was really only a test to begin with), but it seems fast enough (600+ FPS with a simple program on my 9700).

Fast enough? 600FPS to print some text?! How long does your function take in, say, milliseconds? Put some timing code in there to see just how long it takes on your machine.
All that GDI stuff is not going to be good for your health. Bear in mind the final texture upload is going to be slower than transforming the quads as in azcoder’s suggestion (which may not even need to be sent across the bus because they’re display listed).
With a significant amount of text on screen, you should see a significant performance improvement.
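
If you want actual numbers, something like this would settle it (a rough sketch; the glFinish is there so you time the GL work too, not just the call overhead):

	LARGE_INTEGER freq, t0, t1;

	QueryPerformanceFrequency(&freq);

	QueryPerformanceCounter(&t0);
	Font_Print(10.0f, 10.0f, "some test text");
	glFinish();	/* make sure the GL work is done before stopping the clock */
	QueryPerformanceCounter(&t1);

	printf("Font_Print: %.3f ms\n", (double)(t1.QuadPart-t0.QuadPart)*1000.0/(double)freq.QuadPart);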

Ok mistar smarty-pants, how would you do a dynamic string render with Unicode support? The way my code does it is the way that was suggested for doing Unicode on-screen text.

For Unicode you can’t have one texture, because it would be too big (a 4096x4096 texture to hold 64K glyphs at 16x16 pixels each), though you would have the advantage of just drawing a bunch of quads. And multiple textures wouldn’t be a very good idea, because together they would take up the same memory space as the one big texture.

BTW, 600FPS isn’t just the text itself; I had a 3D model in there too (a sphere, I think).

Originally posted by scurvyman:
I’d seen people on this board mention creating a separate DC and using GDI calls for font rendering, but was less than eager to learn all the necessary GDI font functions.

Well, I’m not sure what that was all about with the separate DC, but I tried using GDI for font rendering in a window with an OpenGL RC. Advantages: less pain with the coding, fast text rendering. Disadvantage: if you’re actually using double-buffered mode for OpenGL, you get font blinking all the time, since the GDI text functions draw as if the DC were single-buffered (I was actually calling them after SwapBuffers). Don’t know how to fix that, though…

knackered: 600 fps means the whole frame takes 1000/600 ≈ 1.67 ms, so less than 1.67 milliseconds is spent on the text painting. I’d say this is pretty good (maybe even incredibly good). azcoder’s approach also has the disadvantage that you have to put a lot of effort into the code if you want to cope with kerning (especially important for italic fonts).

EDIT: typo

[This message has been edited by stefan (edited 06-24-2003).]

Well, this is pretty insane.
Firstly, 1.67 milliseconds is relevant only if we know these things:

  1. what the spec of his machine is (cpu, bus speed etc.). We know he used a radeon9700.
  2. how many triangles in the scene he was rendering, minus the text
  3. how big his viewport was
  4. what the font size was
  5. how much text in each call to his Font_Print function
  6. how many times he called the Font_Print function per frame

Without this information, how can you say that 1.67 milliseconds is “pretty good”? Would it be pretty good for a ZX Spectrum?

Just time your Font_Print function, and I’ll be happy.
I don’t believe that issuing GDI calls in a tight render loop is going to be the best option for performance.
Unicode? Err, don’t care.

For Unicode you can’t have one texture, because it would be too big (a 4096x4096 texture to hold 64K glyphs at 16x16 pixels each)

Well, no games use that many characters. Usually when you use Chinese or Japanese, you only use a subset of the language.

In the case of Japanese, it’s even worse: the three alphabets are often mixed together to get “nicer” rendering of the text, as in manga cartoons.

OK, I’ve measured the time of my GDI text drawing in an OpenGL double-buffered app.

HW was: Celeron 465MHz, Radeon 9000 Pro, resolution = 800x600x32.

The rendering loop includes glClear (color+depth clearing), SwapBuffers, and performance calculations using QueryPerformanceCounter, plus the text drawing with font height = 12; the output strings were:

“We are ROLLING!!! 8) %f ms”
“average %f ms”

where average frame time was calculated for 1000 frames.

So the average frame time came out to 1.5 ms

[b]Well, this is pretty insane.
Firstly, 1.67 milliseconds is relevant only if we know these things:

  1. what the spec of his machine is (cpu, bus speed etc.). We know he used a radeon9700.[/b]

Athlon XP 2000+, 512MB DDR RAM, EPoX 8RDA+, and 9700 Pro in AGP8x.

2. how many triangles in the scene he was rendering, minus the text

The program uses about 53.3MB of video memory, which includes all textures and model data (using ARB_VBO).
Total triangle count is 28388.

3. how big his viewport was

800x600 with a 64bpp pixel format; it’s a 32-bit/64-bit double-buffer scheme (64-bit back buffer, 32-bit front), which I use for HDR rendering.

4. what the font size was

32 pixels tall (I don’t know the point size of that)

5. how much text in each call to his Font_Print function

25-30 chars

6. how many times he called the Font_Print function per frame

I try to keep all my text in one call, so only once per frame.

[b]Without this information, how can you say that 1.67 milliseconds is “pretty good”? Would it be pretty good for a ZX Spectrum?

Just time your Font_Print function, and I’ll be happy.
I don’t believe that issuing GDI calls in a tight render loop is going to be the best option for performance.
Unicode? Err, don’t care. [/b]

My code runs about 1.5ms to 2.0ms in my HDR application (which averages about 40-60FPS; it’s fillrate limited).

Edit:
I should say that this isn’t the program I get 600FPS in.

[This message has been edited by NitroGL (edited 06-25-2003).]