HW Accel GL_ALPHA Pbuffers?

Are they possible?

I have a text rendering system that I want to optimize by using render-to-texture to dynamically pack my strings into a large texture. My fonts are all alpha-only textures, which I chose because it's 1/4 the space of an RGBA texture.

Nvidia’s Pixel Format tool shows that my GeForce2 GTS supports only RGBA pbuffers, and I was wondering whether it’s simply not listing non-RGBA pbuffers. If I’m using render-to-texture, am I still limited to the pbuffer formats, or can I use any HW-accelerated texture format (like GL_ALPHA)?

Also, is it possible to render to a texture using a different part of the same texture as the source pixels? I was thinking about a scheme for repacking my strings when they become too fragmented, and it involves something like this. Right now, I’m thinking of keeping one line of the text buffer empty and then copying fragmented lines to the empty line, thus compacting them. The old fragmented line then becomes the free line. I’m not sure whether this is the best algorithm for this or not.
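To make the spare-line idea concrete, here is a minimal sketch of the bookkeeping side in C. Everything here is hypothetical (the line count, the struct, and the function names are not from the post), and the actual pixel move would be a GL step, e.g. a textured-quad pass or glCopyTexSubImage2D, at the marked spot:

```c
#include <assert.h>

#define NUM_LINES 8  /* hypothetical number of text lines in the cache texture */

typedef struct {
    int used_pixels[NUM_LINES]; /* pixels occupied by live strings */
    int free_pixels[NUM_LINES]; /* pixels wasted by dead, fragmented strings */
    int spare;                  /* index of the line kept empty */
} LineCache;

/* Find the most fragmented line, move its live strings into the spare
 * line, and let the old fragmented line become the new spare.
 * Returns the destination line, or -1 if no line is fragmented. */
int compact_worst_line(LineCache *c) {
    int worst = -1, worst_waste = 0;
    for (int i = 0; i < NUM_LINES; ++i) {
        if (i != c->spare && c->free_pixels[i] > worst_waste) {
            worst_waste = c->free_pixels[i];
            worst = i;
        }
    }
    if (worst < 0) return -1;

    int dest = c->spare;
    /* ...the GL copy of live strings from 'worst' to 'dest' would go here... */
    c->used_pixels[dest] = c->used_pixels[worst]; /* now packed tightly */
    c->free_pixels[dest] = 0;
    c->used_pixels[worst] = 0;
    c->free_pixels[worst] = 0;
    c->spare = worst;  /* the old fragmented line is the new free line */
    return dest;
}
```

Note that reading from and rendering into the same texture is undefined in GL, which is why the compaction copies into a line that is guaranteed empty.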

Thanks all,
Stephen

What you want to do is compute a texture that represents the strings displayed in your app?

Wouldn’t it be simpler to compute those textures using the CPU?

If you are suggesting immediate mode: right now my text system uses immediate mode (glVertex2f and glTexCoord2f) to render the font characters, but it is eating up way too much CPU time because I am displaying lots of text. So what I want to do is render the strings to a texture and just draw one quad per string.
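Drawing one quad per string just means mapping the string's pixel rectangle inside the cache texture to texture coordinates. A small sketch (the struct and function names are made up for illustration):

```c
/* Texture coordinates for the quad that draws one cached string. */
typedef struct { float u0, v0, u1, v1; } TexRect;

/* Map a string's pixel rectangle (x, y, w, h) inside a tex_w x tex_h
 * cache texture to the GL texcoords used by its quad. */
TexRect string_texcoords(int x, int y, int w, int h, int tex_w, int tex_h) {
    TexRect r;
    r.u0 = (float)x / (float)tex_w;
    r.v0 = (float)y / (float)tex_h;
    r.u1 = (float)(x + w) / (float)tex_w;
    r.v1 = (float)(y + h) / (float)tex_h;
    return r;
}
```

These four values would feed the glTexCoord2f calls (or a vertex array) for the string's single quad.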

I have already profiled display lists and various vertex arrays (including VBOs) for the text, and I’ve found that most of these methods are slower than immediate mode unless you have more than 5-10 characters; it also depends on how often you update the text. Profiling also shows that display lists are ungodly slow on all cards if you just have a few characters. Furthermore, by profiling on different cards in the GeForce and Radeon lines, I discovered that on low-end Nvidia cards the DLs are stored in AGP (or main) memory and require lots of CPU cycles to transfer each frame (ATI’s drivers store them in video memory even on a Radeon 7200, if my memory serves me correctly).

The best solution I can see to this problem is rendering strings of text into a larger texture (to avoid texture-swapping performance problems) and then rendering from that texture to the screen… I’ve seen other people do this before. Since the strings are relatively static (most change on the order of maybe every 30 seconds), this is a net win.
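Packing many strings into one large texture needs an allocator of some kind; a simple shelf packer is one way to sketch it (the texture dimensions and names here are hypothetical, and this ignores the repacking/fragmentation question from earlier in the thread):

```c
/* Hypothetical cache-texture dimensions. */
#define TEX_W 1024
#define TEX_H 1024

typedef struct { int x, y; } Spot;

/* Shelf-packer cursor: fill a row left to right, then start a new
 * shelf below the tallest item placed on the current shelf. */
static int cur_x = 0, cur_y = 0, shelf_h = 0;

/* Returns the top-left texel for a w x h string image, or (-1, -1)
 * when the cache texture is full and needs repacking. */
Spot alloc_string(int w, int h) {
    Spot s = { -1, -1 };
    if (cur_x + w > TEX_W) {          /* shelf full: start a new one */
        cur_x = 0;
        cur_y += shelf_h;
        shelf_h = 0;
    }
    if (cur_y + h > TEX_H) return s;  /* out of room */
    s.x = cur_x;
    s.y = cur_y;
    cur_x += w;
    if (h > shelf_h) shelf_h = h;
    return s;
}
```

Each allocated spot is where a string gets rendered once; afterwards, drawing the string is just one textured quad.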

Edit - I’m not sure whether the DLs are being stored in AGP or main memory on low-end Nvidia cards, but they are using an ungodly number of CPU cycles to render, so I’m guessing they’re stored in the card’s command queue or in main memory without being DMA-pulled, since DMA wouldn’t use CPU cycles.

[This message has been edited by Stephen_H (edited 08-05-2003).]