Dynamic rendering w/o legacy (glBegin/glEnd).

Hello,

I’ve implemented a simple system to render text on the display, using the FreeType library to load the glyph bitmaps. At the moment I just calculate everything, write it into RAM, upload it to VRAM with glBufferData and then draw it directly. That’s 4 floats per vertex and 6 vertices per char -> 120 floats (480 bytes) just for the word “Hello”. Now I’ve been wondering how I can do this faster/better.

I googled a bit and found glBufferSubData, which does not allocate a ‘new’ buffer but reuses the existing one. For that I’d only have to fill the buffer once with a reasonable amount of (garbage) data, then each frame update just the first X bytes I actually need and draw. That would get rid of the allocation every frame. Then I found glMapBuffer, which gives me a pointer to write to (in RAM). That’s even better: just map, write, unmap, draw. But it’s still not perfect, since I have to unmap the buffer before I draw and map it again every frame, which probably costs me an allocation in RAM (not sure, though). What I’d really like is a kind of buffer mapping where I just keep the mapping, tell GL which part of the data I changed so it can flush that range, and then draw. That would get rid of the map/unmap every frame.
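To make this concrete, here’s roughly what my per-frame path looks like right now, plus the glMapBuffer variant as I understand it (buffer and variable names are just placeholders):

[CODE]
/* Current approach, every frame: 4 floats per vertex, 6 vertices per char. */
GLsizeiptr bytes = numChars * 6 * 4 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, textVBO);
glBufferData(GL_ARRAY_BUFFER, bytes, vertexData, GL_STREAM_DRAW); /* re-allocates every frame */
glDrawArrays(GL_TRIANGLES, 0, numChars * 6);

/* glMapBuffer variant: allocate MAX_BYTES once up front, then every frame: */
glBindBuffer(GL_ARRAY_BUFFER, textVBO);
float *ptr = (float *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, vertexData, bytes);
glUnmapBuffer(GL_ARRAY_BUFFER);
glDrawArrays(GL_TRIANGLES, 0, numChars * 6);
[/CODE]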

I’m sure I’m not the first one to get to this point, and I won’t be the last. The problem is that eventually I’ll have to draw a lot of text and boxes (GUI), and since most of that changes fairly often, I need a way to update the data frequently without the “overhead” of hundreds of GL calls. Can anybody point me in the right direction: what I should do and what I definitely shouldn’t?

PS: Am I right that I have to “fill” the buffer once with garbage to “set” the size I need (i.e. use glBufferData to upload X bytes of garbage from RAM to VRAM, so that I know X bytes are allocated in VRAM) before I can use glBufferSubData or glMapBuffer? (glBufferSubData could probably detect on its own that the buffer is too small, but since glMapBuffer just hands me a pointer, there’s no way it could know how many bytes I’ve written, correct?)

You can call glBufferData with NULL to do the initial fill. On modern OpenGL it’s better to use glBufferStorage because stock glBufferData allows you to change the size of the buffer with another subsequent glBufferData call, whereas glBufferStorage doesn’t; that means that with glBufferStorage the driver may be better able to optimize the storage for faster access.
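A minimal sketch of the two allocation calls (they’re alternatives, pick one; MAX_BYTES and the flags are just an example, and glBufferStorage needs GL 4.4 or ARB_buffer_storage):

[CODE]
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Mutable storage: the size could later be changed by another glBufferData call. */
glBufferData(GL_ARRAY_BUFFER, MAX_BYTES, NULL, GL_DYNAMIC_DRAW);

/* Immutable storage: size and flags are fixed for the buffer's lifetime,
   but the contents can still be written through the flags given here. */
glBufferStorage(GL_ARRAY_BUFFER, MAX_BYTES, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
[/CODE]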

With glBufferStorage you can then use persistent mapping to achieve your goal of map once/draw many times.
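A rough sketch of the map-once/write-every-frame pattern, assuming the buffer was created with glBufferStorage and the GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT flags from above:

[CODE]
/* Once, right after creating the storage: */
float *mapped = (float *)glMapBufferRange(GL_ARRAY_BUFFER, 0, MAX_BYTES,
        GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

/* Every frame: just write and draw -- no map/unmap calls.
   You do still have to make sure the GPU has finished reading the range you
   overwrite (e.g. glFenceSync/glClientWaitSync, or cycling through sub-ranges). */
memcpy(mapped, vertexData, bytes);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
[/CODE]

With GL_MAP_COHERENT_BIT you don’t need to flush anything; without it you’d map with GL_MAP_FLUSH_EXPLICIT_BIT and call glFlushMappedBufferRange on the part you changed.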

You should never, ever, ever attempt to read from a buffer mapped for this purpose. That applies both to direct reads (e.g. float x = buffer[0];) and to hidden reads that may not be immediately obvious (e.g. buffer[0] += 4.0f;).
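In other words, build the vertex data in an ordinary CPU-side array and copy it across in one go, treating the mapped pointer as write-only (sketch; the helper name is made up):

[CODE]
float scratch[MAX_FLOATS];                      /* ordinary system-memory array */
int n = build_text_vertices(scratch, "Hello");  /* hypothetical helper */
memcpy(mapped, scratch, n * sizeof(float));     /* the only access to the mapped pointer */

/* Not this -- it reads back through the mapped pointer: */
/* mapped[0] += 4.0f; */
[/CODE]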

Don’t focus too much on memory sizes and memory allocations. Memory usage is something to get paranoid about if you’re still in the 1970s. Today it’s often the case that burning some extra memory is an acceptable tradeoff in exchange for performance (e.g. modern hardware is highly unlikely to support 24-bit colours and will punt you through an intermediate software stage if you try to use them, so burn the extra byte per colour, go to 32-bit, and get the extra performance that comes from not needing the intermediate stage). Your objective is to get the data from your program and into the buffer as fast as possible, and if that comes at the cost of burning some extra memory, then do so.

Well, I’ve read about this function, but I guess I was confused by the “immutable”: I thought I couldn’t change its data after setting it once.

That’s great. I guess I’ll have to read through the documentation a bit more, then.

I’d never do that. I have no need to read the data I’ve just written. (And I know, += and even ++ involve a read :P)

[QUOTE=mhagain;1264570]Don’t focus too much on memory sizes and memory allocations. Memory usage is something to get paranoid about if you’re still in the 1970s. Today it’s often the case that burning some extra memory is an acceptable tradeoff in exchange for performance (e.g. modern hardware is highly unlikely to support 24-bit colours and will punt you through an intermediate software stage if you try to use them, so burn the extra byte per colour, go to 32-bit, and get the extra performance that comes from not needing the intermediate stage). Your objective is to get the data from your program and into the buffer as fast as possible, and if that comes at the cost of burning some extra memory, then do so.[/QUOTE]

Yes, that’s correct. I don’t actually worry about a bit of wasted memory, but I don’t know how costly an allocation is, so I’m trying to avoid too many allocations per frame. And since it will be a big-world game (with dynamic loading and unloading of parts of the world), I do have to be a little careful not to waste too much memory.

Aside from the buffer-management issues, you can halve the memory usage by using 16-bit fixed-point values instead of 32-bit floats.
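For example (sketch; attribute indices and layout are arbitrary): store positions and texcoords as normalized unsigned shorts and scale them back in the vertex shader or through your projection matrix.

[CODE]
/* 2 x position + 2 x texcoord as normalized 16-bit values -> 8 bytes per vertex
   instead of 16. */
GLsizei stride = 4 * sizeof(GLushort);
glVertexAttribPointer(0, 2, GL_UNSIGNED_SHORT, GL_TRUE, stride, (void *)0);
glVertexAttribPointer(1, 2, GL_UNSIGNED_SHORT, GL_TRUE, stride,
                      (void *)(2 * sizeof(GLushort)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
[/CODE]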

Another possibility for drawing axis-aligned rectangles is to use instanced rendering, where each triangle pair is an instance. This also halves the data size (you only need to supply 2 corners of each rectangle).
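A sketch of how that might look, assuming one vec4 per rectangle (xmin, ymin, xmax, ymax) as a per-instance attribute, with the vertex shader expanding it into the four corners via gl_VertexID or a small unit-quad attribute:

[CODE]
/* Per-instance data: one vec4 per rectangle. */
glBindBuffer(GL_ARRAY_BUFFER, rectVBO);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void *)0);
glEnableVertexAttribArray(2);
glVertexAttribDivisor(2, 1);   /* advance this attribute once per instance */

/* One triangle pair (4-vertex strip) per instance, numRects instances. */
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numRects);
[/CODE]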