offscreen rendering to mpeg video?

Hello,

I’m using mesa’s offscreen rendering facility to render
frames (and currently writing them out to disk as tga files). However, I would like to instead write them out
to an mpeg video file. There are some mpeg libraries
around (like libfame for instance), but I’m not sure how
to convert my buffered rgba data to the yuv12 format
it requires; maybe there is a better library to use? At
this point I can dump all the tga’s to disk and then use
a batch mpeg creation utility to generate mpegs, but
this method is unacceptably slow (with all the disk writes
for image files, and then only starting to create the
mpeg after all rendering and simulation calculations
have completed…)

Has anyone been successful with something like this? I’m
open to suggestions, please help!

scott olsson

Well, since no one replied to my question, and
I’ve since figured it out myself, I’ll run
down how I eventually did this (it’s pretty
nifty).

For starters, I used Mesa’s offscreen rendering facility
(newer versions of OpenGL offer something similar, though
through different function calls). When you render offscreen,
Mesa draws into a buffer of WIDTH*HEIGHT*4 unsigned bytes
(4 since I was in rgba mode), which maps linearly to the
pixels, starting at the bottom-left and running up to the top.
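
For reference, here is a minimal sketch of the OSMesa setup;
the dimensions and the actual drawing are just placeholders
for whatever your own code does.

  #include <GL/osmesa.h>
  #include <GL/gl.h>
  #include <stdlib.h>

  #define WIDTH  640   /* placeholder dimensions */
  #define HEIGHT 480

  int main(void)
  {
      /* Offscreen color buffer: WIDTH*HEIGHT*4 unsigned bytes (rgba). */
      unsigned char *buffer = malloc(WIDTH * HEIGHT * 4);

      /* Create an rgba offscreen context and bind it to the buffer. */
      OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
      OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, WIDTH, HEIGHT);

      /* ... issue ordinary OpenGL calls here; the results land in
         buffer, stored bottom-up (row 0 is the bottom scanline). */

      OSMesaDestroyContext(ctx);
      free(buffer);
      return 0;
  }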

I used libfame to do the actual mpeg packing, which has a
pretty simple interface (only 3 function calls to learn).
The catch is that you first have to convert your rgba data
to yuv12 format, and this is where it gets a bit more
interesting.
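
Roughly, the encoding loop looks something like the sketch
below. The parameter and struct field names here are from
memory and may not match your version of libfame exactly, so
treat them as assumptions and check fame.h for the real ones.

  #include <fame.h>
  #include <stdio.h>

  /* Rough sketch only -- field names are approximate; see fame.h. */
  void encode_frames(unsigned char *y, unsigned char *u, unsigned char *v,
                     int width, int height, int nframes, FILE *out)
  {
      static unsigned char outbuf[512 * 1024];  /* compressed output buffer */
      fame_parameters_t params;
      fame_context_t *fc = fame_open();

      params.width  = width;                    /* assumed field names */
      params.height = height;
      /* ... frame rate, quality, etc. go here; names vary by version ... */

      fame_init(fc, &params, outbuf, sizeof(outbuf));

      for (int i = 0; i < nframes; i++) {
          fame_yuv_t yuv;
          yuv.w = width;                        /* assumed field names */
          yuv.h = height;
          yuv.y = y;                            /* planar Y, U, V pointers */
          yuv.u = u;
          yuv.v = v;

          int len = fame_encode_frame(fc, &yuv, NULL);  /* NULL = no mask */
          fwrite(outbuf, 1, len, out);

          /* ... render the next frame and refill the y/u/v planes ... */
      }

      fame_close(fc);
  }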

yuv12 stores pixel data in a luminance/chrominance color
space rather than rgb (luminance meaning how bright
something is, chrominance meaning its color value). Since
human vision is more sensitive to changes in luminance than
to changes in chrominance, you can subsample the chrominance
values. In particular, yuv12 keeps a chrominance sample for
only every other pixel (in x) and every other line (in y),
so one would say the chrominance is downsampled by 2x2. The
luminance is not subsampled.
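
In terms of storage, that works out to 1.5 bytes per pixel
per frame; roughly (assuming even width and height):

  /* Buffer size for one WIDTH x HEIGHT yuv12 frame (even dimensions). */
  size_t y_size  = (size_t)WIDTH * HEIGHT;             /* full-resolution luma */
  size_t uv_size = (size_t)(WIDTH / 2) * (HEIGHT / 2); /* each chroma plane    */
  size_t total   = y_size + 2 * uv_size;               /* = WIDTH*HEIGHT*3/2   */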

Okay, so what does that garbage mean? Well, you have to run
through your image buffer (from the offscreen rendering
context), which is linearly mapped bottom-up, and convert
the rgb values to yuv data (which is stored top-down). Note
that yuv12 does not store the luminance (Y) and chrominance
(Cb/U, Cr/V) values interleaved; instead, they are mapped in
planes. So, the first WIDTH*HEIGHT unsigned bytes of your
yuv12 buffer will be all the luminance (Y) values that you
calculate from your rgba data, then the next
(WIDTH/2 * HEIGHT/2) bytes will be the chrominance (U)
values, and then another (WIDTH/2 * HEIGHT/2) bytes the
chrominance (V) values. You can find the rgb to yuv
conversion formulas on the web easily enough, but remember
that you may need to fold a factor of 255 into the
conversion, since opengl rgb values are clamped between 0
and 1 while most published formulas expect 0-255.
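
Something along these lines does the job (a sketch, assuming
even width and height and the bottom-up rgba buffer OSMesa
produces; the coefficients are one common video-range variant
of the formulas you'll find on the web):

  /* Convert a bottom-up rgba buffer (4 bytes/pixel, values 0-255) into a
     top-down planar yuv12 buffer: WIDTH*HEIGHT Y bytes, then the U plane,
     then the V plane, each (WIDTH/2)*(HEIGHT/2) bytes. */
  static void rgba_to_yuv12(const unsigned char *rgba, unsigned char *yuv,
                            int width, int height)
  {
      unsigned char *y_plane = yuv;
      unsigned char *u_plane = yuv + width * height;
      unsigned char *v_plane = u_plane + (width / 2) * (height / 2);

      for (int row = 0; row < height; row++) {
          /* Flip vertically: output row 0 comes from the last source row. */
          const unsigned char *src = rgba + (height - 1 - row) * width * 4;

          for (int col = 0; col < width; col++) {
              int r = src[col * 4 + 0];
              int g = src[col * 4 + 1];
              int b = src[col * 4 + 2];

              /* BT.601 video-range formulas, inputs 0-255.  If your rgb
                 values were normalized 0-1 floats, scale by 255 first. */
              int y = (int)( 0.257 * r + 0.504 * g + 0.098 * b) +  16;
              int u = (int)(-0.148 * r - 0.291 * g + 0.439 * b) + 128;
              int v = (int)( 0.439 * r - 0.368 * g - 0.071 * b) + 128;

              y_plane[row * width + col] = (unsigned char)y;

              /* Chrominance: keep one sample per 2x2 block (averaging the
                 block would be slightly nicer, but this is simpler). */
              if ((row % 2 == 0) && (col % 2 == 0)) {
                  int ci = (row / 2) * (width / 2) + (col / 2);
                  u_plane[ci] = (unsigned char)u;
                  v_plane[ci] = (unsigned char)v;
              }
          }
      }
  }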

Hope this helps anyone else trying to do
this. Have fun!

scott olsson