Playing video using OpenGL hardware!

hey!

I’ve been doing research for a video application which, of course, uses OpenGL to put data buffers on screen. So far I’ve been using an in-memory approach and stored all the data uncompressed in RAM.

This is OK for shorter clips and it’s great for real-time editing.

Now I want to implement a play mode which just plays clips. What is the best approach? I need some ideas and tips on how to design the application.

I’ve got some ideas about using a multi-threading approach, but it would be interesting to see what you people think!

Thanks!

// Joda

At least one popular Linux movie player already does this.

Three words: GL_NV_pixel_data_range or GL_APPLE_client_storage and GL_APPLE_texture_range.

thanks …

Please explain more about the technique. I know there’s a lot about this on the NVIDIA developer site, but all information is welcome.

Why is it good? How do I use it best for playback of long movies?

thanks

// joda

while(!done)
{
    render_next_frame_of_movie_to_buffer();
    copy_buffer_to_texture();
    draw_big_quad();
    swap_buffers();
}

NV_pixel_data_range and APPLE_texture_range can be used to make the copy_buffer_to_texture bit much faster by giving you memory the graphics card has fast access to.
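For example, on Windows with an NVIDIA card the pixel-data-range setup might look roughly like this. This is just a sketch: the 2k x 2k BGRA frame, the wglAllocateMemoryNV parameters and the assumption that the texture was already created with glTexImage2D are mine, not anything from the spec of your app.

#include <windows.h>
#include <GL/gl.h>
#include "glext.h"   /* GL_WRITE_PIXEL_DATA_RANGE_NV + function pointer typedefs */
#include "wglext.h"  /* PFNWGLALLOCATEMEMORYNVPROC */

static PFNGLPIXELDATARANGENVPROC      glPixelDataRangeNV;
static PFNGLFLUSHPIXELDATARANGENVPROC glFlushPixelDataRangeNV;
static PFNWGLALLOCATEMEMORYNVPROC     wglAllocateMemoryNV;

static GLubyte *frame;                          /* AGP memory the card can DMA from */
static const GLsizei frame_bytes = 2048 * 2048 * 4;

void init_pixel_data_range(void)
{
    glPixelDataRangeNV      = (PFNGLPIXELDATARANGENVPROC)wglGetProcAddress("glPixelDataRangeNV");
    glFlushPixelDataRangeNV = (PFNGLFLUSHPIXELDATARANGENVPROC)wglGetProcAddress("glFlushPixelDataRangeNV");
    wglAllocateMemoryNV     = (PFNWGLALLOCATEMEMORYNVPROC)wglGetProcAddress("wglAllocateMemoryNV");

    /* Low read/write frequency, medium priority -> write-combined AGP memory. */
    frame = (GLubyte *)wglAllocateMemoryNV(frame_bytes, 0.0f, 0.0f, 0.5f);

    glPixelDataRangeNV(GL_WRITE_PIXEL_DATA_RANGE_NV, frame_bytes, frame);
    glEnableClientState(GL_WRITE_PIXEL_DATA_RANGE_NV);
}

void copy_buffer_to_texture(GLuint tex)
{
    /* The decoder has just written the next frame into `frame`. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 2048,
                    GL_BGRA, GL_UNSIGNED_BYTE, frame);

    /* Synchronise so the decoder can safely overwrite `frame` again
       (NV_fence can be used instead to overlap decode and upload). */
    glFlushPixelDataRangeNV(GL_WRITE_PIXEL_DATA_RANGE_NV);
}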

Normally, OpenGL stores one copy of your data in driver memory and another in VRAM. This means when you update a texture from application memory, you copy app->driver->card. APPLE_client_storage means that you keep a copy of your data and the driver doesn’t, so you get a straight app->card copy.
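On the Mac the whole thing is just a couple of pixel-store and texture-parameter calls. A minimal sketch, assuming a 2k x 2k BGRA frame buffer that the application keeps alive for the lifetime of the texture (the rectangle target and the SHARED storage hint are my choices, not gospel):

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

#define FRAME_W 2048
#define FRAME_H 2048

static GLubyte frame[FRAME_W * FRAME_H * 4];   /* app-owned pixels, must stay valid */
static GLuint  tex;

void init_texture(void)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex);

    /* APPLE_texture_range: tell the driver which block of memory to map/DMA. */
    glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, sizeof(frame), frame);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE,
                    GL_STORAGE_SHARED_APPLE);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* APPLE_client_storage: the driver keeps no copy and reads straight from `frame`. */
    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
    glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, FRAME_W, FRAME_H, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frame);
}

void copy_buffer_to_texture(void)
{
    /* The decoder has written the next frame into `frame`; re-submit the pixels. */
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, FRAME_W, FRAME_H,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frame);
}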

NV_pixel_data_range is available on NVIDIA cards on Linux & Windows, APPLE_texture_range is available on Mac OS X 10.2.x, and APPLE_client_storage is available for some Mesa drivers on Linux and on Mac OS X 10.1.x and above.

Are there any performance tests around? At most I’m handling 2k x 2k textures. I’ll try this later; I found some interesting docs at nvidia.com.

Is it possible to decompress JPEG data from memory using the graphics board? Something similar to what my old SGI O2 has? That would be great!

Thanks for the tips!!!

// Joda

Originally posted by joda:
Are there any performance tests around?

The bottleneck is usually in the movie decompression rather than in the rendering.

Is it possible to decompress JPEG data from memory using the graphics board? Something similar to what my old SGI O2 has? That would be great!

No. S3TC is the standard texture compression these days, and I doubt that’s much use for video.