I’ve been working on research for a video application, which of course uses OpenGL to put data buffers on screen. So far I’ve used an in-memory approach and stored all the data uncompressed in RAM.
This is OK for shorter clips, and it’s great for realtime editing.
Now I want to implement a play mode which just plays clips. What is the best approach? I need some ideas and tips on how to design the application.
I’ve got some ideas about using a multi-threading approach (sketched below), but it would be interesting to see what you people think!
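To make the multi-threading idea concrete, here is the rough shape I have in mind: a reader/decoder thread fills a small ring of frame buffers while the GL thread drains it. This is just an untested sketch assuming POSIX threads; all the names (frame_ring, ring_push, and so on) are made up:

    /* Untested sketch of a play-mode pipeline, assuming POSIX threads.
     * A reader/decoder thread fills a small ring of frame buffers while
     * the GL thread drains it.  All names here are made up. */
    #include <pthread.h>
    #include <string.h>

    #define RING_SIZE   4
    #define FRAME_BYTES (720 * 576 * 4)        /* example: PAL-sized RGBA frames */

    typedef struct {
        unsigned char   data[RING_SIZE][FRAME_BYTES];
        int             head, tail, count;
        pthread_mutex_t lock;                  /* init with PTHREAD_MUTEX_INITIALIZER */
        pthread_cond_t  not_full, not_empty;   /* init with PTHREAD_COND_INITIALIZER */
    } frame_ring;

    /* Reader thread: block while the ring is full, then push a frame. */
    void ring_push(frame_ring *r, const unsigned char *frame)
    {
        pthread_mutex_lock(&r->lock);
        while (r->count == RING_SIZE)
            pthread_cond_wait(&r->not_full, &r->lock);
        memcpy(r->data[r->head], frame, FRAME_BYTES);
        r->head = (r->head + 1) % RING_SIZE;
        r->count++;
        pthread_cond_signal(&r->not_empty);
        pthread_mutex_unlock(&r->lock);
    }

    /* GL thread: block until a frame is ready and return a pointer to it.
     * The slot stays valid until ring_pop_done() releases it, because the
     * reader never writes into an occupied slot. */
    unsigned char *ring_pop(frame_ring *r)
    {
        unsigned char *frame;
        pthread_mutex_lock(&r->lock);
        while (r->count == 0)
            pthread_cond_wait(&r->not_empty, &r->lock);
        frame = r->data[r->tail];
        pthread_mutex_unlock(&r->lock);
        return frame;
    }

    void ring_pop_done(frame_ring *r)
    {
        pthread_mutex_lock(&r->lock);
        r->tail = (r->tail + 1) % RING_SIZE;
        r->count--;
        pthread_cond_signal(&r->not_full);
        pthread_mutex_unlock(&r->lock);
    }

The GL thread would call ring_pop(), upload the frame with glTexSubImage2D, then call ring_pop_done(); the ring stays small so that seeking doesn’t have to throw away much read-ahead.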
NV_pixel_data_range and APPLE_texture_range can be used to make the copy_buffer_to_texture bit much faster by giving you memory the graphics card has fast access to.
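For NV_pixel_data_range the upload path looks roughly like the following. This is an untested sketch (Windows shown); in a real app the extension entry points would be fetched with wglGetProcAddress, and upload_frame is just an illustrative name:

    /* Untested NV_pixel_data_range sketch (Windows shown).  In a real app
     * the extension entry points below are fetched with wglGetProcAddress. */
    #include <windows.h>
    #include <GL/gl.h>
    #include <string.h>

    #define GL_WRITE_PIXEL_DATA_RANGE_NV 0x8878

    void *wglAllocateMemoryNV(GLsizei size, GLfloat readFreq,
                              GLfloat writeFreq, GLfloat priority);
    void  glPixelDataRangeNV(GLenum target, GLsizei length, const void *pointer);
    void  glFlushPixelDataRangeNV(GLenum target);

    void upload_frame(GLuint tex, int w, int h, const void *src)
    {
        static unsigned char *agp;   /* write-combined memory, fast for the card */
        GLsizei bytes = w * h * 4;

        if (!agp) {
            /* read/write frequency 0, priority ~0.5 asks for AGP memory */
            agp = wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);
            glPixelDataRangeNV(GL_WRITE_PIXEL_DATA_RANGE_NV, bytes, agp);
            glEnableClientState(GL_WRITE_PIXEL_DATA_RANGE_NV);
        }

        memcpy(agp, src, bytes);     /* write sequentially: it is uncached */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, agp);

        /* sync point: don't overwrite the buffer before GL is done reading it */
        glFlushPixelDataRangeNV(GL_WRITE_PIXEL_DATA_RANGE_NV);
    }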
Normally, OpenGL stores one copy of your data in driver memory and another in VRAM. This means when you update a texture from application memory, you copy app->driver->card. APPLE_client_storage means that you keep a copy of your data and the driver doesn’t, so you get a straight app->card copy.
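On the Mac the two extensions are usually combined something like this (again an untested sketch; glTextureRangeAPPLE and the storage hint come from APPLE_texture_range, the pixel-store flag from APPLE_client_storage, and bind_streaming_texture is just an illustrative name):

    /* Untested Mac sketch: APPLE_client_storage means GL reads straight from
     * our buffer, and APPLE_texture_range hints that it can be DMA'd as-is. */
    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    void bind_streaming_texture(GLuint tex, int w, int h,
                                unsigned char *pixels)   /* we keep ownership */
    {
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, tex);

        /* don't keep a driver-side copy of the pixel data */
        glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

        /* map our buffer for fast card access */
        glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, w * h * 4, pixels);
        glTexParameteri(GL_TEXTURE_RECTANGLE_EXT,
                        GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);

        /* BGRA / UNSIGNED_INT_8_8_8_8_REV is the DMA-friendly format */
        glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, w, h, 0,
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    }

Since the driver no longer keeps its own copy, the buffer has to stay allocated for the lifetime of the texture, and you shouldn’t write into it while a frame that uses it is still being drawn.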
NV_pixel_data_range is available on NVIDIA cards on Linux and Windows; APPLE_texture_range is available on Mac OS X 10.2.x; APPLE_client_storage is available in some Mesa drivers on Linux, and on Mac OS X 10.1.x and above.