144 fps Uncompressed Video Playback

Hi,

So I’m pretty much an amateur in OpenGL, and this is probably well beyond my current competency, but I am trying to play back 144 fps video stored as uncompressed TIFFs (other storage containers can be used). My current method just uses fixed-function OpenGL and the image loader in OpenCV, but it doesn’t let me reach the frame rates I need (probably a texture upload bottleneck).

If anyone has suggestions for a proper workflow/method to handle this it would be GREATLY APPRECIATED!

Details:
Images are Full HD (1920x1080, 24-bit)
Uncompressed is ideal, though visually lossless compression can be considered
Videos are only 3 seconds long (i.e. 432 frames).

Why are you trying to use OpenGL to display a picture? Wouldn’t it make more sense to use whatever GUI system your OS comes with?

For 144 images per second at 1920x1080x24 uncompressed you’re going to need almost 1 gigabyte per second sustained read speed on your disk subsystem.

I think your primary bottleneck is probably not where you think it is…

1920 × 1080 × 3 bytes × 144 fps ≈ 0.86 GB/sec
You can’t load that in real time (without an SSD RAID or a single very fast SSD). So load all your frames to RAM, then to VRAM (or stream them). After that you’ll be able to play it!
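A minimal sketch of that preload-to-RAM step, assuming OpenCV is used for loading; the frame_%04d.tif naming pattern and the loadAllFrames name are just illustrations:

[CODE]
// Load every frame of the clip into system RAM up front (sketch).
// 432 frames at 1920x1080x3 bytes is roughly 2.5 GB, so a 64-bit build helps.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <vector>

std::vector<cv::Mat> loadAllFrames(int frameCount)
{
    std::vector<cv::Mat> frames;
    frames.reserve(frameCount);
    for (int i = 0; i < frameCount; ++i)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "frame_%04d.tif", i);   // hypothetical naming
        cv::Mat img = cv::imread(name, cv::IMREAD_COLOR);         // 8-bit BGR
        if (img.empty())
            throw std::runtime_error(std::string("failed to load ") + name);
        frames.push_back(img);                                    // ~6 MB per frame
    }
    return frames;
}
[/CODE]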

I am using OpenGL because I need precise control of the frame rate, and I vary the frame rate and source with user input. I also prefer uncompressed because this is for psychophysical research, where compression artifacts are detrimental to my goals, and I figured decompression would just take more time. If there is a fast decompression method that is visually lossless, I am all for using it. I am not locked into any method. If you have a suggestion for software that can play 144 fps video without dropping frames, I am all ears :)

[QUOTE=bob;1264381]1920 × 1080 × 3 bytes × 144 fps ≈ 0.86 GB/sec
You can’t load that in real time (without an SSD RAID or a single very fast SSD). So load all your frames to RAM, then to VRAM (or stream them). After that you’ll be able to play it![/QUOTE]

I will certainly try this. Is there a preferable method/format of storing textures in RAM?

On most desktop platforms you should store as 32-bit BGRA.

Yes, that means more storage compared to 24-bit, but you’ll get a faster pixel transfer this way (on some hardware/platform combos much faster), because the driver won’t have to do any intermediate pixel format conversions of its own. Assuming you satisfactorily deal with the disk I/O bottleneck, that’s the next problem you’re going to hit, and it’s better to prevent it outright rather than try to solve it after the fact.
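A sketch of that upload path, assuming an 8-bit BGR cv::Mat as input and an extension loader such as GLEW already initialised; whether GL_BGRA + GL_UNSIGNED_BYTE really is the conversion-free path depends on the driver, so it is worth profiling on your own hardware:

[CODE]
// Pad a BGR frame to 32-bit BGRA and upload it as a GL_RGBA8 texture (sketch).
#include <GL/glew.h>
#include <opencv2/opencv.hpp>

GLuint uploadFrameBGRA(const cv::Mat& bgrFrame)
{
    cv::Mat bgra;
    cv::cvtColor(bgrFrame, bgra, cv::COLOR_BGR2BGRA);   // 3 -> 4 bytes per pixel

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 bgra.cols, bgra.rows, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, bgra.data);
    // No mipmaps: use linear filtering so the texture is complete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
[/CODE]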

Preloading may be a problem. See, your big issue is that your data is really, really huge: your total data size is about 2.5 GB (at 24 bpp). Even if you preloaded it all into RAM at once, and even if your computer has enough RAM to hold it, there’s some chance that the OS will page some of it back to the hard drive, which is obviously unhelpful in your case.

The best way to avoid this is to upload all of these images to textures right away. The problem there is that, for normal rendering, you’d be looking at nearly 3.5 GB of space at 32 bpp. So you’ll need a pretty memory-rich video card (at least 4 GB, but possibly more depending on overhead), and you may also need to compile your application as a 64-bit app.

If that’s not a viable solution for you, you could try some custom texture gimmicks to make 24-bpp texturing possible. Basically, you split each image into three textures, one per color channel, each storing 8-bpp data. In your shader, you fetch from all three and combine the components (see the shader sketch after this post). This is best done by pre-processing your image data, extracting each channel either into its own file or into a quick-and-dirty binary format that’s easy to read, so that it’s easier to upload to textures.

That should save you memory overall, getting you back down to about 2.5 GB of data.
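A sketch of what the recombination shader for that split-channel scheme could look like (GLSL 3.30 embedded as a C++ string literal; the uniform and varying names are just placeholders):

[CODE]
// Fragment shader that rebuilds an RGB pixel from three single-channel textures (sketch).
const char* kRecombineFrag = R"GLSL(
#version 330 core
uniform sampler2D uRed;    // GL_R8 texture holding the red channel
uniform sampler2D uGreen;  // GL_R8 texture holding the green channel
uniform sampler2D uBlue;   // GL_R8 texture holding the blue channel
in vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    float r = texture(uRed,   vTexCoord).r;
    float g = texture(uGreen, vTexCoord).r;
    float b = texture(uBlue,  vTexCoord).r;
    fragColor = vec4(r, g, b, 1.0);
}
)GLSL";
[/CODE]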

[QUOTE=Alfonse Reinheart;1264386]Preloading may be a problem. See, your big issue is that your data is really, really huge: your total data size is about 2.5 GB (at 24 bpp). Even if you preloaded it all into RAM at once, and even if your computer has enough RAM to hold it, there’s some chance that the OS will page some of it back to the hard drive, which is obviously unhelpful in your case.

The best way to avoid this is to upload all of these images to textures right away. The problem there is that, for normal rendering, you’d be looking at nearly 3.5 GB of space at 32 bpp. So you’ll need a pretty memory-rich video card (at least 4 GB, but possibly more depending on overhead), and you may also need to compile your application as a 64-bit app.

If that’s not a viable solution for you, you could try some custom texture gimmicks to make 24-bpp texturing possible. Basically, you split each image into three textures, one per color channel, each storing 8-bpp data. In your shader, you fetch from all three and combine the components. This is best done by pre-processing your image data, extracting each channel either into its own file or into a quick-and-dirty binary format that’s easy to read, so that it’s easier to upload to textures.

That should save you memory overall, getting you back down to about 2.5 GB of data.[/QUOTE]

I actually just quickly tried preloading everything into RAM by creating a Mat array with OpenCV and loading my images into it, then passing one of these Mats to glTexImage2D on each draw. With this I was able to sustain 120 fps with my uncompressed TIFFs (thanks for the suggestion, bob). I will try 144 fps soon enough, but 120 may suit my needs well enough. Thanks to everyone for chipping in :)
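If 144 fps turns out to need a bit more headroom, one possible refinement of that per-draw upload is to allocate the texture storage once and then update it with glTexSubImage2D each frame instead of recreating it with glTexImage2D. A sketch, again assuming 8-bit BGR cv::Mat frames:

[CODE]
// Allocate a streaming texture once, then overwrite its contents every frame (sketch).
#include <GL/glew.h>
#include <opencv2/opencv.hpp>

GLuint createStreamingTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, nullptr);   // allocate storage, no data yet
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

void updateStreamingTexture(GLuint tex, const cv::Mat& frame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);             // safe default for tightly packed BGR rows
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame.cols, frame.rows,
                    GL_BGR, GL_UNSIGNED_BYTE, frame.data);
}
[/CODE]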

Given the memory and bandwidth issues of this particular application, I’d consider using YCbCr (or similar) at an amortised 12 bpp (i.e. one 8-bit luma texture at 1920x1080 and two 8-bit chroma textures at 960x540) and converting to RGB in the fragment shader (see the sketch after this post).

I know the OP wanted to avoid compression, but I’d be hard-pressed to believe that the difference between 1920x1080 and 960x540 is perceptible for chroma.

Beyond that, if the data set is fixed in advance, it should be possible to come up with a compression technique which is both lossless for the data in question and well within the capabilities of modern hardware. But this may be too complex for someone new to GLSL.
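A sketch of the recombination shader for that YCbCr layout (GLSL in a C++ string literal again; the coefficients below are full-range BT.601 and would need to match whatever conversion produced the planes):

[CODE]
// Fragment shader: full-resolution luma plus two half-resolution chroma planes -> RGB (sketch).
const char* kYCbCrFrag = R"GLSL(
#version 330 core
uniform sampler2D uY;   // 1920x1080, GL_R8
uniform sampler2D uCb;  // 960x540,  GL_R8 (bilinear filtering upsamples it)
uniform sampler2D uCr;  // 960x540,  GL_R8
in vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    float y  = texture(uY,  vTexCoord).r;
    float cb = texture(uCb, vTexCoord).r - 0.5;
    float cr = texture(uCr, vTexCoord).r - 0.5;
    vec3 rgb = vec3(y + 1.402 * cr,
                    y - 0.344136 * cb - 0.714136 * cr,
                    y + 1.772 * cb);
    fragColor = vec4(clamp(rgb, 0.0, 1.0), 1.0);
}
)GLSL";
[/CODE]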

[QUOTE=scooperly;1264372]So I’m pretty much an amateur in OpenGL, and this is probably well beyond my current competency, but I am trying to play back 144 fps video stored as uncompressed TIFFs (other storage containers can be used). My current method just uses fixed-function OpenGL and the image loader in OpenCV, but it doesn’t let me reach the frame rates I need (probably a texture upload bottleneck).

If anyone has suggestions for a proper workflow/method to handle this it would be GREATLY APPRECIATED!

Details:
Images are Full HD (1920x1080, 24-bit)
Uncompressed is ideal, though visually lossless compression can be considered
Videos are only 3 seconds long (i.e. 432 frames).[/QUOTE]

Ok, so you’re talking 3.3 GB of data even if you go 4 bytes per pixel (RGBA). Got a GPU with 4 GB? No sense in over-engineering this. You could potentially just load the whole thing onto the GPU and then animate it naively, without compression (sketched below), assuming the card supports 1.1+ GB/sec of sustained read/write bandwidth.

Also, before you get too far, are you sure you have a monitor with a 144 Hz vertical scan rate (i.e. V-Sync rate)?
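A sketch of that brute-force approach, assuming the frames have already been converted to 8-bit BGRA cv::Mats in RAM and that the card genuinely has room for all of them:

[CODE]
// Create one texture per frame up front; playback is then just binding texture i (sketch).
#include <GL/glew.h>
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<GLuint> uploadAllFrames(const std::vector<cv::Mat>& frames)
{
    std::vector<GLuint> textures(frames.size(), 0);
    glGenTextures(static_cast<GLsizei>(textures.size()), textures.data());
    for (size_t i = 0; i < frames.size(); ++i)
    {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                     frames[i].cols, frames[i].rows, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, frames[i].data);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
    return textures;
}

// During playback, frame i is drawn with:
//   glBindTexture(GL_TEXTURE_2D, textures[i]);
//   ...then render a full-screen quad...
[/CODE]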

[QUOTE=Dark Photon;1264403]Ok, so you’re talking 3.3 GB of data even if you go 4 bytes per pixel (RGBA). Got a GPU with 4 GB? No sense in over-engineering this. You could potentially just load the whole thing onto the GPU and then animate it naively, without compression, assuming the card supports 1.1+ GB/sec of sustained read/write bandwidth.

Also, before you get too far, are you sure you have a monitor with a 144 Hz vertical scan rate (i.e. V-Sync rate)?[/QUOTE]

I am using an ASUS VG248QE with the G-Sync modification, which allows me to refresh at my render rate. While I have yet to externally validate the G-Sync refresh rate with a photodiode and oscilloscope (Nvidia is surprisingly quiet about the tech specs), I have reason to believe that it can handle these rates, yes.

I can also post my results of the refresh rate / frame rate validation once I complete it (perhaps in a different post).