Low-latency software for remote collaborative post production

Fastvideo is a team of professionals in GPU image processing, realtime camera applications, digital cinema, and high performance imaging solutions. Fastvideo has been working with production companies for many years, and we have recently implemented low-latency software for collaborative post production.

Today, with restrictions on in-person collaboration, delays in shipping and limitations on travel, a single point of ingest and delivery for an entire production becomes vitally important. The main goal is to offer all services both on-premises and remotely. We believe that in the near future we will see virtual and distributed post production finishing.

When you are shooting a movie on a tight schedule and need to accelerate your post production workflow, a remote collaborative approach is the right solution. You don't need all professionals on-site: with a remote setup, your team can collaborate in realtime wherever its members are located. The industry trend toward remote production solutions is clear, and it is not driven by the coronavirus alone. The idea of accelerating post via remote operation is viable, and companies strive to remove the limitations of the conventional workflow: professionals can now choose where and when to work on post production.



Business goals of remote collaborative post production

  • Share content in realtime for collaborative post production workflows
  • Lossless or visually lossless encoding/decoding guarantees high image quality
  • Reduced travel and rental costs thanks to remote color grading and review
  • Remote work lets you choose the best professionals for the production
  • Your team can work on multiple projects (time saving and multi-tasking)

Goals from a technical viewpoint

  • Low latency software
  • Fast and reliable data transmission over internal or public network
  • High image quality, precise color reproduction, maximum bit depth (lossy/lossless codecs)

Task to be solved

The post industry needs a low-latency, high-quality video encode/decode solution for remote work, following this pipeline:

  • Capture baseband video streams via HD-SDI or 3G-SDI frame grabber (Blackmagic DeckLink 8K Pro, AJA Kona 4 or Kona 5)
  • Live encoding with J2K codec that supports 10-bit YUV 422 or 10/12-bit 444 RGB
  • Send the encoded material as TCP/UDP packets to a receiver/decoder - point-to-point transmission over Ethernet or the public internet
  • Decode from stream at source colorspace/bit-depth/resolution/subsampling - Rec.709/Rec.2020, 10-bit 422 YUV or 10/12-bit 444 RGB
  • Send stream to baseband video playout device (Blackmagic/AJA frame grabber) to display 10-bit YUV 422 or 10/12-bit 444 RGB material on external display
  • Latency requirements: sub-200 ms
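The sub-200 ms requirement must cover every stage of this pipeline, from capture to playout. As a rough sanity check, a latency budget can be sketched in Python; all per-stage figures below are assumptions for illustration, not measured values:

```python
# Rough end-to-end latency budget for the capture -> encode -> network ->
# decode -> playout pipeline. Every per-stage figure here is an assumed
# placeholder, not a measurement.
FRAME_TIME_MS = 1000 / 25  # one frame interval at 25 fps = 40 ms

stages_ms = {
    "capture (one frame interval)": FRAME_TIME_MS,
    "J2K encode on GPU": 15.0,                    # assumed
    "packetization + send": 5.0,                  # assumed
    "network (VPN over public internet)": 60.0,   # assumed
    "J2K decode on GPU": 15.0,                    # assumed
    "playout (one frame interval)": FRAME_TIME_MS,
}

total = sum(stages_ms.values())
print(f"estimated end-to-end latency: {total:.1f} ms")  # 175.0 ms
print("within sub-200 ms budget:", total < 200)
```

With these assumed figures the budget closes with only ~25 ms of headroom, which is why every stage of the real system has to be optimized for latency.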

Basic hardware layout: Video Source (Baseband Video) -> Capture device -> J2K Encoder on GPU -> Facility Firewall (IPsec VPN) -> Public Internet -> Remote Firewall (IPsec VPN) -> J2K Decoder on GPU -> Output device -> Video Display (Baseband Video)
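The point-to-point transmission step in the layout above can be sketched with plain TCP sockets and length-prefixed framing. This is a minimal loopback illustration of the idea, not the actual Fastvideo protocol; the payload is a stand-in for a real J2K codestream:

```python
import socket
import struct
import threading

def send_frame(sock: socket.socket, payload: bytes) -> None:
    # Length-prefixed framing so the receiver knows each frame boundary
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Loopback demo: one encoded "frame" (stand-in bytes) over TCP
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

results = []

def receiver():
    conn, _ = server.accept()
    with conn:
        results.append(recv_frame(conn))

t = threading.Thread(target=receiver)
t.start()

client = socket.create_connection(("127.0.0.1", port))
# Disable Nagle's algorithm so small frames go out immediately (latency)
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
send_frame(client, b"J2K codestream bytes")  # stand-in for a real frame
client.close()
t.join()
server.close()
print(results[0])
```

Disabling Nagle's algorithm (`TCP_NODELAY`) matters for a low-latency stream, since buffering small packets before sending would add delay on every frame.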


  • SDI Frame Grabbers: Blackmagic DeckLink 8K Pro, AJA Kona 4, AJA Kona 5
  • NVIDIA GPU: GeForce RTX 2060, Quadro RTX 4000 or better
  • OS: Windows 10 or Linux Ubuntu/CentOS
  • Frame Size: 1920×1080 (Full HD)
  • Frame Rates: 23.976, 24, 25, 29.97, 30 fps
  • Bit-depth: 8/10/12 (encode - ingest), 8/10/12 (decode - display)
  • Pixel formats: RGB, v210, R12L
  • Color Spaces for 8/10-bit YUV or 8/10/12-bit RGB: Rec.709, DCI-P3, P3-D65, Rec.2020 (optional)
  • Audio: 2-channel PCM or more
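To put these formats into perspective, the uncompressed bandwidth of a 10-bit 4:2:2 stream (and a plausible compressed rate) can be estimated as below. The 8:1 compression ratio is an assumption for illustration; real J2K rates depend on content and quality settings:

```python
# Uncompressed vs. compressed bandwidth for a 1920x1080, 30 fps,
# 10-bit 4:2:2 stream. The compression ratio is an assumed figure.
width, height, fps = 1920, 1080, 30

# 10-bit 4:2:2: 10 bits luma per pixel, plus 10 bits each of Cb and Cr
# shared between every 2 pixels -> 20 bits per pixel on average
bits_per_pixel_422_10 = 10 + 5 + 5

uncompressed_bps = width * height * bits_per_pixel_422_10 * fps
print(f"uncompressed: {uncompressed_bps / 1e6:.0f} Mbit/s")  # 1244 Mbit/s

ratio = 8  # assumed visually lossless J2K compression ratio
compressed_bps = uncompressed_bps / ratio
print(f"compressed at {ratio}:1 -> {compressed_bps / 1e6:.1f} Mbit/s")
```

At roughly 1.24 Gbit/s uncompressed, the stream fits a local 10 GbE link but not a typical internet connection, which is why visually lossless J2K compression is needed before transmission.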

How to encode/decode J2K images fast?

By default, FFmpeg-based software solutions use the J2K codec from libavcodec (mj2k), which is very slow. You can test that scenario yourself to see that it is far from realtime or low-latency. This is not surprising, since the J2K algorithm has very high computational complexity. Even with multiple threads/processes on the CPU, the performance of the J2K codec from libavcodec is still insufficient. This is a problem even for 8-bit frames at 2K resolution, and for 4K (12-bit, 60 fps) images the performance is much worse.

The reason why FFmpeg is not fast at this task is obvious: it runs on the CPU and is not optimized for high performance. Here you can compare J2K encoding and decoding benchmarks for the OpenJPEG, Jasper, Kakadu, J2k-Codec, CUJ2K and Fastvideo codecs, for images at 2K and 4K resolutions (lossy/lossless algorithms).

Maximum performance for J2K encoding and decoding in streaming applications is achieved in multithreaded batch mode, which is necessary to exploit the massive parallelism of the JPEG 2000 algorithm. Batch processing, however, means collecting several images before encoding, which is bad for latency. Adding multithreading on top of batching improves throughput further, but latency gets worse still. This is a trade-off between performance and latency. For example, in a remote color grading application we need minimum latency, so we have to process each J2K frame separately, without batching and without multithreading.
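The trade-off described above can be illustrated with simple arithmetic: the first frame of a batch must wait for the rest of the batch to arrive before encoding even starts. The frame interval and per-batch encode times below are hypothetical figures, not benchmarks:

```python
# Latency cost of batching: the first frame in a batch waits for the
# remaining frames to arrive, then the whole batch is encoded together.
# All timing figures are assumed values for illustration.
def batch_latency_ms(batch_size: int, frame_interval_ms: float,
                     encode_ms_per_batch: float) -> float:
    wait = (batch_size - 1) * frame_interval_ms  # time to fill the batch
    return wait + encode_ms_per_batch

for n in (1, 4, 8):
    # assume per-batch encode time grows sub-linearly with batch size
    encode_ms = 10.0 + 2.0 * n
    print(f"batch of {n}: {batch_latency_ms(n, 40.0, encode_ms):.0f} ms")
```

With a 40 ms frame interval (25 fps), even a batch of 8 already exceeds the sub-200 ms budget on batching delay alone, regardless of how fast the encoder itself is, which is why the low-latency mode processes one frame at a time.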

