
Feature Request: Adaptive Streaming #223

Open
andre-fuchs opened this issue Nov 1, 2023 · 4 comments

Comments


andre-fuchs commented Nov 1, 2023

It bugs me that the native HTML video element does not support adaptive streaming, so I end up using Vimeo all the time. But the wagtailmedia package might be the place to prepare a video for adaptive streaming, in combination with HLS.js on the frontend. Here are the required steps:

Backend

  1. Upload the video file via the existing React-based upload form
  2. Convert the uploaded video file into multiple variants with different resolutions and/or bit rates
  3. Split each variant of the video file into small segments
  4. Create a master playlist file that references the different variants of the video, and create individual playlist files that reference the segments of each variant
  5. Store the video segment media files and playlist files
  6. Store information about the video file in the database, such as the hierarchy and locations of all segments and playlist files
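Step 4 of the backend list above can be sketched in plain Python. This is only an illustration, not wagtailmedia API: the `build_master_playlist` helper and the variant dict fields are hypothetical names, but the `#EXTM3U` / `#EXT-X-STREAM-INF` tags are the standard HLS master playlist format.

```python
# Hypothetical sketch: build an HLS master playlist that references
# several renditions of one uploaded video. Function and field names
# are assumptions; the playlist tags follow the HLS spec (RFC 8216).

def build_master_playlist(variants):
    """variants: list of dicts with 'bandwidth', 'resolution', 'uri' keys."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for v in variants:
        # One STREAM-INF line per rendition, followed by its playlist URI
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={v['bandwidth']},RESOLUTION={v['resolution']}"
        )
        lines.append(v["uri"])
    return "\n".join(lines) + "\n"

playlist = build_master_playlist([
    {"bandwidth": 800_000, "resolution": "640x360", "uri": "360p/playlist.m3u8"},
    {"bandwidth": 2_500_000, "resolution": "1280x720", "uri": "720p/playlist.m3u8"},
])
```

HLS.js on the frontend would then be pointed at the URL serving this master playlist and pick a rendition based on measured bandwidth.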

Frontend

  1. Serve the video files via a template tag that returns the master playlist
  2. Not sure whether these playlists have to be static files or could be generated on the fly via template tags as well
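The "generated on the fly" idea from point 2 seems feasible, since a variant playlist is just text derived from stored segment metadata. A minimal sketch, assuming segment URIs and durations are kept in the database (the function name and the tuple shape are hypothetical; a Django view or template tag could return this string instead of reading an .m3u8 file from disk):

```python
# Sketch: generate a variant (media) playlist on the fly from stored
# segment metadata instead of keeping .m3u8 files on disk.

def build_media_playlist(segments, target_duration=None):
    """segments: list of (uri, duration_in_seconds) tuples."""
    if target_duration is None:
        # TARGETDURATION must be >= the longest segment, rounded to seconds
        target_duration = max(int(round(d)) for _, d in segments)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    lines.append("#EXT-X-ENDLIST")  # VOD playlist: no more segments follow
    return "\n".join(lines) + "\n"

variant_playlist = build_media_playlist([("seg_00000.ts", 6.0), ("seg_00001.ts", 4.2)])
```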

Encoder

An encoder might be the bottleneck here, as FFmpeg is not available on some hosting providers (or at least not on my preferred one, to be honest). Are there any Python packages that could natively replace HLS-capable services like Zencoder, Coconut, Mux, or AWS Transcoder?
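Where FFmpeg *is* available, the conversion and segmentation steps could be driven from Python by assembling the command line. A sketch under those assumptions (the helper name and the bitrate ladder are made up; `-hls_time`, `-hls_playlist_type`, and `-hls_segment_filename` are real options of FFmpeg's HLS muxer, and actually running the command via `subprocess.run` still requires an ffmpeg binary on the host):

```python
# Sketch: build the ffmpeg argument list that would encode one rendition
# and split it into HLS segments. Running it requires ffmpeg installed.

def hls_command(src, out_dir, height, video_bitrate):
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",      # resize, keeping aspect ratio
        "-c:v", "libx264", "-b:v", video_bitrate,
        "-c:a", "aac",
        "-hls_time", "6",                 # aim for ~6-second segments
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", f"{out_dir}/seg_%05d.ts",
        f"{out_dir}/playlist.m3u8",
    ]

cmd = hls_command("input.mp4", "renditions/720p", 720, "2500k")
# subprocess.run(cmd, check=True)  # would invoke ffmpeg if it is installed
```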

Laziness

The conversion and splitting of the video file and the generation of the playlists could be done on demand, like Wagtail's image renditions, following Django's laziness philosophy. For longer video files on weaker machines, though, that might take a long time.
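The lazy-rendition idea above can be sketched independently of any encoder. This is not Wagtail's actual rendition API, just the caching pattern it follows: encode only on first request, then reuse. The in-memory dict stands in for a database table of stored renditions, and the encoder is injected as a callable so the logic stays testable without FFmpeg.

```python
# Sketch of lazy, cached video renditions, analogous to Wagtail's
# image renditions: the expensive encode runs at most once per spec.

_rendition_cache = {}

def get_rendition(video_id, spec, encode):
    """Return a cached rendition, encoding it on first request.

    encode: callable performing the (expensive) conversion, injected
    here so the caching behaviour can be exercised without ffmpeg.
    """
    key = (video_id, spec)
    if key not in _rendition_cache:
        _rendition_cache[key] = encode(video_id, spec)
    return _rendition_cache[key]
```

The drawback the comment points out remains: the *first* request for a long video would block until encoding finishes, which is exactly where background workers come in.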

A client-side JavaScript encoder as part of the React upload form might solve both problems here: encoder availability and processing time.

Overall this could be a killer feature of the Wagtail CMS. I offer my help with anything apart from React, if there is a reliable encoder that could be integrated.

@thibaudcolas (Member)

👋 just noting RFC 72: Background workers feels very relevant as far as the processing needed of the video files.

@andre-fuchs (Author)

> 👋 just noting RFC 72: Background workers feels very relevant as far as the processing needed of the video files.

This would solve the conversion part, of course! Amazing how Wagtail is evolving. I am game to help with the development of this adaptive streaming feature, though I would need some guidance. Would you implement adaptive streaming via the wagtailmedia package? It uses a single Media model for both video and audio, and HTTP Live Streaming could be interesting for both: ffmpeg can convert video and/or audio for HLS, I think. I have never done this before, to be honest.


evilmonkey19 commented Feb 18, 2024

If you need any help, I am willing to help as well! Platforms use one strategy or another: Twitch converts to HLS, while others like YouTube convert video to MPEG-DASH. It is important to note that, because of how much bandwidth it takes to serve video to many viewers, the chunks are usually uploaded to CDNs such as Akamai (the largest), Cloudflare, or Amazon CloudFront.

By the way, there is a project that is a wrapper around FFmpeg: https://pypi.org/project/pyffmpeg/. Perhaps it is not the best solution, but I don't think a native Python implementation would be better. The main problem is that video encoding is one of the heaviest computations there is, so the lower-level the implementation, the better. FFmpeg usually tries to use hardware accelerators if any are available, such as graphics cards: https://developer.nvidia.com/ffmpeg.

@Stormheg

Stormheg commented Mar 5, 2024

This sounds like a fun sprint topic for Wagtail Space! https://www.wagtail.space/
