intro
I've been working on a personal file sharing project, and one feature I want to add is video streaming for large videos. I wanted a place where I could keep my stream replays at a reasonable cost, and serving large 4+ hour videos can be very costly if you're not streaming them, as you may imagine.
So, what I wanted was a way to upload a raw video and then have the server convert it into an hls stream that we can stream back to the client. This is a lot more cost-efficient than trying to send a huge raw video back in one piece.
One problem is that I'm using a simple axum server, and I don't want the API to be overwhelmed by trying to process videos in the same upload request. So, we'll create a second server exclusively for processing videos. That allows the API and the video processor to scale independently of each other. But we'll also need a way to trigger processing jobs, so we'll build a queue using Postgres and its SKIP LOCKED mechanism for some concurrency.
One UX issue: the user can't watch the video until its hls stream is ready, but the processing doesn't happen in the upload request. So, we'll need to give the user a way to subscribe to updates about their video's processing status. We can solve this using Postgres listen/notify events with a websocket endpoint on our API.
All in all, this dev log will go through these parts:
- creating a video processor for converting a raw video into an hls stream using ffmpeg
- creating a queue system using Postgres
- creating a video processing server using gRPC
- creating a websocket endpoint for listening to processing updates
Additionally, we'll create some UI for all the appropriate parts.
convert raw video to an hls stream
This is probably the simplest part of this whole series, as we're going to take advantage of ffmpeg to do most of the heavy lifting for us. We'll just create a Rust binary that calls ffmpeg externally.
_Note: There are crates that provide ffmpeg bindings, but we don't really need to do anything other than make a fairly standard ffmpeg call._
There's not much to this; it just uses the standard library and expects ffmpeg to be available on the host.
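A minimal version of that binary looks something like this; the exact ffmpeg flags, segment length, and paths are just one reasonable setup, so adjust them to taste:

```rust
use std::path::Path;
use std::process::Command;

/// Convert a raw video into an hls stream (one .m3u8 playlist plus .ts
/// segments) by shelling out to ffmpeg on the host.
fn convert_to_hls(input: &Path, output_dir: &Path, video_id: &str) -> std::io::Result<()> {
    std::fs::create_dir_all(output_dir)?;

    let playlist = output_dir.join(format!("{video_id}.m3u8"));
    let segments = output_dir.join(format!("{video_id}_%03d.ts"));

    let status = Command::new("ffmpeg")
        .arg("-i")
        .arg(input)
        // copy the existing streams; swap this for a real encode if your
        // source codecs aren't hls-friendly
        .args(["-codec", "copy"])
        // ~10 second segments, and keep every segment in the playlist
        .args(["-hls_time", "10", "-hls_list_size", "0"])
        .arg("-hls_segment_filename")
        .arg(&segments)
        .arg(&playlist)
        .status()?;

    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("ffmpeg exited with {status}"),
        ));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    convert_to_hls(Path::new("input.mp4"), Path::new("output"), "my-video")
}
```

Using `-codec copy` skips re-encoding entirely, which keeps the conversion fast and cheap; it only works when the source codecs are already compatible with hls.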
After adjusting the inputs/outputs and running this, you should see some .m3u8 and .ts files get generated in the output directory. We can serve these files on the frontend using hls.js.
This HTML snippet is from an askama template I made. It renders with a video_id and stream_url. The stream_url specifically points to the endpoint in my axum server that serves the hls stream files.
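A stripped-down version of the template looks roughly like this; the hls.js CDN URL and element id are placeholders, and the only parts that matter are the {{ video_id }} and {{ stream_url }} variables:

```html
<video id="video-{{ video_id }}" controls></video>

<script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
<script>
  const video = document.getElementById("video-{{ video_id }}");
  const streamUrl = "{{ stream_url }}";

  if (Hls.isSupported()) {
    // hls.js fetches the .m3u8 playlist and .ts segments over plain HTTP
    const hls = new Hls();
    hls.loadSource(streamUrl);
    hls.attachMedia(video);
  } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    // Safari can play hls natively, so just point the video at the playlist
    video.src = streamUrl;
  }
</script>
```

Most browsers need hls.js to play the playlist, while Safari handles hls natively, hence the fallback branch.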
In my axum server, I just serve the directory the hls files live in:
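With tower-http's ServeDir that can be as small as this sketch; the /hls route and the output directory name are just my choices here:

```rust
use axum::Router;
use tower_http::services::ServeDir;

fn app() -> Router {
    // Everything under ./output (the .m3u8 playlist plus its .ts segments)
    // becomes reachable at /hls/<file>, which is what the player fetches.
    Router::new().nest_service("/hls", ServeDir::new("output"))
}
```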
If you want to serve specific videos, I recommend saving your playlist files (.m3u8) with some kind of video ID. Then you can create a fairly simple handler that serves those specific files. Here's a handler that renders the template above with the appropriate variables.
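Mine looks roughly like this; the route shape, template path, and field names are just whatever I happened to use:

```rust
use askama::Template;
use axum::{extract::Path, response::Html};

// Mirrors the askama template from earlier.
#[derive(Template)]
#[template(path = "watch.html")]
struct WatchTemplate {
    video_id: String,
    stream_url: String,
}

// Routed as something like .route("/watch/:video_id", get(watch)),
// so the stream_url lines up with the ServeDir route above.
async fn watch(Path(video_id): Path<String>) -> Html<String> {
    let page = WatchTemplate {
        stream_url: format!("/hls/{video_id}.m3u8"),
        video_id,
    };
    Html(page.render().expect("template should render"))
}
```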
And that's basically it for this part. In the next posts we'll work this functionality into our video processing server to refine it a bit more. Maybe handling errors better?
Until the next episode...
If you want a more complete example, check out the source code for my file nest project. It's where I'm using a lot of this functionality.