
Brain Juice

intro

I've been working on a personal file sharing project, and one feature I want to add is video streaming for large videos. I wanted a place where I could keep my stream replays at a reasonable cost. As you might imagine, serving large 4+ hour videos can be very costly if you're not streaming them.

So, what I wanted was a way for me to upload a raw video and have the server convert it into an HLS stream that we can stream back to the client. This is a lot more cost-efficient than trying to send a huge raw video back to the client.

One problem is that I'm using a simple axum server, and I don't want the API to be overwhelmed by trying to process videos in the same upload request. So, we'll create a second server that's exclusively for processing videos. That allows the API and the video processor to scale independently of each other. But we'll also need a way to trigger processing jobs. We'll build a queue using Postgres and its SKIP LOCKED mechanism for some concurrency.
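To sketch what that queue looks like: each job is a row, and a worker claims the next available row with FOR UPDATE SKIP LOCKED, which tells Postgres to skip rows another transaction has locked, so two workers can never grab the same job. The table and column names here are hypothetical, just to illustrate the pattern:

```sql
-- Hypothetical jobs table; names are illustrative, not from the project
CREATE TABLE queue_jobs (
    id         BIGSERIAL PRIMARY KEY,
    video_id   TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'queued',
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- A worker atomically claims the oldest queued job.
-- SKIP LOCKED makes concurrent workers skip rows that are
-- already locked instead of blocking on them.
UPDATE queue_jobs
SET status = 'processing'
WHERE id = (
    SELECT id FROM queue_jobs
    WHERE status = 'queued'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, video_id;
```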

One UX issue: the user can't watch the video until its HLS stream is ready, but the processing doesn't happen in the upload request. So, what we'll need to do is give the user the ability to subscribe to updates about their video's processing status. We can solve this using Postgres listen/notify events with a websocket endpoint on our API.
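The listen/notify half of that can be sketched in plain SQL; the channel name and payload shape here are hypothetical:

```sql
-- The API's websocket handler listens on a channel:
LISTEN video_processing;

-- The video processor fires a notification whenever a job's
-- status changes. pg_notify takes the channel name and a text
-- payload (JSON works well here):
SELECT pg_notify(
    'video_processing',
    '{"video_id": "abc123", "status": "done"}'
);
```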

All in all, this dev log will go through these parts:

  1. creating a video processor for converting a raw video into an hls stream using ffmpeg
  2. creating a queue system using Postgres
  3. creating a video processing server using gRPC
  4. creating a websocket endpoint for listening to processing updates

Additionally, we'll create some UI in all the appropriate parts.

convert raw video to an hls stream

This is probably the simplest part of this whole series, as we're going to take advantage of ffmpeg to do most of the heavy lifting for us. We'll just create a Rust binary that calls ffmpeg externally.

_Note: There are crates that provide ffmpeg bindings, but we don't really need to do anything other than make a fairly standard ffmpeg call._

There's not much to this; it just uses the standard library and expects ffmpeg to be available on the host.

```rust
async fn process_video() {
    let input_path = "some_path/to/video.mp4";
    // The HLS muxer expects the playlist file path here; the .ts
    // segments get written alongside it
    let output_path = "wherever_you/want/the/stream_files/stream.m3u8";
    // This ffmpeg command:
    // - Takes an input video file specified by `input_path`
    // - Transcodes the video to the H.264 codec (-c:v libx264)
    // - Transcodes the audio to the AAC codec (-c:a aac)
    // - Outputs an HLS stream (-f hls)
    // - Sets the duration of each segment to 10 seconds (-hls_time 10)
    // - Keeps every segment in the playlist (-hls_list_size 0)
    // - Saves the output to the path specified by `output_path`
    let output = std::process::Command::new("ffmpeg")
        .args([
            "-i",
            input_path,
            "-c:v",
            "libx264",
            "-c:a",
            "aac",
            "-f",
            "hls",
            "-hls_time",
            "10",
            "-hls_list_size",
            "0",
            output_path,
        ])
        .output()
        // TODO: Make this better error handling
        .expect("Could not process video into HLS stream");
}
```
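For a sense of scale, `-hls_time 10` means a long recording turns into a lot of small segment files. This quick sketch (my own arithmetic, not code from the project) shows why `-hls_list_size 0` matters when you want every segment of a long replay kept in the playlist:

```rust
// Number of HLS segments produced for a given video duration,
// rounding up to account for a final partial segment.
fn segment_count(duration_secs: u32, hls_time_secs: u32) -> u32 {
    duration_secs.div_ceil(hls_time_secs)
}

fn main() {
    // A 4-hour stream replay cut into 10-second segments:
    let segments = segment_count(4 * 60 * 60, 10);
    println!("{segments} .ts segments"); // 1440 .ts segments
}
```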

When you adjust the input/output paths and run this, you should see some .m3u8 and .ts files get generated in the output directory. We can serve these files on the frontend using hls.js.
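For reference, the generated playlist is just a text manifest pointing at the segment files. A minimal example looks something like this (the exact segment names and durations will vary with your encode):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.000000,
stream0.ts
#EXTINF:10.000000,
stream1.ts
#EXT-X-ENDLIST
```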

This HTML snippet is from an askama template I made. It renders with a video_id and a stream_url. The stream_url specifically points to the endpoint in my axum server that serves the HLS stream files.

```html
<section>
    <video id="{{ video_id }}" controls height="600"></video>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/hls.js/0.5.14/hls.min.js"></script>
    <script>
        var video = document.getElementById("{{ video_id }}");
        if (Hls.isSupported()) {
            var hls = new Hls({
                debug: true,
            });
            hls.loadSource("{{ stream_url }}");
            hls.attachMedia(video);
            hls.on(Hls.Events.MEDIA_ATTACHED, function () {
                video.muted = true;
                video.play();
            });
        }
        // hls.js is not supported on platforms that do not have Media Source Extensions (MSE) enabled.
        // When the browser has built-in HLS support (check using `canPlayType`), we can provide an
        // HLS manifest (i.e. a .m3u8 URL) directly to the video element through the `src` property.
        // This uses the built-in support of the plain video element, without hls.js.
        else if (video.canPlayType("application/vnd.apple.mpegurl")) {
            video.src = "{{ stream_url }}";
            video.addEventListener("canplay", function () {
                video.play();
            });
        }
    </script>
</section>
```

In my axum server, I just serve the directory the HLS files live in:

```rust
// ... inside the Router (ServeDir comes from the tower-http crate)
.nest_service("/uploads", ServeDir::new(uploads_path))
```

If you want to serve specific videos, I recommend saving your playlist files (.m3u8) with some kind of video ID. Then you can create a fairly simple handler that serves those specific files. Here's a handler that renders the template above with the appropriate variables.

```rust
// `WatchQuery` deserializes the `?v=<video_id>` query parameter
#[derive(serde::Deserialize)]
pub struct WatchQuery {
    v: String,
}

// `WatchTemplate` is the askama template shown above
pub async fn handle_stream_video(Query(params): Query<WatchQuery>) -> WatchTemplate {
    WatchTemplate {
        video_id: params.v.clone(),
        stream_url: format!("/uploads/{}.m3u8", params.v),
    }
}
```

And that's basically it for this part. In the next posts we'll work this functionality into our video processing server and refine it a bit more. Maybe we'll even handle errors better?

Until the next episode...


If you want a more complete example, check out the source code for my file nest project. It's where I'm using a lot of this functionality.