One of the projects we are working on is an application that lets you track construction progress using data received from CCTV cameras on site. The user can view the most recently received frame, watch a timelapse video covering the entire data-collection period, compare frames taken at different times, overlay a 3D model on a real frame from the construction site, and so on.
So, today we’ll tell you how to create a timelapse video from a sequence of snapshots and how to provide customers with video playlists optimized for browser playback.
Unlike a single-file video, a video playlist consists of a playlist file and segments. Segments are just chunks of a whole video, and the playlist file describes the playback order and duration of the segments.
The video playlist has clear advantages over a single file: the browser can start playback after downloading only the first segments, seeking doesn’t require fetching the whole file, and new segments can be appended without regenerating the entire video.
For our case, the choice between these two options is beyond doubt.
To generate a timelapse video from frames, we need to run the following ffmpeg command:
ffmpeg \
  -y \
  -r 25 \
  -pattern_type glob -i "*.jpg" \
  -vf "scale=w=1280:h=720:force_original_aspect_ratio=decrease" \
  -c:v libx264 \
  -preset ultrafast \
  -tune zerolatency \
  -force_key_frames "expr:gte(t,n_forced*1)" \
  -hls_time 5 \
  -hls_segment_filename playlist_segment_%d.ts \
  -f hls playlist.m3u8
-y
Overwrite output files without asking.
-r 25
Set the frame rate (25 frames per second).
-pattern_type glob -i "*.jpg"
Take all .jpg files in the current directory to create the video playlist.
-vf "scale=w=1280:h=720:force_original_aspect_ratio=decrease"
Set the video resolution to 1280x720, automatically decreasing it if needed to preserve the original aspect ratio.
-c:v libx264
Set the output codec.
-preset ultrafast
Choose a codec preset, from ultrafast (best speed) to veryslow (best quality).
-tune zerolatency
Optimize for fast encoding.
-force_key_frames "expr:gte(t,n_forced*1)"
Force a key frame every second, so segments can be cut at exact boundaries.
-hls_time 5
Cut segments with a duration of 5 seconds.
-hls_segment_filename playlist_segment_%d.ts
Set the segment filename mask (%d is replaced by the segment index).
-f hls
Set the HLS protocol and output format.
playlist.m3u8
Set the name of the playlist file.
After running the terminal command, ffmpeg will collect all the .jpg frames in the current directory in order and generate a video playlist:
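For example, if the frames add up to roughly 13 seconds of video, the working directory will contain something like this (the exact number of segments depends on the input):

playlist.m3u8
playlist_segment_0.ts
playlist_segment_1.ts
playlist_segment_2.ts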
So, we have already figured out how to create a timelapse video. But what if you need to supplement an existing timelapse?
You could fetch all the images and generate a new video playlist from scratch. In some cases this is very resource-intensive, and it was in ours: we store all the frames and video playlists in S3, and downloading every frame and the existing playlist just to add one segment would take a very long time. However, there is a less expensive way, which also frees us from storing images for segments that have already been generated.
Let's first see what the playlist file we got in the previous chapter looks like.
The playlist file is essentially a text file, so we can open it with any text editor:
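Here is a sketch of what ffmpeg typically produces for a short timelapse (the version number and segment durations may differ in your case):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5.000000,
playlist_segment_0.ts
#EXTINF:5.000000,
playlist_segment_1.ts
#EXTINF:3.000000,
playlist_segment_2.ts
#EXT-X-ENDLIST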
#EXTM3U
The file header; it indicates the extended M3U format and must be the first line of the file.
#EXT-X-VERSION
Indicates the compatibility version of the playlist file.
#EXT-X-MEDIA-SEQUENCE
Indicates the sequence number of the first URL that appears in the playlist file.
#EXT-X-TARGETDURATION
Specifies the maximum segment duration.
#EXT-X-DISCONTINUITY
Indicates a discontinuity between the preceding and following segments.
#EXTINF
Specifies the segment duration and other additional properties.
#EXT-X-ENDLIST
Indicates that no more segments will be added to the file.
As we can see, the playlist file mostly describes the sequence and duration of the segments. Perfect! You have probably already guessed that we can append new segments ourselves, without using ffmpeg.
In this case, we generate the new segment as a separate video playlist, using an almost identical command to the one from the previous chapter; the only difference is that it produces a single segment.
ffmpeg \
  -y \
  -r 25 \
  -pattern_type glob -i "*.jpg" \
  -vf "scale=w=1280:h=720:force_original_aspect_ratio=decrease" \
  -c:v libx264 \
  -preset ultrafast \
  -tune zerolatency \
  -hls_segment_filename new_segment.ts \
  -f hls segment_playlist.m3u8
And to update the main video playlist, you just need to insert the new segment's metadata into the main playlist file. You can take it from the segment's playlist file or write it manually.
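As a sketch, the insertion can be done with a few lines of Python; the filenames and the 5-second duration below are assumptions, and the duration must match the actual length of the new segment:

```python
# Sketch: append one segment entry to an existing HLS playlist.
# Filenames and the 5-second duration are illustrative assumptions.

def append_segment(playlist_text: str, segment_name: str, duration: float) -> str:
    """Splice a discontinuity marker and an #EXTINF entry for the new
    segment just before the #EXT-X-ENDLIST tag."""
    entry = (
        "#EXT-X-DISCONTINUITY\n"
        f"#EXTINF:{duration:.6f},\n"
        f"{segment_name}\n"
    )
    marker = "#EXT-X-ENDLIST"
    if marker in playlist_text:
        # Keep the end marker last: insert the new entry right before it.
        return playlist_text.replace(marker, entry + marker, 1)
    # No end marker (e.g. a live playlist): simply append the entry.
    return playlist_text + entry

# Demo on a minimal playlist; in practice you would read playlist.m3u8,
# call append_segment, and write the result back.
demo = "#EXTM3U\n#EXT-X-VERSION:3\n#EXT-X-TARGETDURATION:5\n#EXT-X-ENDLIST\n"
print(append_segment(demo, "new_segment.ts", 5.0))
```

The #EXT-X-DISCONTINUITY tag is added because the new segment was encoded independently of the previous ones, so its timestamps start from zero.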
And that's it! It works very simply: if there is an entry for a segment in the playlist file and the segment file is available at the specified path, it is played.
In the examples above, we used the segment filename to tell the video player which segment to play. In this case, the video player will search for the segment in the same directory/URL as the playlist file itself.
Basically, the line with the segment filename is just a path, so we can specify virtually any path, like this:
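For example, a relative path into a subdirectory (the directory name here is just an illustration):

#EXTINF:5.000000,
segments/playlist_segment_0.ts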
Or even like this:
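An absolute URL also works (the hostname here is a placeholder):

#EXTINF:5.000000,
https://cdn.example.com/timelapse/playlist_segment_0.ts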
The last example is useful if you store the video playlist in private S3 storage: you can use pre-signed URLs as segment paths to avoid access problems when playing the video playlist in a browser.
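Rewriting segment paths to pre-signed URLs can be sketched generically; `sign` below stands in for your storage client's signing call (for S3 with boto3 that would be `generate_presigned_url`), and the demo URL is a placeholder:

```python
from typing import Callable

def sign_playlist(playlist_text: str, sign: Callable[[str], str]) -> str:
    """Replace every segment line (any non-empty line that is not a
    #-prefixed tag) with a pre-signed URL produced by `sign`."""
    lines = []
    for line in playlist_text.splitlines():
        if line and not line.startswith("#"):
            # Segment entry: swap the bare filename for a signed URL.
            lines.append(sign(line))
        else:
            lines.append(line)
    return "\n".join(lines) + "\n"

# Demo with a dummy signer; a real signer would call the S3 client.
demo = "#EXTM3U\n#EXTINF:5.000000,\nplaylist_segment_0.ts\n#EXT-X-ENDLIST\n"
print(sign_playlist(demo, lambda name: f"https://example.com/{name}?signature=..."))
```

Note that pre-signed URLs expire, so the playlist file has to be re-signed periodically or generated on request.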
Today we figured out how the video playlist works, how to create the timelapse from frames, and how to deal with it in the context of a web application. Take care of yourself and stay tuned!
Online test-bed for video playlists: https://hls-js.netlify.app/demo/
Video playlists with React: https://www.npmjs.com/package/react-player
m3u(8)-playlist syntax: https://docs.fileformat.com/audio/m3u/