video automatisation

(Preview: Dark Version / Colorful Version)

2025 HS DIGCRE - laura livers

proof-of-concept semester project for the module Digital Creativity
at the University of Applied Sciences Lucerne

Summary

This framework automates a range of video editing tasks, specifically
for creating social-media content. It is aimed at people in the
creative industry, with a particular focus on musicians.

Initial Conditions
This code assumes the following conditions are met:

  • several takes of the same or a similar scene
  • each video contains the same background music
  • the background music is available as a sound file, ideally in an uncompressed format (e.g. WAV)

Functions

The following order represents the proposed workflow described in main.py.

00 Setup

shorten_video(input, output, seconds)
    seconds: length of the output video in seconds, counted from the start
If a different part of the video is needed, use an ffmpeg command instead, e.g.
    ffmpeg -i 'input.mov' -ss 00:00:10 -to 00:01:04 -c:v libx264 -c:a aac -preset ultrafast -y 'output.mov'
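For reference, a minimal sketch of what shorten_video might do internally, assuming moviepy is used (hypothetical, not necessarily the repo's actual code):

```python
# Hypothetical re-implementation of shorten_video using moviepy.
from moviepy.editor import VideoFileClip

def shorten_video(input_path, output_path, seconds):
    clip = VideoFileClip(input_path)
    # keep only the first `seconds` seconds of the clip
    clip.subclip(0, seconds).write_videofile(output_path)
    clip.close()
```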

adjust_exposure(input, output, brightness, contrast, gamma)
quick adjustments to lighten or darken the footage
    brightness: takes values between -1 and 1, where 0 represents the original
    contrast: takes values between 0 and 1, where 1.0 represents the original
    gamma: takes values greater than 0, where 1.0 represents the original
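The same adjustment can be reproduced with ffmpeg's eq filter, whose brightness, contrast and gamma parameters use the same defaults as above (a sketch, not this repo's implementation):

```python
import subprocess

def adjust_exposure_cli(input_path, output_path,
                        brightness=0.0, contrast=1.0, gamma=1.0):
    # ffmpeg's eq filter shares the defaults above: brightness 0, contrast 1, gamma 1
    subprocess.run([
        "ffmpeg", "-i", input_path,
        "-vf", f"eq=brightness={brightness}:contrast={contrast}:gamma={gamma}",
        "-c:a", "copy", "-y", output_path,
    ], check=True)
```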

01 Video Cutting

important: run the functions in this step together in the same run, since they share the beat_sequence!

extract_beats_from_song(audio_file)
uses librosa.beat.beat_track to create a beat_sequence

depending on the input music, results might be more interesting with
other approaches such as onset_detect() or tempogram(); refer to the Librosa documentation
for more inspiration
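A minimal sketch of the librosa call, assuming extract_beats_from_song returns beat timestamps in seconds (names are illustrative):

```python
import librosa

def extract_beats_from_song(audio_file):
    y, sr = librosa.load(audio_file)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    # convert frame indices into timestamps (seconds)
    return librosa.frames_to_time(beat_frames, sr=sr)
```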

cut_videos_by_song_beats(video_folder, beat_sequence, song_file, output_dir)
extracts each video's background audio and correlates it with the beat_sequence, cutting the video at the
detected beats. This way the length and alignment of the input videos don't matter:
as long as the song is playing in the background, the code takes care of it.
notice: somewhere in this process a lag is introduced. If timing is an issue (e.g. for lip-syncing),
set the total_lag variable on line 75 to the number of seconds the result lags at the end. It is
not the most elegant solution, but it works.
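The correlation idea in a nutshell: find where the reference song starts inside a video's audio track via cross-correlation (an illustrative sketch; the actual implementation may differ):

```python
import numpy as np
from scipy.signal import correlate

def find_song_offset(video_audio, song, sr):
    # both signals as mono float arrays at the same sample rate `sr`;
    # the cross-correlation peak marks where the song starts in the video audio
    corr = correlate(video_audio, song, mode="full")
    lag = np.argmax(corr) - (len(song) - 1)
    return lag / sr  # offset in seconds (negative if the song started before the video)
```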

concatenate_clips_randomly(clips_by_beat, beat_sequence, output_file, song_file)
reassembles the clips in beat order, randomly shuffling which take is used for each beat
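Conceptually, the shuffle boils down to picking one random take per beat slot (a sketch; clips_by_beat is assumed to map a beat index to its candidate clips):

```python
import random

def pick_takes(clips_by_beat, n_beats):
    # one random take per beat interval; concatenation preserves beat order
    return [random.choice(clips_by_beat[i]) for i in range(n_beats)]
```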

02 Color Grading

apply_teal_orange(video_input, video_output, intensity)
Hollywood-style filter that shifts shadows toward teal and highlights toward orange
    intensity: takes values between 0 and 1, default=0.8
for a more extreme effect, apply the filter several times
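One way to approximate the look per frame, as a hedged sketch (not the repo's exact math):

```python
import numpy as np

def teal_orange_frame(frame, intensity=0.8):
    # frame: H x W x 3 uint8 RGB. Blend shadows toward teal and
    # highlights toward orange, weighted by per-pixel luminance.
    img = frame.astype(np.float32) / 255.0
    luma = img.mean(axis=2, keepdims=True)          # 0 = shadow, 1 = highlight
    teal = np.array([0.0, 0.5, 0.5], np.float32)    # shadow tint
    orange = np.array([1.0, 0.5, 0.0], np.float32)  # highlight tint
    tint = (1.0 - luma) * teal + luma * orange
    out = img + intensity * 0.25 * (tint - img)     # gentle blend toward the tint
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```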

apply_black_white(video_input, video_output)
converts the video from RGB to grayscale; combine with adjust_exposure() for usable results
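A hypothetical re-implementation using moviepy's built-in blackwhite fx:

```python
from moviepy.editor import VideoFileClip
from moviepy.video.fx.all import blackwhite

def apply_black_white(video_input, video_output):
    clip = VideoFileClip(video_input)
    blackwhite(clip).write_videofile(video_output)
```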

03 Background Extraction

process_video_with_video_background()
uses Google MediaPipe to detect the background and replace it with another background video.
The internal functions stabilize the mask and feather its edges.
notice: as this step is computationally very expensive, ensure steps 01, 02 & 04 are done and won't have to be repeated!
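The segmentation core could look roughly like this, using MediaPipe's selfie-segmentation model; the mask stabilization and edge feathering the repo adds are omitted (illustrative sketch):

```python
import cv2
import mediapipe as mp
import numpy as np

# MediaPipe's person-segmentation model (legacy solutions API)
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

def composite_frame(person_bgr, background_bgr):
    # background_bgr must already be resized to the foreground frame's shape
    rgb = cv2.cvtColor(person_bgr, cv2.COLOR_BGR2RGB)
    mask = segmenter.process(rgb).segmentation_mask      # float, ~1 where a person is
    mask3 = np.repeat(mask[:, :, None], 3, axis=2)
    blended = mask3 * person_bgr + (1.0 - mask3) * background_bgr
    return blended.astype(np.uint8)
```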

04 FX

apply_slow_motion(input_video, output_video, slow_down_factor)
    slow_down_factor: takes positive floats, 1.0 represents the original speed
factor > 1 slows the video down, factor < 1 speeds it up; avoid extreme values
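A hypothetical re-implementation with moviepy's speedx fx (the repo's code may differ):

```python
from moviepy.editor import VideoFileClip
from moviepy.video.fx.all import speedx

def apply_slow_motion(input_video, output_video, slow_down_factor=2.0):
    clip = VideoFileClip(input_video)
    # speedx speeds playback up by `factor`, so invert to slow down
    speedx(clip, factor=1.0 / slow_down_factor).write_videofile(output_video)
```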

rgb_trail(video_input, video_output, red_lag, green_lag, blue_lag)
splits the video into its RGB channels and applies a different lag to each, creating a trailing effect
    red_lag, green_lag, blue_lag: take an int as the lag in frames
for more occurrences, change the percentage on line 50
for a longer effect, change fps * x on line 52
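The channel-lag idea as a sketch on a list of decoded frames (illustrative, not the repo's code):

```python
import numpy as np

def rgb_trail_frames(frames, red_lag=2, green_lag=0, blue_lag=4):
    # frames: list of H x W x 3 uint8 RGB arrays. Each output frame takes
    # its R, G and B channels from differently delayed input frames.
    out = []
    for i in range(len(frames)):
        r = frames[max(i - red_lag, 0)][:, :, 0]
        g = frames[max(i - green_lag, 0)][:, :, 1]
        b = frames[max(i - blue_lag, 0)][:, :, 2]
        out.append(np.dstack([r, g, b]))
    return out
```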

apply_fade(video_input, video_output, fade_duration)
    fade_duration: takes an int for the fade duration at the beginning and end
only fades from and to black; for cross-fades, consult the ffmpeg documentation
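A hypothetical re-implementation with moviepy's fadein/fadeout fx:

```python
from moviepy.editor import VideoFileClip
from moviepy.video.fx.all import fadein, fadeout

def apply_fade(video_input, video_output, fade_duration=1):
    clip = VideoFileClip(video_input)
    # fade in from black at the start, fade out to black at the end
    fadeout(fadein(clip, fade_duration), fade_duration).write_videofile(video_output)
```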

05 Lyrics

sync_lyrics_manually(lyrics, video_input, video_output, color)
uses a list of tuples to place lyrics via moviepy.TextClip, centered in the video by default
    lyrics: (start, end, 'lyrics')
    color: RGB, RGBA or hex format
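A sketch of how this could be built with moviepy's TextClip and CompositeVideoClip (assumes start/end are in seconds and that ImageMagick is installed for TextClip; not necessarily the repo's code):

```python
from moviepy.editor import CompositeVideoClip, TextClip, VideoFileClip

def sync_lyrics_manually(lyrics, video_input, video_output, color="white"):
    # lyrics: list of (start, end, text) tuples
    video = VideoFileClip(video_input)
    texts = [
        TextClip(text, fontsize=60, color=color)
        .set_start(start).set_end(end).set_position("center")
        for (start, end, text) in lyrics
    ]
    CompositeVideoClip([video, *texts]).write_videofile(video_output)
```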

sync_lyrics_grid_to_video(lyrics, video_input, video_output, color, grid_size, first_letter_scale)
places the lyrics on an imaginary grid and scales the first letter of each word
    color: RGB, RGBA or hex format
    grid_size: (row, col); ensure col is at least longest_word + 1, or letters will be drawn outside the frame
    first_letter_scale: float
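A usage example with illustrative parameter values (here col=12 comfortably exceeds longest_word + 1):

```python
lyrics = [(0.0, 2.5, "first line"), (2.5, 5.0, "second line")]
sync_lyrics_grid_to_video(lyrics, "video_in.mov", "video_out.mov",
                          color="#FFFFFF", grid_size=(6, 12),
                          first_letter_scale=1.5)
```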

Finishing touches

before publishing anything, check the audio quality: during testing it sometimes differed considerably from the original.
If this happens, remux the original audio back in at the very end:
ffmpeg -i 'input_video.mov' -i 'audio_file.wav' -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -strict experimental -y 'output_video.mov'
Don't forget to shorten the video again afterwards.

Video 1: teal-orange, dark background, 1% RGB-trail chance
Video 2: teal-orange, light background, 9% RGB-trail chance
