Result Video: https://pixeldrain.com/u/n9waRcpR
This program uses the Wav2Lip algorithm to generate realistic lip movements from an audio signal. It takes an input video file and an input audio file and produces a new video file in which the speaker's lip movements are synchronized to the audio.
- Python 3.6 or higher
- PyTorch 1.0 or higher
- OpenCV
- MoviePy
- FFmpeg
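If `require.py` (used in the installation steps below) does not cover your environment, the dependencies can usually be installed directly. The pip package names below are assumptions based on the common PyPI distributions, not taken from this repository:

```
# Assumed PyPI package names for the requirements above.
pip install torch opencv-python moviepy

# FFmpeg is a system tool rather than a pip package (Debian/Ubuntu shown).
sudo apt-get install ffmpeg
```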
- Clone the repository to your local machine.
- Install the required packages by running `require.py` (see the sketch after this list).
- Download the pre-trained model checkpoint file from the official Wav2Lip repository.
- Place the checkpoint file in the `checkpoints` directory.
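Put together, the setup might look like the following. The repository URL and checkpoint filename are placeholders, not actual values:

```
# Sketch of the installation steps; <...> values are placeholders.
git clone <repository_url>
cd <repository_directory>
python require.py                        # install the required packages

# Download a checkpoint from the official Wav2Lip repository, then:
mkdir -p checkpoints
mv ~/Downloads/<checkpoint_file>.pth checkpoints/
```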
To use the program, run the `main.py` script with the following arguments:
```
python main.py --face <path_to_input_video> --audio <path_to_input_audio> --outfile <path_to_output_video>
```
The `--face` argument specifies the path to the input video file, the `--audio` argument the path to the input audio file, and the `--outfile` argument the path to the output video file.
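For example, to lip-sync a clip `input.mp4` to a recording `speech.wav` (filenames here are illustrative):

```
python main.py --face input.mp4 --audio speech.wav --outfile result.mp4
```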
By default, the program uses the pre-trained model checkpoint file located in the `checkpoints` directory. To use a different checkpoint file, specify its path with the `--checkpoint_path` argument:
```
python main.py --face <path_to_input_video> --audio <path_to_input_audio> --outfile <path_to_output_video> --checkpoint_path <path_to_checkpoint_file>
```
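To process several clips in one go, you could wrap the command in a shell loop. The directory layout below (matching `.mp4`/`.wav` pairs in a `clips` directory) is an assumption for illustration:

```
# Hypothetical batch run over matching video/audio pairs.
mkdir -p out
for clip in clips/*.mp4; do
    name=$(basename "$clip" .mp4)
    python main.py --face "$clip" --audio "clips/$name.wav" \
        --outfile "out/${name}_synced.mp4"
done
```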
- The program may not work well with low-quality or noisy audio.
- The program may not work well with non-standard lip movements (e.g., exaggerated or cartoonish motion).
This program is based on the Wav2Lip algorithm developed by Rudrabha Mukhopadhyay and collaborators. It also uses the OpenCV, MoviePy, and FFmpeg libraries.