
Wav2Lip Lipsyncing Program

Result video: https://pixeldrain.com/u/n9waRcpR

This program uses the Wav2Lip algorithm to generate realistic lip movements from an audio signal. Given an input video file and an input audio file, it produces a new video in which the lip movements are synchronized to the audio.

Requirements

  • Python 3.6 or higher
  • PyTorch 1.0 or higher
  • OpenCV
  • MoviePy
  • FFmpeg
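You can quickly verify that these are available with something like the following (a minimal sketch; the import names are the standard ones for these packages):

python --version
ffmpeg -version
python -c "import torch, cv2, moviepy; print(torch.__version__, cv2.__version__)"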

Installation

  1. Clone the repository to your local machine.
  2. Install the required packages by running require.py.
  3. Download the pre-trained model checkpoint file from the official Wav2Lip repository.
  4. Place the checkpoint file in the checkpoints directory (the full sequence is sketched after this list).
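Putting the steps together, setup might look like this (a sketch: the repository URL is inferred from the repo name, and the checkpoint file name wav2lip_gan.pth follows the official Wav2Lip release, so adjust both to match what you actually download):

git clone https://github.com/aritra1804/LipSync.git
cd LipSync
python require.py
# download the checkpoint (e.g. wav2lip_gan.pth) from the official Wav2Lip repository, then:
mkdir -p checkpoints
mv wav2lip_gan.pth checkpoints/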

Usage

To use the program, run the main.py script with the following arguments:

python main.py --face <path_to_input_video> --audio <path_to_input_audio> --outfile <path_to_output_video>

The --face argument specifies the path to the input video file, --audio the path to the input audio file, and --outfile the path to the output video file.
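For example, with a talking-head clip and a speech track in the working directory (the file names here are placeholders):

python main.py --face input.mp4 --audio speech.wav --outfile result.mp4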

By default, the program uses the pre-trained model checkpoint file located in the checkpoints directory. To use a different checkpoint, specify its path with the --checkpoint_path argument:

python main.py --face <path_to_input_video> --audio <path_to_input_audio> --outfile <path_to_output_video> --checkpoint_path <path_to_checkpoint_file>

Limitations

  • The program may not work well with low-quality or noisy audio.
  • The program may not work well with non-standard lip movements (e.g. exaggerated or cartoonish lip movements).

Credits

This program is based on the Wav2Lip algorithm developed by Prajwal, Mukhopadhyay, et al. It also uses the OpenCV, MoviePy, and FFmpeg libraries.
