<h1 align="center">LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control</h1>

<div align='center'>
<a href='https://github.com/cleardusk' target='_blank'><strong>Jianzhu Guo</strong></a><sup>1†</sup>
<a href='https://github.com/KwaiVGI' target='_blank'><strong>Dingyun Zhang</strong></a><sup>1,2</sup>
<a href='https://github.com/KwaiVGI' target='_blank'><strong>Xiaoqiang Liu</strong></a><sup>1</sup>
<a href='https://scholar.google.com/citations?user=t88nyvsAAAAJ&hl' target='_blank'><strong>Zhizhou Zhong</strong></a><sup>1,3</sup>
<a href='https://scholar.google.com.hk/citations?user=_8k1ubAAAAAJ' target='_blank'><strong>Yuan Zhang</strong></a><sup>1</sup>
</div>
<div align='center'>
<a href='https://scholar.google.com/citations?user=P6MraaYAAAAJ' target='_blank'><strong>Pengfei Wan</strong></a><sup>1</sup>
<a href='https://openreview.net/profile?id=~Di_ZHANG3' target='_blank'><strong>Di Zhang</strong></a><sup>1</sup>
</div>
<div align='center'>
<sup>1</sup>Kuaishou Technology  <sup>2</sup>University of Science and Technology of China  <sup>3</sup>Fudan University
</div>
<div align='center'>
<small><sup>†</sup>Corresponding author</small>
</div>

<br>
<div align="center">
    <!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
    <a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
    <a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
    <a href='https://huggingface.co/spaces/KwaiVGI/liveportrait'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
    <a href="https://github.com/KwaiVGI/LivePortrait"><img src="https://img.shields.io/github/stars/KwaiVGI/LivePortrait"></a>
</div>
<br>

<p align="center">
  <img src="./assets/docs/showcase2.gif" alt="showcase">
  <br>
  🔥 For more results, visit our <a href="https://liveportrait.github.io/"><strong>homepage</strong></a> 🔥
</p>

## 🔥 Updates
- **`2024/07/19`**: ✨ We support 🎞️ portrait video editing (aka v2v)! See more details [here](assets/docs/changelog/2024-07-19.md).
- **`2024/07/17`**: 🍎 We support macOS with Apple Silicon, modified from [jeethu](https://github.com/jeethu)'s PR [#143](https://github.com/KwaiVGI/LivePortrait/pull/143).
- **`2024/07/10`**: 💪 We support audio and video concatenation, driving video auto-cropping, and template making to protect privacy. See more details [here](assets/docs/changelog/2024-07-10.md).
- **`2024/07/09`**: 🤗 We released the [HuggingFace Space](https://huggingface.co/spaces/KwaiVGI/liveportrait), thanks to the HF team and [Gradio](https://github.com/gradio-app/gradio)!
- **`2024/07/04`**: 😊 We released the initial version of the inference code and models. Continuous updates, stay tuned!
- **`2024/07/04`**: 🔥 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).

## Introduction 📖
This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).

We are actively updating and improving this repository. If you find any bugs or have suggestions, feel free to open issues or submit pull requests (PRs) 💖.

## Getting Started 🏁
### 1. Clone the code and prepare the environment
```bash
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait

# create env using conda
conda create -n LivePortrait python==3.9
conda activate LivePortrait

# install dependencies with pip
# for Linux and Windows users
pip install -r requirements.txt
# for macOS with Apple Silicon users
pip install -r requirements_macOS.txt
```
**Note:** Make sure your system has [FFmpeg](https://ffmpeg.org/download.html) installed, including both `ffmpeg` and `ffprobe`!
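
You can quickly verify that both binaries are available on your `PATH`; the install commands in the comments are common examples, not project requirements:

```bash
# check that ffmpeg and ffprobe are installed and on PATH
ffmpeg -version | head -n 1
ffprobe -version | head -n 1

# if either command is missing, install FFmpeg with your package manager, e.g.:
#   sudo apt install ffmpeg   # Debian/Ubuntu
#   brew install ffmpeg       # macOS (Homebrew)
```
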
### 2. Download pretrained weights
The easiest way to download the pretrained weights is from HuggingFace:
```bash
# first, ensure git-lfs is installed, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
git lfs install

# clone and move the weights
git clone https://huggingface.co/KwaiVGI/LivePortrait temp_pretrained_weights
mv temp_pretrained_weights/* pretrained_weights/
rm -rf temp_pretrained_weights
```
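
If you prefer not to use `git-lfs`, the `huggingface-cli` tool from the `huggingface_hub` package can fetch the same weights; this is a sketch, and the exact flags may differ across `huggingface_hub` versions:

```bash
# assumes the Hugging Face CLI is installed: pip install -U "huggingface_hub[cli]"
huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights --exclude "*.git*"
```
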
Alternatively, you can download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). Unzip and place them in `./pretrained_weights`.
Ensure the directory structure matches the following, or at least contains these files:
```text
pretrained_weights
├── insightface
│ └── models
│ └── buffalo_l
│ ├── 2d106det.onnx
│ └── det_10g.onnx
└── liveportrait
├── base_models
│ ├── appearance_feature_extractor.pth
│ ├── motion_extractor.pth
│ ├── spade_generator.pth
│ └── warping_module.pth
├── landmark.onnx
└── retargeting_models
└── stitching_retargeting_module.pth
```
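
For a quick sanity check that everything is in place, a small shell loop like the one below (paths taken from the tree above) reports any missing file:

```bash
# verify that the key weight files listed above are present
for f in \
  pretrained_weights/insightface/models/buffalo_l/2d106det.onnx \
  pretrained_weights/insightface/models/buffalo_l/det_10g.onnx \
  pretrained_weights/liveportrait/landmark.onnx \
  pretrained_weights/liveportrait/base_models/appearance_feature_extractor.pth \
  pretrained_weights/liveportrait/base_models/motion_extractor.pth \
  pretrained_weights/liveportrait/base_models/spade_generator.pth \
  pretrained_weights/liveportrait/base_models/warping_module.pth \
  pretrained_weights/liveportrait/retargeting_models/stitching_retargeting_module.pth; do
  [ -f "$f" ] || echo "MISSING: $f"
done
```
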
### 3. Inference 🚀
#### Fast hands-on
```bash
# For Linux and Windows
python inference.py

# For macOS with Apple Silicon (Intel is not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py
```
If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: the driving video, the input image or video, and the generated result.
<p align="center">
  <img src="./assets/docs/inference.gif" alt="image">
</p>

Or, you can change the input by specifying the `-s` and `-d` arguments:
```bash
# source input is an image
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# source input is a video ✨
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4

# see more options
python inference.py -h
```
#### Driving video auto-cropping 📢📢📢
To use your own driving video, we **recommend**: ⬇️
- Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping with `--flag_crop_driving_video`.
- Focus on the head area, similar to the example videos.
- Minimize shoulder movement.
- Make sure the first frame of the driving video shows a frontal face with a **neutral expression**.

Below is an auto-cropping example using `--flag_crop_driving_video`:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
```
If the auto-cropping results are not satisfactory, you can modify the `--scale_crop_driving_video` and `--vy_ratio_crop_driving_video` options to adjust the scale and offset, or crop manually.
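
If you crop manually, a plain FFmpeg command along these lines produces a 1:1, 512x512 driving video; the input filename and the crop size/offsets are placeholders to adapt to your own footage:

```bash
# crop a square region around the head (here: a 720-px square at x=300, y=50), then scale to 512x512
ffmpeg -i my_driving_video.mp4 -vf "crop=720:720:300:50,scale=512:512" -c:a copy my_driving_video_512.mp4
```
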
#### Motion template making
You can also use the auto-generated motion template files (ending with `.pkl`) to speed up inference and **protect privacy**, for example:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl # portrait animation
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d5.pkl # portrait video editing
```
### 4. Gradio interface 🤗
We also provide a Gradio <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a> interface for a better experience; just run:
```bash
# For Linux and Windows users (and possibly macOS with Intel)
python app.py

# For macOS with Apple Silicon users (Intel is not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py
```
You can specify the `--server_port`, `--share`, and `--server_name` arguments to suit your needs!
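
For example, to bind a specific address and port and create a public share link (illustrative values):

```bash
# illustrative values: listen on all interfaces, serve on port 8890, and create a public Gradio share link
python app.py --server_name 0.0.0.0 --server_port 8890 --share
```
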
🚀 We also provide an acceleration option `--flag_do_torch_compile` . The first-time inference triggers an optimization process (about one minute), making subsequent inferences 20-30% faster. Performance gains may vary with different CUDA versions.
```bash
# enable torch.compile for faster inference
python app.py --flag_do_torch_compile
```
**Note**: This method is not supported on Windows and macOS.
**Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
### 5. Inference speed evaluation 🚀🚀🚀
We have also provided a script to evaluate the inference speed of each module:
```bash
# For NVIDIA GPU
python speed.py
```
Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:

| Model | Parameters(M) | Model Size(MB) | Inference(ms) |
|-----------------------------------|:-------------:|:--------------:|:-------------:|
| Appearance Feature Extractor | 0.84 | 3.3 | 0.82 |
| Motion Extractor | 28.12 | 108 | 0.84 |
| Spade Generator | 55.37 | 212 | 7.59 |
| Warping Module | 45.53 | 174 | 5.21 |
| Stitching and Retargeting Modules | 0.23 | 2.3 | 0.31 |
*Note: The values for the Stitching and Retargeting Modules represent the combined parameter counts and total inference time of three sequential MLP networks.*
## Community Resources 🤗
Discover the invaluable resources contributed by our community to enhance your LivePortrait experience:
- [ComfyUI-LivePortraitKJ](https://github.com/kijai/ComfyUI-LivePortraitKJ) by [@kijai](https://github.com/kijai)
- [comfyui-liveportrait](https://github.com/shadowcz007/comfyui-liveportrait) by [@shadowcz007](https://github.com/shadowcz007)
- [LivePortrait In ComfyUI](https://www.youtube.com/watch?v=aFcS31OWMjE) by [@Benji](https://www.youtube.com/@TheFutureThinker)
- [LivePortrait hands-on tutorial](https://www.youtube.com/watch?v=uyjSTAOY7yI) by [@AI Search](https://www.youtube.com/@theAIsearch)
- [ComfyUI tutorial](https://www.youtube.com/watch?v=8-IcDDmiUMM) by [@Sebastian Kamph](https://www.youtube.com/@sebastiankamph)
- [Replicate Playground](https://replicate.com/fofr/live-portrait) and [cog-comfyui](https://github.com/fofr/cog-comfyui) by [@fofr](https://github.com/fofr)
And many more amazing contributions from our community!
## Acknowledgements 💐
We would like to thank the contributors of the [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), and [InsightFace](https://github.com/deepinsight/insightface) repositories for their open research and contributions.
## Citation 💖
If you find LivePortrait useful for your research, please 🌟 this repo and cite our work using the following BibTeX:
```bibtex
@article{guo2024liveportrait,
title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
author = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
journal = {arXiv preprint arXiv:2407.03168},
year = {2024}
}
```