diff --git a/readme.md b/readme.md
index 149dbd6..6b5ec2e 100644
--- a/readme.md
+++ b/readme.md
@@ -1,182 +1,94 @@
-
-LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control
-
+# LivePortrait for Nuke
-
-Jianzhu Guo 1†
-Dingyun Zhang 1,2
-Xiaoqiang Liu 1
-Zhizhou Zhong 1,3
-Yuan Zhang 1
-
+## Introduction 📖
-
-Pengfei Wan 1
-Di Zhang 1
-
-
-1 Kuaishou Technology  2 University of Science and Technology of China  3 Fudan University
-
+This project integrates [**LivePortrait**: Efficient Portrait Animation with Stitching and Retargeting Control](https://liveportrait.github.io/) into **The Foundry's Nuke**, enabling artists to easily create animated portraits through advanced facial expression and motion transfer.
+
+**LivePortrait** leverages a series of neural networks to extract information, deform, and blend reference videos with target images, producing highly realistic and expressive animations.
+
+By integrating **LivePortrait** into Nuke, artists can enhance their workflows within a familiar environment, gaining additional control through Nuke's curve editor and custom knob creation.
+
+This implementation provides a self-contained package as a series of **Inference** nodes. This allows for easy installation on any Nuke 14+ system, **without requiring additional dependencies** like ComfyUI or conda environments.
+
+The current version supports video-to-image animation transfer. Future developments will expand this functionality to include video-to-video animation transfer, eyes and lips retargeting, an animal animation model, and support for additional face detection models.
-
-
-
-
-
+
+[![author](https://img.shields.io/badge/by:_Rafael_Silva-red?logo=linkedin&logoColor=white)](https://www.linkedin.com/in/rafael-silva-ba166513/)
+[![license](https://img.shields.io/badge/license-MIT-blue)](LICENSE)
+
-

showcase
- 🔥 For more results, visit our homepage 🔥
+ 🔥 For more results, visit the project homepage 🔥

+## Compatibility
-## 🔥 Updates
-- **`2024/07/10`**: 💪 We support audio and video concatenating, driving video auto-cropping, and template making to protect privacy. More to see [here](assets/docs/changelog/2024-07-10.md).
-- **`2024/07/09`**: 🤗 We released the [HuggingFace Space](https://huggingface.co/spaces/KwaiVGI/liveportrait), thanks to the HF team and [Gradio](https://github.com/gradio-app/gradio)!
-- **`2024/07/04`**: 😊 We released the initial version of the inference code and models. Continuous updates, stay tuned!
-- **`2024/07/04`**: 🔥 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
+**Nuke 15.1+**, tested on **Linux**.
+
+## Features
-## Introduction
-This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
-We are actively updating and improving this repository. If you find any bugs or have suggestions, welcome to raise issues or submit pull requests (PR) 💖.
+
+- **Fast** inference and animation transfer
+- **Flexible** advanced options for animation control
+- **Seamless integration** into Nuke's node graph and curve editor
+- **Separated** network nodes for **customization** and workflow experimentation
+- **Easy installation** using Nuke's Cattery system
-## 🔥 Getting Started
-### 1. Clone the code and prepare the environment
-```bash
-git clone https://github.com/KwaiVGI/LivePortrait
-cd LivePortrait
-# create env using conda
-conda create -n LivePortrait python==3.9.18
-conda activate LivePortrait
-# install dependencies with pip
-pip install -r requirements.txt
-```
+
+## Limitations
-**Note:** make sure your system has [FFmpeg](https://ffmpeg.org/) installed!
+
+> Maximum resolution for image output is currently 256x256 pixels (upscaled to 512x512 pixels), due to the original model's limitations.
+
+## Installation
-### 2. Download pretrained weights
-The easiest way to download the pretrained weights is from HuggingFace:
-```bash
-# first, ensure git-lfs is installed, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
-git lfs install
-# clone the weights
-git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights
-```
-Alternatively, you can download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). Unzip and place them in `./pretrained_weights`.
+
+1. Download and unzip the latest release from [here](https://github.com/rafaelperez/LivePortrait-for-Nuke/releases).
+2. Copy the extracted `Cattery` folder to `.nuke` or your plugins path.
+3. In the toolbar, choose **Cattery > Update** or simply **restart** Nuke.
-Ensuring the directory structure is as follows, or contains:
-```text
-pretrained_weights
-├── insightface
-│   └── models
-│       └── buffalo_l
-│           ├── 2d106det.onnx
-│           └── det_10g.onnx
-└── liveportrait
-    ├── base_models
-    │   ├── appearance_feature_extractor.pth
-    │   ├── motion_extractor.pth
-    │   ├── spade_generator.pth
-    │   └── warping_module.pth
-    ├── landmark.onnx
-    └── retargeting_models
-        └── stitching_retargeting_module.pth
-```
+
+**LivePortrait** will then be accessible under the toolbar at **Cattery > Stylization > LivePortrait**.
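+
+If your plugins live outside of `.nuke`, the extracted folder can also be registered at startup from an `init.py`. The snippet below is only a sketch: the path is a placeholder, and it assumes Nuke picks up the `Cattery` folder from the plugin path.
+
+```python
+# init.py -- executed by Nuke at startup, before the GUI loads
+import nuke
+
+# Placeholder location: point this at the directory that contains the extracted "Cattery" folder
+nuke.pluginAddPath("/path/to/your/plugins")
+```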
-### 3. Inference 🚀
-#### Fast hands-on
-```bash
-python inference.py
-```
+## Quick Start
-If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image, and generated result.
+
+LivePortrait requires two inputs:
-
- image
-
+- **Image** (target face)
+- **Video reference** (animation to be transferred)
-Or, you can change the input by specifying the `-s` and `-d` arguments:
-```bash
-python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
-# disable pasting back to run faster
-python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
-
-# more options to see
-python inference.py -h
-```
+
+Open the included `demo.nk` file for a working example.
+A self-contained gizmo will be provided in the next release.
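+
+For scripted or pipeline setups, the node can also be wired up from Python. The snippet below is a minimal sketch only; the node class name `LivePortrait`, the input order, and the file paths are assumptions for illustration.
+
+```python
+# Hypothetical example: create and connect LivePortrait from the Script Editor
+import nuke
+
+face = nuke.nodes.Read(file="portrait.png")     # target face (placeholder path)
+driver = nuke.nodes.Read(file="reference.mov")  # driving video (placeholder path)
+
+live = nuke.createNode("LivePortrait")          # assumed node class name from the Cattery package
+live.setInput(0, face)                          # assumed: input 0 = image
+live.setInput(1, driver)                        # assumed: input 1 = video reference
+```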
-#### Driving video auto-cropping
+## Release Notes
-📕 To use your own driving video, we **recommend**:
- - Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping by `--flag_crop_driving_video`.
- - Focus on the head area, similar to the example videos.
- - Minimize shoulder movement.
- - Make sure the first frame of driving video is a frontal face with **neutral expression**.
+
+**Latest version:** 1.0
-Below is a auto-cropping case by `--flag_crop_driving_video`:
-```bash
-python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
-```
+
+- [x] Initial release
+- [x] Video to image animation transfer
+- [x] Integrated into Nuke's node graph
+- [x] Advanced options for animation control
+- [x] Easy installation with Cattery package
-If you find the results of auto-cropping is not well, you can modify the `--scale_crop_video`, `--vy_ratio_crop_video` options to adjust the scale and offset, or do it manually.
+
+## License and Acknowledgments
-#### Motion template making
-You can also use the auto-generated motion template files ending with `.pkl` to speed up inference, and **protect privacy**, such as:
-```bash
-python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
-```
+
+**LivePortrait.cat** is licensed under the MIT License, and is derived from https://github.com/KwaiVGI/LivePortrait.
-**Discover more interesting results on our [Homepage](https://liveportrait.github.io)** 😊
+
+While the MIT License permits commercial use of **LivePortrait**, the dataset used for its training and some of the underlying models may be under a non-commercial license.
-### 4. Gradio interface 🤗
+
+This license **does not cover** the underlying pre-trained model, associated training data, and dependencies, which may be subject to further usage restrictions.
-We also provide a Gradio interface for a better experience, just run by:
+
+Consult https://github.com/KwaiVGI/LivePortrait for more information on associated licensing terms.
-```bash
-python app.py
-```
-
-You can specify the `--server_port`, `--share`, `--server_name` arguments to satisfy your needs!
+
+**Users are solely responsible for ensuring that the underlying model, training data, and dependencies align with their intended usage of LivePortrait.cat.**
-
-🚀 We also provide an acceleration option `--flag_do_torch_compile`. The first-time inference triggers an optimization process (about one minute), making subsequent inferences 20-30% faster. Performance gains may vary with different CUDA versions.
-```bash
-# enable torch.compile for faster inference
-python app.py --flag_do_torch_compile
-```
-**Note**: This method has not been fully tested. e.g., on Windows.
-**Or, try it out effortlessly on [HuggingFace](https://huggingface.co/spaces/KwaiVGI/LivePortrait) 🤗**
-### 5. Inference speed evaluation 🚀🚀🚀
-We have also provided a script to evaluate the inference speed of each module:
-
-```bash
-python speed.py
-```
-
-Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:
-
-| Model                             | Parameters(M) | Model Size(MB) | Inference(ms) |
-|-----------------------------------|:-------------:|:--------------:|:-------------:|
-| Appearance Feature Extractor      |     0.84      |      3.3       |     0.82      |
-| Motion Extractor                  |     28.12     |      108       |     0.84      |
-| Spade Generator                   |     55.37     |      212       |     7.59      |
-| Warping Module                    |     45.53     |      174       |     5.21      |
-| Stitching and Retargeting Modules |     0.23      |      2.3       |     0.31      |
-
-*Note: The values for the Stitching and Retargeting Modules represent the combined parameter counts and total inference time of three sequential MLP networks.*
 
 ## Community Resources 🤗
@@ -194,7 +106,7 @@ And many more amazing contributions from our community!
 
 ## Acknowledgements
 We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories, for their open research and contributions.
 
-## Citation 💖
+## Citation
 If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
 ```bibtex
 @article{guo2024liveportrait,