
LivePortrait for Nuke

Introduction 📖

This project integrates LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control into The Foundry's Nuke, enabling artists to easily create animated portraits through advanced facial expression and motion transfer.

LivePortrait leverages a series of neural networks to extract motion information from a reference video and to deform and blend it with a target image, producing highly realistic and expressive animations.

By integrating LivePortrait into Nuke, artists can enhance their workflows within a familiar environment, gaining additional control through Nuke's curve editor and custom knob creation.

This implementation provides a self-contained package as a series of Inference nodes. This allows for easy installation on any Nuke 14+ system, without requiring additional dependencies like ComfyUI or conda environments.

The current version supports video-to-image animation transfer. Future developments will expand this functionality to include video-to-video animation transfer, eye and lip retargeting, an animal animation model, and support for additional face detection models.


🔥 For more results, visit the project homepage 🔥

Compatibility

Nuke 15.1+, tested on Linux.

Features

  • Fast inference and animation transfer
  • Flexible advanced options for animation control
  • Seamless integration into Nuke's node graph and curve editor
  • Separated network nodes for customization and workflow experimentation
  • Easy installation using Nuke's Cattery system

Limitations

The maximum image output resolution is currently 256x256 pixels (upscaled to 512x512 pixels), a limitation inherited from the original model.

Installation

  1. Download and unzip the latest release from here.
  2. Copy the extracted Cattery folder to your .nuke directory or another folder on your Nuke plug-in path.
  3. In the toolbar, choose Cattery > Update or simply restart Nuke.

LivePortrait will then be accessible under the toolbar at Cattery > Stylization > LivePortrait.
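Step 2 above can be sketched in Python. The paths below are demo assumptions (the script creates a stand-in for the extracted release so it is self-contained); in practice, point src at the folder unzipped from the release and the destination at your real .nuke directory.

```python
import shutil
from pathlib import Path


def install_cattery(src: Path, nuke_home: Path) -> Path:
    """Copy the extracted Cattery folder into the Nuke plug-in path
    and return the installed location."""
    dest = nuke_home / src.name
    nuke_home.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest


# Stand-in for the folder extracted from the release zip, so this
# sketch runs anywhere; replace with your actual extraction path.
src = Path("/tmp/demo_cattery/Cattery")
src.mkdir(parents=True, exist_ok=True)
(src / "LivePortrait.cat").touch()

# Normally nuke_home would be Path.home() / ".nuke".
installed = install_cattery(src, Path("/tmp/demo_nuke"))
print("Installed to", installed)
```

After the copy, choosing Cattery > Update (or restarting Nuke) makes the new node visible in the toolbar.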

Quick Start

LivePortrait requires two inputs:

  • Image (target face)
  • Video reference (animation to be transferred)

Open the included demo.nk file for a working example. A self-contained gizmo will be provided in the next release.
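The two-input setup can also be wired with Nuke's Python API. This is a minimal sketch: the LivePortrait node class name, the input order, and the file paths are assumptions based on this README, not verified against the release, and the nuke module is only available inside a running Nuke session.

```python
# Run inside Nuke's Script Editor; outside Nuke the sketch degrades gracefully.
try:
    import nuke
except ImportError:
    nuke = None  # not running inside Nuke; the function below becomes a no-op


def build_live_portrait_graph():
    """Create Read nodes for the target image and the reference video,
    then connect both to a LivePortrait node (names are assumptions)."""
    if nuke is None:
        return None
    image = nuke.nodes.Read(file="target_face.png")     # target face (input 0, assumed)
    video = nuke.nodes.Read(file="reference_clip.mov")  # animation reference (input 1, assumed)
    lp = nuke.createNode("LivePortrait")                # hypothetical node class name
    lp.setInput(0, image)
    lp.setInput(1, video)
    return lp


node = build_live_portrait_graph()
print("Created:", node)
```

Once the node is created, its animation-control knobs can be keyframed like any other node through the curve editor.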

Release Notes

Latest version: 1.0

  • Initial release
  • Video to image animation transfer
  • Integrated into Nuke's node graph
  • Advanced options for animation control
  • Easy installation with Cattery package

License and Acknowledgments

LivePortrait.cat is licensed under the MIT License, and is derived from https://github.com/KwaiVGI/LivePortrait.

While the MIT License permits commercial use of LivePortrait, the dataset used for its training and some of the underlying models may be under a non-commercial license.

This license does not cover the underlying pre-trained model, associated training data, and dependencies, which may be subject to further usage restrictions.

Consult https://github.com/KwaiVGI/LivePortrait for more information on associated licensing terms.

Users are solely responsible for ensuring that the underlying model, training data, and dependencies align with their intended usage of LivePortrait.cat.

Community Resources 🤗

Our community has contributed many invaluable resources to enhance the LivePortrait experience; see the original LivePortrait repository for an up-to-date list.

Acknowledgements

We would like to thank the contributors of the FOMM, Open Facevid2vid, SPADE, and InsightFace repositories for their open research and contributions.

Citation

If you find LivePortrait useful for your research, please 🌟 this repo and cite the original work using the following BibTeX:

@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}