From ec8ac4cdf9821d903af5cdaee668625ab95d6b9f Mon Sep 17 00:00:00 2001
From: Komiljon Mukhammadiev <92161283+Mrkomiljon@users.noreply.github.com>
Date: Wed, 10 Jul 2024 13:07:37 +0900
Subject: [PATCH] Update readme.md
---
readme.md | 62 ++++++++++++++-----------------------------------------
1 file changed, 15 insertions(+), 47 deletions(-)
diff --git a/readme.md b/readme.md
index e9cb802..1eb1fde 100644
--- a/readme.md
+++ b/readme.md
@@ -1,46 +1,20 @@
-
LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control
-
-
-
-
-
-
- 1 Kuaishou Technology 2 University of Science and Technology of China 3 Fudan University
-
-
-
-
-
+ Webcam Live Portrait
- 🔥 For more results, visit our homepage 🔥
## 🔥 Updates
-- **`2024/07/04`**: 🔥 We released the initial version of the inference code and models. Continuous updates, stay tuned!
-- **`2024/07/04`**: 😊 We released the [homepage](https://liveportrait.github.io) and technical report on [arXiv](https://arxiv.org/pdf/2407.03168).
+- **`2024/07/10`**: 🔥 I released the initial version of the webcam inference code. Continuous updates, stay tuned!
+
## Introduction
-This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
-We are actively updating and improving this repository. If you find any bugs or have suggestions, welcome to raise issues or submit pull requests (PR) 💖.
+This repo, named **Webcam Live Portrait**, builds on the official PyTorch implementation of the paper [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/pdf/2407.03168).
+I am actively updating and improving this repository. If you find any bugs or have suggestions, feel free to raise issues or submit pull requests (PRs) 💖.
## 🔥 Getting Started
### 1. Clone the code and prepare the environment
@@ -56,7 +30,7 @@ pip install -r requirements.txt
```
### 2. Download pretrained weights
-Download our pretrained LivePortrait weights and face detection models of InsightFace from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). We have packed all weights in one directory 😊. Unzip and place them in `./pretrained_weights` ensuring the directory structure is as follows:
+Download the pretrained LivePortrait weights and the InsightFace face detection models from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). All weights are packed in one directory 😊. Unzip and place them in `./pretrained_weights`, ensuring the directory structure is as follows:
```text
pretrained_weights
├── insightface
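After unzipping, it can help to verify the layout before running inference. The following is a hypothetical sanity check (not part of the repo); only the top-level `insightface` entry is listed here, and `REQUIRED` should be extended with the rest of the tree shown above.

```python
# Hypothetical check that the unpacked weights landed in ./pretrained_weights.
from pathlib import Path

REQUIRED = ["insightface"]  # extend with the other entries from the tree above

def missing_weights(root="pretrained_weights"):
    """Return the required entries that are absent under `root`."""
    base = Path(root)
    return [name for name in REQUIRED if not (base / name).exists()]
```

If `missing_weights()` returns a non-empty list, re-check the unzip destination.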
@@ -84,13 +58,18 @@ python inference.py
If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`. This file includes the following results: driving video, input image, and generated result.
-
+
-Or, you can change the input by specifying the `-s` and `-d` arguments:
+
+https://github.com/Mrkomiljon/Webcam_Live_Portrait/assets/92161283/7c4daf41-838d-4eb8-a762-9188cd337ee6
+
+
+
+Or, you can change the source image by specifying the `-s` argument; the driving frames then come from the webcam:
```bash
-python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
+python inference.py -s assets/examples/source/MY_photo.jpg
# or disable pasting back
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
@@ -99,7 +78,6 @@ python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/
python inference.py -h
```
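The flag combinations above can also be assembled programmatically, e.g. when scripting batch runs. This is an illustrative helper, not part of the repo: the flags `-s`, `-d`, and `--no_flag_pasteback` come from the README, while the function name and defaults are hypothetical. Omitting `-d` corresponds to driving from the webcam.

```python
import shlex

def build_inference_cmd(source, driving=None, pasteback=True):
    """Build the inference.py command line for a given source image.

    Leaving `driving` as None omits -d, so frames come from the webcam.
    """
    cmd = ["python", "inference.py", "-s", source]
    if driving is not None:
        cmd += ["-d", driving]
    if not pasteback:
        cmd.append("--no_flag_pasteback")
    return cmd

print(shlex.join(build_inference_cmd(
    "assets/examples/source/s9.jpg",
    "assets/examples/driving/d0.mp4",
    pasteback=False,
)))
# → python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
```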
-**More interesting results can be found in our [Homepage](https://liveportrait.github.io)** 😊
### 4. Gradio interface
@@ -132,13 +110,3 @@ Below are the results of inferring one frame on an RTX 4090 GPU using the native
## Acknowledgements
We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) repositories, for their open research and contributions.
-## Citation 💖
-If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
-```bibtex
-@article{guo2024live,
- title = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
- author = {Jianzhu Guo and Dingyun Zhang and Xiaoqiang Liu and Zhizhou Zhong and Yuan Zhang and Pengfei Wan and Di Zhang},
- year = {2024},
- journal = {arXiv preprint:2407.03168},
-}
-```