Mirror of https://github.com/KwaiVGI/LivePortrait.git

doc: update readme

parent 5c2cd63937
commit 102f458f27
````
@@ -248,6 +248,9 @@ And many more amazing contributions from our community!

## Acknowledgements 💐

We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface) and [X-Pose](https://github.com/IDEA-Research/X-Pose) repositories, for their open research and contributions.

## Ethics Considerations 🛡️

Portrait animation technologies come with social risks, particularly the potential for misuse in creating deepfakes. To mitigate these risks, it’s crucial to follow ethical guidelines and adopt responsible usage practices. At present, the synthesized results contain visual artifacts that may help in detecting deepfakes. Please note that we do not assume any legal responsibility for the use of the results generated by this project.

## Citation 💖

If you find LivePortrait useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:

```bibtex
````
````
@@ -100,6 +100,9 @@ def squeeze_tensor_to_numpy(tensor):

def dct2device(dct: dict, device):
    for key in dct:
        if isinstance(dct[key], torch.Tensor):
            dct[key] = dct[key].to(device)
        else:
            dct[key] = torch.tensor(dct[key]).to(device)
    return dct
````
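For readers skimming the diff, here is a minimal, self-contained usage sketch of `dct2device`. The sample dictionary, its key names, and the device selection below are illustrative assumptions, not part of the commit:

```python
import numpy as np
import torch


def dct2device(dct: dict, device):
    # Same helper as in the hunk above: move every value onto `device`,
    # converting non-tensor values (e.g. numpy arrays) first.
    for key in dct:
        if isinstance(dct[key], torch.Tensor):
            dct[key] = dct[key].to(device)
        else:
            dct[key] = torch.tensor(dct[key]).to(device)
    return dct


# Hypothetical inputs: one value is already a tensor, the other is a numpy array.
batch = {
    "kp_source": torch.zeros(1, 21, 3),                # moved as-is
    "exp_delta": np.zeros((1, 63), dtype=np.float32),  # wrapped via torch.tensor(...)
}

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = dct2device(batch, device)

for name, value in batch.items():
    print(name, value.device, tuple(value.shape))
```

Note that `torch.tensor(...)` copies non-tensor inputs, so plain lists or numpy arrays in the dictionary come back as fresh tensors on the target device, while values that are already tensors are simply moved.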