💡 We also have other video generation projects that may interest you ✨.
Open-Sora Plan: Open-Source Large Video Generation Model
Bin Lin, Yunyang Ge, Xinhua Cheng, et al.
MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
Shenghai Yuan, Jinfa Huang, Yujun Shi, et al.
ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation
Shenghai Yuan, Jinfa Huang, Yongqi Xu, et al.
- ⏳⏳⏳ Release the full code & datasets & weights.
- `[2024.12.26]` 🔥 We release the cache inference code for ConsisID, powered by TeaCache. Thanks @LiewFeng for his help.
- `[2024.12.24]` 🔥 We release the parallel inference code for ConsisID, powered by xDiT. Thanks @feifeibear for his help.
- `[2024.12.22]` 🤗 ConsisID will be merged into diffusers in the next version. For now, please run `pip install git+https://github.com/SHYuanBest/ConsisID_diffusers.git` to install the diffusers dev version. We have also reorganized the code and weight configs, so it is best to update your local files if you cloned them previously.
- `[2024.12.09]` 🔥 We release the test set and metric calculation code used in the paper, so you can now measure the metrics on your own machine. Please refer to this guide for more details.
- `[2024.12.08]` 🔥 The code for data preprocessing is out. It is used to obtain the training data required by ConsisID and supports multi-ID annotation. Please refer to this guide for more details.
- `[2024.12.04]` Thanks @shizi for providing 🤗Windows-ConsisID and 🟣Windows-ConsisID, which make it easy to run ConsisID on Windows.
- `[2024.12.01]` 🔥 We provide the full text prompts corresponding to all the videos on the project page. Click here to get them and try the demo.
- `[2024.11.30]` 🤗 We have fixed the Hugging Face demo; welcome to try it.
- `[2024.11.29]` 🏃‍♂️ The current code and weights are early versions; the differences from the latest arXiv version can be viewed here. We will release the full code in the next few days.
- `[2024.11.28]` Thanks @camenduru for providing the Jupyter Notebook and @Kijai for providing the ComfyUI extension ComfyUI-ConsisIDWrapper. If you find related work, please let us know.
- `[2024.11.27]` 🏃‍♂️ Due to policy restrictions, we only open-source part of the dataset. You can download it by clicking here. We will release the data processing code in the next few days.
- `[2024.11.26]` 🔥 We release the arXiv paper for ConsisID; you can click here for more details.
- `[2024.11.22]` 🔥 All code & datasets are coming soon! Stay tuned!
Identity-Preserving Text-to-Video Generation. (Some of the best prompts are available here.) Or you can click here to watch the demo video.
!pip install git+https://github.com/SHYuanBest/ConsisID_diffusers.git
import torch
from diffusers import ConsisIDPipeline
from diffusers.pipelines.consisid.consisid_utils import prepare_face_models, process_face_embeddings_infer
from diffusers.utils import export_to_video
from huggingface_hub import snapshot_download
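# Download the pretrained ConsisID weights (pipeline + face models) to a local folder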
snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="BestWishYsh/ConsisID-preview")
face_helper_1, face_helper_2, face_clip_model, face_main_model, eva_transform_mean, eva_transform_std = (
prepare_face_models("BestWishYsh/ConsisID-preview", device="cuda", dtype=torch.bfloat16)
)
pipe = ConsisIDPipeline.from_pretrained("BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16)
pipe.to("cuda")
# ConsisID works well with long and well-described prompts. Make sure the face in the image is clearly visible (e.g., preferably half-body or full-body).
prompt = "The video captures a boy walking along a city street, filmed in black and white on a classic 35mm camera. His expression is thoughtful, his brow slightly furrowed as if he's lost in contemplation. The film grain adds a textured, timeless quality to the image, evoking a sense of nostalgia. Around him, the cityscape is filled with vintage buildings, cobblestone sidewalks, and softly blurred figures passing by, their outlines faint and indistinct. Streetlights cast a gentle glow, while shadows play across the boy's path, adding depth to the scene. The lighting highlights the boy's subtle smile, hinting at a fleeting moment of curiosity. The overall cinematic atmosphere, complete with classic film still aesthetics and dramatic contrasts, gives the scene an evocative and introspective feel."
image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/refs%2Fpr%2F406/diffusers/consisid/consisid_input.png?download=true"
id_cond, id_vit_hidden, image, face_kps = process_face_embeddings_infer(
face_helper_1,
face_clip_model,
face_helper_2,
eva_transform_mean,
eva_transform_std,
face_main_model,
"cuda",
torch.bfloat16,
image,
is_align_face=True,
)
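# Generate the identity-preserving video, conditioned on the prompt and the identity features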
video = pipe(
image=image,
prompt=prompt,
num_inference_steps=50,
guidance_scale=6.0,
use_dynamic_cfg=False,
id_vit_hidden=id_vit_hidden,
id_cond=id_cond,
kps_cond=face_kps,
generator=torch.Generator("cuda").manual_seed(42),
)
export_to_video(video.frames[0], "output.mp4", fps=8)
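By default the pipeline produces 49 frames, which `export_to_video` writes out at 8 FPS (about 6 seconds of video); see the memory notes below for the hardware this requires.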
We highly recommend trying out our web demo with the following command, which incorporates all features currently supported by ConsisID. We also provide an online demo on Hugging Face Spaces.
python app.py
Alternatively, run inference from the command line:
python infer.py --model_path BestWishYsh/ConsisID-preview
Warning: even with the same seed and prompt, results will differ across machines.
ConsisID has high requirements for prompt quality. You can use GPT-4o to refine the input text prompt; an example follows (original prompt: "a man is playing guitar.")
a man is playing guitar.
Change the sentence above to something like this (add some facial changes, even if they are minor; don't make the sentence too long):
The video features a man standing next to an airplane, engaged in a conversation on his cell phone. He is wearing sunglasses and a black top, and he appears to be talking seriously. The airplane has a green stripe running along its side, and there is a large engine visible behind him. The man seems to be standing near the entrance of the airplane, possibly preparing to board or just having disembarked. The setting suggests that he might be at an airport or a private airfield. The overall atmosphere of the video is professional and focused, with the man's attire and the presence of the airplane indicating a business or travel context.
Some sample prompts are available here.
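If you want to script this refinement step, here is a minimal sketch using the `openai` Python package (the instruction text and model choice are illustrative assumptions, not the exact setup we used):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical refinement instruction; tune the wording to your needs
instruction = (
    "Expand this video prompt with rich scene detail and some minor facial "
    "expressions or changes. Keep it to one short paragraph:"
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"{instruction}\n\na man is playing guitar."}],
)
print(response.choices[0].message.content)  # use this as the ConsisID prompt
```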
ConsisID requires about 44 GB of GPU memory to decode 49 frames (6 seconds of video at 8 FPS) with an output resolution of 720x480 (W x H), which makes it impossible to run on consumer GPUs or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint. For replication, you can refer to this script.
| Feature (cumulative; each row adds to the previous) | Max Memory Allocated | Max Memory Reserved |
|---|---|---|
| - | 37 GB | 44 GB |
| enable_model_cpu_offload | 22 GB | 25 GB |
| enable_sequential_cpu_offload | 16 GB | 22 GB |
| vae.enable_slicing | 16 GB | 22 GB |
| vae.enable_tiling | 5 GB | 7 GB |
# Turn these on if you don't have multiple GPUs or enough GPU memory (e.g., an H100);
# each call applies one optimization from the table above.
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
Warning: these optimizations increase inference time and may also reduce output quality.
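To reproduce the table's numbers on your own machine, here is a minimal sketch (assuming it runs right after the `pipe(...)` call above):

```python
import torch

# Peak CUDA memory observed since process start (or the last reset)
print(f"max memory allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
print(f"max memory reserved:  {torch.cuda.max_memory_reserved() / 1024**3:.2f} GB")

# Reset the peaks before benchmarking the next optimization combination
torch.cuda.reset_peak_memory_stats()
```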
xDiT is a scalable inference engine for Diffusion Transformers (DiTs) on multi-GPU clusters. It has successfully provided low-latency parallel inference solutions for a variety of DiT models. For example, to generate a video with 6 GPUs, you can use the following commands:
cd tools/parallel_inference
bash run.sh
# or: bash run_usp.sh
TeaCache is a training-free caching approach that estimates and leverages the fluctuating differences among model outputs across timesteps, thereby accelerating inference. For example, you can use the following commands:
cd tools/cache_inference
bash run.sh
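For intuition, here is a toy sketch of the caching criterion (illustrative names only, not the actual TeaCache API): it accumulates the relative change of the transformer's input across timesteps and reuses the cached output while the accumulated change stays below a threshold.

```python
import torch

def should_recompute(curr, prev, accum, threshold=0.1):
    """Toy TeaCache-style criterion: returns (recompute, new_accum)."""
    if prev is None:  # first timestep: nothing cached yet
        return True, 0.0
    rel = ((curr - prev).abs().mean() / prev.abs().mean()).item()
    accum += rel      # accumulate relative L1 change across timesteps
    if accum >= threshold:
        return True, 0.0   # drift is large: recompute and reset
    return False, accum    # drift is small: reuse the cached output
```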
We recommend the following requirements.
# 0. Clone the repo
git clone --depth=1 https://github.com/PKU-YuanGroup/ConsisID.git
cd ConsisID
# 1. Create conda environment
conda create -n consisid python=3.11.0
conda activate consisid
# 2. Install PyTorch and other dependencies using conda (choose one CUDA version)
# CUDA 11.8
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=11.8 -c pytorch -c nvidia
# CUDA 12.1
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.1 -c pytorch -c nvidia
# 3. Install pip dependencies
pip install -r requirements.txt
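After installation, you can quickly verify that the CUDA build of PyTorch is active:

```python
import torch

print(torch.__version__)          # expect 2.5.1
print(torch.cuda.is_available())  # expect True on a CUDA machine
```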
The weights are available at 🤗HuggingFace and 🟣WiseModel, and will be downloaded automatically when running `app.py` or `infer.py`; alternatively, you can download them with the following commands.
# way 1
# if you are in mainland China, run this first: export HF_ENDPOINT=https://hf-mirror.com
cd util
python download_weights.py
# way 2
# if you are in mainland China, run this first: export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type model \
BestWishYsh/ConsisID-preview \
--local-dir ckpts
# way 3
git lfs install
git clone https://www.wisemodel.cn/SHYuanBest/ConsisID-Preview.git
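If you prefer Python, the Hugging Face download (way 2) can also be done with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Equivalent to way 2; set HF_ENDPOINT as above if you need the mirror
snapshot_download(repo_id="BestWishYsh/ConsisID-preview", local_dir="ckpts")
```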
Once ready, the weights will be organized in this format:
📦 ckpts/
├── 📂 data_process/
├── 📂 face_encoder/
├── 📂 scheduler/
├── 📂 text_encoder/
├── 📂 tokenizer/
├── 📂 transformer/
├── 📂 vae/
├── 📄 configuration.json
└── 📄 model_index.json
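A quick, optional sanity check (paths taken from the tree above) that the download completed:

```python
from pathlib import Path

ckpts = Path("ckpts")
expected = [
    "data_process", "face_encoder", "scheduler", "text_encoder",
    "tokenizer", "transformer", "vae", "configuration.json", "model_index.json",
]
missing = [name for name in expected if not (ckpts / name).exists()]
print("all weights in place" if not missing else f"missing: {missing}")
```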
Please refer to this guide for how to obtain the training data required by ConsisID. If you want to train a text-to-image or text-to-video generation model, you need to arrange the dataset in the following format:
📦 datasets/
├── 📂 captions/
│   ├── 📄 dataname_1.json
│   └── 📄 dataname_2.json
├── 📂 dataname_1/
│   ├── 📂 refine_bbox_jsons/
│   ├── 📂 track_masks_data/
│   └── 📂 videos/
├── 📂 dataname_2/
│   ├── 📂 refine_bbox_jsons/
│   ├── 📂 track_masks_data/
│   └── 📂 videos/
├── ...
└── 📄 total_train_data.txt
First, set the hyperparameters:
- environment (e.g., CUDA): deepspeed_configs
- training arguments (e.g., batch size): train_single_rank.sh or train_multi_rank.sh
Then run one of the following scripts to start training:
# For single rank
bash train_single_rank.sh
# For multi rank
bash train_multi_rank.sh
We found some plugins created by community developers. Thanks for their efforts:
- ComfyUI Extension. ComfyUI-ConsisIDWrapper (by @Kijai).
- Jupyter Notebook. Jupyter-ConsisID (by @camenduru).
- Windows Docker. 🤗Windows-ConsisID and 🟣Windows-ConsisID (by @shizi).
- Diffusers. Diffusers-ConsisID (thanks @arrow, @yiyixuxu, @hlky and @stevhliu for their help).
- xDiT. xDiT-ConsisID (thanks @feifeibear for his help).
- TeaCache. TeaCache-ConsisID (thanks @LiewFeng for his help).
If you find related work, please let us know.
We release a subset of the data used to train ConsisID. The dataset is available at HuggingFace, or you can download it with the following command. Some samples can be found on our Project Page.
huggingface-cli download --repo-type dataset \
BestWishYsh/ConsisID-preview-Data \
--local-dir BestWishYsh/ConsisID-preview-Data
We release the data used for evaluation in ConsisID, which is available at HuggingFace. Please refer to this guide for how to evaluate a customized model.
- This project wouldn't be possible without the following open-sourced repositories: Open-Sora Plan, CogVideoX, EasyAnimate, CogVideoX-Fun, IP-Adapter, PhotoMaker, UniPortrait.
- The majority of this project is released under the Apache 2.0 license as found in the LICENSE file.
- The CogVideoX-5B model (Transformers module) is released under the CogVideoX LICENSE.
- The service is a research preview. Please contact us if you find any potential violations. ([email protected])
If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation.
@article{yuan2024identity,
title={Identity-Preserving Text-to-Video Generation by Frequency Decomposition},
  author={Yuan, Shenghai and Huang, Jinfa and He, Xianyi and Ge, Yunyang and Shi, Yujun and Chen, Liuhan and Luo, Jiebo and Yuan, Li},
journal={arXiv preprint arXiv:2411.17440},
year={2024}
}