
Must initialize from damo-vilab/text-to-video-ms-1.7b? #17

Open
Sutongtong233 opened this issue Apr 9, 2024 · 6 comments
@Sutongtong233 commented Apr 9, 2024
Hi, I found that here:
https://github.com/Picsart-AI-Research/StreamingT2V/blame/c1b8068bcbcdbbfa0dd0df3371d3c93a1f5132de/t2v_enhanced/model_init.py#L71C7-L71C7

def init_streamingt2v_model(ckpt_file, result_fol):
    ...
    cli = CustomCLI(VideoLDM)

It seems that VideoLDM must be initialized from damo-vilab/text-to-video-ms-1.7b (the config sets pipeline_repo: damo-vilab/text-to-video-ms-1.7b). Moreover, ckpt_file, which is the path to streaming_t2v.ckpt, is only appended to sys.argv and is never passed to any function.
I am confused about this piece of code and hope to get your explanation, thanks :)

@hpoghos (Collaborator) commented Apr 9, 2024

Hi @Sutongtong233,
The StreamingT2V model is loaded at line 105; you can check that it does indeed use streaming_t2v.ckpt.
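For readers puzzled by the same pattern, here is a minimal sketch of the mechanism the question describes: appending the checkpoint path to sys.argv so a LightningCLI-style parser picks it up later. The flag names below are assumptions for illustration, not the repo's actual arguments.

```python
import sys

def build_cli_args(ckpt_file, result_fol):
    """Sketch: extend the program's argv with CLI-style flags so that a
    LightningCLI subclass can consume them when it parses sys.argv.
    The flag names '--ckpt' and '--result_fol' are hypothetical."""
    return sys.argv[:1] + [
        "--ckpt", ckpt_file,
        "--result_fol", result_fol,
    ]

args = build_cli_args("streaming_t2v.ckpt", "results/")
```

So even though ckpt_file never appears as an explicit function argument, it still reaches the model-loading code through the parsed CLI arguments.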

@zeroCAY commented Apr 10, 2024

I have the same question. I already have streaming_t2v.ckpt, but it still asks to download laion/CLIP-ViT-H-14-laion2B-s32B-b79K from Hugging Face.
Since my server can't connect to Hugging Face for direct downloads, is this download necessary? Is it possible to run the whole process with just streaming_t2v.ckpt?

@Mike001-wq commented Apr 10, 2024

> I have the same question. I already have streaming_t2v.ckpt, but it still asks to download laion/CLIP-ViT-H-14-laion2B-s32B-b79K from Hugging Face. Since my server can't connect to Hugging Face for direct downloads, is this download necessary? Is it possible to run the whole process with just streaming_t2v.ckpt?

I solved this problem by changing image_embedder.py line 75:

model, _, _ = open_clip.create_model_and_transforms(
    arch,
    device=torch.device("cpu"),
    # pretrained=version,
    pretrained="./damo-vilab/open_clip_pytorch_model.bin",
)

The pretrained parameter should point to your local copy of the file, downloaded from Hugging Face in advance.
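The workaround above can be generalized so the code works both online and offline: prefer a local weights file when it exists, and fall back to the Hub identifier otherwise. This is a sketch under assumptions; the path and identifier below are examples, not values from the repo.

```python
import os

def resolve_pretrained(version, local_path):
    """Return local_path if that file exists on disk, otherwise fall back
    to the Hub identifier (e.g. 'laion2b_s32b_b79k')."""
    return local_path if os.path.isfile(local_path) else version

# Intended use (open_clip call shown for context only, not executed here):
# model, _, _ = open_clip.create_model_and_transforms(
#     arch,
#     device=torch.device("cpu"),
#     pretrained=resolve_pretrained(version, "./damo-vilab/open_clip_pytorch_model.bin"),
# )
```

With this, machines that have the file cached never touch the network, while fresh setups still download as before.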

@zeroCAY commented Apr 11, 2024


Thanks, I solved the problem using your method!

@kunkun-zhu

Hi @Sutongtong233,
Have you solved the problem?
I have the same question: "OSError: Cannot load model damo-vilab/text-to-video-ms-1.7b: model is not cached locally", but I can't connect to Hugging Face.

@ffhelly commented May 7, 2024

> Hi, have you solved the problem? I have the same question: "OSError: Cannot load model damo-vilab/text-to-video-ms-1.7b: model is not cached locally", but I can't connect to Hugging Face.

Pull a copy from a mirror site to your local machine, or download it through a proxy.
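For anyone who can't reach huggingface.co directly, the mirror approach can be set up via environment variables before any huggingface_hub or transformers import. The mirror URL below is an example, not an endorsement; use whichever mirror you trust.

```python
import os

# Route Hugging Face Hub downloads through a mirror endpoint.
# Must be set BEFORE importing huggingface_hub / transformers.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

# Alternatively, once all models are already in the local cache, force
# offline mode so no network access is attempted at all:
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```

With offline mode enabled, any model that is genuinely missing from the cache will fail fast with a clear error instead of hanging on a connection attempt.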
