I tried to run evaluate.py with the following example:
python evaluate.py --answer_qs \
--model_name bliva_vicuna \
--img_path images/example.jpg \
--question "what is this image about?"
and got this error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/rafael/repos/BLIVA/bliva/models/__init__.py", line 144, in load_model_and_preprocess
model = model_cls.from_pretrained(model_type=model_type)
File "/home/rafael/repos/BLIVA/bliva/models/base_model.py", line 70, in from_pretrained
model = cls.from_config(model_cfg)
File "/home/rafael/repos/BLIVA/bliva/models/bliva_vicuna7b.py", line 759, in from_config
model = cls(
File "/home/rafael/repos/BLIVA/bliva/models/bliva_vicuna7b.py", line 70, in __init__
self.llm_tokenizer = LlamaTokenizer.from_pretrained(llm_model, use_fast=False, truncation_side="left")
File "/home/rafael/anaconda3/envs/bliva/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1813, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "/home/rafael/anaconda3/envs/bliva/lib/python3.9/site-packages/transformers/utils/hub.py", line 429, in cached_file
resolved_file = hf_hub_download(
File "/home/rafael/anaconda3/envs/bliva/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
validate_repo_id(arg_value)
File "/home/rafael/anaconda3/envs/bliva/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'path to vicuna checkpoint'.
BLIVA Vicuna is defined in bliva_vicuna7b.yaml. However, the llm_model checkpoint is not defined there. What model do you recommend using in each case?
I tried setting llm_model to mlpc-lab/BLIVA_Vicuna, but it still doesn't work. Any suggestion?
Thank you for your interest in our work. Unfortunately, the model is not integrated with Hugging Face in a way that lets you load it by name alone. We will support this soon. For now, you need to download the weights from Hugging Face by git-cloning that repo (i.e., downloading the weights locally) and specify the local path in the yaml file.
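For reference, the edit would look something like this in bliva/configs/models/bliva_vicuna7b.yaml. The llm_model key name comes from the error message above; the local path and the surrounding yaml layout are hypothetical placeholders, so check them against your copy of the file:

```yaml
model:
  # Hypothetical local path -- replace with wherever you downloaded the
  # Vicuna checkpoint. It must point to a real directory containing the
  # tokenizer and config files, not the 'path to vicuna checkpoint'
  # placeholder string that triggers the HFValidationError above.
  llm_model: "/home/username/checkpoints/vicuna-7b"
```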
I have the same problem.
I followed the instructions on GitHub:
I cloned the repository.
I installed the dependencies with pip install -e .
I downloaded the model from Hugging Face using wget and put it into a new subfolder called model.
I edited the config in ./bliva/configs/models/bliva_vicuna7b.yaml and put the absolute path there, which is /home/username/VQA/BLIVA/model/
I tried with the filename of the weights and without it, and with a trailing slash and without.
I always get the following error when I run evaluate.py:
OSError: Incorrect path_or_model_id: 'path to vicuna checkpoint'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
Thank you for your interest in our work. Unfortunately, wget is not the proper way to download from a Hugging Face repo. The proper way is described here: #19 (comment). Also make sure your weight path includes the file itself, ending in .pth, i.e. bliva_vicuna7b.pth.
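As a quick sanity check before re-running evaluate.py, you can verify the two paths described above: a Vicuna checkpoint directory and a BLIVA weight path ending in bliva_vicuna7b.pth. This is a minimal sketch; the file names it looks for (tokenizer.model, config.json) are typical of Hugging Face Llama checkpoints, and the example paths are placeholders:

```python
from pathlib import Path

def check_paths(llm_model_dir, bliva_weight):
    """Return a list of problems with the configured checkpoint paths."""
    problems = []
    llm = Path(llm_model_dir)
    # LlamaTokenizer.from_pretrained needs a real directory, not the
    # 'path to vicuna checkpoint' placeholder left in the yaml file.
    if not llm.is_dir():
        problems.append(f"llm_model is not a directory: {llm}")
    else:
        for name in ("tokenizer.model", "config.json"):
            if not (llm / name).exists():
                problems.append(f"missing {name} in {llm}")
    w = Path(bliva_weight)
    # The weight path must include the file itself, ending in .pth.
    if w.suffix != ".pth":
        problems.append(f"weight path should end in .pth: {w}")
    elif not w.is_file():
        problems.append(f"weight file not found: {w}")
    return problems

if __name__ == "__main__":
    # Placeholder paths -- substitute the values from your yaml file.
    for problem in check_paths("path to vicuna checkpoint",
                               "/home/username/VQA/BLIVA/model/bliva_vicuna7b.pth"):
        print(problem)
```

If the script prints nothing, both paths look plausible and the yaml edit most likely took effect.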