Hello,

The only deviation from the installation instructions is that I used Vicuna v1.5, downloading the weights locally from HF. In `bliva_vicuna7b.yaml` I set `llm_model:` to the locally downloaded weights folder; I am not sure if this is the source of the problem. Otherwise, I used an image of the same size (224). Here is my traceback:

```
File "/home/user/BLIVA/evaluate.py", line 93, in <module>
    main(args)
File "/home/user/BLIVA/evaluate.py", line 85, in main
    eval_one(image, question, model)
File "/home/user/BLIVA/evaluate.py", line 46, in eval_one
    outputs = model.generate({"image": image, "prompt": question})
File "/home/user/conda_env/bliva/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
File "/home/user/BLIVA/bliva/models/bliva_vicuna7b.py", line 382, in generate
    outputs = self.llm_model.generate(
File "/home/user/conda_env/bliva/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
File "/home/user/conda_env/bliva/lib/python3.9/site-packages/transformers/generation/utils.py", line 1447, in generate
    self._validate_generated_length(generation_config, input_ids_length, has_default_max_length)
File "/home/user/conda_env/bliva/lib/python3.9/site-packages/transformers/generation/utils.py", line 1166, in _validate_generated_length
    raise ValueError(
ValueError: Input length of input_ids is 0, but `max_length` is set to -39. This can lead to unexpected behavior. You should consider increasing `max_length` or, better yet, setting `max_new_tokens`.
```

Thanks in advance
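For context, the check that raises here boils down to something like the following sketch. This is a simplified stand-in for the validation in transformers' `generation/utils.py`, not the library's actual code; the function signature and exact condition are illustrative:

```python
def validate_generated_length(input_ids_length, max_length, max_new_tokens=None):
    """Simplified mirror of the length validation transformers runs
    before decoding.

    When max_new_tokens is not set, max_length is treated as an absolute
    token budget (prompt + generated tokens), so it must exceed the
    prompt length or generation cannot produce anything.
    """
    if max_new_tokens is None and input_ids_length >= max_length:
        raise ValueError(
            f"Input length of input_ids is {input_ids_length}, but "
            f"max_length is set to {max_length}. This can lead to "
            f"unexpected behavior. You should consider increasing "
            f"max_length or, better yet, setting max_new_tokens."
        )


# The traceback above corresponds to an empty prompt (input_ids length 0)
# combined with a negative absolute budget:
#   validate_generated_length(0, -39)  ->  ValueError

# Specifying max_new_tokens instead bypasses the absolute-budget check,
# since the generation budget is then relative to the prompt length:
validate_generated_length(0, -39, max_new_tokens=64)  # no error
```

The `max_length` of -39 suggests the model computes it relative to the prompt length, so the real problem is likely that the prompt tokenizes to zero tokens (note `input_ids` length is 0), which may point back to a tokenizer/weights mismatch in the local Vicuna v1.5 folder.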