CANNOT run. Seems something wrong with the model? #4
You're using the GGML model format, but the log indicates that your installed llama-cpp-python expects GGUF. Solution: either convert your model to GGUF format or install an earlier version of llama-cpp-python.

See details at https://pypi.org/project/llama-cpp-python/:

> Warning: Starting with version 0.1.79 the model format has changed from ggmlv3 to gguf. Old model files can be converted using the convert-llama-ggmlv3-to-gguf.py script in llama.cpp.
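A sketch of both fixes, assuming a llama.cpp checkout alongside your models directory and that 0.1.78 is the last release before the GGUF switch (inferred from the PyPI warning above; the exact script flags may differ in your llama.cpp version):

```shell
# Option 1: convert the old GGML file to GGUF with llama.cpp's script
# (paths are examples; adjust to your layout)
python llama.cpp/convert-llama-ggmlv3-to-gguf.py \
    --input models/wizardLM-7B.ggmlv3.q4_0.bin \
    --output models/wizardLM-7B.q4_0.gguf

# Option 2: pin llama-cpp-python to a pre-GGUF release
pip install "llama-cpp-python==0.1.78"
```

After converting, point the server at the new `.gguf` file instead of the `.bin`.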
```
(ggml) [lzb@VKF-NLP-GPU-01 ggml-server-example]$ python3 -m llama_cpp.server --model models/wizardLM-7B.ggmlv3.q4_0.bin
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from models/wizardLM-7B.ggmlv3.q4_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/lzb/.conda/envs/ggml/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/lzb/.conda/envs/ggml/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/server/__main__.py", line 96, in <module>
    app = create_app(settings=settings)
  File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/server/app.py", line 337, in create_app
    llama = llama_cpp.Llama(
  File "/home/lzb/.conda/envs/ggml/lib/python3.8/site-packages/llama_cpp/llama.py", line 340, in __init__
    assert self.model is not None
AssertionError
```
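The "invalid magic number 67676a74" in the log is itself the diagnosis: those bytes are the ASCII string `ggjt`, a GGML-era magic, whereas GGUF files start with the ASCII bytes `GGUF`. A minimal sketch for checking a file before loading it (`detect_model_format` is a hypothetical helper, not part of llama-cpp-python):

```python
def detect_model_format(path):
    """Return 'gguf', 'ggml', or 'unknown' based on the file's 4-byte magic."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"GGUF":          # GGUF container magic
        return "gguf"
    if magic in (b"ggjt", b"ggml", b"ggmf"):  # legacy GGML-family magics
        return "ggml"
    return "unknown"
```

Running this on `models/wizardLM-7B.ggmlv3.q4_0.bin` should report `ggml`, confirming the file needs conversion (or an older llama-cpp-python) before the server can load it.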