configuration confusion and error on ws #1
Comments
@hksk you probably connected to the wrong server. The settings in your Continue VS Code extension should be configured to use the Continue server (which starts when your Continue VS Code extension starts), not ggml-server (llama_cpp.server). Something like this:
(This assumes the Continue server and ggml-server are on the same machine; if not, ggml-server needs to be started on 0.0.0.0:8000.)
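One quick way to tell whether anything is actually listening at the address you configured is a plain TCP probe. The helper below is a generic sketch (not part of Continue or llama_cpp.server); the host and port in the commented example are placeholders you would replace with your own setup:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Hypothetical example: probe a remote ggml-server started with
#   python3 -m llama_cpp.server --model ... --host 0.0.0.0
# print(port_open("192.168.1.50", 8000))
```

If this returns False for the address in your extension settings, the extension was never going to reach that server, regardless of which one it is.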
@longyee What you've said is correct, and @hksk, sorry for not seeing this earlier. We had just changed the config file format at the time, but you can now open up
Hi guys, I'm just trying to run this and I got a few errors:
btw, I'm running the model with:
python3 -m llama_cpp.server --model models/wizardLM-7B.ggmlv3.q4_0.bin --host 0.0.0.0
because the server is running on another computer.
Thanks for your project, it seems great!