M2 MacBook Air Internal Error #73
Comments
I got the same HTTP 500 error on both the front end and the server.
@SIKtt Even then it does not work for me.
Same here.
Terminal log:
INFO: Started server process [2986]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:3001/ (Press CTRL+C to quit)
llama-gpt-llama-gpt-ui-mac-1 | making request to http://host.docker.internal:3001/v1/models
INFO: 127.0.0.1:49485 - "GET /v1/models HTTP/1.1" 200 OK
llama-gpt-llama-gpt-ui-mac-1 | making request to http://host.docker.internal:3001/v1/models
INFO: 127.0.0.1:49488 - "GET /v1/models HTTP/1.1" 200 OK
llama-gpt-llama-gpt-ui-mac-1 | {
llama-gpt-llama-gpt-ui-mac-1 | id: '/models/llama-2-7b-chat.bin',
llama-gpt-llama-gpt-ui-mac-1 | name: 'Llama 2 7B',
llama-gpt-llama-gpt-ui-mac-1 | maxLength: 12000,
llama-gpt-llama-gpt-ui-mac-1 | tokenLimit: 4000
llama-gpt-llama-gpt-ui-mac-1 | } 'You are a helpful and friendly AI assistant. Respond very concisely.' 0.5 '' [ { role: 'user', content: 'hi' } ]
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /private/var/folders/nw/8b8162fj3sq3667m79wm49cw0000gn/T/pip-install-hcpw_ie5/llama-cpp-python_7f3b8275091343838f2dd60c58213caf/vendor/llama.cpp/ggml-metal.m:1094: false
llama-gpt-llama-gpt-ui-mac-1 | [TypeError: fetch failed] {
llama-gpt-llama-gpt-ui-mac-1 | cause: [SocketError: other side closed] {
llama-gpt-llama-gpt-ui-mac-1 | name: 'SocketError',
llama-gpt-llama-gpt-ui-mac-1 | code: 'UND_ERR_SOCKET',
llama-gpt-llama-gpt-ui-mac-1 | socket: {
llama-gpt-llama-gpt-ui-mac-1 | localAddress: '172.18.0.2',
llama-gpt-llama-gpt-ui-mac-1 | localPort: 58896,
llama-gpt-llama-gpt-ui-mac-1 | remoteAddress: '192.168.65.254',
llama-gpt-llama-gpt-ui-mac-1 | remotePort: 3001,
llama-gpt-llama-gpt-ui-mac-1 | remoteFamily: 'IPv4',
llama-gpt-llama-gpt-ui-mac-1 | timeout: undefined,
llama-gpt-llama-gpt-ui-mac-1 | bytesWritten: 587,
llama-gpt-llama-gpt-ui-mac-1 | bytesRead: 0
llama-gpt-llama-gpt-ui-mac-1 | }
llama-gpt-llama-gpt-ui-mac-1 | }
llama-gpt-llama-gpt-ui-mac-1 | }
Full Docker terminal log: https://ctxt.io/2/AABQSslxFQ
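Reading the log above as a chain: `ggml_metal_graph_compute: command buffer 0 failed with status 5` is llama.cpp's Metal backend reporting that the GPU command buffer completed in an error state (in Metal's API, status 5 corresponds to `MTLCommandBufferStatus.error`, often a sign of GPU memory pressure on 8 GB machines), and the `GGML_ASSERT` that follows aborts the Python server process. The `SocketError: other side closed` in the UI container is then just the downstream fetch hitting the dead backend, which the UI surfaces as HTTP 500. As a minimal sketch (this helper is hypothetical, not part of llama-gpt), the UI side could probe the backend's `/v1/models` endpoint to distinguish "backend crashed" from other failures:

```python
# Hypothetical sketch: probe the llama-gpt API before sending chat requests,
# so a dead backend shows up as a clear health-check failure rather than a
# raw "fetch failed" socket error. Uses only the standard library.
import urllib.request
import urllib.error


def backend_alive(url: str = "http://localhost:3001/v1/models",
                  timeout: float = 2.0) -> bool:
    """Return True if the API server answers the /v1/models probe with 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / reset covers the "other side closed" case
        # seen in the log when the llama.cpp process has aborted.
        return False
```

If the probe fails right after a request, the backend process itself died (as in the Metal assert above), and restarting the container or reducing GPU offload is the thing to try, not retrying the request.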