
Ollama reports an error when running the AI model using GPU #4279

Closed
xiaomo0925 opened this issue May 9, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@xiaomo0925

What is the issue?

When I use the command:

docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

the following error occurs:

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown

How should we handle this issue?

OS

No response

GPU

Nvidia

CPU

Intel

Ollama version

No response

@xiaomo0925 xiaomo0925 added the bug Something isn't working label May 9, 2024
@dhiltgen
Collaborator

This error appears to be coming from Docker or the Nvidia container runtime. It looks like it happens before ollama starts running.

Please make sure you have GPU support configured and working with Docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
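For reference, a minimal sketch of the setup steps from that guide, assuming a Debian/Ubuntu host with the NVIDIA package repository already added (Docker Desktop and other distributions differ, so follow the guide for those):

% sudo apt-get update
% sudo apt-get install -y nvidia-container-toolkit
% sudo nvidia-ctk runtime configure --runtime=docker   # register the NVIDIA runtime with the Docker daemon
% sudo systemctl restart docker                        # restart so the daemon picks up the new runtime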

A simple test to verify things are working properly at the Docker + Nvidia level without Ollama involved is:

% docker run --gpus all ubuntu nvidia-smi
Tue May 21 23:54:36 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.105.01   Driver Version: 515.105.01   CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 35%   31C    P8    N/A /  19W |      1MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
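Once that test prints the GPU table, the run command from this issue can be retried unchanged; checking the container log is a quick way to confirm Ollama detected the GPU (this simply reuses the exact command from the report above):

% docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
% docker logs ollama   # the startup log should show the GPU being detected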
