Ollama reports an error when running the AI model using GPU #4279
Labels: bug (Something isn't working)
This error appears to be coming from Docker or the NVIDIA container runtime, and it happens before Ollama starts running. Please make sure you have GPU support configured and working with Docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html A simple test to verify things are working properly at the Docker + NVIDIA level, without Ollama involved, is:
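One such check (the CUDA image tag below is an illustrative choice, not prescribed by the reply; any CUDA base image available for your driver version should work):

```shell
# Run nvidia-smi inside a throwaway CUDA container; if GPU access is
# wired up correctly at the Docker + NVIDIA level, this prints the GPU
# table without Ollama being involved at all.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this command fails with the same `libnvidia-ml.so.1` error, the problem is in the Docker/NVIDIA setup rather than in Ollama.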
What is the issue?
When I use the command:
"docker run --gpus all -d -v f:/ai/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama"
the following error occurs:
"docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown."
How should we handle this issue?
OS: No response
GPU: Nvidia
CPU: Intel
Ollama version: No response
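The "libnvidia-ml.so.1: cannot open shared object file" message usually means the NVIDIA Container Toolkit is not installed or not registered with Docker. A minimal remediation sketch for a Debian/Ubuntu Linux host (repository setup is omitted; see the install guide linked in the reply above). Note that the `f:/ai/ollama` path suggests a Windows host with Docker Desktop, where GPU passthrough is configured differently, so these commands are an assumption about a Linux setup:

```shell
# Install the NVIDIA Container Toolkit
# (assumes the NVIDIA apt repository is already configured per the install guide)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

After restarting Docker, re-running the original `docker run --gpus all … ollama/ollama` command should no longer hit the container-init hook failure.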