XLA_GPU and no GPU utilization #38
When I run training, I see the process use GPU memory in `nvidia-smi`, but there is 0% GPU utilization and training is super slow. When I look at the devices returned by `libml/utils.py:get_available_gpus`, the `local_device_protos` are all `XLA_GPU` instead of `GPU`. Any ideas on what might be going on here and how to fix it? Presumably this is some kind of version issue? (Apologies that this is a more general TF question, but I wasn't able to find a working fix by Googling.)
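For reference, a minimal sketch of the check in question, assuming TensorFlow 1.x (where `device_lib.list_local_devices()` is the call that helpers like `get_available_gpus` typically wrap):

```python
from tensorflow.python.client import device_lib

# List every device TensorFlow registered on this machine.
local_device_protos = device_lib.list_local_devices()

# On a healthy CUDA setup this includes entries with device_type "GPU";
# seeing only "XLA_GPU" entries typically means the installed TF build
# doesn't match the local CUDA/cuDNN versions, so the regular GPU
# device never gets registered and ops fall back to the CPU.
print([d.device_type for d in local_device_protos])
```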
Comments

Sorry, I have no idea; this looks like a TensorFlow question.
I had a similar issue and was able to resolve it by using conda to install TensorFlow (and its dependencies, cuDNN etc.) instead of pip.
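A quick way to verify that such a reinstall worked, as a sketch (again assuming TF 1.x, where `tf.test.is_gpu_available()` is still available):

```python
import tensorflow as tf

# True only when a plain "GPU" device (not just "XLA_GPU") is
# registered, i.e. the CUDA/cuDNN libraries conda pulled in actually
# match the installed TensorFlow build.
print(tf.test.is_gpu_available())
```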