Question: GPU support on apple silicon #196
Hi Jens, I did just get it to work on a M3 Max MacBook Pro.
When running Platform 'METAL' is experimental and not all JAX functionality may be correctly supported! systemMemory: 128.00 GB [0 1 2 3 4 5 6 7 8 9] Similarly, when I run 2024-01-31 09:44:33,393 Running colabfold 1.5.5 (a00ce1bcc477491d7693e3816d21ea3fc2cf40fd) WARNING: You are welcome to use the default MSA server, however keep in mind that it's a 2024-01-31 09:44:33.410904: W pjrt_plugin/src/mps_client.cc:563] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported! systemMemory: 128.00 GB 2024-01-31 09:44:34,990 Running on GPU Hope this will also work for you. Best |
Thanks Philipp!
That's a new one! lol
At least there are no errors now, but I get pTMs and ipTMs of 0.99 or similar, compared to pTMs/ipTMs of around 0.4 when using a Tesla A100. The runtime was also 2023.9 s vs. 65.8 s for 159 amino acids.
I am trying to use the GPU on my M2 Max MacBook Pro for ColabFold predictions. I installed jax-metal 0.0.4, which gave me an incompatibility with haiku. After upgrading haiku to 0.0.9, I now get the error:

```
python3.10[7216:52122] -[MPSGraphExecutable initWithMLIRBytecode:executableDescriptor:]: unrecognized selector sent to instance 0x2b5871a90
```

as well as:

```
Could not predict FFAR2_HUMAN. Not Enough GPU memory? Caught an unknown exception!
```
Has anyone got localcolabfold running using Apple silicon GPUs?
Thanks!
Jens
CSSB Hamburg