Model does not run correctly on CUDA Capability 7.x GPUs #59
If you're interested in stats: works fine on an NVIDIA GeForce RTX 3080, Driver Version 535.183.01.
Could you be specific about what was not working on your side on the V100? How do we recognize that there is a problem? Is a nonsense structure THE indicator of numerical inaccuracies? The mentioned posts with nonsense structures on Quadro RTX 4000 and RTX 2060S were not done in your Docker environment. On my side, predictions look perfect on an old Quadro P3000 6GB for several protein and ligand complexes (i.e. on a 6-year-old ThinkPad laptop with a mobile GPU of compute capability < 8.0). It also works great on an RTX 3090. Other than nonsense structures, what other observation could indicate that we have numerical inaccuracy? Is there a controlled test we could run to identify potential numerical inaccuracy in our setup?
The nonsense structure is the indicator of the problem here - output will look almost random. The problem appears related to bfloat16, which is not supported on older GPUs. We will continue to investigate next week. Interesting to know that it does work on some older GPUs, thanks for the report. Even if the major issue under investigation here isn't present, please note we have not done any large-scale numerical verification of outputs on devices other than A100/H100.
Thank you for the clarification @joshabramson. I will watch for "exploded" structures and report the specifics if it ever happens on one of my GPUs. The P3000 definitely does not natively support BF16 (CUDA capability 6.1); I guess it emulates it via float32 compute. Since it is quite probable that several people will try to run AF3 on their available hardware, here are some details of my setup, where it works perfectly so far. Number of tokens (12 runs so far on that GPU): 167-334 tokens, so the largest bucket size tested was 512. Largest test: 334 tokens. Typical inference speed for < 256 tokens: 150-190 seconds per seed (so typically less than 3 minutes for < 256 tokens). GPU: Quadro P3000, Pascal architecture, Compute Capability 6.1 (ThinkPad P71 laptop). Docker: default setup, NOT using unified memory.
System details were collected with nvidia-smi, nvcc -V, deviceQuery, and neofetch.
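The bucket sizing mentioned above (167-334 tokens landing in a 512-token bucket) can be sketched as a smallest-bucket lookup. This is a minimal illustration; the bucket list below is a hypothetical example, not AF3's actual configuration:

```python
import bisect

# Hypothetical bucket sizes for illustration; AF3's actual defaults may differ.
BUCKETS = (256, 512, 768, 1024)

def bucket_for(num_tokens, buckets=BUCKETS):
    """Return the smallest bucket that fits num_tokens."""
    i = bisect.bisect_left(buckets, num_tokens)
    if i == len(buckets):
        raise ValueError(f"{num_tokens} tokens exceeds the largest bucket")
    return buckets[i]
```

Under these assumed buckets, a 334-token complex pads up to the 512 bucket, while a 167-token one fits in 256.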
We ran the "2PV7" example from the docs on all GPU models available on our cluster with the following results:
Specifically, a ranking score of -99 corresponds to noise/explosion, and a ranking score of 0.67 corresponds to a visually compelling output structure. Update (20.11): added driver/CUDA versions reported by nvidia-smi.
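A quick way to screen a batch of runs for this failure mode is to scan the ranking scores for the -99 sentinel. A minimal sketch, assuming results have been collected into a CSV with a `ranking_score` column (the file layout and column name are assumptions for illustration, not AF3's documented output format):

```python
import csv

def find_exploded_runs(csv_path, threshold=0.0):
    """Return rows whose ranking_score is at or below threshold (e.g. the -99 sentinel)."""
    with open(csv_path) as fh:
        return [row for row in csv.DictReader(fh)
                if float(row["ranking_score"]) <= threshold]
```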
Thanks @jurgjn, this is incredibly useful information! These are the GPU capabilities (see https://developer.nvidia.com/cuda-gpus) for the GPUs mentioned:
Looks like anything with GPU capability < 8.0 produces bad results.
Just to add one more piece of info: I am using an RTX A6000 (capability 8.6) and so far all looks well.
An RTX A5000 (capability 8.6) works well too.
Could more people test with capability 6.x? Based on the result above from @smg3d, it looks like maybe only capability 7.x is broken, while 6.x (and >= 8.0) might be fine. I.e. the current theory:
- capability 6.x: works
- capability 7.x: broken
- capability >= 8.0: works
I wonder if it could be a driver effect? I noticed several people mention they are using older drivers. It might be useful to know which driver and CUDA versions @jurgjn was using on his system. I was using Driver 560.35.03 and CUDA V12.6.77 (actually just upgraded to driver 565 today).
I could now try AF3 on a Quadro P4000 (Pascal) and, like @smg3d reported for the P3000, it works on this GPU. This test was done with the same driver and CUDA versions (565.57.01, cuda_12.5.r12.5) as the tests on the RTX 2060S (Turing) and Quadro RTX 4000 (Turing).
V100 also hits "exploded" structures. NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA Version: 12.6
Quadro RTX 8000 also got exploding structures. Driver Version: 555.42.06, CUDA Version: 12.5
I can confirm that it runs well on a P100 (capability 6.0). So far it has been confirmed to run well on the following 6.x capability GPUs:
- Tesla P100 (6.0)
- Quadro P3000 (6.1)
- Quadro P4000 (6.1)
And so far there have been no reports of "exploded structures" on 6.x capability.
- Add explicit check for compute capability < 6.0
- Keep check for range [7.0, 8.0)
- Update error message to clarify working versions (6.x and 8.x)
- Addresses issue google-deepmind#59
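The check described in that commit amounts to a simple classification of the device's compute capability. A minimal sketch of the logic, based on the reports in this thread (not the actual AlphaFold 3 code):

```python
def capability_status(major: int, minor: int) -> str:
    """Classify a CUDA compute capability per the reports in this thread."""
    if major < 6:
        return "unsupported"                   # explicit check for < 6.0
    if (7, 0) <= (major, minor) < (8, 0):
        return "broken (exploded structures)"  # the [7.0, 8.0) range
    return "working (6.x or >= 8.0)"
```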
I think it would be good for users to be able to use AlphaFold 3 on Pascal GPUs (without requiring them to modify code). The data on this issue strongly suggest that the "exploded structures" problem does not affect Pascal GPUs (compute capability 6.x). Moreover, there are still several clusters with P100s, and these often have zero or very short wait times (compared to the A100s). For example, on one of the Canadian national clusters, AF3 jobs on P100 currently start immediately, whereas jobs on the A100 (on the same cluster) often wait 10-30 minutes in the queue. So for a single inference job on small-medium size protein complexes, we get our predictions back much faster with the P100, despite the inference being ~5x slower (358 s vs 73 s on the tested dimer). I tested and submitted a small PR to allow Pascal GPUs to run without raising the error message.
I got a nice looking structure for the 2PV7 example on an
Following up on the previous comment, I ran some docking simulations on our old cluster, which is a mix of RTX 2080 Ti and GTX 1080 Ti nodes. All ~20 jobs on the 1080s worked OK; all ~20 jobs on the 2080s gave exploded structures and ranking_scores of -99. The 2080s have compute capability 7.5 and the 1080s have 6.1, so this fits the "7.0 <= CC < 8.0 is bad" theory.
Thanks for all the reports and suggestions here. Update from our side: we identified where the issue with bfloat16 vs float32 is for V100; after fixing that, structures are no longer exploded, but numerical accuracy is still not on par with what we expect.
We are investigating these issues with the XLA team, but in the meantime we do not believe V100s are safe to use even without exploding structures. We also tested P100s, which have capability less than 7, and there we can run without any changes (other than switching the flash attention implementation to 'xla') up to 1024 tokens, and with no regression in accuracy compared to A100. However, given the issues we see on V100, we have reservations about removing any restrictions on GPU versions just yet. Users are free to remove the hard error from the code themselves.
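The flash attention switch mentioned above is exposed as a command-line flag. A hypothetical invocation for a Pascal card, where the input and output paths are placeholders for illustration:

```shell
# Run AlphaFold 3 with the XLA flash attention implementation (e.g. on a P100).
# --json_path and --output_dir values are placeholders.
python run_alphafold.py \
  --json_path=fold_input.json \
  --output_dir=af3_output \
  --flash_attention_implementation=xla
```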
Awesome, thank you for the update and for digging into this! Is it easy to say what the bfloat16 "partial fix" for V100s is, in case we want to try some testing on other 7 <= CC < 8 GPUs?
The partial fix is to convert any bfloat16 params to float32 directly after loading them, and to set the flash attention implementation to 'xla'.
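The param-conversion half of that partial fix can be sketched as a pytree map in JAX. This is a minimal sketch of the idea, not the actual AlphaFold 3 code:

```python
import jax
import jax.numpy as jnp

def upcast_bfloat16_params(params):
    """Cast every bfloat16 leaf in a parameter pytree to float32, leaving others untouched."""
    def cast(x):
        if hasattr(x, "dtype") and x.dtype == jnp.bfloat16:
            return x.astype(jnp.float32)
        return x
    return jax.tree_util.tree_map(cast, params)
```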
Originally I came here to make sure the code/model can run distributed over 2 or more GPUs, because my two RTX 4000 Ada cards "only" have 20 GB each (combined, the 40 GB of an A100). The question remains whether the installation documentation could be updated to be less intimidating, by mentioning lower-end hardware and not only $30,000+ irons, e.g. for students to get their feet wet. Thanks
Updates (and some good news):
Does anyone know if inference works on the Apple M4 chip? (Or any Apple M series GPU, for that matter.) |
I appreciate all the helpful resources on this thread.
Thank you!!
Hi @OrangeyO2,
The A40 has compute capability 8.6 and uses the Ampere architecture (the A100 also uses Ampere), so it should be fine. That being said, we haven't done any large-scale tests on that particular GPU type.
The P100 is compute capability 6.0 (Pascal), the A40 is 8.6 (Ampere), and the L40 is 8.9 (Ada Lovelace). As such, I would recommend the A40 or L40, as they will be significantly faster than the P100. They are likely to be OK, but I recommend you run some accuracy tests.
Thanks for the info and for keeping us updated on the status! Are there some generic accuracy tests one can run on different GPU types (that were not specified above) to make sure that this V100 issue is not taking place? Does the V100 issue basically lead to random-looking output no matter the input, or just in specific cases?
@Augustin-Zidek
Has anyone succeeded on a Tesla T4 (capability 7.5)? Driver Version: 550.54.15, CUDA Version: 12.4, --flash_attention_implementation=xla. Is there any way to keep it from predicting "exploded structures"?
I did test it on our T4 cluster (with CUDA 12.6 and
Please avoid the partial fix mentioned above if possible, as it can give less accurate output than expected. We are working on a complete fix and will update on timelines very soon.
Hi, thanks again for this great tool. Is there any news on how we users can make sure our GPUs are OK accuracy-wise? Is the issue discussed here limited to random/obviously wrong predicted structures, or is this GPU accuracy issue more nuanced than that? I am looking for a benchmark to verify the validity of different GPU models.
We are pretty sure CUDA capability 7 GPUs all face the same issue and should not currently be used. CUDA capability 6 or >= 8 are fine. As per the comments above, there are some hacks that can avoid exploding structures for CC 7 GPUs, but then numerical accuracy is not on par with what we expect. Please await the full fix for CC 7 GPUs, which is coming soon.
Great, thanks @joshabramson
Hi all, in 781d8d0 we've added instructions to work around the issue with CUDA Capability 7.x GPUs. The workaround is to set the environment variable described in that commit. We've tested this fix on V100 to confirm it works, but have not tested on other CUDA Capability 7.x GPUs. Can you please give this workaround a try and let us know if it resolves the issue for you?
It would be great to have some external validation of this workaround - we believe it should work on a range of devices but need to have that confirmed.
Thanks for sharing this fix! I have gotten "meaningful looking" results on multiple RTX 2080 Ti GPUs (CC 7.5). Here "meaningful looking" is judged by ipTM, ranking_score, and the 3D models. Definitely not the same bad behavior as without the fix. I haven't done a large-scale careful comparison, though. My interpretation of the fix was to add the environment variable to my sbatch script.
While I'm neither a great expert (just a bioinformatics student) nor some big org, I would have loved to try AlphaFold 3. Long ago I applied for Google Glass 2, and it would have been nice to get ANY feedback instead of hearing of the cancellation in the press half a year or more later. I'm also wondering whether I'd need to take potential AF3 experience from my studies straight to the grave, or what the licensing perspectives are. Thanks
Hi, we tested the fix on an RTX 2080 Ti on Ubuntu 24.04 and it looks good.
Thanks to those who ran with this flag - we just wanted to check that the structures now look reasonable across a few more device types. We will look into a fix that goes into JAX permanently rather than having to add a flag, but that will take longer (and require a JAX version upgrade). As this is no longer believed to be a major issue, we will unpin it, but will leave it open until we tag a new version early next year, because it will still affect people who haven't updated their code.
Driver Version: 550.54.15, CUDA Version: 12.4. Using the 2PV7 sequence as a test, the ranking score is slightly lower than 0.67.
(table of seed / sample / ranking_score values)
Maybe more seeds would be better.
A note from us at Google DeepMind:
We have now tested accuracy on V100 and there are serious issues with the output (it looks like random noise). Users have reported similar issues with the RTX 2060S and Quadro RTX 4000.
For now the only supported and tested devices are A100 and H100.