Hello,
First of all, thank you for all the work you've done on parallelizing AlphaFold2. I'm currently using ParaFold for MSA and template search on CPUs (with the -f flag). However, when I tried export CUDA_VISIBLE_DEVICES="" together with the -g flag (-g or -g False; I'm not sure which usage is correct) in run_alphafold.sh, it still looks like the GPU is doing the work.
source $HOME/software/micromamba/etc/profile.d/micromamba.sh
micromamba activate $HOME/software/micromamba/envs/alphafold
export CUDA_VISIBLE_DEVICES=""
for file in ${PRJ_DIR}/01_fasta_dir/*.fasta; do
    run_alphafold.sh -d $database -i $file \
        -o ${PRJ_DIR}/02_AF2_search_output \
        -m model_1 -p monomer_ptm \
        -f  # also tried adding "-g" or "-g False" (not sure which is correct)
done
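For context, this is the kind of guard I would expect to make the run CPU-only. Note that JAX_PLATFORMS=cpu is a JAX-level environment variable I've seen suggested for JAX-based tools like AlphaFold2, not a ParaFold flag, so treat it as an assumption:

```shell
# Hide all GPUs from CUDA-based libraries; an empty string means
# "no visible devices" (different from leaving the variable unset).
export CUDA_VISIBLE_DEVICES=""
# Extra guard for JAX (AlphaFold2's numerical backend); this is a JAX
# environment variable, not part of ParaFold itself.
export JAX_PLATFORMS=cpu

# Sanity-check the environment before launching the jobs.
if [ -z "${CUDA_VISIBLE_DEVICES}" ] && [ "${JAX_PLATFORMS}" = "cpu" ]; then
    echo "CPU-only environment configured"
else
    echo "warning: GPU may still be visible" >&2
fi
```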
I checked the GPU usage:
$ nvidia-smi
Thu Apr 18 13:15:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:17:00.0 Off | N/A |
| 36% 34C P8 7W / 250W | 543MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce RTX 2080 Ti Off | 00000000:25:00.0 Off | N/A |
| 24% 30C P0 21W / 250W | 0MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 216735 C python 156MiB |
+---------------------------------------------------------------------------------------+
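One subtlety I tried to rule out: an empty CUDA_VISIBLE_DEVICES is different from an unset one, and a launcher script can silently re-export it before the Python process starts. A small check of what the current shell would pass on (standard POSIX shell, nothing ParaFold-specific):

```shell
# "" hides all GPUs from CUDA; an *unset* variable exposes all of them.
gpu_visibility() {
    if [ -z "${CUDA_VISIBLE_DEVICES+set}" ]; then
        echo "all GPUs visible (variable unset)"
    elif [ -z "${CUDA_VISIBLE_DEVICES}" ]; then
        echo "no GPUs visible (empty string)"
    else
        echo "restricted to devices: ${CUDA_VISIBLE_DEVICES}"
    fi
}

gpu_visibility
```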
I also checked the CPU usage. It seems only about 5 CPUs were running.
$ cpu4user.sh
%cpu %mem user
======================
7361.8 7.5 fengxiao
527.4 0 liyulong # this is me
9.9 0 root
$ top -u liyulong
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
220070 liyulong 25 5 721252 111572 1284 R 531.4 0.0 7:28.39 jackhmmer
185580 liyulong 20 0 923760 51784 20584 S 3.2 0.0 0:58.62 node
220092 liyulong 20 0 163708 4160 1588 R 2.6 0.0 0:01.12 top
188285 liyulong 20 0 130768 6624 4492 S 0.6 0.0 2:01.42 wget
185455 liyulong 20 0 1026756 123144 23964 S 0.3 0.0 0:55.26 node
218404 liyulong 20 0 988700 81968 23308 S 0.3 0.0 0:06.10 node
218686 liyulong 25 5 123.3g 1.9g 311612 S 0.3 0.1 0:23.86 run_alphafold.p
160206 liyulong 20 0 130244 1800 944 S 0.0 0.0 0:00.35 screen
160207 liyulong 20 0 116764 3364 1680 S 0.0 0.0 0:00.11 bash
160337 liyulong 20 0 116764 3344 1660 S 0.0 0.0 0:00.10 bash
160440 liyulong 20 0 127572 5020 2556 S 0.0 0.0 0:03.35 zsh
185443 liyulong 20 0 113184 1408 1212 S 0.0 0.0 0:00.00 sh
185563 liyulong 20 0 728112 34716 19720 S 0.0 0.0 1:05.09 node
185623 liyulong 20 0 128236 5812 2752 S 0.0 0.0 0:18.03 zsh
188637 liyulong 20 0 728112 35116 19756 S 0.0 0.0 1:07.38 node
207806 liyulong 20 0 127252 4560 2460 S 0.0 0.0 0:00.25 zsh
207961 liyulong 20 0 127660 5020 2512 S 0.0 0.0 0:00.48 zsh
214561 liyulong 25 5 113312 1632 1328 S 0.0 0.0 0:00.00 bash
218318 liyulong 20 0 160908 2488 1092 S 0.0 0.0 0:00.54 sshd
218319 liyulong 20 0 113316 1724 1416 S 0.0 0.0 0:00.08 bash
218455 liyulong 20 0 728112 33044 19548 S 0.0 0.0 1:01.50 node
218676 liyulong 25 5 113188 1524 1264 S 0.0 0.0 0:00.00 run_alphafold.s
220079 liyulong 20 0 107956 356 280 S 0.0 0.0 0:00.00 sleep
220158 liyulong 20 0 113184 1484 1264 S 0.0 0.0 0:00.00 cpuUsage.sh
220165 liyulong 20 0 107956 356 280 S 0.0 0.0 0:00.00 sleep
275815 liyulong 25 5 227996 19272 3352 S 0.0 0.0 0:36.77 pyth
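To make the "about 5 CPUs" estimate concrete: top reports per-process %CPU relative to a single core, so 100% means one fully busy core. The 531.4% shown for jackhmmer above therefore corresponds to roughly 5.3 cores:

```shell
# Convert top's per-core %CPU to an approximate core count.
cores_from_top_percent() {
    awk -v pct="$1" 'BEGIN { printf "%.1f\n", pct / 100 }'
}

cores_from_top_percent 531.4   # prints 5.3
```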
According to the Quick-start section of the ParaFold docs (https://parafold.sjtu.edu.cn/docs/quick-start/), it should use 20 CPUs for the database search.
So am I calling ParaFold in the wrong way?
Looking forward to your reply. Thank you.
Best regards,
Yulong Li