Decreasing cpu bottleneck when running on gpu #2173
-
I am running code that is very similar to the quickstart code, but with 1000 training images from SpaceNet. I have created a Singularity image out of the Docker image so I can run Raster Vision on a GPU node of an HPC cluster. My code runs very slowly, and when I use nvtop to look at the utilization, I see that the CPU utilization is at 100% and the GPU utilization is at 0%. I saw in the documentation here that it is recommended to set a root temporary directory to avoid re-downloading files. Is this a common fix for CPU bottlenecks? Should the root temporary directory point to a path in the container?
-
Is the GPU even being detected? The doc page you linked is talking specifically about multi-GPU environments, so it shouldn't be relevant here. But to answer the question: yes, the path should be a path accessible from inside the container.
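
A quick way to confirm that from inside the Singularity container is something like the sketch below (it assumes PyTorch is importable there, which it is in the Raster Vision images):

```python
# Minimal GPU-detection check; run inside the container on a GPU node.
import torch

print("CUDA available:", torch.cuda.is_available())
print("Device count:  ", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device name:   ", torch.cuda.get_device_name(0))
```

If this prints `False` on a GPU node, the container invocation itself is likely the problem, e.g. Singularity not being run with its `--nv` flag to expose the NVIDIA driver.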
Low GPU utilization probably means most of the time is being spent reading the data.
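
One rough way to check that is a timing sketch like the one below (hypothetical, not Raster Vision API; `train_loader` and `model` stand in for whatever the pipeline builds):

```python
import time
import torch

def time_epoch(train_loader, model, device="cuda"):
    """Split one epoch's wall time into data-loading vs. GPU compute."""
    load_time = compute_time = 0.0
    t0 = time.perf_counter()
    for x, _ in train_loader:
        t1 = time.perf_counter()
        load_time += t1 - t0              # time spent waiting on the DataLoader
        x = x.to(device)
        model(x)                          # forward pass only, just for timing
        if device == "cuda":
            torch.cuda.synchronize()      # wait for queued GPU work to finish
        t0 = time.perf_counter()
        compute_time += t0 - t1
    print(f"data loading: {load_time:.1f}s, compute: {compute_time:.1f}s")
```

If the first number dominates, faster storage or more DataLoader workers is the place to look.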
So `num_workers=4` helps performance on the larger machine with more cores, and hurts it on the smaller machine?
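
For reference, `num_workers` here is the standard PyTorch `DataLoader` parameter; a plain-PyTorch illustration (the dataset below is a placeholder, not the actual chip dataset Raster Vision builds):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the real training chips.
dataset = TensorDataset(
    torch.zeros(64, 3, 256, 256),        # fake image chips
    torch.zeros(64, dtype=torch.long),   # fake labels
)

loader = DataLoader(
    dataset,
    batch_size=8,
    num_workers=4,     # subprocesses that read/decode chips in parallel
    pin_memory=True,   # speeds up host-to-GPU copies when training on CUDA
)
```

Roughly speaking, extra workers only help when there are spare CPU cores to run them; on a smaller machine they compete with the main process and add per-worker startup and memory overhead, which can make things slower.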