To whom it may concern:
The current training script uses GPUs from a single host node (i.e., all 4 GPUs are on the same machine).
Does SpatialScope support distributed training with GPUs spread across different compute nodes (e.g., 4 GPUs on two nodes, 2 GPUs per node), which is common in cluster environments (similar to https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel)? A sketch of the kind of multi-node setup I have in mind is below. Thanks a lot!
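For reference, here is a minimal multi-node sketch using plain PyTorch `DistributedDataParallel` with the `torchrun` launcher. The `train_ddp.py` filename, the placeholder linear model, the NCCL backend choice, and the rendezvous port are illustrative assumptions, not SpatialScope's actual script or API:

```python
# Minimal multi-node DDP sketch (NOT SpatialScope's training script).
# Launch one torchrun per node, e.g. for 2 nodes x 2 GPUs:
#   node 0: torchrun --nnodes=2 --nproc_per_node=2 --node_rank=0 \
#             --master_addr=<node0-ip> --master_port=29500 train_ddp.py
#   node 1: torchrun --nnodes=2 --nproc_per_node=2 --node_rank=1 \
#             --master_addr=<node0-ip> --master_port=29500 train_ddp.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process;
    # init_process_group picks them up via the default env:// rendezvous.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; the real SpatialScope model would go here.
    model = torch.nn.Linear(128, 128).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step in range(10):
        x = torch.randn(32, 128, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across all 4 ranks
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```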
Feng