
Determinant of Identity Matrix on CUDA #1143

Open
k-bingcai opened this issue Mar 3, 2024 · 5 comments

Comments

@k-bingcai

Hello,

I noticed that I cannot compute the determinant of an identity matrix using torch in R.

i.e. torch_eye(3)$cuda()$det()

It gives me this error:

Error in (function (self)  : 
  #ifdef __HIPCC__
  #define ERROR_UNSUPPORTED_CAST ;
  // corresponds to aten/src/ATen/native/cuda/thread_constants.h
  #define CUDA_OR_ROCM_NUM_THREADS 256
  // corresponds to aten/src/ATen/cuda/detail/OffsetCalculator.cuh
  #define MAX_DIMS 16
  #ifndef __forceinline__
  #define __forceinline__ inline __attribute__((always_inline))
  #endif
  #else
  //TODO use _assert_fail, because assert is disabled in non-debug builds
  #define ERROR_UNSUPPORTED_CAST assert(false);
  #define CUDA_OR_ROCM_NUM_THREADS 128
  #define MAX_DIMS 25
  #endif
  #define POS_INFINITY __int_as_float(0x7f800000)
  #define INFINITY POS_INFINITY
  #define NEG_INFINITY __int_as_float(0xff800000)
  #define NAN __int_as_float(0x7fffffff)

  typedef long long int int64_t;
  typedef unsigned int uint32_t;
  typedef signed char int8_t;
  typedef unsigned char uint8_t;  // NOTE: this MUST be "unsigned char"! "char" is equivalent to "signed char"
  typedef short int16_t;
  static_assert(sizeof(int64_t) == 8,

I'm not sure what to make of it. I tried computing the same determinant in PyTorch and it worked fine. Is this a bug, or is this expected behavior?

@dfalbel
Member

dfalbel commented Mar 4, 2024

Hi @k-bingcai ,

I was not able to reproduce the issue. It might be an incompatibility between the CUDA version and torch.
Can you post your sessionInfo() as well as your CUDA version?

@k-bingcai
Author

Hi @dfalbel,

Thanks for getting back! Here's my sessionInfo():

R version 4.3.1 (2023-06-16)
Platform: x86_64-conda-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux 8.8 (Ootpa)

Matrix products: default
BLAS/LAPACK: /nas/longleaf/home/bingcai/anaconda3/envs/multidfm/lib/libopenblasp-r0.3.21.so;  LAPACK version 3.9.0

locale:
 [1] LC_CTYPE=en_US.utf-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.utf-8        LC_COLLATE=en_US.utf-8    
 [5] LC_MONETARY=en_US.utf-8    LC_MESSAGES=en_US.utf-8   
 [7] LC_PAPER=en_US.utf-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.utf-8 LC_IDENTIFICATION=C       

time zone: America/New_York
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] torch_0.12.0

loaded via a namespace (and not attached):
 [1] processx_3.8.2 bit_4.0.5      compiler_4.3.1 magrittr_2.0.3 cli_3.6.1     
 [6] Rcpp_1.0.11    bit64_4.0.5    coro_1.0.3     callr_3.7.3    ps_1.7.5      
[11] rlang_1.1.2   

The CUDA version is 12.2 (from nvidia-smi). If it helps, during installation I had to manually create several missing symlinks to get torch to use the GPU. The symlinks are:

ln -s libcudart-e409450e.so.11.0 libcudart.so.11.0
ln -s libcublas-f6acd947.so.11 libcublas.so.11
ln -s libnvToolsExt-847d78f2.so.1 libnvToolsExt.so.1

Hope that clarifies!

@dfalbel
Member

dfalbel commented Mar 4, 2024

I'm pretty sure the problem is caused by an ABI compatibility issue between CUDA 11 (used by torch) and the CUDA 12 installed in that environment. I suggest installing torch using the pre-built binaries, which include compatible CUDA and cuDNN versions.

You can do so by running something like:

options(timeout = 600) # increasing timeout is recommended since we will be downloading a 2GB file.
# For Windows and Linux, "cpu" and "cu118" are the currently supported kinds.
# For macOS, the supported kinds are "cpu-intel" and "cpu-m1".
kind <- "cu118"
version <- available.packages()["torch","Version"]
options(repos = c(
  torch = sprintf("https://storage.googleapis.com/torch-lantern-builds/packages/%s/%s/", kind, version),
  CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
))
install.packages("torch")
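
After reinstalling, a quick sanity check could look like the sketch below (cuda_is_available() and torch_eye() are standard torch-for-R functions; the expected results are my assumption for a working cu118 build, not verified on your cluster):

```r
# Hedged sketch: confirm the pre-built GPU binaries are picked up,
# then rerun the original reproduction from this issue.
library(torch)

cuda_is_available()        # should return TRUE once the bundled CUDA loads
torch_eye(3)$cuda()$det()  # the original repro; should yield a scalar tensor of 1
```

If cuda_is_available() still returns FALSE, the bundled libraries likely failed to load, which is a separate problem from the determinant call itself.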

@k-bingcai
Copy link
Author

Hello,

Thanks for the quick response! I'll try the proposed solution.

I have a rather naive question though: will the pre-built binaries work even though CUDA 12.2 is installed on the system? The documentation seems to suggest so (i.e. If you have CUDA installed, it doesn’t need to match the installation ‘kind’ chosen below.).

I am asking because the GPU is on a university-wide cluster and I cannot change the CUDA driver version...

@dfalbel
Member

dfalbel commented Mar 5, 2024

With the pre-built binaries, the globally installed CUDA version doesn't matter, as the correct version is shipped within the package. That's similar to the approach PyTorch takes.
