Stable expm #352
base: master
Conversation
FWIW, Accelerate seems to have a lot of bugs, which is why NumPy doesn't use it anymore. I believe it ships an ancient version of LAPACK. If you upload a test I can check whether it fails on Linux linking against OpenBLAS.
That would be much appreciated! I've been trying to get my system to build against OpenBLAS (installed via brew) or intel-mkl, but it doesn't seem to work, and the *-src or *-sys crates are hard to get working properly. If you clone my repo (
@matthagan15 I'm happy to try to help work through build errors; ping me (@ethanhs:ocf.berkeley.edu) on the Rust science Matrix room and I can at least look at any build errors you are experiencing. I think I've had more luck with OpenBLAS, FWIW.
My results from running the expm_test:
Thank you so much! Yeah, this is interesting. The "average entry error over epsilon" is the most important bit: it takes the matrix exponential computed via expm, subtracts the "known" value obtained from the eigenvalues, and averages the absolute value of this difference over all entries. Your implementation gives around
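For concreteness, a hypothetical sketch of that metric as described, assuming both exponentials are available as ndarray `Array2<f64>` values (function and variable names are illustrative, not from this PR):

```rust
use ndarray::Array2;

/// Average absolute entrywise difference between the computed exponential
/// and the eigenvalue-based reference, in units of f64 machine epsilon.
fn avg_entry_error_over_eps(computed: &Array2<f64>, exact: &Array2<f64>) -> f64 {
    let total: f64 = computed
        .iter()
        .zip(exact.iter())
        .map(|(c, e)| (c - e).abs())
        .sum();
    total / (computed.len() as f64 * f64::EPSILON)
}
```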
Ok, sorry for the late follow-up, but testing dense matrices up to 1024x1024 reveals a one-norm difference (i.e. take the output of this expm and subtract the exact exponentiation via eigenvalues) of 3.5 × 10^-9, which is very good. For reference, this is only about 20x worse than scipy at the same dimension. Given that the error is acceptable at this high a dimension, it should be sufficient for all but the most accuracy-critical computations, and for those I'd argue using something better than 64-bit floats would be the smarter move anyway. If there is anything else the maintainers would like to see before merging, please let me know!
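The one-norm difference mentioned here would be the maximum absolute column sum of the residual; a sketch under the same assumptions as above:

```rust
use ndarray::Array2;

/// Matrix 1-norm (maximum absolute column sum) of computed - exact.
fn one_norm_diff(computed: &Array2<f64>, exact: &Array2<f64>) -> f64 {
    let diff = computed - exact;
    (0..diff.ncols())
        .map(|j| diff.column(j).iter().map(|x| x.abs()).sum::<f64>())
        .fold(0.0_f64, f64::max)
}
```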
@matthagan15, your fork does not build for me as a dependency in a minimal Rust project, nor standalone, on the latest version of Rust, while the main ndarray-linalg does. Let me know if you cannot reproduce this.
I just tested on my device and it built for me. Could you share some of the build errors you get? It might help me debug if there's something wrong with the branch.
Odd, here is the error I get. Hopefully this is helpful:
Ah, you are building from the master branch, not the stable_expm branch.
Yep, you are right. My bad. Ya, the stable_expm branch looks good to me.
Wonderful! If the maintainers (I'm not sure who) have any concerns, please let me know what needs to change to get this merged.
Two new files added: one is the source implementing the standard expm routine used by MATLAB and SciPy, and the other tests this function against random sparse and dense matrices. There is currently a noticeable per-entry error in the dense-matrix regime that scales with the dimension of the underlying matrix, and I have been unable to find its source. I have ruled out my implementation, as it is accurate to f64 precision on 1-sparse matrices, indicating that the algorithm is correct but something goes wrong for dense matrices. I'm unsure whether it is my machine (Apple M1, though I doubt it, since SciPy shows a similar but much smaller error) or the BLAS/LAPACK library I am using (Apple's built-in Accelerate; I think NumPy uses OpenBLAS, but I couldn't get it configured on my machine). I think this error should be investigated by others: it is not crippling for matrices smaller than 100x100, but beyond 150x150 it becomes unusable for scientific purposes (which is why I wrote this) or demanding engineering work. This caveat is mentioned directly in the expm docstring. I have more data on this error and can share it; I'm not sure what the best channel for that would be, though.
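For context, the standard routine referenced above is Higham's scaling-and-squaring method: scale A down by a power of two, approximate the exponential of the scaled matrix with a Padé approximant, then repeatedly square the result. Below is a minimal, hypothetical sketch of that skeleton (not the code in this PR), using ndarray and a truncated Taylor series in place of the Padé step for brevity:

```rust
use ndarray::{array, Array2};

/// Skeleton of scaling and squaring, with a truncated Taylor series
/// standing in for the Padé approximant used by the real algorithm.
fn expm_sketch(a: &Array2<f64>, taylor_terms: usize) -> Array2<f64> {
    let n = a.nrows();
    // Matrix 1-norm: maximum absolute column sum.
    let one_norm = (0..n)
        .map(|j| a.column(j).iter().map(|x| x.abs()).sum::<f64>())
        .fold(0.0_f64, f64::max);
    // Choose s so that ||A / 2^s||_1 <= 0.5.
    let s = if one_norm > 0.5 {
        (one_norm / 0.5).log2().ceil() as i32
    } else {
        0
    };
    let scaled = a / 2f64.powi(s);
    // exp(scaled) via truncated Taylor series (the real routine uses Padé).
    let mut result = Array2::<f64>::eye(n);
    let mut term = Array2::<f64>::eye(n);
    for k in 1..=taylor_terms {
        term = term.dot(&scaled) / k as f64;
        result = result + &term;
    }
    // Undo the scaling: exp(A) = (exp(A / 2^s))^(2^s).
    for _ in 0..s {
        result = result.dot(&result);
    }
    result
}

fn main() {
    // exp([[0, 1], [0, 0]]) = [[1, 1], [0, 1]] exactly (nilpotent A).
    let a = array![[0.0, 1.0], [0.0, 0.0]];
    println!("{:.6}", expm_sketch(&a, 16));
}
```

The production algorithm additionally chooses the Padé degree and the scaling parameter from 1-norm estimates of powers of A (Higham 2005; Al-Mohy and Higham 2009), which is where most of the numerical subtlety, and likely any backend-dependent error, lives.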