ENH: spatial: serialize concurrent calls to QHull #20619
Conversation
Too bad about the code churn, but putting the lock cleanup in a finally block does make sense.
Probably worth benchmarking single-threaded performance, since acquiring an uncontested lock still has some overhead.
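For reference, the pattern under discussion looks roughly like the following sketch; the names _qhull_lock and _triangulate are illustrative stand-ins, not the actual SciPy internals:

import threading

_qhull_lock = threading.Lock()

def _triangulate(points):
    # Stand-in for the real (non-reentrant) Qhull entry point.
    return points

def _triangulate_locked(points):
    # Serialize entry into Qhull; the `finally` block guarantees the
    # lock is released even if the Qhull call raises.
    _qhull_lock.acquire()
    try:
        return _triangulate(points)
    finally:
        _qhull_lock.release()

In the single-threaded case the only added cost is one uncontested acquire/release per call, which is why a quick benchmark is still worthwhile.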
Great, looks like this fixes a long-standing crash in gh-8856.
Could you skip this test on 32-bit platforms and when not enough memory is available, @andfoy? Search the code base for check_free_memory. Also, could you add …
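For reference, a guard of the kind requested could look roughly like this sketch; it assumes check_free_memory from scipy._lib._testutils and a sys.maxsize-based 32-bit check, both of which are assumptions about the code base's conventions rather than quotes from this PR:

import sys
import pytest
from scipy._lib._testutils import check_free_memory

class TestLinearNDInterpolation:
    @pytest.mark.skipif(sys.maxsize <= 2**32, reason="needs 64-bit")
    def test_threading(self):
        # Skip (rather than fail) on machines without enough free memory.
        check_free_memory(10000)
        ...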
Thanks for the pointers, @ngoldbaum @rgommers @ev-br! I'll make sure to fix the tests and some minor lock acquisition details.
The diff with whitespace hidden looks quite clean now. Probably about ready to go once the merge conflict is resolved?
def test_threading(self):
    # This test was taken from issue 8856
    # https://github.com/scipy/scipy/issues/8856
    check_free_memory(30000)
Did you really mean to set the limit to 30 GB? That means it won't run on machines with 32 GB RAM - i.e., almost never.
I seem to be needing about 10 GB total:
$ ulimit -Sv 10000000
$ python dev.py test -t scipy.interpolate.tests.test_interpnd::TestLinearNDInterpolation::test_threading -m full
It crashes with 8 GB on my machine. I guess that makes sense, since the 32-bit test showed that a single array creation needed about 1.5 GB, and there are four threads here. So the test needs 6 GB, and then there's the memory taken by SciPy itself, other tests, and dependencies.
Ideally I think we set the limit to 10 GB or so, and tweak the test a bit so that it requires 0.5 GB per thread. That way it'll run for most contributors.
Could you please check if the test still reproducibly crashes when you slightly lower the array sizes?
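For reference, the shape of the test under discussion is roughly the following sketch; the point count here is a placeholder for the value being tuned toward ~0.5 GB per thread, not the actual number used in the PR:

import threading
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def worker(npoints):
    # Each thread builds an interpolator, which enters Qhull.
    x = np.random.random((npoints, 2))
    y = np.random.random(npoints)
    LinearNDInterpolator(x, y)

# Four concurrent threads, mirroring the memory estimate above.
threads = [threading.Thread(target=worker, args=(70000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()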
Let me check it again with lower bounds
Unfortunately, the lowest values for which we can reliably reproduce the crash imply a memory consumption of ~5 GB.
That seems fine - that'll mean it runs on most developer machines. Let's call this good.
Let's get this in, seems like everything is working now. Thanks @andfoy!
Reference issue
Closes gh-8856
What does this implement/fix?
This PR introduces a mutex to serialize QHull calls, preventing potential segfaults and undefined behaviour, since QHull is not thread-safe. See http://www.qhull.org/html/qh-code.htm#reentrant
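For illustration, the failure mode this guards against looks roughly like the following sketch: any concurrent entry into QHull (here via scipy.spatial.Delaunay, which is backed by QHull) could previously corrupt its global state, and with this PR such calls are serialized.

import threading
import numpy as np
from scipy.spatial import Delaunay

def build():
    # Each Delaunay construction enters Qhull; before this change,
    # concurrent entry could crash or invoke undefined behaviour.
    Delaunay(np.random.random((10000, 2)))

threads = [threading.Thread(target=build) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()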
Additional information
Depends on #20611