# `database is locked` error when hiedb is run concurrently #63
Comments
I don't think it is possible for a SQLite database to be written to concurrently, even with the WAL and WAL2 modes. In particular, the COMMIT steps require exclusive access to the database: https://sqlite.org/cgi/src/doc/begin-concurrent/doc/begin_concurrent.md
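The locking behavior described above can be demonstrated at the SQLite level. The sketch below is in Python (hiedb itself is Haskell; Python is used here only because its bundled `sqlite3` module makes the example self-contained, and the table name and file path are made up): a second connection that tries to start a write transaction while another connection holds the write lock fails with `database is locked`.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file. A busy timeout of 0 makes a
# locked database fail immediately instead of waiting and retrying.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t (x INTEGER)")

a.execute("BEGIN IMMEDIATE")            # connection a takes the write lock
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")        # connection b cannot take the lock
except sqlite3.OperationalError as e:
    print(e)                            # "database is locked"

a.execute("COMMIT")                     # lock released; b can now write
b.execute("BEGIN IMMEDIATE")
b.execute("INSERT INTO t VALUES (2)")
b.execute("COMMIT")
```

This is the same failure mode as two concurrent `hiedb index` processes: the second writer sees `SQLITE_BUSY` rather than waiting for the first to finish.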
That's correct. I don't want concurrent writes to the database; I want the second process to wait and retry instead of failing. Currently the behavior is this: the second `hiedb` process errors out immediately with `database is locked`. The behavior I want is this: the second process retries until the first releases the lock, then proceeds.
I'm not sure what the best approach is here. For example, HLS implements retrying functionality here: https://github.com/haskell/haskell-language-server/blob/66cf40033cf6d1391622ee4350696656ea0c39d9/ghcide/session-loader/Development/IDE/Session.hs#L361
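The retry-on-busy idea can be sketched as follows. This is a minimal illustration, not hiedb's or HLS's actual API (the function name `retry_on_busy` and its parameters are invented), again in Python because the SQLite behavior is language-independent:

```python
import random
import sqlite3
import time

def retry_on_busy(action, max_retries=10, base_delay=0.01):
    """Run `action`, retrying with jittered exponential backoff whenever
    SQLite reports that the database is locked."""
    delay = base_delay
    for attempt in range(max_retries + 1):
        try:
            return action()
        except sqlite3.OperationalError as e:
            # Re-raise anything that isn't a lock error, and give up
            # once the retry budget is exhausted.
            if "locked" not in str(e) or attempt == max_retries:
                raise
            time.sleep(delay + random.uniform(0, delay))  # jittered backoff
            delay *= 2
```

Note that SQLite also has a built-in variant of this: `PRAGMA busy_timeout = N` makes the library itself sleep and retry for up to N milliseconds before returning `SQLITE_BUSY`, which may be enough on its own for short indexing transactions.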
When I run

```
hiedb index .hiefiles
```

concurrently, the second process errors with `database is locked` because the first has already locked the database.

## Context
We're using `ghciwatch` for development in combination with `static-ls` for language intelligence. `ghciwatch` handles loading and compiling modules in `ghci` for rapid reloads, and `static-ls` leverages `hiedb` to provide language intelligence. (Our project is too big for `haskell-language-server`.)

We're using `ghciwatch`'s lifecycle hooks to reindex the `.hie` files on reloads.

Because `ghciwatch` runs these commands asynchronously (so that it can keep reloading and responding to changes while the files are indexed), it's possible for multiple `hiedb` processes to run at once (especially on the first load, when 7500+ modules are being indexed), which has been triggering this bug.

I have a few ideas for working around this in `ghciwatch` (only allowing one of each hook to run at once, killing old hook processes before launching new ones, etc.), but I think we should fix this in `hiedb` as well.
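One way the "only allow one of each hook at a time" workaround could be implemented is with an advisory file lock, so that concurrent hook invocations queue up instead of racing. This is only a sketch of the idea (it is not how `ghciwatch` actually works, the names are invented, and `fcntl.flock` is POSIX-only):

```python
import fcntl
import os

def with_exclusive_lock(lock_path, action):
    """Run `action` while holding an exclusive advisory lock on
    `lock_path`, so only one indexing run proceeds at a time."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the lock is free
        return action()
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

A shell-level equivalent would be wrapping the hook command in `flock(1)`, e.g. `flock /tmp/hiedb.lock hiedb index .hiefiles`, which serializes the indexers without any changes to `hiedb` itself.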