start without running queues throws error #25
Seems to be related to this one. Here is some more output; now the manager runs, but it kills and never restarts the workers (running Ubuntu Maverick 64-bit):

```
bundle exec resque-pool
resque-pool-manager[7889]: WINCH: gracefully stopping all workers
$ ps -ef f | grep [r]esque
```
This might be a different issue than your original bug, but something appears to be sending the WINCH signal to your pool manager. Are you doing that manually?
No, I am just starting the manager without args in dev mode, and I also deleted my bundle so no old gems are present. I searched the resque-pool gem for WINCH but it is not used anywhere... whoa, this is funny: WINCH gets sent to the process when I resize my terminal window???
WINCH is an abbreviation for window [size] change.
Well, that explains the WINCH issue (which is different from the original one). I had completely forgotten WINCH's original purpose. :) I'm copying nginx's and Unicorn's signal handling, and WINCH will gracefully shut down all workers (see "Signals" in the README). You can restart them with HUP. As for your original bug, yeah, we should guard against workers being nil.
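The WINCH/HUP behavior described above comes down to ordinary signal traps. A minimal, self-contained sketch of the mechanics (this is not resque-pool's actual source; it just installs handlers and signals its own pid, where a real deployment would `kill -WINCH <manager pid>`):

```ruby
# Illustrative only: stand-ins for the manager's signal handlers.
received = []

trap("WINCH") { received << :winch }  # would gracefully stop all workers
trap("HUP")   { received << :hup }    # would reload config and restart workers

Process.kill("WINCH", Process.pid)    # same effect as `kill -WINCH <pid>`
sleep 0.1                             # give the handler a chance to run
Process.kill("HUP", Process.pid)
sleep 0.1

puts received.inspect
```

The sleeps matter because Ruby runs trap handlers in the main thread between interpreter operations, so the handler fires shortly after the signal is delivered, not synchronously with `Process.kill`.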
This issue has been hitting me for a while, and I have been running resque-pool from inside screen to monitor it. :(
I noticed that this started happening when I upgraded New Relic's rpm_contrib from 1.0.13 to 2.1.4. I haven't looked into it more than that, but yanking rpm_contrib fixed the problem. I'd imagine their resque probes are doing something they shouldn't be.
I've been running into this issue since we use resque-pool in development, and as such, our workers are reaped frequently. If it hasn't been fixed, I may take a crack at a pull request.
When starting for the first time, with no workers running, an error is thrown that stops the whole pool from running:
Possible solution: add `if worker` to the reaper message:
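A minimal, self-contained sketch of the idea behind that guard, using hypothetical stand-ins for the pool's internals (a `workers` hash keyed by pid), not the gem's exact source. On first start there are no tracked workers, so looking up a reaped pid returns nil, and interpolating `worker.queues` into the log line would raise `NoMethodError`; the `if worker` guard avoids that:

```ruby
# Hypothetical Worker stand-in; the real pool tracks more state per worker.
Worker = Struct.new(:queues)

workers = {}                   # pid => Worker; empty when nothing is running yet
pid = 12_345                   # hypothetical pid reported by the reaper

worker = workers.delete(pid)   # => nil: this pid was never one of ours
# Without the trailing `if worker`, worker.queues raises NoMethodError on nil.
message = "Reaped worker #{pid} queues: #{worker.queues.join(',')}" if worker
puts(message || "Reaped unknown PID: #{pid}")
```

The same one-line guard applied in the pool's reaper would let the manager keep running instead of crashing on startup.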
I am not sure, but I think last week I had it running locally without this error. Maybe it is the startup time of the workers that prevents the reaper from seeing them.