Situation
When the ScrapyCloud/JobQ API call fails, BaseScript._schedule_job() (invoked from Task.run()) returns None, even though the job may actually have been scheduled. That None job_id is then tracked among the running jobs (see GraphManager.run_pending_jobs()), and GraphManager.check_running_jobs() keeps waiting on it, so the manager job gets stuck.
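To make the stuck state easier to picture, here is a minimal, self-contained sketch of that flow. It is not shub_workflow code; all names and behaviour are illustrative, and the real run_pending_jobs()/check_running_jobs() are of course more involved:

```python
# Simplified sketch (not actual shub_workflow code) of the failure mode
# described above; all names and behaviour are illustrative.
import random

def schedule_job(task_name):
    """Stand-in for BaseScript._schedule_job() as called from Task.run():
    returns a job key normally, but None when the ScrapyCloud/JobQ API call
    fails, even if the job did get scheduled on the platform."""
    if random.random() < 0.1:  # simulate a transient JobQ failure
        return None
    return f"123/45/{hash(task_name) % 1000}"

def run_pending_jobs(pending_tasks, running_jobs):
    # The return value is tracked unconditionally, so a failed call leaves
    # None stored as that task's "job id".
    for task_name in pending_tasks:
        running_jobs[task_name] = schedule_job(task_name)

def check_running_jobs(running_jobs, finished_keys):
    # A None job id is never reported as finished, so the manager keeps
    # waiting on it and the workflow stays stuck.
    return {name: key for name, key in running_jobs.items()
            if key is None or key not in finished_keys}
```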
Logs
Affected jobs:
Proposal
I don't know what the best way to address this is. On one hand, it is a transient issue in ScrapyCloud/JobQ, so it is external to shub_workflow. On the other hand, this API has been unstable for quite a while, so it could be good to handle it, at least to avoid jobs getting stuck.
I think if self.run_job() returns None in GraphManager, we should probably stop the waiting loop, because we won't be tracking that task's associated job ID either way.

I wonder if getting the job ID from the already scheduled job could be possible too, so GraphManager can recover from this situation without having to restart it manually.
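A rough sketch of how both ideas might fit together, assuming the python-scrapinghub client; recover_job_key and the commented manager-side pseudocode are hypothetical, not GraphManager's actual API:

```python
from scrapinghub import ScrapinghubClient

def recover_job_key(apikey, project_id, spider_name):
    """Hypothetical helper: if _schedule_job() returned None but the job was
    actually scheduled, try to find it in the project's job queue and return
    its key, or None if nothing matching is found."""
    project = ScrapinghubClient(apikey).get_project(project_id)
    for state in ("pending", "running"):
        for job in project.jobs.iter(spider=spider_name, state=state, count=5):
            # A real implementation would also match on job arguments or tags
            # to make sure this is the job that this task scheduled.
            return job["key"]
    return None

# Illustrative manager-side handling (not GraphManager's actual structure):
#
#   job_id = self.run_job(task)
#   if job_id is None:
#       job_id = recover_job_key(apikey, project_id, task.spider)
#   if job_id is None:
#       # Idea 1: don't track this task, so check_running_jobs() does not
#       # wait forever on a job id that will never resolve.
#       logger.error("Could not schedule or recover a job for %s", task)
#   else:
#       running_jobs[task] = job_id
```

Dropping the task from tracking (idea 1) at least unblocks the manager, while the recovery lookup (idea 2) keeps the workflow consistent but needs a reliable way to identify the right job, which is why matching on the spider name alone would probably not be enough.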