
Respect Telegram's throttle requests while generating timelapses #179

Open · wants to merge 2 commits into master
Conversation

@pidpawel (Contributor) commented Jun 23, 2022

I've been having occasional hiccups with timelapses. I've been getting errors like: Jun 22 23:24:38 fluiddpi python[549]: telegram.error.RetryAfter: Flood control exceeded. Retry in 13.0 seconds so… I decided to add proper handling of this error instead of the naive implementation I committed some time ago.

This implementation is somewhat generic and could be transplanted to other parts of the code if you wish (I haven't seen the need so far, though). retryable_notification is almost a decorator: it takes a callable/lambda as an argument and repeatedly tries to execute it until one of the conditions is met. The first argument decides how persistent this mechanism should be: if required is True, it will try to send the message no matter what, and on failure it will block for the requested number of seconds. If required is False, it may proactively skip the request entirely when it knows the required amount of time has not yet passed, but it will not block the rest of the code.
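For readers following along, the mechanism described above could look roughly like this. This is only an illustrative sketch, not the PR's actual diff: RetryAfter is a stand-in for telegram.error.RetryAfter, and all names (`retryable_notification`, `_next_allowed`) are assumptions.

```python
import time

# Stand-in for telegram.error.RetryAfter; the real bot would catch the
# library's exception instead. This whole sketch is illustrative only.
class RetryAfter(Exception):
    def __init__(self, retry_after):
        super().__init__(f"Flood control exceeded. Retry in {retry_after} seconds")
        self.retry_after = retry_after

_next_allowed = 0.0  # earliest monotonic time the next send may go out

def retryable_notification(required, func):
    """Run ``func`` while respecting Telegram's flood-control hints.

    required=True:  retry until the call succeeds, sleeping as requested.
    required=False: drop the call outright while the throttle window is
                    still open, so optional updates never block the caller.
    """
    global _next_allowed
    while True:
        if not required and time.monotonic() < _next_allowed:
            return None  # proactively skip an optional message
        try:
            return func()
        except RetryAfter as err:
            # Remember when Telegram will accept the next message.
            _next_allowed = time.monotonic() + err.retry_after
            if not required:
                return None
            time.sleep(err.retry_after)
```

A required call would then be `retryable_notification(True, lambda: bot.send_photo(...))`, while optional progress updates pass False and simply vanish when the throttle is active.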

As you can see in the screencast below, the messages are a bit choppy, but this is close to the best we can do without actually slowing down timelapse generation.
https://user-images.githubusercontent.com/322405/175297031-3380da45-7cc7-4942-8efd-e1f7d06d4f96.mp4

Hope you like it & have a wonderful day
Paweł

@aka13-404 (Collaborator)

Hey! Thank you for your contribution. It's very cleanly done, and I appreciate your effort.

The problem we have is an "ideological" one.
What you do is queue the updates and try to send them "as soon as possible", which can cause a lot of problems down the line. A similar problem already exists in the reverse direction: if the bot was offline, you sent 50 commands to the chat in that time, and then you turn the bot back on, it will try to run ALL of them. This is a bigger issue than one may think.

To the problem you are trying to solve: @nlef had the idea not to try to send "when it is possible again", but rather, if sending is impossible right now, to queue everything up and, before sending "when possible", first collapse similar tasks. For example, if you have 10 status updates queued, 9 of them are already useless and would be immediately replaced, so there is no reason to send them at all.
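The "collapse similar tasks" idea could be sketched as a queue keyed by message kind, where a newer message of the same kind replaces the stale one still waiting. This is a hypothetical sketch of the proposal, not existing code; CollapsingQueue and the message kinds are made up for illustration.

```python
from collections import OrderedDict

class CollapsingQueue:
    """Queue that keeps only the newest pending message per kind."""

    def __init__(self):
        self._pending = OrderedDict()  # kind -> latest payload

    def put(self, kind, payload):
        # Drop the stale payload (if any) and move this kind to the back,
        # so only the newest message of each kind ever gets sent.
        self._pending.pop(kind, None)
        self._pending[kind] = payload

    def drain(self):
        """Yield (kind, payload) pairs, oldest surviving kind first."""
        while self._pending:
            yield self._pending.popitem(last=False)
```

With this shape, 10 queued status updates collapse into one send once the throttle window closes, while distinct kinds (say, a status update and an error) are both preserved.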

If you wish to take on such a big feature, you are more than welcome to, and we will be happy to see your ideas on that topic.
I am leaving this open, of course, should you decide to proceed with it. I think this is a good place to have discussions on the topic. I will also open a corresponding issue on our project board to track progress.

@aka13-404 aka13-404 added Feature Request New feature or request Accepted suggestion A good suggestion for future versions labels Jun 24, 2022
@aka13-404 aka13-404 added this to Long-Term suggestions in main development via automation Jun 24, 2022
@aka13-404 aka13-404 moved this from Long-Term suggestions to In Progress in main development Jun 24, 2022
@nlef nlef moved this from In Progress to Planned for implementation in next release in main development Jul 3, 2022