Reflector late updates on Kubernetes 1.20 and older (known bug on k8s) and workaround #246
Workaround: from experience, k8s 1.20 and older stops sending updates to idle watchers after about 10 minutes. I would suggest setting the timeout to 600 seconds and adjusting as needed. I still recommend upgrading to the latest version of k8s available from your provider when possible; the workaround above is for cases where you're currently stuck on the version of k8s you have. Hope it helps!
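To illustrate the mechanism behind that timeout, here is a minimal sketch in Go using client-go (Reflector itself is a .NET application, so this is not its actual code; watching secrets in-cluster with a 600-second cap is an assumed example, mirroring the suggested timeout):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the watcher runs inside the cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Cap each watch at 600 seconds. Even if the API server silently stops
	// pushing events (the bug described above), the watch expires after at
	// most 10 minutes and is re-created, picking up missed changes.
	timeout := int64(600)
	for {
		watcher, err := clientset.CoreV1().Secrets(metav1.NamespaceAll).Watch(
			context.TODO(),
			metav1.ListOptions{TimeoutSeconds: &timeout},
		)
		if err != nil {
			panic(err)
		}
		for event := range watcher.ResultChan() {
			fmt.Printf("secret event: %s\n", event.Type)
		}
		// ResultChan closes when the server-side timeout fires; loop to reconnect.
	}
}
```

Shortening the timeout trades extra API-server load for faster recovery from a stalled watch, which is why 600 (matching the roughly 10-minute stall window) is a reasonable starting point.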
Hello!
@shdwlkr can you post the logs and sample secrets for it? We're running a lot of clusters on 1.23.x and have not faced any issues so far.
Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Automatically closed stale item. |
Hi,
Due to a number of similar issues, I'm posting this one as a sticky:
On Kubernetes 1.20 and older (down to 1.14, I think) there was an issue where the API server pushed events late, or not at all, to idle watchers (watchers that had not received any changes). This results in Reflector not being aware of changes until the connection times out and the watcher is reset.
Reflector does not poll for changes (to avoid constant polling on clusters with large numbers of secrets, namespaces, etc.). It relies on k8s to push events to the subscribed watchers when changes occur, as the sketch below illustrates.
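A rough sketch of that push model, again in Go with client-go rather than Reflector's actual .NET code (the shared-informer machinery shown here is client-go's implementation of the same list-and-watch pattern; the secret-update handler is an assumed example):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Resync period 0: no periodic re-list, i.e. no polling. Handlers fire
	// only when the API server pushes an event to the underlying watch.
	factory := informers.NewSharedInformerFactory(clientset, 0)
	informer := factory.Core().V1().Secrets().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			secret := newObj.(*corev1.Secret)
			fmt.Printf("secret updated: %s/%s\n", secret.Namespace, secret.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, informer.HasSynced)

	// If the API server stops pushing events (the bug above), this process
	// simply sees nothing until the watch connection is reset.
	select {}
}
```

With no resync, a subscriber like this is fully dependent on pushed events, which is why a stalled watch on affected k8s versions leaves it blind until the connection resets.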
If your cluster is on k8s version 1.20.x or older, I suggest you upgrade to the latest version of k8s supported by your cloud or on-premises service. This bug seems to have been fixed in k8s 1.21.x.