
After a large number of watch connections from a client are disconnected at the same time, new watches cannot work properly. #18879

Open
alterge1st opened this issue Nov 12, 2024 · 10 comments
Labels
help wanted priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. type/bug

Comments

@alterge1st


What happened?

We use the in-process etcd server through v3client. A client process created a new watch connection to the same resource every second, never cancelling any of them, and ran for more than one minute. Once the client had accumulated a large number of watch connections, we killed the client process. After the client process is killed, other clients that attempt to establish watch connections for the same resource no longer receive any event changes on the new watches.

What did you expect to happen?

After the client is killed, new watch connections for the same resource should still be able to listen for event changes.
Our analysis shows a blocking problem. Although it is unreasonable for a client to establish a large number of watch connections on the same resource at the same time, can the etcd server do something to avoid the blocking?

How can we reproduce it (as minimally and precisely as possible)?

Using a separate process, we created a large number of watch connections to the same configmap resource in a loop, with code similar to the attached file:
main.txt
After running this program for one minute, kill it. If you then run kubectl get configmap -A -w and modify a configmap, the change is never delivered to the watch.
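
The attached main.txt is not reproduced here; a loop along the following lines exercises the same pattern (a hypothetical sketch using client-go, not the attached code; the kubeconfig path, namespace, and configmap name are placeholders):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the target cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Open a new watch on the same configmap every second and never stop it,
	// accumulating watch connections until the process is killed.
	for {
		_, err := cs.CoreV1().ConfigMaps("default").Watch(context.Background(), metav1.ListOptions{
			FieldSelector: "metadata.name=test-cm", // placeholder configmap
		})
		if err != nil {
			panic(err)
		}
		time.Sleep(time.Second)
	}
}
```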

Anything else we need to know?

After the client is killed, a large number of watch connections are disconnected at once. Code analysis shows that the Send() of the WatchCancelRequest in the case ws := <-w.closingc branch of (w *watchGrpcStream) run() in etcd/client/v3/watch.go blocks and cannot continue processing.
We suspect that the flood of WatchCancelRequests fills the buffered channel in watchGrpcStream, so new WatchResponses can no longer be pushed into sws.ctrlStream. As a result, the WatchResponses read from ctrlStream and the new WatchResponses back up, and run() stays blocked in case pbresp := <-w.respc and case ws := <-w.closingc.
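
To illustrate the shape of the suspected deadlock, here is a toy Go model, not etcd's actual code: the channel and case names only stand in for the ones mentioned above, and the buffer size is arbitrary. One loop is both the only producer of cancel requests into a bounded control channel and the only consumer of the responses those cancels generate, so a flood of cancellations can wedge it.

```go
package main

import "fmt"

const ctrlBufLen = 4 // stands in for the bounded control-stream buffer

func main() {
	closing := make(chan int)             // stands in for w.closingc
	ctrl := make(chan string, ctrlBufLen) // stands in for the control stream
	resp := make(chan string)             // stands in for w.respc

	// A large batch of substreams is cancelled at once (the killed client).
	go func() {
		for i := 0; i < 10000; i++ {
			closing <- i
		}
	}()

	// Peer: turns each cancel request into a cancel response. It can only hand
	// a response back while the run loop below is waiting in its select.
	go func() {
		for req := range ctrl {
			resp <- "canceled: " + req
		}
	}()

	// The "run" loop. It typically wedges within a moment: it blocks sending
	// into the full ctrl channel while the peer is blocked sending into resp,
	// which only this loop reads. The Go runtime then reports
	// "all goroutines are asleep - deadlock!".
	for {
		select {
		case id := <-closing:
			ctrl <- fmt.Sprintf("cancel watch %d", id)
		case r := <-resp:
			fmt.Println(r)
		}
	}
}
```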

Etcd version (please run commands below)

$ etcd --version
# 3.5.11
$ etcdctl version
# 3.5.11

Etcd configuration (command line flags or environment variables)

paste your configuration here

Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)

$ etcdctl member list -w table
# paste output here

$ etcdctl --endpoints=<member list> endpoint status -w table
# paste output here

Relevant log output

No response

@alterge1st
Author

When the size of the ctrlStream channel (ctrlStreamBufLen) is increased, more watch connections have to be created and disconnected to reproduce the blocking problem.

@ahrtr
Member

ahrtr commented Nov 12, 2024

We recently fixed a watch related goroutine leak issue, #18784

The fix will be included in 3.5.17, which is supposed to be released this week. Please try again with the new version once available.

@alterge1st
Author

> We recently fixed a watch related goroutine leak issue, #18784
>
> The fix will be included in 3.5.17, which is supposed to be released this week. Please try again with the new version once available.

Okay, thanks. We'll try the new version.

@alterge1st
Author

> We recently fixed a watch related goroutine leak issue, #18784
>
> The fix will be included in 3.5.17, which is supposed to be released this week. Please try again with the new version once available.

Unfortunately, I updated my local code to include the latest changes, but the problem persists.

@serathius
Member

serathius commented Nov 12, 2024

I'm not following whether the issue is etcd or Kubernetes related. The repro you provide discusses code for the Kubernetes API, while the following debugging is about etcd. I would like to clarify this, because the K8s apiserver demultiplexes watch connections to etcd, so the client-cancellation issue should not happen for K8s: 100 watches opened to the apiserver still open only 1 watch to etcd.

@alterge1st
Author

> I'm not following whether the issue is etcd or Kubernetes related. The repro you provide discusses code for the Kubernetes API, while the following debugging is about etcd. I would like to clarify this, because the K8s apiserver demultiplexes watch connections to etcd, so the client-cancellation issue should not happen for K8s: 100 watches opened to the apiserver still open only 1 watch to etcd.

Kube-apiserver is integrated with etcd: the in-process etcd server is used to invoke the APIs directly instead of going through etcd's service ports. The problem occurs with the K8s watch command. When another process is started that cyclically creates a large number of watch requests for the configmap, killing that process causes Kubernetes' watch on the configmap to stop working.
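
For context, the in-process client in such a setup is obtained roughly like this (a minimal sketch, assuming the embed and v3client packages; the data directory is a placeholder):

```go
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/server/v3/embed"
	"go.etcd.io/etcd/server/v3/etcdserver/api/v3client"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "/tmp/etcd-inprocess" // placeholder data directory

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
	case <-time.After(10 * time.Second):
		log.Fatal("etcd server took too long to start")
	}

	// v3client.New returns a *clientv3.Client wired directly to the embedded
	// server, bypassing the gRPC listener / service ports.
	cli := v3client.New(e.Server)
	defer cli.Close()

	wch := cli.Watch(context.Background(), "foo")
	_ = wch
}
```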

@serathius
Member

> The in-process etcd server is used to invoke the APIs directly instead of going through etcd's service ports.

This should not change the fact that the apiserver demultiplexes watches. Or have you disabled the watch cache?

@alterge1st
Author

> The in-process etcd server is used to invoke the APIs directly instead of going through etcd's service ports.
>
> This should not change the fact that the apiserver demultiplexes watches. Or have you disabled the watch cache?

Yes, we did disable the watch cache.

@serathius serathius added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 15, 2024
@alterge1st
Author

If ctrlStreamBufLen is set to hundreds, will etcd run properly?

@ahrtr
Member

ahrtr commented Nov 20, 2024

Could anyone create a test using etcd client SDK instead of k8s client-go to reproduce this?
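
A starting point for such a test might look like the sketch below, using only the etcd client SDK. It is an assumption about the shape of the repro, not a verified reproducer: the endpoint, key name, and watch count are placeholders, and in the original setup the client would instead come from v3client.New on the embedded server.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// openManyWatches opens n watches on the same key without ever cancelling
// them, mimicking the watches leaked by the to-be-killed client process.
func openManyWatches(cli *clientv3.Client, key string, n int) []context.CancelFunc {
	cancels := make([]context.CancelFunc, 0, n)
	for i := 0; i < n; i++ {
		ctx, cancel := context.WithCancel(context.Background())
		cli.Watch(ctx, key)
		cancels = append(cancels, cancel)
	}
	return cancels
}

func main() {
	// Placeholder endpoint; swap in a client from v3client.New(<embedded
	// server>) to match the in-process setup from the report.
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// 1. Accumulate a large number of watches on the same key.
	cancels := openManyWatches(cli, "/repro/key", 200)

	// 2. Cancel them all at once, standing in for the killed client process.
	for _, cancel := range cancels {
		cancel()
	}

	// 3. Open a fresh watch and check whether it still receives events.
	wch := cli.Watch(context.Background(), "/repro/key")
	if _, err := cli.Put(context.Background(), "/repro/key", "after-mass-cancel"); err != nil {
		panic(err)
	}
	select {
	case resp := <-wch:
		fmt.Printf("new watch still works: %d event(s)\n", len(resp.Events))
	case <-time.After(5 * time.Second):
		fmt.Println("new watch received nothing: possible reproduction of the hang")
	}
}
```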
