Graceful shutdown of a stream for a single subscription #1201
base: master
Conversation
I didn't look at the implementation yet, only docs and tests.
Still need more time to digest this.
Hmm, should we, instead of this:

```scala
Consumer.runWithGracefulShutdown(Consumer.partitionedStreamWithControl(Subscription.topics("topic150"), Serde.string, Serde.string)) {
  stream => ...
}
```

offer this:

```scala
Consumer.partitionedStreamWithGracefulShutdown(Subscription.topics("topic150"), Serde.string, Serde.string) {
  (stream, _) => stream.flatMapPar(...)
}
```

The second parameter would be the
If I understand it correctly, the proposal allows for more use cases; with it you can also call

Well, I mean compared to just the

If resume after

Well, in both proposals you can call

I don't think you want to do anything after stop, but it would give you more explicit control over when to stop, instead of when the scope ends. We probably need to decide if we want to add pause/resume in the future. If we do, we should add the
Hey :) Thanks for the great work! Here's some initial feedback: I'm not a big fan of the

To me, functions/methods returning it should return a Tuple. I.e. deprecate:

```scala
SubscriptionStreamControl[Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]]]
```

in favor of:

```scala
(Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]], SubscriptionStreamControl)
```

Made the change in a PR to show/study how, to me, it simplifies things: https://github.com/zio/zio-kafka/pull/1207/files
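To make the two shapes concrete, here is a hedged sketch with simplified, hypothetical signatures (`withWrapper`, `withTuple`, and the single-type-parameter `SubscriptionStreamControl` are illustrations only; the real types in this PR carry more parameters):

```scala
import zio._
import zio.stream._

// Hypothetical wrapper style: the control wraps the stream, so callers
// must go through an extra zio-kafka-specific type to get at the stream.
final case class SubscriptionStreamControl[S](stream: S, stop: UIO[Unit])

object Shapes {
  // Wrapper style: the stream is hidden inside the control.
  def withWrapper: UIO[SubscriptionStreamControl[UStream[String]]] = ???

  // Tuple style: a plain ZStream plus a separate stop handle; the stream
  // composes like any other ZStream, with no extra concept in the way.
  def withTuple: UIO[(UStream[String], UIO[Unit])] = ???
}
```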
Didn't finish my review yet. I still have some parts of the code to explore/understand, but I have to go. I'll finish it later 🙂

Thanks for the feedback, Jules. Agreed about the extra concept that would be unwanted. Check out my latest interface proposal where there is only a
Still reading the code...
I understand now that when graceful shutdown starts we're ending the subscribed streams. That should work nicely. Let's work out what will happen next to the runloop. The runloop would still be happily fetching records for that stream. When those are offered to the stream,

We can do slightly better though. We're fetching and storing all these records in the queue for nothing, even potentially causing an OOM for systems that are tuned for the case where processing happens almost immediately. My proposal is to:

If you want, I can extend this PR with that proposal (or create a separate PR).
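The `end`-based mechanism discussed above can be sketched roughly as follows. This is a hypothetical simplification (`PartitionStreamControlSketch`, its queue of `String` records, and the `isEnded` check are illustrations only; the real `PartitionStreamControl` in this PR differs):

```scala
import zio._
import zio.stream._

// Hypothetical sketch: a per-partition stream fed from a queue, with an
// `end` signal that the runloop could observe to skip further fetches.
final class PartitionStreamControlSketch(
  queue: Queue[Take[Throwable, String]],
  endedSignal: Promise[Nothing, Unit]
) {
  // Signal end-of-stream: consumers drain what is already queued, then stop.
  val end: UIO[Unit] = endedSignal.succeed(()) *> queue.offer(Take.end).unit

  // The runloop would check this before fetching more records, so that
  // records are not queued up for nothing after graceful shutdown started.
  val isEnded: UIO[Boolean] = endedSignal.isDone

  val stream: Stream[Throwable, String] = ZStream.fromQueue(queue).flattenTake
}
```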
@erikvanoosten If you have some time to implement those two things, by all means.

@svroonland Done in commit 1218204. Now I am wondering, how can we test this?
Change looks good. Totally forgot to implement this part. |
After consumer 1 is shut down (using `stopConsumption`), rebalances happen and partitions from consumer 2 are assigned. These streams are never started, so the finalizer completing `completedPromise` is never called. Waiting for these to complete takes 3 minutes (the default `maxRebalanceDuration`). In case streams were assigned but no record was ever put in their queues, there's no need to wait for the stream to complete.
Co-authored-by: Erik van Oosten <[email protected]>
See notes below. I feel that this is going in the right direction. Will look again when these notes are addressed. For now 🤯
Next time I'll try to trace the shutdown sequence a bit better.
Co-authored-by: Erik van Oosten <[email protected]>
This caused the "process outstanding commits after a graceful shutdown with aggregateAsync using `maxRebalanceDuration`" to fail.
With the stronger test I found some new issues with unexpected interruptions, which by a lucky shot I was able to fix with a

Results were verified with a manual run with

Ouch. That is not a fix, that is a kludge 😞 I really wonder what is going on here.

BTW, it would be nice if we can make more tests stronger with

Was able to create a minimized reproducer of the issue: zio/zio#9288
Implements functionality for gracefully stopping a stream for a single subscription: stop fetching records for the assigned topic-partitions but keep being subscribed, so that offsets can still be committed. Intended to replace `stopConsumption`, which did not support multiple-subscription use cases.

A new command `EndStreamsBySubscription` is introduced, which calls the `end` method on the `PartitionStreamControl` of streams matching a subscription. In the method `Consumer#runWithGracefulShutdown` we then wait for the user's stream to complete before removing the subscription.

Methods with this new functionality are offered alongside existing methods, to keep compatibility but also because this should be considered experimental.

All the fiber and scope trickery proved to be very hard to get right (the lifetime of this PR is a testimony to that), and there may still be subtle issues here. This is now traced back to issue zio/zio#9288.

Implements some of #941.

We should deprecate `stopConsumption` before releasing.
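A hedged usage sketch, based on the method names proposed in this thread (`runWithGracefulShutdown`, `partitionedStreamWithControl`); the exact signatures may differ from the merged API:

```scala
import zio._
import zio.kafka.consumer._
import zio.kafka.serde.Serde

// Run a partitioned stream such that, when the surrounding scope closes,
// record fetching stops but the consumer stays subscribed until the
// in-flight records are processed and their offsets committed.
val app: RIO[Consumer, Unit] =
  Consumer.runWithGracefulShutdown(
    Consumer.partitionedStreamWithControl(
      Subscription.topics("topic150"), // example topic name from this thread
      Serde.string,
      Serde.string
    )
  ) { stream =>
    stream
      .flatMapPar(Int.MaxValue) { case (_, partitionStream) =>
        partitionStream.mapZIO(record => ZIO.logInfo(record.value).as(record.offset))
      }
      .aggregateAsync(Consumer.offsetBatches)
      .mapZIO(_.commit)
      .runDrain
  }
```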