
Graceful shutdown of a stream for a single subscription #1201

Open · wants to merge 87 commits into master
Conversation

@svroonland svroonland (Collaborator) commented Mar 24, 2024

Implements functionality for gracefully stopping the stream for a single subscription: stop fetching records for the assigned topic-partitions, but stay subscribed so that offsets can still be committed. This is intended to replace stopConsumption, which does not support multiple-subscription use cases.

A new command, EndStreamsBySubscription, is introduced, which calls the end method on the PartitionStreamControl of the streams matching a subscription. In Consumer#runWithGracefulShutdown we then wait for the user's stream to complete before removing the subscription.
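
For illustration, a minimal usage sketch of this mechanism (the topic name, serdes, and processRecord are placeholders, and the exact signatures may differ from the final API):

// Sketch: consume until graceful shutdown is requested, then let the stream
// finish and commit the remaining offsets before the subscription is removed.
Consumer.runWithGracefulShutdown(
  Consumer.partitionedStreamWithControl(Subscription.topics("my-topic"), Serde.string, Serde.string)
) { stream =>
  stream
    .flatMapPar(Int.MaxValue) { case (_, partitionStream) =>
      partitionStream.mapZIO(record => processRecord(record).as(record.offset))
    }
    .aggregateAsync(Consumer.offsetBatches)
    .mapZIO(_.commit)
    .runDrain
}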

Methods with this new functionality are offered alongside the existing methods, both to keep compatibility and because this should be considered experimental. All the fiber and scope trickery proved very hard to get right (the lifetime of this PR is testimony to that), and there may still be subtle issues here. One such issue has been traced back to zio/zio#9288.

Implements some of #941.

We should deprecate stopConsumption before releasing.

@svroonland svroonland changed the title Subscription stream control Graceful shutdown of a single subscription Mar 30, 2024
@svroonland svroonland marked this pull request as ready for review March 30, 2024 11:07
@erikvanoosten erikvanoosten (Collaborator) left a comment

I didn't look at the implementation yet, only docs and tests.

@erikvanoosten erikvanoosten (Collaborator) left a comment

Still need more time to digest this.

@svroonland svroonland (Collaborator, Author) commented Apr 3, 2024

Hmm, instead of this:

Consumer.runWithGracefulShutdown(Consumer.partitionedStreamWithControl(Subscription.topics("topic150"), Serde.string, Serde.string)) { 
  stream => ... 
}

should we offer this:

Consumer.partitionedStreamWithGracefulShutdown(Subscription.topics("topic150"), Serde.string, Serde.string) {
  (stream, _) => stream.flatMapPar(...) 
}

The second parameter would be the SubscriptionStreamControl, on which you could always manually call stop. Or would that prevent certain use cases? 🤔

@erikvanoosten erikvanoosten (Collaborator)

> Hmm, instead of this:

If I understand it correctly, the proposal allows for more use cases; with it you can also call stop for any condition you want. Is it true that after stopping, you can start consuming again?

@svroonland svroonland (Collaborator, Author)

Well, I mean compared to just the partitionedStreamWithControl method. In both cases you would need to do something with the stream that ultimately reduces to a ZIO of Any, so I don't think partitionedStreamWithGracefulShutdown is limiting in that regard.

stop currently doesn't support that, since the stream would then be finished. We could probably build pause and resume like in #941.

@erikvanoosten erikvanoosten (Collaborator)

If resume after stop is not supported (and never will be), then I like the first proposal better where you don't need to call stop. What would you do after calling stop?

@svroonland svroonland (Collaborator, Author)

Well, in both proposals you can call stop.

I don't think you want to do anything after stop, but it would give you more explicit control over when to stop, instead of stopping when the scope ends.

We probably need to decide whether we want to add pause/resume in the future. If we do, we should add the control parameter, as in the partitionedStreamWithGracefulShutdown example, for future compatibility. If we don't, we can drop it altogether and keep SubscriptionStreamControl a purely internal concept (if we keep it at all).
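
For the sake of this discussion, a sketch of what such a control could look like (the trait shape is assumed; pause/resume are the hypothetical future extensions mentioned above, not part of this PR):

trait SubscriptionStreamControl {
  // End the stream gracefully: stop fetching records for the subscription's
  // partitions, but stay subscribed so that offsets can still be committed.
  def stop: UIO[Unit]

  // Hypothetical future extensions (see #941); not implemented here:
  // def pause: UIO[Unit]
  // def resume: UIO[Unit]
}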

@guizmaii guizmaii (Member) commented Apr 5, 2024

Hey :)

Thanks for the great work!

Here's some initial feedback:

I'm not a big fan of the SubscriptionStreamControl implementation. To me, functions/methods returning it should return a tuple (stream, control):

  1. It avoids adding one more concept for our users to understand and learn (Kafka already has a lot of concepts).
  2. It simplifies the interface of the control type; the current one, with its [S <: ZStream[_, _, _]] type parameter, is complex.
  3. It simplifies the return type of our functions/methods, avoiding this kind of type:

SubscriptionStreamControl[Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]]]

in favor of:

(Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]], SubscriptionStreamControl)

I made the change in a PR to show how, in my view, it simplifies things: https://github.com/zio/zio-kafka/pull/1207/files
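
For comparison, a sketch of the tuple-returning shape proposed above (the effect type and parameter names are assumptions, not the actual signature in either PR):

// Hypothetical signature: the control is returned next to the stream instead
// of wrapping it, so no extra wrapper type appears in the return type.
def partitionedStreamWithControl[R, K, V](
  subscription: Subscription,
  keyDeserializer: Deserializer[R, K],
  valueDeserializer: Deserializer[R, V]
): ZIO[Scope, Throwable, (Stream[Throwable, Chunk[(TopicPartition, ZStream[R, Throwable, CommittableRecord[K, V]])]], SubscriptionStreamControl)]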

@guizmaii guizmaii (Member) commented Apr 5, 2024

I didn't finish my review yet. I still have some parts of the code to explore and understand, but I have to go. I'll finish it later 🙂

@svroonland svroonland (Collaborator, Author)

Thanks for the feedback, Jules. Agreed that the extra concept would be unwanted. Check out my latest interface proposal, where there is only a plainStreamWithGracefulShutdown method and SubscriptionStreamControl remains hidden.

@erikvanoosten erikvanoosten (Collaborator) left a comment

Still reading the code...

@erikvanoosten erikvanoosten (Collaborator) commented Apr 7, 2024

I understand now that when graceful shutdown starts, we end the subscribed streams. That should work nicely. Let's work out what happens next in the runloop. The runloop would still be happily fetching records for that stream. When those are offered to the stream, PartitionStreamControl.offerRecords will probably append those records to the queue (even though it now also contains an 'end' token). Because of the 'end' token that is already in that queue, these new records will never be taken out. Back pressure will kick in (depending on the fetch strategy) and the partitions will be paused. Once we're unsubscribed, 15 seconds later, the queue will be garbage collected. So far so good.

We can do slightly better, though. We're fetching and storing all these records in the queue for nothing, potentially even causing an OOM on systems that are tuned for the case where processing happens almost immediately.

My proposal is to:

  1. stop accepting more records in PartitionStreamControl.offerRecords when the queue was ended
  2. in Runloop.handlePoll only pass running streams to fetchStrategy.selectPartitionsToFetch so that partitions for ended streams are immediately paused

If you want, I can extend this PR with that proposal (or create a separate PR).
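
A sketch of what step 1 above could look like (assuming PartitionStreamControl exposes an isEnded check and a queue of Take values; the actual internals may differ):

// Sketch only: drop newly fetched records once the stream has been ended,
// so the queue does not keep growing behind the 'end' token.
def offerRecords(records: Chunk[ByteArrayCommittableRecord]): UIO[Unit] =
  ZIO.unlessZIO(isEnded)(queue.offer(Take.chunk(records))).unit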

@svroonland svroonland (Collaborator, Author)

@erikvanoosten If you have some time to implement those two things, by all means.

@erikvanoosten erikvanoosten (Collaborator) commented Apr 13, 2024

> @erikvanoosten If you have some time to implement those two things, by all means.

@svroonland Done in commit 1218204.

Now I am wondering: how can we test this?

@svroonland svroonland (Collaborator, Author) commented Apr 14, 2024

The change looks good. I totally forgot to implement this part.

@svroonland svroonland changed the title Graceful shutdown of a single subscription Graceful shutdown of a stream for a single subscription Nov 2, 2024
svroonland and others added 8 commits November 2, 2024 14:46
After consumer 1 is shut down (using stopConsumption), rebalances happen and partitions from consumer 2 are assigned. These streams are never started, so the finalizer completing completedPromise is never called. Waiting for these to complete takes 3 minutes (the default maxRebalanceDuration).

When streams were assigned but no record was ever put in their queues, there is no need to wait for the stream to complete.
@svroonland svroonland marked this pull request as ready for review November 2, 2024 19:20
@erikvanoosten erikvanoosten (Collaborator) left a comment

See notes below. I feel that this is going in the right direction. Will look again when these notes are addressed. For now 🤯
Next time I'll try to trace the shutdown sequence a bit better.

@svroonland svroonland (Collaborator, Author) commented Nov 9, 2024

With the stronger test I found some new issues with unexpected interruptions, which, by a lucky shot, I was able to fix with a forkDaemon.flatMap(_.join). That also allowed simplifying some other scope and fork logic.

The results were verified with a manual run with nonFlaky(100); previously it failed within just 10 repetitions.
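
Schematically, the workaround runs the effect on a daemon fiber and then joins it (someEffect stands for whichever effect was being interrupted unexpectedly):

// The daemon fiber is not attached to the enclosing scope, which shields
// someEffect from the scope's interruption; joining propagates its result.
someEffect.forkDaemon.flatMap(_.join)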

@erikvanoosten erikvanoosten (Collaborator)

> which by a lucky shot I was able to fix with a forkDaemon.flatMap(_.join).

Ouch. That is not a fix, that is a kludge 😞 I really wonder what is going on here.

BTW, it would be nice if we could (eventually) make more tests stronger with scheduleProduce.

@svroonland svroonland (Collaborator, Author)

I was able to create a minimized reproducer of the issue: zio/zio#9288
