Replies: 1 comment
The demand management operators we have are indeed. Feel free to open a PR if you see an opportunity for a general-purpose improvement.
We are trying to limit the rate of consumption from the message queue based on the state of the system, i.e. if the database slows down, we want to slow down message processing.
In Mutiny we modelled an abstraction over the message queue using `Multi`.
We’ve tried using `paceDemand` and discovered that its behaviour is different from what we expected: it sends requests upstream while disregarding the demand from the downstream.
After many attempts at writing a custom pacer, we found an alternative solution:
we created another wrapping publisher that hijacks the subscriber and rewrites the request values there based on our pacing strategy.
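Not from the thread itself, but here is a minimal sketch of that wrapping-publisher idea using the JDK's `java.util.concurrent.Flow` types instead of Mutiny, so it stays self-contained. The names (`PacedPublisher`, the batch strategy, `range`) are illustrative, not Mutiny API. The key point is that the wrapper intercepts `Subscription.request(n)`, requests only a strategy-chosen batch upstream, and tops the demand up again after each delivery, so the stream cannot stall the way a one-shot cap does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongUnaryOperator;

public class PacedDemo {
    /** Wraps a publisher and rewrites downstream request(n) values via a strategy. */
    static final class PacedPublisher<T> implements Flow.Publisher<T> {
        final Flow.Publisher<T> upstream;
        final LongUnaryOperator strategy; // pending demand -> batch to request upstream now

        PacedPublisher(Flow.Publisher<T> upstream, LongUnaryOperator strategy) {
            this.upstream = upstream;
            this.strategy = strategy;
        }

        @Override public void subscribe(Flow.Subscriber<? super T> downstream) {
            upstream.subscribe(new Flow.Subscriber<T>() {
                final AtomicLong pending = new AtomicLong(); // demand seen from downstream
                Flow.Subscription sub;

                @Override public void onSubscribe(Flow.Subscription s) {
                    sub = s;
                    downstream.onSubscribe(new Flow.Subscription() {
                        @Override public void request(long n) { pending.addAndGet(n); pump(); }
                        @Override public void cancel() { s.cancel(); }
                    });
                }

                // Forward only a strategy-approved batch; called again after each item,
                // which is what prevents the "request once, then hang" failure mode.
                void pump() {
                    long batch = strategy.applyAsLong(pending.get());
                    if (batch > 0) { pending.addAndGet(-batch); sub.request(batch); }
                }

                @Override public void onNext(T item) { downstream.onNext(item); pump(); }
                @Override public void onError(Throwable t) { downstream.onError(t); }
                @Override public void onComplete() { downstream.onComplete(); }
            });
        }
    }

    /** Trivial synchronous source for the demo: emits 1..end honoring demand. */
    static Flow.Publisher<Integer> range(int end) {
        return downstream -> downstream.onSubscribe(new Flow.Subscription() {
            int next = 1; long demand; boolean emitting, done;
            @Override public void request(long n) {
                demand += n;
                if (emitting) return; // reentrant request during emission: just add demand
                emitting = true;
                while (demand > 0 && next <= end) { demand--; downstream.onNext(next++); }
                if (next > end && !done) { done = true; downstream.onComplete(); }
                emitting = false;
            }
            @Override public void cancel() { done = true; next = end + 1; }
        });
    }

    public static List<Integer> collectPaced() {
        List<Integer> seen = new ArrayList<>();
        // Downstream asks for everything; the wrapper doles it out in batches of at most 2.
        new PacedPublisher<Integer>(range(5), pending -> Math.min(pending, 2))
            .subscribe(new Flow.Subscriber<Integer>() {
                @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
                @Override public void onNext(Integer i) { seen.add(i); }
                @Override public void onError(Throwable t) { throw new RuntimeException(t); }
                @Override public void onComplete() { }
            });
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(collectPaced()); // all five items arrive despite the per-request cap
    }
}
```

This ignores thread-safety and demand-overflow concerns that a production operator would need to handle.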
We’ve also played with `capDemand`, but it doesn’t seem to work well in this scenario: `messages.capDemand(strategy)…collect().last().subscribeAsCompletionStage()`. It just caps the total number of messages to the first value and hangs.
Now the questions:
```java
class Limit {
    long limitTo;   // how much demand to pass through now; it can be 0
    Uni<?> canRetry; // the request for the rest of the pending demand will be retried when this Uni completes
}
```
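For what it’s worth, here is one way that `Limit` idea could drive requests, again sketched over the JDK `Flow` API with `CompletableFuture<Void>` standing in for `Uni<?>`. Everything besides the `Limit` shape itself (`LimitedRequester`, the strategy interface) is illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;
import java.util.function.LongFunction;

public class LimitDemo {
    /** The Limit class from the question, with CompletableFuture standing in for Uni. */
    static final class Limit {
        final long limitTo;                      // demand to pass through now; may be 0
        final CompletableFuture<Void> canRetry;  // when complete, retry the rest of the pending demand
        Limit(long limitTo, CompletableFuture<Void> canRetry) {
            this.limitTo = limitTo;
            this.canRetry = canRetry;
        }
    }

    /** Splits downstream demand into strategy-approved batches; pauses when the strategy says 0. */
    static final class LimitedRequester {
        final Flow.Subscription upstream;
        final LongFunction<Limit> strategy; // pending demand -> how much may flow now
        long pending; // NOTE: single-threaded sketch, no synchronization

        LimitedRequester(Flow.Subscription upstream, LongFunction<Limit> strategy) {
            this.upstream = upstream;
            this.strategy = strategy;
        }

        void request(long n) {
            pending += n;
            drain();
        }

        private void drain() {
            while (pending > 0) {
                Limit limit = strategy.apply(pending);
                if (limit.limitTo > 0) {
                    long batch = Math.min(limit.limitTo, pending);
                    pending -= batch;
                    upstream.request(batch);
                } else {
                    limit.canRetry.thenRun(this::drain); // resume once the system recovers
                    return;
                }
            }
        }
    }

    public static List<Long> demo() {
        List<Long> batches = new ArrayList<>();
        Flow.Subscription recording = new Flow.Subscription() {
            @Override public void request(long n) { batches.add(n); }
            @Override public void cancel() { }
        };
        // First answer "not now" (but allow an immediate retry), then allow batches of 2.
        LongFunction<Limit> strategy = new LongFunction<Limit>() {
            boolean first = true;
            @Override public Limit apply(long pending) {
                if (first) { first = false; return new Limit(0, CompletableFuture.completedFuture(null)); }
                return new Limit(2, null);
            }
        };
        new LimitedRequester(recording, strategy).request(5);
        return batches; // the 5 pending items are requested in batches of at most 2
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

One wrinkle worth noting: if `canRetry` is already complete, `thenRun` fires synchronously, so a strategy that keeps answering 0 with a completed future would recurse; a real implementation would need to guard against that.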
We would really appreciate your thoughts!