Grouping message in the same queue #43
How many group ids do you expect? There's an easy way to solve this if you have a handful of group ids: use the group id as the priority. Use equal priorities and concurrency=1.
You can also define the groupIds in the application configuration.
@sonus21 Thanks for your reply. The group ids are dynamic, not a predefined static list. Imagine you are building an online ordering service and want to group messages by order id, so that messages for the same order id are consumed in FIFO order while messages across different order ids can be consumed concurrently. So if message 1 = {order_id: 1, action="create_order"} and message 2 = {order_id: 1, action="reserve_credit"}, then message 1 is always consumed before message 2. If, however, message 1 = {order_id: 1, action="create_order"} and message 2 = {order_id: 2, action="reserve_credit"}, there is no required order between them, so they can safely be executed concurrently. That way you get ordering guarantees when needed while maintaining good throughput through concurrent consumption whenever possible.
@khashish Thanks for the detail. This feature is quite complex to support, but you can use a sharding/partitioning concept to deal with it. For example, Kafka creates partitions; we can create partitions in a similar fashion, but we need application code to handle the partitioning. You can use any hashing strategy for this; a simple one could be hash(key) mod number-of-partitions.
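A minimal sketch of that hashing idea: derive a stable partition number from the business key (here an order id) and use it as the group id. The class name, `group-` prefix, and the partition count of 127 are illustrative choices, not part of the library.

```java
public class GroupIdHasher {
    // Illustrative partition count; the thread suggests a prime such as 127.
    public static final int NUM_PARTITIONS = 127;

    // Map an arbitrary key to a partition in [0, NUM_PARTITIONS).
    public static int partitionFor(String key) {
        // Math.floorMod guards against negative hashCode values.
        return Math.floorMod(key.hashCode(), NUM_PARTITIONS);
    }

    // The derived group id; messages sharing it are consumed in FIFO order.
    public static String groupIdFor(String orderId) {
        return "group-" + partitionFor(orderId);
    }

    public static void main(String[] args) {
        // The same order id always maps to the same group.
        System.out.println(groupIdFor("order-1"));
        System.out.println(groupIdFor("order-1"));
    }
}
```

Because the mapping is deterministic, every message for a given order id lands in the same group, which is what makes per-order FIFO possible.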
Once you have a group id, you can enqueue the item in the corresponding group of a queue. I would suggest setting the concurrency value to less than or equal to the number of partitions. What about increasing/decreasing the number of partitions? There's no rebalancing mechanism like Kafka's, so we need to deal with that ourselves. One simple strategy: set the number of partitions to a large prime like 127/257/509, but use a smaller concurrency like 10, so you have 10 workers; if you find you need more workers, you can increase the concurrency later. Changing concurrency is much easier than changing the number of partitions. What about the retry mechanism?
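The reason changing concurrency is easier than changing the partition count can be shown directly: the key-to-partition mapping depends on the partition count, so resizing it can silently move keys to different groups (and there is no Kafka-style rebalancing to migrate in-flight messages), whereas changing the worker concurrency leaves the mapping untouched. A small illustration, with made-up numbers:

```java
public class ResizeDemo {
    public static int partition(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        String key = "order-42";
        // Resizing from 127 to 131 partitions can map the same key to a
        // different group, breaking that key's FIFO history mid-stream.
        System.out.println(partition(key, 127));
        System.out.println(partition(key, 131));
        // By contrast, raising concurrency (say 10 -> 20 workers) changes
        // nothing here: the mapping stays fixed, so ordering is preserved.
    }
}
```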
There's one more issue left to solve: with competing listeners/consumers, out-of-order execution can happen. To avoid this, each listener should own only a subset of the group ids (aka priorities), and we should not run competing listeners on the same groups. For example, say I have 10 workers and 127 group ids; each worker should process only a subset of the groups. If I want to run 5 consumers, and we need only 10 workers in total, each consumer should have a concurrency of 2 (number of workers / number of consumers). Given the 127 group ids, we should distribute them across all consumers, e.g. [0,25) => listener on machine 1.
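The range assignment sketched above can be worked out as contiguous [start, end) slices so that no two consumers ever compete for the same group. Note that 127 does not divide evenly by 5, so an even split gives the first two consumers one extra group (the [0,25) in the comment is approximate). A sketch, with illustrative names:

```java
public class RangeAssigner {
    // Returns {start, end} (end exclusive) of the groups owned by
    // consumerIndex when numGroups are split across numConsumers.
    public static int[] rangeFor(int consumerIndex, int numConsumers, int numGroups) {
        int base = numGroups / numConsumers;       // 127 / 5 = 25
        int remainder = numGroups % numConsumers;  // 127 % 5 = 2
        // The first `remainder` consumers each take one extra group.
        int start = consumerIndex * base + Math.min(consumerIndex, remainder);
        int size = base + (consumerIndex < remainder ? 1 : 0);
        return new int[] {start, start + size};
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            int[] r = rangeFor(i, 5, 127);
            System.out.println("Consumer " + i + " owns groups [" + r[0] + ", " + r[1] + ")");
        }
    }
}
```

Each listener would then subscribe only to the group ids inside its own range, which removes the competing-consumer problem entirely.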
Is there any plan to support this?
@febinct would the workaround above solve your use case?
Is your feature request related to a problem? Please describe.
Messages in the queue don't have the option of carrying a group id to mark them for FIFO execution.
Describe the solution you'd like.
When enqueuing a message, I should be able to provide a message group id so that all messages in that queue with the same group id execute in sequence, while still allowing concurrency across different group ids.
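To make the requested semantics concrete, here is a purely hypothetical in-memory model of them; none of these names exist in the library. Each group gets its own FIFO queue, so order is kept within a group while different groups can be drained independently (and therefore concurrently):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy model only: per-group FIFO queues keyed by the message group id.
public class GroupedQueue {
    private final Map<String, Queue<String>> groups = new HashMap<>();

    // Enqueue a message under its group id (e.g. an order id).
    public void enqueue(String groupId, String message) {
        groups.computeIfAbsent(groupId, g -> new ArrayDeque<>()).add(message);
    }

    // Poll the head of one group; FIFO holds per group because each
    // group has its own queue, and groups never block each other.
    public String poll(String groupId) {
        Queue<String> q = groups.get(groupId);
        return q == null ? null : q.poll();
    }

    public static void main(String[] args) {
        GroupedQueue q = new GroupedQueue();
        q.enqueue("order-1", "create_order");
        q.enqueue("order-1", "reserve_credit");
        q.enqueue("order-2", "create_order");
        System.out.println(q.poll("order-1")); // prints "create_order"
        System.out.println(q.poll("order-2")); // independent of order-1
    }
}
```

A real implementation would of course need persistence, visibility timeouts, and retries; this only illustrates the ordering contract being asked for.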
Describe alternatives you've considered
Using SQS FIFO, since it allows concurrent execution across different group ids while respecting sequential execution for messages with the same group id.