
Default subject routing #346

Open
MikeHawkesCapventis opened this issue Aug 9, 2024 · 2 comments
Labels
proposal Enhancement idea or proposal

Comments

@MikeHawkesCapventis

Proposed change

I'd like to have a default handler instance of a function/service, with the ability to deploy specialist versions for specific customers, using something like:

fn.*.doSomething

and

fn.{clientID}.doSomething

Or some similar subject construct.

When first created, fn.*.doSomething picks up all messages. I then deploy a variation of the default handler for a specific client, listening on subject fn.client-1.doSomething, and it handles that client's messages. In the meantime, all other clients carry on using the default handler (so fn.client-2.doSomething and fn.client-3.doSomething still get routed to the generic fn.*.doSomething consumer).

Use case

We can deploy a standard handler for use by most clients. However, if some clients want to have additional processing applied, we can route to a specific handler for that client, without affecting anything else.

Contribution

No response

@MikeHawkesCapventis MikeHawkesCapventis added the proposal Enhancement idea or proposal label Aug 9, 2024
@autodidaddict
Contributor

This is an interesting use case, and I've wanted to build something similar a few times: the more I deploy, the more the specific handlers get invoked and the less the "catch-all" handler does.

You can sort of do this now. If you deploy a function with a trigger subject of fn.*.doSomething, it will always be triggered. Then if you deploy a function triggered by fn.client-1.doSomething, both the specific client-bound function and the generic catch-all will be triggered.

If I don't want both to trigger, and it's an either-or situation, what I've done in the past is ensure there are no overlaps between the specific triggers and the catch-all. In this case, the catch-all would be triggered by fn.default.doSomething and each client-specific handler by fn.client-x.doSomething. With no overlapping wildcard, you won't get the double trigger.
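The double-trigger versus non-overlapping schemes can be checked with a simplified matcher (a Python sketch, not the actual runtime; '*' matching exactly one token mirrors NATS subject semantics, and the handler lists are hypothetical):

```python
def matches(pattern: str, subject: str) -> bool:
    """NATS-style token match: '*' matches exactly one token."""
    p, s = pattern.split("."), subject.split(".")
    return len(p) == len(s) and all(pt in ("*", st) for pt, st in zip(p, s))

def triggered(subject: str, patterns: list[str]) -> list[str]:
    """All trigger subjects that fire for a given message subject."""
    return [p for p in patterns if matches(p, subject)]

overlapping = ["fn.*.doSomething", "fn.client-1.doSomething"]
disjoint = ["fn.default.doSomething", "fn.client-1.doSomething"]

print(triggered("fn.client-1.doSomething", overlapping))  # both fire
print(triggered("fn.client-1.doSomething", disjoint))     # only the specific one
print(triggered("fn.client-2.doSomething", disjoint))     # nothing fires
```

The cost of the disjoint scheme is that callers (or something in front of them) must know to publish to fn.default.doSomething for clients without a specific handler.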

If you're okay with both the default processing and the additional processing taking place concurrently, then you can absolutely do this today with no changes.

However, there may be a bigger question here. Do we want a 1:1 correlation between clients and the function? This type of pattern could end up eating up a lot of compute time because we'll run more and more workloads the more clients we get, and we'll have to worry about idle functions for clients that aren't getting many calls.

I don't know if it works for your use case, but a pattern I've used to great effect before is to deploy a single function triggered by fn.*.doSomething; the function extracts the client ID from the subject, uses a key-value store (made available via host services) to fetch the client-specific data, and responds accordingly.
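That single-function pattern might look roughly like this (Python sketch; a plain dict stands in for the host-services KV store, and the per-client rate config is invented for illustration):

```python
# Stand-in for the host-services key-value store.
client_config = {"client-1": {"rate": 2}}
DEFAULT_CONFIG = {"rate": 1}

def do_something(subject: str, payload: dict) -> dict:
    """Single handler on fn.*.doSomething: branch on per-client KV config."""
    _, client_id, _ = subject.split(".")
    cfg = client_config.get(client_id, DEFAULT_CONFIG)
    return {"client": client_id, "total": payload["amount"] * cfg["rate"]}

print(do_something("fn.client-1.doSomething", {"amount": 100}))  # uses client-1 config
print(do_something("fn.client-2.doSomething", {"amount": 100}))  # falls back to defaults
```

One deployed function serves every client, so compute doesn't grow with the client count; only the KV data does.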

@MikeHawkesCapventis
Author

Thanks - I'm considering writing a wrapper function that uses a KV/database mechanism for this: look up the subject and, if there's a tenant-specific handler, use an alternate subject; otherwise allow the original subject through. I'll do this before publishing to NATS.
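A minimal sketch of that wrapper, assuming a set lookup stands in for the KV check and that subjects follow the fn.<tenant>.<function> shape:

```python
# Stand-in KV: tenants that have a specialised handler deployed.
tenant_handlers = {"client-1"}

def outbound_subject(subject: str) -> str:
    """Rewrite before publishing to NATS: keep the tenant's subject when a
    specialised handler exists, otherwise fall through to the default."""
    prefix, tenant, fn = subject.split(".")
    if tenant in tenant_handlers:
        return subject
    return f"{prefix}.default.{fn}"

print(outbound_subject("fn.client-1.doSomething"))  # fn.client-1.doSomething
print(outbound_subject("fn.client-2.doSomething"))  # fn.default.doSomething
```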

On the broader question of multiple clients - if we did something with the KV TTL, we could (in theory) cache the clients ... when evicted from the cache, the client closes. This would incur a restart hit when another message comes through, but for rarely used functions that would probably be OK, because we can keep higher-priority hot starts alive. The functions used most often would remain in the cache. This would also provide the basis for auto-scaling commonly used functions.
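The TTL-based cache idea could be sketched like this (Python, with a tiny TTL for demonstration; the eviction policy and cold-start signal are assumptions, not an existing API):

```python
import time

class ClientCache:
    """Track recently active clients; evict those idle longer than ttl.
    Frequently called clients stay hot; rarely used ones pay a restart hit."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._last_seen: dict[str, float] = {}

    def touch(self, client_id: str) -> bool:
        """Record a call; return True if this is a cold start."""
        now = time.monotonic()
        self._evict(now)
        cold = client_id not in self._last_seen
        self._last_seen[client_id] = now
        return cold

    def _evict(self, now: float) -> None:
        for cid, seen in list(self._last_seen.items()):
            if now - seen > self.ttl:
                del self._last_seen[cid]

cache = ClientCache(ttl=0.05)
print(cache.touch("client-1"))  # True: cold start
print(cache.touch("client-1"))  # False: still hot
time.sleep(0.06)
print(cache.touch("client-1"))  # True: evicted after the TTL, restart hit
```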
