add window manager within middleware #437

Open · wants to merge 4 commits into base: develop
Conversation

@overnin (Contributor) commented Dec 6, 2012

I tried to embed the window manager in a middleware in order to slow down my SMPP transport when the aggregator doesn't use SMPP's throttling mechanism. It's a first try, some questions are still open, and I would need some direction on the two following points:

1. How do we cancel the rest of the middleware stack and the worker execution? Two options were identified: returning None, or raising an exception such as "StopPropagation". Returning None is not very explicit, which can lead to mistakes when deploying a pile of middlewares, and there is already the rule in the code that 'Return value should never be None'. An exception like "StopPropagation" is more explicit, but I don't grasp all the implications it might have (a rough sketch of this option follows after these two questions).

2. How do we resume the middleware stack execution when a held message is due to be sent? In my code I call the worker directly to send the message, but then I'm bypassing all of the middlewares that follow it in the worker's stack.
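For illustration, a minimal hedged sketch of the exception option: a hypothetical WindowManagerMiddleware that raises a StopPropagation flag when its window is full. It assumes BaseMiddleware is importable from vumi.middleware, stores its config dict as self.config, and that outbound handlers have a (message, endpoint) signature, as in the vumi code of the time; the counter logic is only a stand-in for the real window manager.

from vumi.middleware import BaseMiddleware


class StopPropagation(Exception):
    """Raised by a middleware to cancel the rest of the stack and the worker."""


class WindowManagerMiddleware(BaseMiddleware):

    def setup_middleware(self):
        # Crude stand-in for the window manager: cap in-flight messages.
        self.in_flight = 0
        self.max_in_flight = int(self.config.get('window_size', 10))

    def handle_outbound(self, message, endpoint):
        if self.in_flight >= self.max_in_flight:
            # In the real middleware the message would be queued for later
            # delivery; here we only stop it from reaching the transport.
            raise StopPropagation()
        self.in_flight += 1
        return message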

sent_message_id=sent_message_id,
transport_name=transport_name,
transport_metadata=transport_metadata,
)
Review comment from a Contributor:

You can inherit from vumi.tests.utils.VumiWorkerTestCase and avoid having to implement mkmsg_* these days (we should probably clean up the remaining instances elsewhere in the Vumi tests).
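As a hedged sketch of that suggestion (the test case name is hypothetical, and the mkmsg_* helper names and signatures are assumed from the vumi test utilities of the time and may have changed since):

from vumi.tests.utils import VumiWorkerTestCase


class WindowManagerMiddlewareTestCase(VumiWorkerTestCase):

    def test_outbound_message(self):
        # mkmsg_out() comes from the base class, so the test does not need
        # its own message-building helpers.
        msg = self.mkmsg_out(content="hello")
        self.assertEqual(msg['content'], "hello")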

@overnin (Contributor, Author) commented Dec 10, 2012

@hodgestar: following your idea on MiddlewareControlFlag, I've just implemented it for the Transport outbound path. If that's OK, I'll finish the work and add the Dispatcher + Workers.

PS: sorry, I forgot to make more incremental commits to avoid mixing the refactoring and the new features.

@@ -220,6 +220,7 @@ def _send_failure(f):
d = self._middlewares.apply_consume("outbound", message,
self.transport_name)
d.addCallback(self.handle_outbound_message)
d.addErrback(self._middlewares.control_flag)
Review comment from a Contributor:
I think this needs to be pushed down into _handle, because at this level it's really hard to tell which middleware the MiddlewareControlFlag was raised from.
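For context, a hedged sketch of what a control_flag errback like the one in the diff above might do, wherever it ends up living: trap only control-flow failures and let genuine errors keep propagating. The flag class names follow this thread's discussion, not necessarily the final patch.

class MiddlewareControlFlag(Exception):
    """Signal raised by a middleware to alter control flow."""


class StopPropagation(MiddlewareControlFlag):
    """The message should not continue through the stack or the worker."""


class MiddlewareStackSketch(object):
    # Illustrative stand-in for the control-flag handling on the stack.

    def control_flag(self, failure):
        # Re-raise anything that is not a control-flow flag.
        failure.trap(MiddlewareControlFlag)
        if failure.check(StopPropagation):
            # The message was deliberately held back (e.g. queued by the
            # window manager), so the failure is swallowed here.
            return None
        return failure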

@ghost assigned overnin on Dec 10, 2012
@hodgestar (Contributor):
@overnin Looking good! Left a couple of comments about pushing the control flow handling down into the handle method (mostly for later use cases like replying).

@overnin (Contributor, Author) commented Dec 10, 2012

@hodgestar: there is now a first pass at handling the control flag in the middleware loop of the stack. I still need to keep a second handling at the transport level in order to maintain the return-None failure. I also added a first version of resume_handling.
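To make the loop handling concrete, here is a minimal sketch (not the actual vumi code) of checking for the control flag inside the stack's handler loop, so the stack knows exactly which middleware raised it; handler signatures are assumed to be (message, endpoint).

from twisted.internet.defer import inlineCallbacks, maybeDeferred, returnValue


class MiddlewareControlFlag(Exception):
    """Signal raised by a middleware to alter control flow."""


@inlineCallbacks
def process_chain(middlewares, handler_name, message, endpoint):
    # Apply each middleware's handler in turn, stopping at the first one
    # that raises a control flag; the caller gets back both the flag and
    # the middleware that raised it, so it can resume from there later.
    for middleware in middlewares:
        handler = getattr(middleware, handler_name)
        try:
            message = yield maybeDeferred(handler, message, endpoint)
        except MiddlewareControlFlag as flag:
            returnValue((None, middleware, flag))
    returnValue((message, None, None))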

@overnin (Contributor, Author) commented Dec 11, 2012

Questions:

1. Do we want to keep the second handling of MiddlewareControlFlag in each Transport/ApplicationWorker/Dispatcher?

2. In BaseMiddleware, should the resume_handling function be removed and instead provided as an independent helper function, like setup_middleware_from_config?

3. When resuming handling from a middleware (cf. the Window Manager middleware's send_outbound), I also need to resume the worker-specific failure handling, don't I? Any suggestion on how to implement that nicely across Transport/ApplicationWorker/Dispatcher?

@overnin (Contributor, Author) commented Dec 13, 2012

The window manager component is not suited to having many instances running at the same time, because of the way it chooses its Redis keys. Basically the monitor treats every key that looks like "windows:xxx" as one of its windows, which is fine for a single monitor sending the messages. However, when multiple middlewares are monitoring for different workers, you don't want any interference.

To keep all the Redis key naming consistent, it seems important to fix naming rules for middleware, e.g. "worker_name:middleware_name:xxxx".

@hodgestar (Contributor):
To fix the key names you should be able to just pass in a suitable redis sub_manager to the WindowManager constructor (redis.sub_manager(<prefix>) should do it).
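For example, something along these lines (a hedged sketch: the WindowManager import path and constructor arguments are assumed from the vumi code of the time, and the prefix scheme is the one proposed above):

from vumi.components.window_manager import WindowManager


def build_window_manager(redis, worker_name, middleware_name, window_size=10):
    # Give this middleware instance its own key namespace so its windows
    # cannot clash with another monitor's "windows:xxx" keys.
    prefix = "%s:%s" % (worker_name, middleware_name)
    sub_redis = redis.sub_manager(prefix)
    return WindowManager(sub_redis, window_size=window_size)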

@hodgestar (Contributor):
@overnin Answers to your three questions:

  1. No, I think that should just be checked explicitly inside the middleware stack loop.

  2. Not quite sure what this question means.

  3. Worker-specific failure? There will have to be one resume method per kind of handler, I think. continue_inbound and continue_outbound are the most likely to be immediately important. continue_event and continue_failure will probably be useful to someone eventually. I'm not quite sure what to attach these methods to, though -- the middleware stack, maybe? (See the sketch below for one possible shape.)
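A hedged sketch of what those per-handler resume methods could look like if attached to the middleware stack. The method names follow this discussion, the ordering/direction details (apply_consume vs apply_publish) are glossed over, and the worker's own handler would still need to be invoked after the remaining middlewares.

from twisted.internet.defer import succeed


class ResumableMiddlewareStack(object):
    # Illustrative stand-in for vumi's MiddlewareStack.

    def __init__(self, middlewares):
        self.middlewares = middlewares

    def _continue(self, handler_name, from_middleware, message, endpoint):
        # Re-enter the stack just after the middleware that held the
        # message, so earlier middlewares are not applied twice.
        index = self.middlewares.index(from_middleware) + 1
        d = succeed(message)
        for middleware in self.middlewares[index:]:
            d.addCallback(getattr(middleware, handler_name), endpoint)
        return d

    def continue_outbound(self, from_middleware, message, endpoint):
        return self._continue('handle_outbound', from_middleware,
                              message, endpoint)

    def continue_inbound(self, from_middleware, message, endpoint):
        return self._continue('handle_inbound', from_middleware,
                              message, endpoint)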

@smn (Member) commented Apr 2, 2013

Is this still being worked on, @overnin?

@overnin (Contributor, Author) commented Apr 2, 2013

Yes, sorry, I opened this pull request some months ago. I had to put it on hold due to an overwhelming TODO list, but I plan to finish it next week (it's some kind of a promise).

I have finished an implementation on my private branch for testing in production. From this experience I want to improve one part:

  • the window manager component should release a batch of user messages (and not all the ones that are due) if so specified in the middleware.

I'll try my best to finish next week.

Oliv

@smn (Member) commented Dec 23, 2013

Hey @overnin, just so you know, #384 added support to the SMPP transport for when the gateway starts throttling the connection. It will automatically pause itself and resume when it's no longer throttled. Reading through this thread it looks like that could be relevant to you.

@hodgestar (Contributor):
@overnin This branch still alive?

@kyrelos commented May 6, 2018

@overnin is this branch still alive?
