Feature/bulk redis payload processing #368
base: master
Conversation
Merge latest redis oplog
Please respect the indentation as it's very hard for me to spot the differences. While I do agree that we should use 2, that's for another PR. For now we keep the standard.
@theodorDiaconu pushed up a fix for the formatting.
@theodorDiaconu we've been using this in production for a few days now and are seeing great results. Great stuff @emaciel10
What exactly do you mean by bulk updates? Like a Mongo multi update? So is this for when doing a regular update?
@evolross Yeah, by "bulk updates" I am just referring to Mongo updates that update a large number of documents. In some cases we use the raw mongo driver to trigger these updates and publish to redis afterwards if really necessary, but that is generally the exception. More often these are just regular collection multi updates that target a very large number of documents.
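To make the scenario concrete, here is an assumed sketch (not code from redis-oplog) of why a multi update is expensive for subscribers: an update like `Tasks.update({ status: 'pending' }, { $set: { status: 'done' } }, { multi: true })` results in one redis payload per matched document, so updating N documents means N payloads for every observing server to process. The simulation below is purely illustrative:

```javascript
// Illustrative simulation (assumed, not the redis-oplog implementation):
// a multi update emits one redis message per matched document.

const docs = Array.from({ length: 5000 }, (_, i) => ({ _id: String(i), status: 'pending' }));

// Simulate the per-document "update" events a multi update would publish.
function simulateMultiUpdate(documents, matches) {
  const payloads = [];
  for (const doc of documents) {
    if (matches(doc)) {
      doc.status = 'done';
      payloads.push({ event: 'u', _id: doc._id }); // one redis payload per doc
    }
  }
  return payloads;
}

const payloads = simulateMultiUpdate(docs, (d) => d.status === 'pending');
console.log(payloads.length); // 5000 payloads, one per updated document
```

Each of those 5000 payloads would normally trigger its own lookup or requery on every server observing the collection, which is the cost this PR targets.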
Hi @emaciel10 could you please pull the latest master into this PR, so that we can run the tests here on GitHub? Thank you!
@emaciel10 the tests are failing, please see here: Meteor-Community-Packages#9
Ability to process redis payloads in bulk to reduce the number of requeries that large updates on a server can cause
Findings in production
Bulk updates that send many redis payloads to our meteor servers cause expensive database requeries for limit-sort publications and can spike CPU for other publications, which attempt to process these payloads one at a time.
Goal
Reduce the number of database lookups and requeries that occur when a large number of redis payloads arrive at once.
Instead of doing document lookups and requeries one by one as they are needed, we process events in bulk: we perform the database lookups in bulk, store the results in a temporary `documentMap`, and then iterate over the payloads, triggering only a single requery when necessary. This also gives us the ability to set a `maxRedisEventsToProcess`
threshold, which prevents us from spiking server CPU for extended periods in the case where we have received too many redis payloads to process effectively.

Test Fixes

Looks like mocha tests had started failing due to some updates to the underlying mocha libraries in `meteortesting:mocha-core`. I pinned down the versions so that the issues with the test suite are resolved without needing to modify all the tests to no longer make use of `async` and `done`.
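The bulk-processing approach described above can be sketched roughly as follows. This is an illustrative outline under assumptions, not the PR's actual code: the names `documentMap` and `maxRedisEventsToProcess` come from the description, while `batchFetch` and the payload shape are hypothetical stand-ins (in practice the batch fetch would be a single `$in` query against the collection):

```javascript
// Sketch of bulk redis payload processing (assumptions noted above).

const maxRedisEventsToProcess = 1000; // threshold from the PR description

// Hypothetical batch fetch: one round trip for all needed document ids,
// instead of one lookup per payload.
function batchFetch(ids) {
  // Stand-in for something like collection.find({ _id: { $in: ids } }).fetch()
  return ids.map((_id) => ({ _id, value: `doc-${_id}` }));
}

function processPayloadsInBulk(payloads) {
  if (payloads.length > maxRedisEventsToProcess) {
    // Too many events to process efficiently: fall back to a single full
    // requery rather than burning CPU on each payload.
    return { requeried: true, processed: 0 };
  }

  // Collect every document id referenced by the payloads, then fetch once
  // and store the results in a temporary documentMap.
  const ids = [...new Set(payloads.map((p) => p.documentId))];
  const documentMap = new Map(batchFetch(ids).map((d) => [d._id, d]));

  // Iterate over the payloads against the documentMap; no per-payload lookups.
  let processed = 0;
  for (const payload of payloads) {
    const doc = documentMap.get(payload.documentId);
    if (doc) processed += 1;
  }
  return { requeried: false, processed };
}
```

The key trade-off is that below the threshold we pay one batched fetch instead of N lookups, and above it we accept one requery instead of unbounded per-payload work.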