
Missing Limits module #72

Open
kevinhughes27 opened this issue Aug 4, 2014 · 18 comments

Comments

@kevinhughes27
Contributor

The Ruby client has a Limits module:
https://github.com/Shopify/shopify_api/blob/master/lib/shopify_api/limits.rb

which has not yet been ported to the Python library.

@gavinballard
Contributor

@kevinhughes27 Beyond just porting this limiting module over, do you think it would be feasible to somehow build rate limit handling transparently into the API client?

What I mean by that is having all API calls handle a rate limit exceeded exception and automatically retry (following the directive in the returned Retry-After header). Then you could do something like:

for product in products:
  product.add_metafield(...)

and not have to worry about catching a rate limit exception and manually restarting your requests.

My initial thought would be that this might require too much digging into pyactiveresource's internals, just wanted to float the idea.
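A transparent retry layer along those lines could be sketched outside pyactiveresource as a decorator. This is only a sketch: `RateLimitError` and its `retry_after` attribute are hypothetical stand-ins for whatever exception and header value the client actually surfaces.

```python
import time
from functools import wraps


class RateLimitError(Exception):
    """Hypothetical exception carrying the server's Retry-After value."""

    def __init__(self, retry_after=0.5):
        super().__init__("rate limit exceeded")
        self.retry_after = retry_after


def retry_on_rate_limit(max_retries=5):
    """Retry the wrapped call when it raises RateLimitError, sleeping
    for the Retry-After interval before each new attempt."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except RateLimitError as exc:
                    if attempt == max_retries:
                        raise  # out of retries, surface the error
                    time.sleep(exc.retry_after)

        return wrapper

    return decorator
```

With something like this, the `add_metafield` loop above would retry automatically instead of requiring the caller to catch the exception.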

@kevinhughes27
Contributor Author

@gavinballard it's a cool idea and something we discussed doing before. I think you are right - it would require too much digging into pyactiveresource internals. I think a better solution would be a library that sits above your API client and manages this. There is a Ruby gem somewhere that sort of does this, but I can't find it at the moment.

@gavinballard
Contributor

Fair enough, I think you're right :). If I come up with a good pattern for this I'll share here.

@gavinballard
Contributor

@kevinhughes27 Hey Kevin, don't suppose you ever tracked down that Ruby gem doing something along these lines?

@kevinhughes27
Contributor Author

Maybe this one? https://github.com/ejfinneran/ratelimit. We still don't have a good solution for this.

@gavinballard
Contributor

Neither do we :). We're looking to build out a solution on the Ruby side of things, and might try to port it over to Python if that works out.

@kevinhughes27
Contributor Author

Definitely let us know about it!

@gavinballard
Contributor

@raulbrito, who's looking into this with me, found this article which is quite relevant: http://product.reverb.com/2015/03/07/shopify-rate-limits-sidekiq-and-you/.

Not a bad approach at all!

@kevinhughes27
Contributor Author

Interesting, thanks for sharing! API limiting might be better built at the app framework level (like the shopify_app gem), especially if it needs to connect to a background queue like this.

@mrkschan

mrkschan commented Oct 7, 2015

FYI, I wrote an HTTP proxy that can rate-limit outbound HTTP calls - http://github.com/mrkschan/cuttle. You may also find the Shopify API setup at http://mrkschan.blogspot.hk/2015/10/rate-limiting-shopify-api-using-cuttle.html.

@gavinballard
Contributor

@mrkschan: Thanks for sharing a great approach!

@flux627
Contributor

flux627 commented Apr 17, 2016

I've solved this in my projects by implementing a token bucket algorithm that keeps track of recent requests in a Redis server, per account. The token bucket pairs well with Shopify's leaky bucket implementation: the leaky bucket starts at zero, fills with each request, and rejects overflowing requests with an error, while the token bucket starts with a number of tokens which are consumed per request; when no tokens are left, callers essentially queue up waiting for more. I've monkey-patched the ShopifyConnection class to consume a token, or wait for one, before sending out each request, and it works great. This is the only approach I've come across that allows true "bursts": if I send 50 requests simultaneously, the first 40 go out right away while the remaining 10 are sent every 0.5 seconds.
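A minimal in-memory version of that token bucket might look like the following. The capacity of 40 and refill rate of 2 tokens per second mirror the Shopify limits described above; the Redis-backed, per-account bookkeeping and the ShopifyConnection monkey-patch are omitted.

```python
import threading
import time


class TokenBucket:
    """Minimal token bucket: starts full, refills at a fixed rate,
    and blocks callers until a token is available."""

    def __init__(self, capacity=40, refill_rate=2.0):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def _refill(self):
        # Add tokens for the time elapsed since the last refill,
        # capped at the bucket's capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now

    def consume(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                self._refill()
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                # Not enough tokens: compute how long until one refills.
                wait = (1 - self.tokens) / self.refill_rate
            time.sleep(wait)
```

Calling `bucket.consume()` before each API request reproduces the burst behaviour described above: requests go out immediately while tokens remain, then fall back to the steady refill rate.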

@kevinhughes27
Contributor Author

Very cool! @flux627, is your solution available anywhere? It's not the kind of thing we would include here, since it would introduce a dependency on Redis, but it would be worth linking to.

@flux627
Contributor

flux627 commented Apr 18, 2016

Here is my implementation of the token bucket algorithm, segmented by UID, using local memory instead of Redis. It can serve as a base for whatever your specific needs or server setup require.

@kevinhughes27
Contributor Author

Thanks!

@orenmazor
Contributor

Is this happening?

@wowkin2

wowkin2 commented Jul 18, 2018

So, is there any native way to handle the "Exceeded 4 calls per second for api client" error now?
Or any ideas on where to implement it as part of this library?

@wowkin2

wowkin2 commented Aug 27, 2018

Here is my solution to this problem; the code simply waits some time and retries the request:
https://gist.github.com/wowkin2/079844c867a1a06ce15ea1e4ffdee87c
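In the same spirit as that gist, the approach boils down to a wait-and-retry loop around the request. A minimal sketch, where `is_rate_limited` is a placeholder predicate for however you detect the throttling response (e.g. an HTTP 429 or the "Exceeded 4 calls per second" message):

```python
import time


def call_with_retry(request, is_rate_limited, retries=5, delay=1.0):
    """Call `request()`; when it raises an exception that
    `is_rate_limited(exc)` recognizes as throttling, sleep `delay`
    seconds and try again, up to `retries` extra attempts."""
    for attempt in range(retries + 1):
        try:
            return request()
        except Exception as exc:
            # Re-raise immediately for non-throttling errors,
            # or once the retry budget is exhausted.
            if attempt == retries or not is_rate_limited(exc):
                raise
            time.sleep(delay)
```

Unlike a token bucket, this makes no attempt to pace requests up front; it only reacts after the API has already rejected a call, which is simpler but wastes one request per throttled burst.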
