
proof of concept for a rate limiter #155

Open · wants to merge 3 commits into base: main

Conversation


@eds000n eds000n commented Dec 29, 2022

Proof of concept for #134. Only generic commands are covered for now; let me know what you think. It can be used with -R 10 -l 2.

@eds000n (Author) commented Dec 30, 2022

I'm also wondering about the package names: most of the code is in the cli package, but it might make sense to move operations.go into its own package, and the same goes for logger.go.
I'll leave it here for now, but I'm glad to actively contribute.

@danielgtaylor danielgtaylor added the enhancement New feature or request label Jan 3, 2023
@danielgtaylor (Owner) left a comment


Thanks for the contribution, and a great start! I left a few comments but like the overall idea. I'd also love to see tests for any new features, as I'm trying to get the code coverage up.

It would also be very cool if we could detect standard/common rate limit headers and enhance our calm (i.e. slow down making requests).

Playing devil's advocate here, would it be better to generate HTTP operations/payloads to some file/script and let another specialized load testing tool take that as input? What additional advantages are there to doing all of the work here?

Comment on lines +543 to +555
AddGlobalFlag("rsh-requests", "R", "Do a number of requests", 1, false)
AddGlobalFlag("rsh-load-rate", "l", "Set a load rate in requests per second. Overrides the HTTP cache to false", 1, false)
@danielgtaylor (Owner):


Should these default to zero? I have some concerns:

  1. If non-zero by default they disable caching
  2. Limiting to 1 request per second is problematic for automatic pagination (which may make many sequential requests); see also Bulk Resource Management #156, which makes many requests.
  3. Total number of requests vs. some time period (e.g. 30 seconds) - which is better for this?

cli/cli.go Outdated
if rps, _ := GlobalFlags.GetInt("rsh-load-rate"); rps > 0 {
viper.Set("rsh-load-rate", rps)
// doesn't make sense to flood with requests and obtain a cached response
viper.Set("rsh-no-cache", false)
@danielgtaylor (Owner):


You probably intended to set this to true since it's setting "no cache".

Comment on lines +17 to +32
ticker := time.Tick(time.Second)
var wg = &sync.WaitGroup{}
wg.Add(l.requests)
for ; true; <-ticker {
	if executed >= l.requests {
		break
	}
	for i := 0; i < l.rate; i++ {
		go func() {
			f()
			executed++
			wg.Done()
		}()
	}
}
wg.Wait()
@danielgtaylor (Owner):


Some questions/concerns:

  1. What happens if a request takes more than a second? Seems like we'd get many in flight at the same time.
  2. If a request finishes really fast there is no opportunity to create new ones until the full second has passed, even if a sliding time window would allow a new request.
  3. This assumes all the requests are the same, but #134 (add multiple requests option flag) suggests they might get generated for each API operation. How will that work?
