Add "batch" requests submission to io-tester #2402
Conversation
nit, s/smother/smoother/, because smother has some bad meaning. but this change is an improvement.
@@ -354,21 +354,11 @@ class class_data {
            return do_until([this, stop] { return std::chrono::steady_clock::now() > stop || requests() > limit(); }, [this, buf, stop, pause, &intent, &in_flight] () mutable {
                auto start = std::chrono::steady_clock::now();
                in_flight++;
                return issue_request(buf, &intent).then_wrapped([this, start, pause, stop, &in_flight] (auto size_f) {
this patch splits it into different continuations to make further
patching simpler.
Consider also using coroutines.
It's a separate branch in my queue, yes
Looks good.
The rate-based submission of requests keeps track of the number of outstanding requests with the in_flight counter. Also, requests in this job are cancellable, and the cancellation exception should be properly handled. Currently all this logic sits in a single then-wrapped continuation; this patch splits it into different continuations to make further patching simpler. Signed-off-by: Pavel Emelyanov <[email protected]>
Both rate-based and continuous request submissions carry the rps and parallelism parameters through all their continuations, even though both are available via this-> methods. Signed-off-by: Pavel Emelyanov <[email protected]>
The rate-based job can come with the parallelism parameter, so several fibers start and issue requests at the same rate. At the end, all fibers cancel the single intent and wait for the requests to finish. Although cancellation is re-entrant, it's simpler and cleaner to cancel and wait only once. Signed-off-by: Pavel Emelyanov <[email protected]>
There's a duplicated issue_request().then([] { add_result(); }) chain in the rate-based and continuous submission helpers; generalize both. This results in less code and makes further patching simpler. Signed-off-by: Pavel Emelyanov <[email protected]>
This is to make it possible to better simulate real-life workloads. Currently there are two ways to submit requests -- continuously via several fibers, or at a given rate (the rate can be uniform, or can be disturbed with random delays). In real life, however, there's often another pattern, namely -- requests come in batches at some rate. Say, once every 200 milliseconds 10 requests are queued shortly one after another. In fact, it's even more convoluted: once every 200 ms there comes a _sequence_ of, say, 10 requests almost one after another, but submitting them in a batch gives a good approximation. This patch adds a "batch" parameter to the io-tester conf. Signed-off-by: Pavel Emelyanov <[email protected]>
Looks good to me 👍
@avikivity , please consider merging
Currently io-tester supports two kinds of IO workloads -- continuous and rated. In the former case requests come one after another from several fibers; in the latter case fibers submit requests at a given rate, one at a time.
In real life the incoming request pattern is not that simple. A good approximation of a fairly common pattern is to submit requests in batches at some rate. Say, once every 200 milliseconds 10 requests are queued at once.
Of course, in reality it's even more convoluted, e.g. once every 200 ms there can come a sequence of, say, 10 requests with small pauses in between, but submitting them as a one-time batch gives a good approximation.
This PR also simplifies the continuous and rate-based submission code so that the new batch parameter applies more smoothly.