Add "batch" requests submission to io-tester #2402

Merged: 5 commits, Sep 11, 2024

Conversation

@xemul (Contributor) commented Aug 24, 2024

Currently io-tester supports two kinds of IO workloads -- continuous and rated. In the former case requests come one after another from several fibers; in the latter case fibers submit requests at a given rate, one at a time.

In real life the incoming request pattern is not that simple. A good approximation of a fairly common pattern is to submit requests in batches at some rate. Say, once every 200 milliseconds 10 requests are queued at once.

Of course, reality is even more convoluted, e.g. once every 200 ms there may come a sequence of, say, 10 requests with some small pauses in between, but submitting them as a one-time batch gives a good approximation.

This PR also simplifies the continuous and rate-based submission code so that the new batch parameter applies more smoothly.
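
As a sketch of the intended pattern (plain C++, not the actual io-tester code; submit_one below is a hypothetical stand-in for whatever issues a single request):

    #include <chrono>
    #include <functional>
    #include <thread>

    // Sketch only: every `period`, `batch` requests are queued back to back,
    // then the fiber sleeps until the next tick.
    void submit_batched(int batch, std::chrono::milliseconds period,
                        std::chrono::steady_clock::time_point stop,
                        const std::function<void()>& submit_one) {
        while (std::chrono::steady_clock::now() < stop) {
            auto next = std::chrono::steady_clock::now() + period;
            for (int i = 0; i < batch; i++) {
                submit_one();                    // one request of the batch
            }
            std::this_thread::sleep_until(next); // keep the overall rate
        }
    }

With batch = 10 and period = 200 ms this queues 10 requests at once, five times a second.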

@tchaikov (Contributor) commented:

nit: s/smother/smoother/, because "smother" has an unfortunate meaning. But this change is an improvement.

@@ -354,21 +354,11 @@ class class_data {
return do_until([this, stop] { return std::chrono::steady_clock::now() > stop || requests() > limit(); }, [this, buf, stop, pause, &intent, &in_flight] () mutable {
auto start = std::chrono::steady_clock::now();
in_flight++;
return issue_request(buf, &intent).then_wrapped([this, start, pause, stop, &in_flight] (auto size_f) {
A Member commented on the diff:

this patch splits it into different continuations to make further
patching simpler.

Consider also using coroutines.

@xemul (Author) replied:

It's a separate branch in my queue, yes
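
For reference, a hedged sketch of what the coroutine form might look like, assuming Seastar's C++20 coroutine support and that issue_request() returns a future of the transferred size (names mirror the diff above; this is not the actual follow-up patch):

    #include <chrono>
    #include <seastar/core/coroutine.hh>
    #include <seastar/core/future.hh>
    #include <seastar/core/io_intent.hh>

    // Stand-ins for the io-tester members seen in the diff above.
    seastar::future<size_t> issue_request(char* buf, seastar::io_intent* intent);
    void add_result(size_t size, std::chrono::steady_clock::duration latency);

    // The then_wrapped() chain rewritten as a coroutine (sketch only).
    seastar::future<> issue_one(char* buf, seastar::io_intent* intent,
                                unsigned& in_flight) {
        auto start = std::chrono::steady_clock::now();
        in_flight++;
        try {
            size_t size = co_await issue_request(buf, intent);
            add_result(size, std::chrono::steady_clock::now() - start);
        } catch (...) {
            // requests may be cancelled via the io_intent on shutdown;
            // treat the resulting exception as expected
        }
        in_flight--;
    }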

@avikivity (Member) commented:

Looks good.

The rate-based submission of requests keeps track of the number of
outstanding requests with the in_flight counter. Also, requests in this
job are cancellable, and the cancellation exception should be properly
handled.

Currently all this logic sits in a single then_wrapped continuation;
this patch splits it into different continuations to make further
patching simpler.

Signed-off-by: Pavel Emelyanov <[email protected]>
Both rate-based and continuous request submissions carry the rps and
parallelism parameters all over their continuations, even though both
are available via this-> methods.

Signed-off-by: Pavel Emelyanov <[email protected]>
The rate-based job can come with the parallelism parameter, so several
fibers start and issue requests at the same rate. At the end all fibers
cancel the single shared intent and wait for requests to finish. Although
cancellation is re-entrant, it's simpler and cleaner to cancel and wait
only once.

Signed-off-by: Pavel Emelyanov <[email protected]>
There's a duplicated issue_request().then([] { add_result(); }) chain in
the rate-based and continuous submission helpers; generalize both. This
reduces code and makes further patching simpler.

Signed-off-by: Pavel Emelyanov <[email protected]>
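One possible shape of that generalized helper (a sketch under the same assumptions as above, not the actual patch):

    #include <chrono>
    #include <seastar/core/future.hh>
    #include <seastar/core/io_intent.hh>

    // Stand-ins for the io-tester members the commit message refers to.
    seastar::future<size_t> issue_request(char* buf, seastar::io_intent* intent);
    void add_result(size_t size, std::chrono::steady_clock::duration latency);

    // Both continuous and rate-based paths can funnel through one method
    // that issues a request and records its size and latency.
    seastar::future<> issue_and_record(char* buf, seastar::io_intent* intent) {
        auto start = std::chrono::steady_clock::now();
        return issue_request(buf, intent).then([start] (size_t size) {
            add_result(size, std::chrono::steady_clock::now() - start);
        });
    }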
This is to make it possible to better simulate real-life workloads.

Currently there are two ways to submit requests -- continuously via
several fibers and at a given rate (the rate can be uniform, or can be
disturbed with random delays). In real life, however, there's often
another pattern, namely -- requests come in batches at some rate. Say,
once every 200 milliseconds 10 requests are queued shortly one after
another. In fact, it's even more convoluted: once every 200 ms there
comes a _sequence_ of, say, 10 requests almost one after another, but
submitting them in a batch gives a good approximation.

This patch adds a "batch" parameter to the io-tester conf.

Signed-off-by: Pavel Emelyanov <[email protected]>
@xemul force-pushed the br-io-tester-add-rps-batch branch from e280656 to 83d431f on August 26, 2024 06:49
@xemul (Author) commented Aug 26, 2024

upd:

  • updated doc/io-tester.md
  • fixed PR comment wording

@pwrobelse (Contributor) left a comment:

Looks good to me 👍

@xemul (Author) commented Sep 3, 2024

@avikivity , please consider merging

Three similar comments followed, with @xemul asking @avikivity to consider merging again on Sep 9, Sep 10, and Sep 11, 2024.

@avikivity merged commit 35fc02a into scylladb:master on Sep 11, 2024; 14 checks passed.