
Continuous benchmarking #2025

Open · dzikoysk opened this issue Oct 19, 2023 · 11 comments

Comments
@dzikoysk
Member

Currently, we're quite unaware of performance improvements/degradations caused by incoming changes. It'd be cool to set up a GitHub Actions integration that would compare:

  • Changes from a given PR
  • The current state of master
  • Maybe a previous major version

Something like that could be an option:

@zugazagoitia
Member

Great idea, but we have to consider how reproducible these benchmarks are since GH runners don't guarantee any consistent performance. Threshold tuning for alerts will be important.

Also: Supported runners and hardware resources

@dzikoysk
Member Author

dzikoysk commented Oct 22, 2023

GH runners don't guarantee any consistent performance

We don't need to care about the performance of different workers, because we'll benchmark every variant in each run, so we get reliable & comparable results on whatever machine is provided. That said, if GitHub can throttle a runner that's already working, then indeed it wouldn't make sense to use them.


To be fair, I'm not even sure if it's possible with Maven 🤔 I don't see anything useful in this area. With Gradle we could just add a benchmark module with 3 source sets & process the JMH output via a script, something like the sketch below.
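For a rough idea only (a minimal sketch, not an actual build file: it assumes the third-party me.champeau.jmh Gradle plugin, and all versions/coordinates are illustrative; a module or source set like this would be repeated per Javalin version):

```kotlin
// build.gradle.kts of a hypothetical benchmark module
plugins {
    kotlin("jvm") version "1.9.22"
    id("me.champeau.jmh") version "0.7.2" // third-party JMH plugin
}

repositories {
    mavenCentral()
    mavenLocal() // picks up a locally installed PR build of Javalin
}

dependencies {
    // the variant under test; sibling modules would pin master / 5.x instead
    jmh("io.javalin:javalin:6.0.0-SNAPSHOT")
}

jmh {
    resultFormat.set("JSON") // machine-readable output for the comparison script
}
```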

@dzikoysk
Member Author

Something like this could be a thing too:

The action I initially mentioned has quite a lot of issues & is hard to adjust. I guess we could even try to fork this one and just customize it.

@zugazagoitia
Member

zugazagoitia commented Oct 22, 2023

That said, if GitHub can throttle a runner that's already working, then indeed it wouldn't make sense to use them.

We can just branch and test a little.

To be fair, I'm not even sure if it's possible with Maven 🤔

What about just executing the javalin-performance repo against a build of the commit that triggers the workflow?
It'd be optimal to manage only one set of performance tests.

We can keep the performance tests there and just write a GH action inside the org, maybe in kotlin-js.

Something like the following (see the sketch after this list):

  1. Build Javalin and install the jar to the local Maven repo
    ---using our action/scripts---
  2. Run javalin-performance against the local version (tricky?)
  3. Collect the results and process them as we wish
    ---/using our action/scripts---
  4. Comment on the PR if a threshold is met (there's an action for it, I used it on the website repo)

Maybe there's an easier way, with the performance tests just living in the main repo, but I remember @tipsy being against it.
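As a very rough illustration of those four steps (a minimal sketch only: the repo layout, Maven goals, and result path are all assumptions, and a real action would do the PR comment via an existing action rather than a println):

```kotlin
// ci-benchmark.main.kts - hypothetical orchestration script; every path
// and command below is an assumption about how the repos would be laid out
import java.io.File

fun run(vararg command: String, dir: String = ".") {
    val exit = ProcessBuilder(*command)
        .directory(File(dir))
        .inheritIO()
        .start()
        .waitFor()
    check(exit == 0) { "Command failed: ${command.joinToString(" ")}" }
}

// 1. Build the PR's Javalin and install it into the local Maven repo
run("mvn", "-B", "install", "-DskipTests", dir = "javalin")

// 2. Run javalin-performance against the locally installed snapshot
run("mvn", "-B", "verify", dir = "javalin-performance")

// 3. Collect the JMH results for further processing (path is hypothetical)
val results = File("javalin-performance/target/jmh-result.json").readText()

// 4. Comparing against master's baseline and commenting on the PR when a
//    threshold is exceeded would happen here
println(results.take(200))
```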

@dzikoysk
Member Author

I don't think it makes sense to do it this way; it's a bit overcomplicated. For instance, we'd also need to distribute builds of PRs, because the crucial part of this request is to provide automated feedback on each incoming change before it's even merged to master.

Imo, performance tests should live in the javalin-performance module, as they're just another set of tests, like unit/integration tests. In theory, if we used Gradle, we could even keep them in a performance source set.

@zugazagoitia
Member

zugazagoitia commented Oct 22, 2023

Well, there's still the possibility of migrating Javalin 6 to Gradle 🤔🤔🤔🤔 @tipsy

@dzikoysk
Member Author

xD

Let's stay with Maven - I guess it might be possible as long as we cover all the required functionality in the action's script. The javalin-performance module should produce a JSON file with the benchmark results & we can group result entries by name (using some kind of pattern).

@tipsy
Member

tipsy commented Oct 22, 2023

(≖_≖ )

@zugazagoitia
Member

The javalin-performance module should produce a JSON file with the benchmark results & we can group result entries by name (using some kind of pattern).

How we can run the module against a specific Javalin source is the only thing I'm not sure about tbh.

JSON is built into JMH: java -jar benchmarks.jar -rf json
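To sketch the grouping idea (minimal and hypothetical: it assumes kotlinx-serialization-json on the classpath and a `<Variant>Benchmark.<scenario>` naming convention; only the `benchmark` and `primaryMetric` fields come from JMH's actual JSON output):

```kotlin
import kotlinx.serialization.json.*
import java.io.File

fun main() {
    // JMH's -rf json emits an array; each entry has a fully qualified
    // "benchmark" method name and a "primaryMetric" with score/unit
    val results = Json.parseToJsonElement(File("jmh-result.json").readText()).jsonArray

    // group e.g. "...Javalin6Benchmark.getPlainText" and
    // "...Javalin5Benchmark.getPlainText" under the "getPlainText" scenario
    val byScenario = results.groupBy {
        it.jsonObject["benchmark"]!!.jsonPrimitive.content.substringAfterLast('.')
    }

    for ((scenario, entries) in byScenario) {
        println(scenario)
        for (entry in entries) {
            val name = entry.jsonObject["benchmark"]!!.jsonPrimitive.content
            val variant = name.substringBeforeLast('.').substringAfterLast('.')
            val metric = entry.jsonObject["primaryMetric"]!!.jsonObject
            val score = metric["score"]!!.jsonPrimitive.double
            val unit = metric["scoreUnit"]!!.jsonPrimitive.content
            println("  $variant: ${"%.2f".format(score)} $unit")
        }
    }
}
```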

@dzikoysk
Member Author

How we can run the module against a specific Javalin source is the only thing I'm not sure about tbh.

We need to use submodules, so each one can declare a different version of the dependencies:

javalin-performance/
  javalin/ - current version
  javalin-master/ - master
  javalin-5x/ - previous major
  jetty-12x/ - we could even compare results against pure Jetty or other libraries like Spring/Ktor, etc.

Each module should contain a set of benchmarks that follow a specific naming pattern, so that later on we can analyze the results and build a comparison table on CI.
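For illustration, one benchmark in the javalin-master/ module could look roughly like this (a minimal sketch: the class, package, route, and the shared getPlainText method name are all hypothetical; the point is that every module repeats the same method names so CI can line the variants up):

```kotlin
import io.javalin.Javalin
import org.openjdk.jmh.annotations.*
import java.net.URI
import java.util.concurrent.TimeUnit

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
open class JavalinMasterBenchmark { // open: JMH subclasses benchmark classes

    private lateinit var app: Javalin

    @Setup
    fun setup() {
        // start the variant under test on a random free port
        app = Javalin.create().get("/text") { it.result("Hello") }.start(0)
    }

    @TearDown
    fun tearDown() {
        app.stop()
    }

    // same method name in javalin-5x/, javalin-master/, etc., so results
    // can be grouped into one comparison row per scenario
    @Benchmark
    fun getPlainText(): Int =
        URI("http://localhost:${app.port()}/text").toURL()
            .openConnection().getInputStream().readBytes().size
}
```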

@dzikoysk
Member Author

dzikoysk commented Jan 4, 2024

Regardless of the CI integration for continuous benchmarking, I think you should run some tests before the final release of 6.x @tipsy. With these matchedBefore/matchedAfter handlers we have more logic in our handling, so it'll most likely be slower than 5.x - it'd be good to know by how much, so we'll know whether we should prioritize this whole topic.
