Initial pass at updated benchmarks for runtime 90X #942
Conversation
.saturating_add((129_379_000 as Weight).saturating_mul(x as Weight))
// Standard Error: 6_000
.saturating_add((840_000 as Weight).saturating_mul(y as Weight))
.saturating_add(RocksDbWeight::get().reads(101 as Weight))
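For context, a benchmark-generated weight function combines a constant base weight, per-parameter coefficients, and flat DB read/write costs using saturating arithmetic. A minimal standalone sketch of that shape, modelling `Weight` as a plain `u64` (the base weight and per-read cost here are illustrative assumptions, not values from the PR; the `x` and `y` coefficients are taken from the snippet above):

```rust
// Sketch of the shape of a benchmark-generated weight function.
// `Weight` is modelled as a plain u64 for a self-contained example.
type Weight = u64;

// Assumed stand-in for RocksDbWeight::get().reads(1) (illustrative).
const READ: Weight = 25_000_000;

/// x = number of collator candidates, y = number of nominators.
fn active_on_initialize(x: Weight, y: Weight) -> Weight {
    (20_000_000 as Weight) // base weight (illustrative)
        .saturating_add((129_379_000 as Weight).saturating_mul(x))
        .saturating_add((840_000 as Weight).saturating_mul(y))
        .saturating_add(READ.saturating_mul(101))
}

fn main() {
    // Weight grows linearly in both complexity parameters.
    println!("{}", active_on_initialize(10, 100)); // 3_922_790_000
}
```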
@4meta5 Is this the change we saw earlier that we decided to ignore by not updating the benchmarks? Should we do the same here?
Yes, it was active_on_initialize. I don't see why we shouldn't update it, but I also never understood why it wasn't updated in the PR that contained the optimization.
This change actually increases the weight a lot -- there are a lot more reads/writes than before. So maybe this doesn't have to do with the change I was initially talking about. 🤔
Anyway, I'm a fan of having on_initialize tally up its weight on the fly, as we've discussed. In that case, we don't even really need a benchmark for it...
> This change actually increases the weight a lot -- there are a lot more reads/writes than previous. So maybe this doesn't have to do with the change I was initially talking about. 🤔
No, the weight decreased a lot. There was an increase in the number of db reads/writes but a significant decrease in the complexity parameter coefficients (the complexity parameters x and y are the number of collator candidates and nominators, respectively). These coefficients dominate the benchmark much more than the reads and writes coefficient does.
Especially because the PoV size is the main constraint, and for the PoV everything is in memory, so there are no db reads/writes.
How would you tally up the weight? I've never seen that approach. We shouldn't ignore the cost of the computation, nor just conservatively estimate it the way we usually do for migrations. The purpose of these weight functions is to estimate the average cost equation, which is dominated by the number of collator candidates and the number of delegators.
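To see why the complexity coefficients dominate, compare the terms for a plausible network size. A rough numeric check (the x and y coefficients come from the snippet above; the per-read DB cost and the candidate/nominator counts are illustrative assumptions):

```rust
fn main() {
    let per_candidate: u64 = 129_379_000; // coefficient on x (from the snippet)
    let per_nominator: u64 = 840_000;     // coefficient on y (from the snippet)
    let per_read: u64 = 25_000_000;       // assumed DB read cost (illustrative)

    // e.g. 300 candidates, 10_000 nominators, 101 reads
    let x_term = per_candidate * 300;
    let y_term = per_nominator * 10_000;
    let db_term = per_read * 101;

    // The x and y terms dwarf the flat read/write term.
    assert!(x_term > 10 * db_term);
    assert!(y_term > db_term);
    println!("x: {}, y: {}, db: {}", x_term, y_term, db_term);
}
```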
Oh, you're right. That would be a decrease in real-world cases.
Here's an idea for an approach to weights in on_initialize():
- Benchmark everything that is directly related to the number of collators/nominators; use this as a baseline weight
- For the variable parts that take advantage of your recent optimizations, tally the operations up manually. This might be as simple as having pay_stakers() return its own weight and adding it to (1) above. This could be done in code but could probably also rely on benchmarking.
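The hybrid scheme in the two bullets above could look roughly like the following sketch. `Weight` is modelled as a plain `u64`, and `baseline_weight`, `pay_stakers`, and all coefficients besides the two from the benchmark snippet are hypothetical stand-ins, not the pallet's actual API:

```rust
type Weight = u64;

/// Hypothetical baseline from the benchmark: covers the parts that
/// scale with the number of collator candidates and nominators.
fn baseline_weight(candidates: u64, nominators: u64) -> Weight {
    (20_000_000 as Weight) // illustrative base weight
        .saturating_add(129_379_000u64.saturating_mul(candidates))
        .saturating_add(840_000u64.saturating_mul(nominators))
}

/// Hypothetical payout routine that reports the weight it consumed.
fn pay_stakers(payouts: u64) -> Weight {
    let per_payout: Weight = 50_000_000; // illustrative per-payout cost
    // ... perform the payouts, counting each one ...
    per_payout.saturating_mul(payouts)
}

/// on_initialize returns the total weight it actually consumed:
/// the benchmarked baseline plus the tallied variable part.
fn on_initialize(candidates: u64, nominators: u64, payouts: u64) -> Weight {
    baseline_weight(candidates, nominators)
        .saturating_add(pay_stakers(payouts))
}

fn main() {
    println!("{}", on_initialize(10, 100, 5)); // 1_647_790_000
}
```

The upside of this split is that the tallied part stays accurate even when an optimization changes how much work pay_stakers() does per block, without re-running the benchmark.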
These are the pallet weights; didn't we want to do different weights per runtime?
Yes, but that hasn't been done yet. I'll look at what it would take to implement that...
@notlesh should we update this PR for the new runtime or close it?
What does it do?
Updates benchmarks for runtime 90X.
TODO:
- [ ] crowdloan-rewards companion (Update weights for Moonbeam runtime 90X crowdloan-rewards#48)
- [ ] crowdloan-rewards dep (once the above is merged)