
Consider performance improvements #50

Open
ndmitchell opened this issue Apr 15, 2019 · 10 comments
Comments

@ndmitchell

We at Digital Asset took your work and made a number of performance improvements to the Decode module. The code is at https://github.com/digital-asset/daml/blob/master/nix/third-party/proto3-wire/src/Proto3/Wire/Decode.hs and the approximate list of changes I've been able to find is:

  • Switch to the Applicative instance for proto3-wire.
  • Use List instead of Map for RawMessage decoding.
  • More INLINE pragmas.
  • Remove the cereal dependency; do things more directly.
  • Leave RawMessage a HashMap instead of converting.
  • Get rid of Seq.

Are you interested in taking changes that optimise the performance?

@ixmatus
Collaborator

ixmatus commented Apr 15, 2019

@ndmitchell yes!

@gbaz
Collaborator

gbaz commented Apr 15, 2019

Absolutely!

Note that I think we have some divergence because we've made performance improvements to Decode as well in the meantime, some of which are the same, some of which are different. (E.g. Seq is gone too, but also we have a manually unrolled varint encoding, etc.)

Cf. #46 for the changes.

So it would require thinking a bit to figure out which changes in your branch can port over and make a difference in the code as it currently stands.
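For readers unfamiliar with the technique, the "manually unrolled varint encoding" mentioned above refers to the standard LEB128 scheme protobuf uses: each output byte carries 7 payload bits, with the high bit set on every byte except the last. The sketch below is illustrative only (a simple recursive version, not the unrolled code in either codebase):

```haskell
module Main where

import Data.Bits (shiftR, (.&.), (.|.))
import Data.Word (Word8, Word64)

-- Illustrative LEB128 varint encoding: emit 7 bits per byte,
-- setting the continuation bit (0x80) on all but the final byte.
varint :: Word64 -> [Word8]
varint n
  | n < 0x80  = [fromIntegral n]
  | otherwise = fromIntegral (n .&. 0x7f) .|. 0x80 : varint (n `shiftR` 7)

main :: IO ()
main = print (varint 300)  -- 300 = 0b100101100, encoded as [172,2]
```

An unrolled version would replace the recursion with an explicit case per possible length (1 to 10 bytes for a Word64), trading code size for fewer branches and allocations in the hot path.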

@ndmitchell
Author

Did you write any benchmarks? Our benchmarks were in the context of our larger project, so probably don't reproduce easily.

@gbaz
Collaborator

gbaz commented Apr 15, 2019

Same for us, unfortunately.

@ndmitchell
Author

FWIW I ran the encoding benchmark I had and your version is 20% faster than ours. Yay! If you can't see anything in our patch set that looks valuable I'll close this ticket.

I didn't really measure decoding performance. That used to be the bottleneck for us, and where we spent most energy optimising. Now it's a code path we don't use that often.

@gbaz
Collaborator

gbaz commented Apr 16, 2019

Good to know that all that performance work paid off! (It was rather exhausting). Both encoding and decoding are still pretty costly to us given the volume of data we work with, but we didn't have any great ideas for further improvements. If your team has any more bright ideas, please chime in!

As for the current patch set, the one thing we didn't really do or consider at all was the INLINE annotations. If you have benches that show that they bring improvement on the current codebase, that would be great.
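To make the INLINE discussion concrete: the annotations in question are GHC pragmas that expose a small function's body at its call sites, letting the optimiser fuse it with surrounding Builder code. A minimal sketch, using a hypothetical helper name (not a function from either codebase):

```haskell
module Main where

import qualified Data.ByteString.Builder as BB
import qualified Data.ByteString.Lazy as BL
import Data.Word (Word32)

-- Hypothetical field-encoding helper. The INLINE pragma asks GHC to
-- substitute the body at each call site, so the Builder machinery can
-- be specialised and fused rather than called through a closure.
encodeFixed32 :: Word32 -> BB.Builder
{-# INLINE encodeFixed32 #-}
encodeFixed32 = BB.word32LE

main :: IO ()
main = print (BL.unpack (BB.toLazyByteString (encodeFixed32 1)))
```

Whether such pragmas help depends on the call sites, which is why benchmarks against the current codebase are needed before adopting them wholesale.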

@ndmitchell
Author

At some point in time in the past the INLINE pragmas helped. I can't say much about now.

I also note we switched to unchecked shifts. Not sure if GHC automatically optimises around that?

We also used HashMap rather than IntMap. Although that may have been because the rest of our stuff was using that.

There is also a BS.copy in the `bytes` decoder that I was surprised to see survive the optimisation work.

@gbaz
Collaborator

gbaz commented Apr 17, 2019

Maybe @j6carey can speak to the copy. I think we want it for our purposes to reduce overall retention.

Also not sure whether the unchecked shifts are an improvement or not -- we might get equivalent results by simply specialising to a single dictionary. Both are certainly worth considering...

@j6carey
Collaborator

j6carey commented Apr 17, 2019

@gbaz , @ndmitchell , do you mean the copy during decoding? I have not studied decoding very much.

I think checked shifts whose shift distance is a compile-time constant get optimized to the same thing as unchecked shifts.
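The distinction under discussion is between Data.Bits' `shiftR`, which range-checks the shift distance (and for Word types returns 0 when the distance exceeds the word size), and `unsafeShiftR`, which skips that check. When the distance is a compile-time constant such as 7, GHC can discharge the check statically, so the two should compile to the same single shift instruction. An illustrative comparison:

```haskell
module Main where

import Data.Bits (shiftR, unsafeShiftR)
import Data.Word (Word64)

-- shiftR guards against out-of-range shift distances; unsafeShiftR
-- does not. With a constant distance like 7, the guard is decidable
-- at compile time, so both definitions should optimise identically.
checked, unchecked :: Word64 -> Word64
checked   x = x `shiftR` 7
unchecked x = x `unsafeShiftR` 7

main :: IO ()
main = print (checked 300, unchecked 300)  -- both yield 2
```

The cases where `unsafeShiftR` could still matter are shifts by a runtime-variable distance, which do occur in some varint decoding styles.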

@ndmitchell
Author

@j6carey yes, the copy during decoding. We omitted that in our fork.
