
fix: replace node buffers with uint8arrays #51

Open

wants to merge 1 commit into master
Conversation

achingbrain (Collaborator)
All use of node Buffers has been replaced with Uint8Arrays.

BREAKING CHANGES:

- `cbor.encode` used to return a Buffer; it now returns a Uint8Array (see the sketch below for what this means for consumers)
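A minimal sketch of the consumer-facing impact, assuming borc's usual `encode` entry point (the fixture object and variable names are illustrative, not from the PR):

```js
const cbor = require('borc')

// cbor.encode previously returned a Node.js Buffer; after this change it
// returns a plain Uint8Array.
const encoded = cbor.encode({ hello: 'world' })

// Callers that relied on Buffer-only methods (e.g. toString('hex')) need
// an explicit wrap. Buffer.from over the underlying ArrayBuffer is
// zero-copy, so this is cheap:
const buf = Buffer.from(encoded.buffer, encoded.byteOffset, encoded.byteLength)
console.log(buf.toString('hex'))
```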
rvagg commented Aug 28, 2020

Very nice work, but `npm run bench` is a bit sad:

Before

encode - node-cbor - 76 x 1,350 ops/sec ±23.87% (73 runs sampled)
encode - borc - 76 x 20,574 ops/sec ±2.73% (89 runs sampled)
encode - stream - borc - 76 x 7,558 ops/sec ±3.59% (82 runs sampled)
encode - JSON.stringify - 76 x 40,179 ops/sec ±5.96% (69 runs sampled)
decode - node-cbor - 47 x 1,788 ops/sec ±19.72% (82 runs sampled)
decode - borc - 47 x 28,251 ops/sec ±7.06% (81 runs sampled)
decode - JSON.parse - 47 x 52,242 ops/sec ±2.92% (90 runs sampled)

After

encode - node-cbor - 76 x 1,291 ops/sec ±22.84% (76 runs sampled)
encode - borc - 76 x 7,794 ops/sec ±3.46% (77 runs sampled)
encode - stream - borc - 76 x 3,208 ops/sec ±8.85% (79 runs sampled)
encode - JSON.stringify - 76 x 40,007 ops/sec ±7.14% (69 runs sampled)
decode - node-cbor - 47 x 3,814 ops/sec ±11.51% (89 runs sampled)
decode - borc - 47 x 11,816 ops/sec ±21.45% (63 runs sampled)
decode - JSON.parse - 47 x 49,421 ops/sec ±3.76% (81 runs sampled)

I recall hearing third-hand (I haven't tested it myself, and I think it was @mikeal who told me that someone told him) that the various `set*()` methods are particularly terrible performers. I don't know how heavily they feature in the paths of these benchmarks, but something is imposing a very high toll on both encode and decode, cutting throughput by more than half for the ops in question.
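A quick way to probe that claim in isolation (a rough sketch, not part of this PR; it uses only standard Node.js APIs, and `console.time` gives ballpark numbers at best):

```js
const N = 1e7
const view = new DataView(new ArrayBuffer(8))
const buf = Buffer.allocUnsafe(8)

// The DataView set*() family, the suspect named above:
console.time('DataView.setUint32')
for (let i = 0; i < N; i++) view.setUint32(0, i)
console.timeEnd('DataView.setUint32')

// The Buffer write the old code path would have used instead:
console.time('Buffer.writeUInt32BE')
for (let i = 0; i < N; i++) buf.writeUInt32BE(i, 0)
console.timeEnd('Buffer.writeUInt32BE')
```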

Makes me think I need to build more benchmarking into the things I'm currently converting!
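For what it's worth, a harness producing output in the same `ops/sec ±% (runs sampled)` shape as the runs above only takes a few lines with the benchmark package (a sketch; the fixture is made up):

```js
const Benchmark = require('benchmark')
const cbor = require('borc')

const fixture = { hello: 'world', nested: { list: [1, 2, 3] } }

new Benchmark.Suite()
  .add('encode - borc', () => { cbor.encode(fixture) })
  .add('encode - JSON.stringify', () => { JSON.stringify(fixture) })
  // Benchmark.js prints "name x N ops/sec ±x% (n runs sampled)" per cycle:
  .on('cycle', (event) => { console.log(String(event.target)) })
  .run()
```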

@@ -454,7 +456,9 @@ class Encoder {
size += resultLength[i]
}

var res = Buffer.allocUnsafe(size)
mikeal commented Aug 28, 2020
This is probably where the perf difference is coming from.

There's no equivalent to `Buffer.allocUnsafe` outside of Node.js, so this change moves us from a request for an already-allocated piece of dirty memory to a fresh allocation of zeroed memory.
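The trade-off in code form (a sketch; the size is illustrative):

```js
const size = 1024

// Node.js only: may hand back a slice of pre-allocated pool memory whose
// contents are uninitialised garbage. Fast, because nothing is zeroed.
const dirty = Buffer.allocUnsafe(size)

// Portable: the engine must zero-fill every new allocation, a cost the
// encoder now pays on every call.
const zeroed = new Uint8Array(size)

// Buffer.alloc is the zero-filled Node.js equivalent, with roughly the
// same cost profile as `new Uint8Array(size)`.
const safe = Buffer.alloc(size)
```

If this allocation sits on the hot path, that per-call zeroing would be consistent with the encode numbers above dropping to well under half their previous rate.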
