
Performance benchmarks #4

Open
dumblob opened this issue Nov 22, 2021 · 3 comments

Comments

@dumblob

dumblob commented Nov 22, 2021

Any notion of how this performs compared to state-of-the-art ring buffers?

@sergeyn

sergeyn commented Nov 23, 2021

> Any notion of how this performs compared to state-of-the-art ring buffers?

Any notion of which "state of the art" ring buffers you are referring to?

@dumblob
Author

dumblob commented Nov 23, 2021

These projects use what are, IMHO, state-of-the-art "multithreaded" queues:

https://github.com/pramalhe/ConcurrencyFreaks

https://github.com/mratsim/weave (e.g. mratsim/weave#21 )

http://daugaard.org/blog/writing-a-fast-and-versatile-spsc-ring-buffer

http://www.vitorian.com/x1/archives/370

Also, the queue size might best be kept between the size of the L1 cache and 2× the L2 cache, as suggested in Section 3.3 of Analyzing Efficient Stream Processing on Modern Hardware.

...and others whose names escape me right now (a rough benchmark harness is sketched below).
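
For concreteness, here is a minimal single-producer/single-consumer throughput harness one could start from when comparing against the queues linked above. It assumes the lfqueue_init / lfqueue_enq / lfqueue_deq / lfqueue_destroy API shown in this repository's README; the operation count, timing method, and build command are illustrative choices, not part of the project.

```c
/*
 * bench_spsc.c - minimal SPSC throughput sketch for lfqueue (illustrative only).
 * Assumes the lfqueue_init/lfqueue_enq/lfqueue_deq/lfqueue_destroy API from the README.
 * Build roughly as: gcc -O2 bench_spsc.c lfqueue.c -lpthread -o bench_spsc
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "lfqueue.h"

#define N_OPS 1000000L

static lfqueue_t q;

static void *producer(void *arg) {
    (void)arg;
    for (long i = 0; i < N_OPS; i++) {
        long *v = malloc(sizeof *v);
        *v = i;
        /* lfqueue_enq returns non-zero when it cannot enqueue; retry until it succeeds */
        while (lfqueue_enq(&q, v) != 0) { /* spin */ }
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    long received = 0;
    while (received < N_OPS) {
        long *v = lfqueue_deq(&q);
        if (v == NULL) continue;   /* queue momentarily empty */
        free(v);
        received++;
    }
    return NULL;
}

int main(void) {
    pthread_t prod, cons;
    struct timespec t0, t1;

    if (lfqueue_init(&q) == -1) {
        fprintf(stderr, "lfqueue_init failed\n");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);
    pthread_join(prod, NULL);
    pthread_join(cons, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%ld enq+deq pairs in %.3f s (%.2f Mops/s)\n",
           N_OPS, secs, N_OPS / secs / 1e6);

    lfqueue_destroy(&q);
    return 0;
}
```

Running the same harness (same payload size, same thread pinning) against the SPSC ring buffers linked above would give a rough apples-to-apples baseline.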

@Taymindis
Owner

Hi,

I implemented this project by referencing a book on atomic collections. I have no idea what "state of the art" refers to here.

If you mean multithreaded testing, I guess you are looking for this functional test?

```c
void multi_enq_deq(pthread_t *threads) {
```

However, lfqueue uses a TTL to free nodes, which might be risky. I would always suggest not using it in a production environment.
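
For readers who want to reproduce that kind of check, below is a rough sketch of a multi-producer/multi-consumer functional test in the spirit of the multi_enq_deq() function mentioned above. It again assumes the lfqueue API from the README; the thread counts, item counts, and final tally are illustrative and not taken from the repository's actual test code.

```c
/*
 * Sketch of a multi-producer/multi-consumer functional test in the spirit of
 * multi_enq_deq(). Thread/item counts and the final tally are illustrative;
 * this is not the repository's test code.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include "lfqueue.h"

#define N_THREADS 4               /* producers == consumers */
#define ITEMS_PER_THREAD 100000

static lfqueue_t q;

static void *enq_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS_PER_THREAD; i++) {
        int *v = malloc(sizeof *v);
        *v = i;
        while (lfqueue_enq(&q, v) != 0) { /* retry until enqueued */ }
    }
    return NULL;
}

static void *deq_worker(void *arg) {
    int dequeued = 0;
    while (dequeued < ITEMS_PER_THREAD) {
        int *v = lfqueue_deq(&q);
        if (v == NULL) continue;   /* empty at this instant, try again */
        free(v);
        dequeued++;
    }
    *(int *)arg = dequeued;        /* report how many items this consumer saw */
    return NULL;
}

int main(void) {
    pthread_t enq[N_THREADS], deq[N_THREADS];
    int counts[N_THREADS] = {0};

    if (lfqueue_init(&q) == -1) return 1;

    for (int i = 0; i < N_THREADS; i++) {
        pthread_create(&enq[i], NULL, enq_worker, NULL);
        pthread_create(&deq[i], NULL, deq_worker, &counts[i]);
    }
    for (int i = 0; i < N_THREADS; i++) {
        pthread_join(enq[i], NULL);
        pthread_join(deq[i], NULL);
    }

    int total = 0;
    for (int i = 0; i < N_THREADS; i++) total += counts[i];
    printf("dequeued %d of %d items\n", total, N_THREADS * ITEMS_PER_THREAD);

    lfqueue_destroy(&q);
    return total == N_THREADS * ITEMS_PER_THREAD ? 0 : 1;
}
```

The test passes when every enqueued item is dequeued exactly once across all consumer threads; it exercises correctness, not throughput.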
