Question: benchmark about tcmalloc and memcpy #28

Open
guangqianpeng opened this issue Feb 22, 2019 · 5 comments

guangqianpeng commented Feb 22, 2019

Hi, I am reading the code and have done some benchmarks:
https://github.com/guangqianpeng/libaco/blob/master/bench_result
I have two questions:

  1. tcmalloc improves the benchmark results. With aco_amount=1000000 and copy_stack_size=56B, the tcmalloc version achieves 37ns per aco_resume() operation but the default takes 66ns. Why? In this case, aco_resume() does not allocate memory, which is really confusing...
  2. When copying the stack, you use %xmm registers to optimize small memory copies. But according to my benchmark, this does not make much difference. I guess memcpy() already takes advantage of these registers. Do you have more benchmark results?

I would be very grateful if you could answer my questions :-)

hnes (Owner) commented Feb 22, 2019

Hi @guangqianpeng,

  1. tcmalloc improves the benchmark results. With aco_amount=1000000 and copy_stack_size=56B, the tcmalloc version achieves 37ns per aco_resume() operation but the default takes 66ns. Why? In this case, aco_resume() does not allocate memory, which is really confusing...

I think the main reason for this result is that tcmalloc has many specialized optimizations for memory efficiency and locality, especially for the allocation of small objects, which makes it much better than the vanilla glibc allocator here.
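
To make the locality point concrete, a minimal, hypothetical micro-benchmark might look like the sketch below (it is not taken from libaco; the buffer count and size simply mirror aco_amount=1000000 and copy_stack_size=56B from above). All allocation happens before the timed loop, which only copies, so any difference between allocators comes from where the buffers landed in memory rather than from allocation cost in the hot path:

```c
/* locality_bench.c -- hypothetical sketch, not part of libaco.
 * Every buffer is allocated up front; the timed loop only copies.
 *
 * glibc malloc:  gcc -O2 locality_bench.c -o bench
 * tcmalloc:      gcc -O2 locality_bench.c -o bench -ltcmalloc
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N_BUFS   1000000   /* mirrors aco_amount = 1000000        */
#define BUF_SIZE 56        /* mirrors copy_stack_size = 56 bytes  */

int main(void) {
    char src[BUF_SIZE];
    memset(src, 0xA5, sizeof(src));

    char **bufs = malloc(N_BUFS * sizeof(char *));
    for (size_t i = 0; i < N_BUFS; i++)
        bufs[i] = malloc(BUF_SIZE);       /* one small "save stack" each */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N_BUFS; i++)
        memcpy(bufs[i], src, BUF_SIZE);   /* no allocation in this loop */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per %dB copy\n", ns / N_BUFS, BUF_SIZE);

    for (size_t i = 0; i < N_BUFS; i++)
        free(bufs[i]);
    free(bufs);
    return 0;
}
```

Comparing the two binaries under perf (as you did later in this thread) should show the cache-behavior difference as well.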

  2. When copying the stack, you use %xmm registers to optimize small memory copies. But according to my benchmark, this does not make much difference. I guess memcpy() already takes advantage of these registers. Do you have more benchmark results?

There is actually a trick in the code, like __uint128_t xmm0: gcc will try to use SSE to optimize operations on the __uint128_t data type (whereas clang does not, as far as I know). So if you want this SSE optimization in libaco now, you could use gcc to compile libaco into a static library and then link it with any object file you like. You could also use objdump to inspect the actual machine code generated for the aco_resume function.
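
A rough sketch of the idea looks like the following (this is not the actual libaco source; the function name is invented, and it assumes both pointers are 16-byte aligned, which libaco arranges for its save stacks). Compiling it with gcc -O2 -S, or running objdump -d on the object file, should show %xmm registers being used for the 16-byte moves:

```c
#include <stdint.h>

/* Sketch only: a fixed 56-byte copy expressed through __uint128_t values.
 * gcc at -O2 typically lowers the 16-byte loads/stores to SSE moves via
 * %xmm registers; clang may choose differently.  Assumes 16-byte aligned
 * dst and src. */
void copy_56B_sse(void *dst, const void *src) {
    __uint128_t xmm0 = ((const __uint128_t *)src)[0];  /* bytes  0..15 */
    __uint128_t xmm1 = ((const __uint128_t *)src)[1];  /* bytes 16..31 */
    __uint128_t xmm2 = ((const __uint128_t *)src)[2];  /* bytes 32..47 */
    uint64_t    tail = ((const uint64_t   *)src)[6];   /* bytes 48..55 */
    ((__uint128_t *)dst)[0] = xmm0;
    ((__uint128_t *)dst)[1] = xmm1;
    ((__uint128_t *)dst)[2] = xmm2;
    ((uint64_t    *)dst)[6] = tail;
}
```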

Even when the compiler provides no such SSE enhancement, a "very-short inline memcpy" through general purpose registers is still more efficient than calling the libc memcpy directly. That is because, for such a short copy, the cost of the function call is no longer small enough to neglect. So there would be some gains anyway.
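
For contrast, the same fixed-size copy written with only 64-bit general purpose registers might look like the sketch below (again hypothetical, assuming 8-byte alignment). Even without any %xmm usage, a fully unrolled straight-line copy avoids the call/ret and the internal size dispatch of libc's memcpy, which is exactly the overhead that stops being negligible at 56 bytes:

```c
#include <stdint.h>

/* Sketch only: seven 8-byte moves, no function call, no branching on the
 * copy size.  Depending on flags, gcc may still merge some of these into
 * wider vector moves. */
void copy_56B_gpr(void *dst, const void *src) {
    const uint64_t *s = (const uint64_t *)src;
    uint64_t       *d = (uint64_t *)dst;
    uint64_t r0 = s[0], r1 = s[1], r2 = s[2], r3 = s[3];
    uint64_t r4 = s[4], r5 = s[5], r6 = s[6];
    d[0] = r0; d[1] = r1; d[2] = r2; d[3] = r3;
    d[4] = r4; d[5] = r5; d[6] = r6;
}
```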

Maybe in the future we should use SSE directly instead of counting on this compiler behavior ;-) But I'm afraid that plan has to be postponed, since there is a much more important thing to do now, i.e. #22.

I would be very grateful if you could answer my questions :-)

All discussions and questions about libaco are always welcome here. Just feel free to open any new issue you like :D

hnes (Owner) commented Feb 22, 2019

2. Do you have more benchmark results?

I did some benchmarks of this conditional memcpy inline in the past and did get the results I wanted, but I kept no records. I would like to run another test as soon as I have some spare time.

guangqianpeng (Author) commented:

  1. I did use the perf tool to check the L1 dcache miss rate of the two versions; the tcmalloc version achieved about half the miss rate of glibc. I guessed there was false sharing or some other cache problem and tried to solve it without tcmalloc, but I finally failed :-(

  2. I did inspect the assembly code of aco_resume() and saw instructions like movq %rbx, %xmm1. I also traced into glibc and found that memcpy finally calls __memcpy_avx_unaligned(), which uses the AVX instruction set. What I didn't expect is that the overhead of a memcpy() call cannot be ignored, especially for a small stack.

BTW, the libaco project is great, I am looking forward to your next version (especially the co scheduler).

hnes (Owner) commented Feb 23, 2019

BTW, the libaco project is great, I am looking forward to your next version (especially the co scheduler).

Thank you very much for your kind encouragement, @guangqianpeng, and I will try to finish the next release as soon as possible ;-)

shizhx commented Jul 29, 2019

BTW, the libaco project is great, I am looking forward to your next version (especially the co scheduler).

Thank you very much for your kind encouragement, @guangqianpeng, and I will try to finish the next release as soon as possible ;-)

How about the next release? :)
