Rewrite Huffman Coding Implementation #196
base: master
Conversation
@JieyangChen7, I am going to need a little bit of help getting this pull request ready to merge. I just rebased on #197. When I try to build, I get the following error message.

```
$ cmake -S . -B build -D CMAKE_PREFIX_PATH="$HOME/.local" -D MGARD_ENABLE_SERIAL=ON
$ cmake --build build --parallel 8
…
lib/libmgard.so.1.3.0: undefined reference to `mgard::huffman_decoding(long*, unsigned long, unsigned char*, unsigned long, unsigned char*, unsigned long, unsigned char*, unsigned long)'
lib/libmgard.so.1.3.0: undefined reference to `mgard::huffman_encoding(long*, unsigned long, unsigned char**, unsigned long*, unsigned char**, unsigned long*, unsigned char**, unsigned long*)'
collect2: error: ld returned 1 exit status
CMakeFiles/mgard-x-autotuner.dir/build.make:107: recipe for target 'bin/mgard-x-autotuner' failed
```

(This is on 6e281a6.) Here's the issue, as far as I can tell.
I would just modify …
@ben-e-whitney Ok, let me take a look at this commit first and figure out the possible solutions. I will let you know.
@ben-e-whitney I managed to fix this issue by calling the new compress/decompress API in …

There is one minor change needed. Since the new compress API returns …

Those changes are on my local machine. If you are OK with them, I can push them directly to this huffman-decoding branch, or I can let you know what changed and you can make the commit yourself.
@ben-e-whitney I noticed that the new lossless compress/decompress implementation has lower throughput than the old implementation. Here are my results with the same data (134 million quantized elements) and settings (Zstd with level=1). I timed just the lossless compress/decompress functions. Is this performance expected? I want to make sure I'm using the functions in the right way.
@JieyangChen7 Thanks for all the help.

Yes, I think that's fine. I'd even put all of … Sorry for all the trouble.

Please push them directly to this branch.

Definitely not expected/intended, but I didn't do any performance regression tests, so I'm not shocked to learn that something is slow. I'll look into this next week and try to fix it before asking for this branch to be merged.
@ben-e-whitney Thanks for the reply.

I have put both `CartesianProduct` and `CartesianProduct::iterator` in the `#ifndef __NVCC__` block and pushed the changes to this branch. It compiles fine both on my machine and on GitHub. It also passes all tests of MGARD-X on my machine using the new CPU lossless compressor. But it is reporting failures for some of your tests. For example:

```
/home/runner/work/MGARD/MGARD/tests/src/test_compress.cpp:248: FAILED
```

Do you know what might be wrong?
@JieyangChen7 I modified …

I'm still working on the performance regression and the test failures.
These commits

- … `huffman_encoding`, `huffman_decoding`, `compress_memory_huffman`, and `decompress_memory_huffman`;
- … `compress_memory_huffman`;
- … `huffman_encode` and `huffman_decode`;
- … `RFMH`, and new lossless compression methods, `CPU_HUFFMAN_ZLIB` and `CPU_ZSTD` (renaming the original `CPU_HUFFMAN_ZLIB` to `CPU_ZLIB`);
- … (`CPU_ZLIB`, formerly `CPU_HUFFMAN_ZLIB`, didn't use Huffman coding, while `CPU_HUFFMAN_ZSTD` did);
- … `compress` and `decompress`, adding support for the new lossless compression methods and the new Huffman code serialization method;
- … `include/compressors.hpp` and the zlib and ZSTD compression functions;