throughput and compression ratio of the high level API #187
Comments
@hengjiew Sorry about the late reply.
@JieyangChen7 Thanks for the reply. I will test with that PR. Besides this, is there any guidance on setting the error tolerance? When I set the tolerance below 1.0e-4, why does it stop compressing the data? Thanks!
@hengjiew Besides storing the compressed data, the returned data buffer also stores the information needed to decompress the data. In the GPU parallel implementation, that information can be as large as hundreds of KB to a few MB. So when the input dataset is small, the overhead of storing that information can dominate and limit the overall compression ratio; when the input data is large, the overhead is negligible.
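To see how a fixed metadata cost caps the ratio on small inputs, here is a minimal back-of-the-envelope sketch. The 1 MB metadata size and the 10:1 payload ratio are illustrative assumptions, not measured values:

#include <cstddef>
#include <cstdio>
#include <initializer_list>

int main() {
  // Illustrative assumptions, not measured values:
  const std::size_t metadata_bytes = 1 << 20; // assume ~1 MB of decompression metadata
  const double payload_ratio = 10.0;          // assume 10:1 compression on the payload itself

  for (std::size_t mb : {4, 64, 1024}) {
    const std::size_t input_bytes = mb << 20;
    const std::size_t compressed_bytes =
        metadata_bytes + static_cast<std::size_t>(input_bytes / payload_ratio);
    // Effective ratio includes the metadata, so it approaches 10:1
    // only as the input grows and the fixed cost is amortized.
    std::printf("%4zu MB input -> effective ratio %.2f\n", mb,
                static_cast<double>(input_bytes) / compressed_bytes);
  }
  return 0;
}

Under these assumptions a 4 MB input tops out near 2.9:1, while a 1 GB input reaches about 9.9:1, which matches the point above: the metadata overhead only matters for small datasets.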
Another issue: since your data is pointwise random, there is very little compressible structure for MGARD to take advantage of; the algorithm can't do much with noise. You should get a better compression ratio if your data is smoother. Try a random combination of sines and cosines:

for (std::size_t i = 0; i < ni; ++i) {
  const double x = static_cast<double>(i) / ni;
  for (std::size_t j = 0; j < nj; ++j) {
    const double y = static_cast<double>(j) / nj;
    for (std::size_t k = 0; k < nk; ++k) {
      const double z = static_cast<double>(k) / nk;
      // Set `f` to be a function with some smoothness.
      arr_h[(i * nj + j) * nk + k] = f(x, y, z);
    }
  }
}
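For instance, `f` could be a short sum of low-frequency modes. The particular coefficients and frequencies below are arbitrary choices for illustration, not part of the original suggestion:

#include <cmath>

// A hypothetical smooth test function: a few low-frequency
// sine/cosine modes, so MGARD has structure to exploit.
double f(double x, double y, double z) {
  const double pi = std::acos(-1.0);
  return std::sin(2.0 * pi * x) * std::cos(4.0 * pi * y)
       + 0.5 * std::cos(2.0 * pi * (y + z))
       + 0.25 * std::sin(6.0 * pi * x * z);
}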
Hello, I am testing the high-level APIs on a V100 GPU (Summit) with a very simple benchmark. The input data is generated from random numbers in (0, 1). I have a few questions, and it would be very helpful if you could shed some light on them.
Below is the test I am using. Thank you so much!
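(The test itself did not survive in this copy of the thread. As a stand-in, here is a rough sketch of a benchmark along the lines described: it is a hypothetical reconstruction, not the actual test, and it assumes the MGARD-X high-level `mgard_x::compress` overload taking shape, tolerance, and smoothness parameters, whose header path and exact signature may differ between releases.)

#include <cstdio>
#include <cstdlib>
#include <random>
#include <vector>

#include "mgard/compress_x.hpp" // assumed MGARD-X high-level header; path may vary

int main() {
  // Hypothetical dimensions; the issue does not state the exact sizes.
  const mgard_x::SIZE ni = 256, nj = 256, nk = 256;
  std::vector<double> arr_h(static_cast<std::size_t>(ni) * nj * nk);

  // Pointwise-random input in (0, 1), as described in the issue.
  std::mt19937_64 gen(0);
  std::uniform_real_distribution<double> dist(0.0, 1.0);
  for (auto &v : arr_h)
    v = dist(gen);

  const double tol = 1.0e-4; // absolute error tolerance
  const double s = 0.0;      // smoothness parameter (L2)
  void *compressed = nullptr;
  size_t compressed_size = 0;

  mgard_x::compress(3, mgard_x::data_type::Double, {ni, nj, nk}, tol, s,
                    mgard_x::error_bound_type::ABS, arr_h.data(), compressed,
                    compressed_size, false);

  std::printf("compression ratio: %.2f\n",
              (arr_h.size() * sizeof(double)) /
                  static_cast<double>(compressed_size));

  // NOTE: how the output buffer should be freed depends on the MGARD
  // version; std::free is an assumption here.
  std::free(compressed);
  return 0;
}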