Side effects of making libpng write faster by changing the deflate compression level #498

abhibaruah opened this issue Nov 6, 2023 · 6 comments

@abhibaruah

Hello all,

We have noticed that writing PNG images using libpng with the default compression settings is really slow.

For comparison, using libtiff to write TIFF files with its default deflate compression takes almost half the time to write the same image data while generating a file of almost the same size.

  1. Is there a reason why writing PNG files is significantly slower?
  2. One possible solution to speed up PNG writes is to alter the compression level using png_set_compression_level. Is this recommended? Are there any side effects to using this function (other than an increased file size if I lower the deflate level)?
@jbowler
Contributor

jbowler commented Nov 12, 2023

Try turning off the filtering.
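
A minimal sketch of what that looks like with the libpng write API, assuming png_ptr is a write struct already created with png_create_write_struct:

```c
#include <png.h>

/* Sketch: turn off adaptive filtering so every row uses the NONE filter
 * and no per-row filter heuristics run before the data reaches deflate. */
static void disable_write_filtering(png_structp png_ptr)
{
    png_set_filter(png_ptr, 0 /* standard filter method */, PNG_FILTER_NONE);
}
```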

@jbowler
Contributor

jbowler commented Dec 9, 2023

@ctruta: no response in a month. I'm guessing they realized that there is a lot of rope to play with here. It's not a bug. It might be worth having a "Discussions" tab, e.g. see https://github.com/NightscoutFoundation/xDrip; it would then be possible to bounce questions like this, feature requests, etc., to the Discussions tab.

@jbowler
Contributor

jbowler commented Jan 30, 2024

Dead bug.

@randy408

randy408 commented Feb 2, 2024

I did some experiments, and reducing the compression level gives the most predictable performance / file size tradeoff: https://libspng.org/docs/encode/#performance (these are the docs for my PNG library, but the encoder defaults are all the same and this applies to most PNG libraries).
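
For reference, in plain libpng terms this is a one-line setting; a minimal sketch, assuming png_ptr is an already-created write struct:

```c
#include <png.h>
#include <zlib.h>

/* Sketch: trade a larger IDAT for a faster write. Z_BEST_SPEED is zlib
 * level 1; libpng's default is Z_DEFAULT_COMPRESSION, which zlib treats
 * as level 6. */
static void set_fast_idat_compression(png_structp png_ptr)
{
    png_set_compression_level(png_ptr, Z_BEST_SPEED);
}
```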

Using zlib-ng instead of zlib also helps.

@jbowler
Contributor

jbowler commented Feb 2, 2024

@randy408 I found much the same thing for compression level. Mark specifically mentions Z_RLE as useful for PNG, and at least one other non-PNG format claims compression better than PNG with just RLE (no filtering, IIRC). Results of filtering are highly dependent on the test set. CGI graphics arts images do really well with SUB...

My results (reading this from my own version of libpng 1.7 :-) are:

  1. High speed: Z_RLE (as fast as HUFFMAN_ONLY and can reduce size a lot in a few cases).

  2. IDAT and iCCP: Z_DEFAULT_STRATEGY; otherwise (iTXt, zTXt) Z_FILTERED.

  3. Then set the level based on the strategy choice: for Z_RLE and Z_HUFFMAN_ONLY, level := 1.

  4. For Z_FIXED, Z_FILTERED and Z_DEFAULT_STRATEGY, as follows:

     - High speed: level := 1
     - 'low' compression: level := 3
     - 'medium' compression: level := 6 (the zlib default)
     - 'high' plus 'low memory' (on read) or 'high read speed': level := 9
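
In stock libpng 1.6 API terms, that strategy/level pairing might look roughly like the sketch below; the "aim" enum is invented purely for illustration and is not a libpng type:

```c
#include <png.h>
#include <zlib.h>

/* Hypothetical "aim" knob, for illustration only; not part of libpng. */
typedef enum { AIM_HIGH_SPEED, AIM_LOW, AIM_MEDIUM, AIM_HIGH } write_aim;

static void apply_compression_settings(png_structp png_ptr, write_aim aim)
{
    if (aim == AIM_HIGH_SPEED) {
        /* Points 1 and 3: RLE strategy at level 1 for fast writes. */
        png_set_compression_strategy(png_ptr, Z_RLE);
        png_set_compression_level(png_ptr, 1);
        return;
    }

    /* Point 4: pick the level from the requested aim. */
    int level = (aim == AIM_LOW) ? 3 : (aim == AIM_MEDIUM) ? 6 : 9;

    /* Point 2: IDAT uses the default strategy... */
    png_set_compression_strategy(png_ptr, Z_DEFAULT_STRATEGY);
    png_set_compression_level(png_ptr, level);

    /* ...while the text chunks (iTXt/zTXt) use Z_FILTERED. */
    png_set_text_compression_strategy(png_ptr, Z_FILTERED);
    png_set_text_compression_level(png_ptr, level);
}
```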

The 'windowBits' is set based on strategy/level/user request plus a consideration of the size of the uncompressed data; it's rarely worth setting it to cover the whole data except with Z_FILTERED at level >= 4 (determined by experiment) and Z_FIXED (by logical argument).

Sometimes a lower windowBits increases compression :-)
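
A sketch of the window-size side, assuming png_ptr is an existing write struct and uncompressed_size is the size of the raw IDAT stream (image bytes plus one filter byte per row); the finer strategy- and level-dependent rules above are not reproduced here:

```c
#include <png.h>
#include <stddef.h>

/* Sketch: request the smallest deflate window (2^9..2^15 bytes) that still
 * covers the uncompressed stream, so small images never pay for a full
 * 32 KB window. zlib silently bumps a requested windowBits of 8 to 9 for
 * deflate, so 9 is used as the floor here. */
static void set_window_bits_for_size(png_structp png_ptr, size_t uncompressed_size)
{
    int window_bits = 9;

    while (window_bits < 15 && ((size_t)1 << window_bits) < uncompressed_size)
        ++window_bits;

    png_set_compression_window_bits(png_ptr, window_bits);
}
```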

Choice of filter should be based on format × bit depth × width. In general I found NONE (i.e. no filtering) to be recommended if the actual number of bytes in the row (excluding the filter byte) is less than 256. Palette (all cases) and grayscale at less than 8 bits per pixel, as well as 16-bit gray+alpha, always ended up better with NONE.

I define the 'fast' filters as NONE+SUB+UP; these are fast in both encode and decode. NONE+SUB is particularly good for the reader because it avoids the need to access the previous row. Anyway, the filter selection I ended up with is:

  1. high speed or palette or low bit depth, or row<128 bytes or image size <= 512 bytes: NONE
  2. low compression: NONE+SUB
  3. medium compression: FAST
  4. high compression: ALL

Except that if the implementation is not able to buffer (e.g. to handle writing very large images in low memory), the NONE filter is forced.
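
As a rough illustration, those four tiers could be mapped onto png_set_filter() flags as in the sketch below; the tier enum and the size parameters are assumptions for the example, not libpng API:

```c
#include <png.h>
#include <stddef.h>

/* Hypothetical compression tier, for illustration only. */
typedef enum { TIER_HIGH_SPEED, TIER_LOW, TIER_MEDIUM, TIER_HIGH } filter_tier;

/* Sketch of the filter-selection rules above. rowbytes excludes the filter
 * byte; image_bytes is rowbytes multiplied by the image height. */
static void choose_write_filters(png_structp png_ptr, filter_tier tier,
                                 int palette_or_low_bit_depth,
                                 size_t rowbytes, size_t image_bytes)
{
    int filters;

    if (tier == TIER_HIGH_SPEED || palette_or_low_bit_depth ||
        rowbytes < 128 || image_bytes <= 512)
        filters = PNG_FILTER_NONE;                                   /* rule 1 */
    else if (tier == TIER_LOW)
        filters = PNG_FILTER_NONE | PNG_FILTER_SUB;                  /* rule 2 */
    else if (tier == TIER_MEDIUM)
        filters = PNG_FILTER_NONE | PNG_FILTER_SUB | PNG_FILTER_UP;  /* rule 3: "fast" */
    else
        filters = PNG_ALL_FILTERS;                                   /* rule 4 */

    png_set_filter(png_ptr, 0 /* standard filter method */, filters);
}
```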

Overall I'm firmly of the opinion that the app should specify the aim, such as fast write or fast read, and the implementation should work out the best way of doing it, because the zlib settings are extensive and the result depends on all of the settings at once. There is too much rope and no general solution.

Getting a decent test set is particularly tricky: I have a test set of around 125,000 PNG files that I spidered off the web over 10 years ago. It's primarily smaller images (<256x256), so it's probably still representative of web page icons etc. Larger images are difficult; the image databases used for AI are primarily photographic and not really applicable to PNG. My spidering showed that a lot of the very large images were graphics arts productions, particularly posters. These have low noise and a low colour count, whereas CG images (ray tracing) have high noise but often a fairly restricted gamut. Knowing the nature of the image being compressed is probably as good a guide to the settings as anything, but it's yet another variable!

@jbowler
Contributor

jbowler commented Sep 4, 2024

@ctruta: this is a discussion, not an issue. Don't know how you want to deal with that.
