When loading a TIFF with DEFLATE compression, geotiff.js makes a large number of partial GET requests (around 20 to 30). I assume the content is just distributed that way throughout the file. This seems to trigger a bug in Chrome, where the browser tries to satisfy some of these requests with parts of previous responses and fails. Related links:
webtorrent/webtorrent#1193 (comment)
mozilla/pdf.js#9022
https://bugs.chromium.org/p/chromium/issues/detail?id=770694
Would it be possible to implement some sort of workaround in geotiff.js? For example, a simple one would be to allow the caller to pass additional headers to be added to requests, which could set the Cache-Control: no-cache header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control). Or perhaps implement a retry per GET request to catch this specific error?
After looking closer at the code, it seems there is already a way to specify request headers through the options passed into GeoTIFF.fromUrl. After adding the following headers I no longer see the error; however, everything slows down, as expected:
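The exact headers used aren't shown above, but a minimal sketch would look something like this, assuming fromUrl forwards a headers option to its fetch-based source and reconstructing the header values from the Cache-Control docs linked earlier:

```js
import { fromUrl } from 'geotiff';

// Sketch, not the original poster's exact code: pass cache-busting headers
// through the fromUrl options so the browser does not try to satisfy new
// range requests from previously cached partial responses.
const tiff = await fromUrl('https://example.com/data.tif', {
  headers: {
    'Cache-Control': 'no-cache, no-store',
    'Pragma': 'no-cache',
  },
});
```

Bypassing the cache avoids the Chrome bug, but every range request now goes to the network, which explains the slowdown.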
This seems like a tricky issue and is related to the reasons why I don't like the current implementation of the blocked source.
First off: I don't think sending many block requests at once is viable, as browsers set a rather harsh limit on concurrent connections per host. There is already a small improvement implemented: consecutive blocks are fetched with a single request. But if there are 'holes' between the blocks, many separate requests are sent, which is the behavior you are experiencing (illustrated in the sketch below).
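To make that concrete, here is a sketch of the coalescing behavior described above (not the actual geotiff.js implementation): consecutive block indices merge into one byte range, and every gap starts a new range, i.e. a new request.

```js
// Merge consecutive block indices into byte ranges; each gap ('hole')
// between requested blocks starts a new range and hence a new request.
function coalesceBlocks(blockIds, blockSize) {
  const ranges = [];
  for (const id of [...blockIds].sort((a, b) => a - b)) {
    const last = ranges[ranges.length - 1];
    if (last && id === last.endBlock + 1) {
      last.endBlock = id; // consecutive: extend the current range
    } else {
      ranges.push({ startBlock: id, endBlock: id }); // hole: new range
    }
  }
  return ranges.map(({ startBlock, endBlock }) => ({
    offset: startBlock * blockSize,
    length: (endBlock - startBlock + 1) * blockSize,
  }));
}

// Blocks 3 and 6–7 are missing, so this yields three ranges → three requests:
console.log(coalesceBlocks([0, 1, 2, 4, 5, 8], 65536));
```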
There is also the possibility to request multiple patches of data with a single HTTP range request, where the response is returned as an HTTP multipart (multipart/byteranges), but I never really got around to implementing that. I think this would be the most suitable approach, as only one request would be sent to the server per image read request.
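For reference, a rough sketch of what such a multi-range request could look like, assuming the server honors multi-range requests and replies with 206 and multipart/byteranges (parsing the multipart body is elided here):

```js
// Sketch only: issue one GET covering several byte ranges at once.
async function fetchRanges(url, ranges) {
  const rangeHeader = 'bytes=' + ranges
    .map(({ offset, length }) => `${offset}-${offset + length - 1}`)
    .join(',');
  const response = await fetch(url, { headers: { Range: rangeHeader } });
  const contentType = response.headers.get('Content-Type') || '';
  if (contentType.startsWith('multipart/byteranges')) {
    // One part per range, separated by the boundary advertised in the
    // Content-Type header; each part carries its own Content-Range header.
    const boundary = contentType.split('boundary=')[1];
    return { multipart: true, boundary, body: await response.arrayBuffer() };
  }
  // Servers may ignore multi-range and return a single (or full) response,
  // so a real implementation has to handle that fallback as well.
  return { multipart: false, body: await response.arrayBuffer() };
}
```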
Regarding the retry mechanism: I think it is both feasible and sensible to do, even if I'm not a big fan of working around browser issues. Unfortunately, I'm not really able to implement this right now, but I'm happy to accept PRs.
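One possible shape for such a retry, sketched as a generic fetch wrapper (this is not part of geotiff.js; the retry count and backoff values are arbitrary choices for illustration):

```js
// Retry a single GET a few times with exponential backoff before giving up,
// treating any non-2xx status (e.g. a corrupted partial response) as a failure.
async function fetchWithRetry(url, init, retries = 3, backoffMs = 100) {
  for (let attempt = 0; ; attempt++) {
    try {
      const response = await fetch(url, init);
      if (response.ok) return response; // 200 or 206
      throw new Error(`Unexpected status ${response.status}`);
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffMs * 2 ** attempt));
    }
  }
}
```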