[Help]: Writing Geotiff #126
Comments
You can try restarting the Dask scheduler.
@AlexeyPechnikov How do I do that? Should I restart the runtime?
To restart Dask without losing your current state, re-execute this cell:
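The cell itself is not shown above; as a hedged sketch (assuming a local dask.distributed cluster held in a variable named `client`, which is an assumption rather than PyGMTSAR code), a restart that keeps your notebook variables looks like:

```python
# Sketch only: restart a local Dask cluster's workers while keeping the
# Python session (and your xarray/dask variable definitions) intact.
# The variable name `client` is an assumption, not PyGMTSAR code.
from dask.distributed import Client

client = Client(n_workers=1, threads_per_worker=1)  # or reuse your existing client
client.restart()      # clears all worker memory and pending tasks
print(client.status)  # the client reconnects and is usable again
```

`Client.restart()` drops all data held on the workers, so lazy (dask-backed) arrays recompute from scratch on the next access.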
When I got the error I mentioned above, I tried your suggestion and ran it again, but it gave the error again. I don't understand why this section, which always worked before, now gives errors. Can you help?
INFO:distributed.scheduler:Receive client connection: Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
If it still does not work for you, check your installed Python libraries. The tested libraries are listed in the PyGMTSAR Dockerfile https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile and can be installed accordingly.
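The install command itself is not shown above; the general shape is the following (an assumption — the authoritative package list and version pins live in the linked pygmtsar.Dockerfile):

```shell
# Assumption: upgrade PyGMTSAR itself; for exact dependency versions,
# copy the pins from the linked pygmtsar.Dockerfile instead.
pip install -U pygmtsar
```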
After that, restart your Jupyter kernel and reprocess the notebook to check.
I'm working on Colab; should I try this anyway?
No, on Google Colab just check that you are using a "High RAM" instance.
I am using high RAM; should I turn off this feature?
I turned off the high-RAM feature but the error still persists.
High RAM is better; no need to disable it. OK, you can also try to export single-band rasters (one per date).
This feature was working very well; what could be the reason it is not working now? Also, how can I save the rasters one by one from xarray?
Maybe changes you made in your notebook or updates to Google Colab's preinstalled libraries are causing issues with reproducibility. For consistent execution, you might want to check my examples, which are updated in response to changes in Google Colab, or use the PyGMTSAR Docker image. A single date raster can be exported as a single-band GeoTIFF.
Thanks, what is your most current Colab notebook?
All PyGMTSAR public Google Colab notebooks are up-to-date.
Alexey, I examined your notebooks, but I could not find the code section where you save the displacement maps the way I do. How can I do that? Or am I getting an error due to a pygmtsar update? I will send access to my Colab notebook to your e-mail address, if you deem it appropriate.
I cannot debug and support your own code for free. Use the PyGMTSAR export functions as I mentioned above (see https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/pygmtsar/pygmtsar/Stack_export.py), or you will need to pay for my work on your specific requirements. But what is your reason for reinventing the GeoTIFF export function already available in PyGMTSAR?…
It means your wavelength choice does not make sense. The filter size spans thousands of kilometers, even though the full size of a Sentinel-1 scene is much smaller. |
@AlexeyPechnikov It is very important for me to get past this problem. |
The progress indicator is blue, so it is currently calculating. It's possible that your…
I tried the GeoTIFF export you mentioned above to save them one by one as single-date TIFF files, but I still got the same error.
There is no error in your recent screenshot; it is working. |
INFO:distributed.scheduler:Receive client connection: Client-worker-842d6f0e-1103-11ef-b26f-0242ac1c000c
@AlexeyPechnikov However, my analysis covered a very small area over one year. I don't understand why the dimensions are a problem, because I have been running this now-problematic code for 2 months.
There are no errors in the log, and the processing can continue. The 'CancelledError' simply indicates that one of the tasks was cancelled and will be re-executed automatically. The message 'INFO:distributed.core:Event loop was unresponsive in Nanny for 32.10s.' indicates that the processing tasks are large and require substantial RAM. You should check the sizes of your processing grids. Also, subset.rio.to_raster(filename, driver='GTiff', crs='EPSG:4326') is not part of the PyGMTSAR code but your own, and it requires the complete data cube to be materialized, which is impractical for large grids. Use PyGMTSAR functions or your own well-optimized code, because straightforward solutions do not work for large datasets. You could check how the 'Lake Sarez Landslides, Tajikistan' and 'Golden Valley, CA' examples process large data stacks efficiently.
As I mentioned above, you start memory-intensive computations with…
To compute the displacements between scenes (dates) you need to apply the least-squares solution:
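The solution itself is not shown above; as an illustrative sketch (plain NumPy with toy numbers — not the PyGMTSAR call), the SBAS-style least-squares step recovers per-date displacements from pairwise interferogram differences:

```python
# Toy SBAS least-squares: recover per-date displacements from pairwise
# differences. Dates, pairs, and values are made up for illustration.
import numpy as np

n_dates = 4
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]  # (reference, repeat) indices

truth = np.array([0.0, 1.0, 3.0, 6.0])                   # hypothetical displacements
obs = np.array([truth[j] - truth[i] for i, j in pairs])  # unwrapped pair values

# design matrix: each pair observes the difference between two dates
A = np.zeros((len(pairs), n_dates))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = -1.0, 1.0

# the system is rank-deficient, so pin the first date to zero
sol, *_ = np.linalg.lstsq(A[:, 1:], obs, rcond=None)
disp = np.concatenate([[0.0], sol])
print(disp)  # per-date series relative to the first date
```

Redundant pairs (here five observations for three unknowns) are exactly what makes the least-squares estimate robust to noise in individual interferograms.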
It seems you have an inconsistency between your…
@AlexeyPechnikov Should I apply this to the LOS values? In the image, disp_sbas_finish is the deformation in the LOS direction; ew is east-west and ud is vertical deformation. Should I apply the formula you mentioned to disp_sbas_finish, i.e. the deformation in the LOS direction, or to ew and ud?
Commonly, we use least-squares processing on phase values and convert them to LOS, east-west, and vertical displacements later. In the notebook the least-squares processing and the LOS projection calculation are merged into a single command; you need to split them.
I didn't fully understand. Also, I couldn't find the turbo variable in the notebook; where do I get it from?
Do not apply the east-west or vertical projection to your…
Change the 300,000-meter (300 km) filter size to a reasonable value.
What does this value represent? If I change it myself to fit the 79-pixel array, will that negatively affect the results?
ValueError: The overlapping depth 507 is larger than your array 79.
@AlexeyPechnikov, Do you see any abnormalities in the images? |
Also, how can I calculate the turbo variable you mentioned here? Can you show me an example line? |
Your interferograms look affected by strong atmospheric noise. Try to clean up (detrend) them. While I don't know your ground truth, and it could potentially be valid surface deformation, I'm doubtful. I don't think I have shared public examples with turbulence correction. You can use Gaussian filtering as in the 'Imperial Valley SBAS Analysis, 2015' notebook.
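A hedged sketch of that Gaussian cleanup (entirely synthetic data; the `sigma` value and the split into "signal" and "atmosphere" are illustrative, and the real workflow is in the Imperial Valley notebook):

```python
# Sketch: remove a long-wavelength (atmospheric) component by subtracting
# a wide Gaussian low-pass of the phase. Everything here is synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

yy, xx = np.mgrid[0:200, 0:200].astype(float)
signal = 0.01 * np.sin(xx / 5.0)                   # short-wavelength deformation
atmosphere = 0.1 * np.cos(xx / 80.0 + yy / 60.0)   # long-wavelength noise
phase = signal + atmosphere

trend = gaussian_filter(phase, sigma=30.0)  # sigma ~ cutoff scale in pixels
detrended = phase - trend                   # keeps the short-wavelength part
```

The wide Gaussian passes the long-wavelength atmosphere into `trend`, so the subtraction leaves mostly the short-wavelength deformation.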
For your area with a side of about 10 km, you selected a filter wavelength that is too long. What is the reason? You should compare this filter wavelength and area size with the example in the 'Imperial Valley SBAS Analysis, 2015' notebook. And, by the way, the ValueError on your screenshot has the exact information about the maximum possible wavelength for your array size. |
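Reading the numbers straight off the ValueError: the overlap depth scales linearly with the filter wavelength, so a depth of 507 pixels at 300 km bounds the usable wavelength for a 79-pixel array (a rough estimate that ignores any rounding inside the filter):

```python
# Back-of-envelope from the error message: depth grows linearly with the
# filter wavelength, so scale 300 km down until the depth fits 79 pixels.
depth_at_300km = 507   # pixels, from the ValueError
array_size = 79        # pixels, from the ValueError
max_wavelength_m = 300_000 * array_size / depth_at_300km
print(round(max_wavelength_m))  # roughly 46.7 km — choose well below this
```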
@AlexeyPechnikov, hello. I was saving my results as GeoTIFF with the following snippet. While it was working 2-3 weeks ago, I am now getting errors and cannot export the results. How can I solve this?
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.99s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[... dozens of similar "Event loop was unresponsive in Nanny/Scheduler" messages, 9-21 s each ...]
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-2f2a6a160e5cca29371a725244bf31c5', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd4b0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd3f0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:46968> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-2f2a6a160e5cca29371a725244bf31c5', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd6c0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd720>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:46968> already closed.
INFO:distributed.core:Connection to tcp://127.0.0.1:46968 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.scheduler:Receive client connection: Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42562
INFO:distributed.scheduler:Receive client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42564
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.29s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Connection to tcp://127.0.0.1:47004 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Scheduler for 19.16s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.scheduler:Close client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.20s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:47004>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 262, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.scheduler:Receive client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:36614
INFO:distributed.scheduler:Close client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Connection to tcp://127.0.0.1:46982 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c
CancelledError Traceback (most recent call last)
in <cell line: 3>()
1 disp_subsett2 = disp_sbas_finish.rio.write_crs("epsg:4326", inplace=False)
2 disp_subsett2.rio.set_spatial_dims('lon', 'lat', inplace=True)
----> 3 disp_subsett2.rio.to_raster(f'disp_sbas.tiff')
12 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
2231 else:
2232 raise exception.with_traceback(traceback)
-> 2233 raise exc
2234 if errors == "skip":
2235 bad_keys.add(key)
CancelledError: ('getitem-7bc26de7f9999889faa36778be8593d0', 0, 0)