
[Help]: Writing Geotiff #126

Open
oguzhannysr opened this issue Apr 25, 2024 · 46 comments
oguzhannysr commented Apr 25, 2024

@AlexeyPechnikov, hello, I was saving my results as GeoTIFF with the snippet below. It was working 2-3 weeks ago, but now I am getting errors and cannot write out the results. How can I solve this?


disp_subsett2 = disp_sbas_finish.rio.write_crs("epsg:4326", inplace=False)
disp_subsett2.rio.set_spatial_dims('lon', 'lat', inplace=True)
disp_subsett2.rio.to_raster(f'disp_sbas.tiff')
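
For reference, a minimal sketch of this export path, assuming disp_sbas_finish is a lazy Dask-backed xarray.DataArray from the SBAS workflow; materializing it with .compute() first keeps the GeoTIFF write itself out of the Dask graph, which can sidestep scheduler timeouts during to_raster:

import rioxarray  # registers the .rio accessor on xarray objects

# Pull the lazy result into local memory first, so to_raster() does not
# drive the whole Dask graph through the raster writer.
disp = disp_sbas_finish.compute()

# Attach the CRS and tell rioxarray which dims are spatial.
disp = disp.rio.write_crs("epsg:4326", inplace=False)
disp.rio.set_spatial_dims(x_dim='lon', y_dim='lat', inplace=True)

# Write the GeoTIFF.
disp.rio.to_raster('disp_sbas.tiff')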

INFO:distributed.core:Event loop was unresponsive in Nanny for 9.99s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 10.00s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[... roughly 35 further near-identical "Event loop was unresponsive" INFO messages (9-21 s, Nanny and Scheduler) omitted ...]
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-2f2a6a160e5cca29371a725244bf31c5', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd4b0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7d8c899dd3f0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:46968> already closed.
[... a second near-identical 'task-erred' CommClosedError traceback omitted ...]
INFO:distributed.core:Connection to tcp://127.0.0.1:46968 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.scheduler:Receive client connection: Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42562
INFO:distributed.scheduler:Receive client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:42564
INFO:distributed.core:Event loop was unresponsive in Nanny for 20.29s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Connection to tcp://127.0.0.1:47004 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Scheduler for 19.16s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.33s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.scheduler:Close client connection: Client-worker-fe041166-02d4-11ef-9f2d-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Nanny for 9.20s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:42143 remote=tcp://127.0.0.1:47004>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 262, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.scheduler:Receive client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:36614
INFO:distributed.scheduler:Close client connection: Client-worker-fed43ffe-02d4-11ef-9f21-0242ac1c000c
INFO:distributed.core:Connection to tcp://127.0.0.1:46982 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-fe0471b1-02d4-11ef-9f27-0242ac1c000c

CancelledError Traceback (most recent call last)
in <cell line: 3>()
1 disp_subsett2 = disp_sbas_finish.rio.write_crs("epsg:4326", inplace=False)
2 disp_subsett2.rio.set_spatial_dims('lon', 'lat', inplace=True)
----> 3 disp_subsett2.rio.to_raster(f'disp_sbas.tiff')

12 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
2231 else:
2232 raise exception.with_traceback(traceback)
-> 2233 raise exc
2234 if errors == "skip":
2235 bad_keys.add(key)

CancelledError: ('getitem-7bc26de7f9999889faa36778be8593d0', 0, 0)

oguzhannysr changed the title to "[Help]: Writing Geotiff" on Apr 25, 2024
@AlexeyPechnikov (Owner)

You can try restarting the Dask scheduler.

@oguzhannysr (Author)

@AlexeyPechnikov How do I do that? Should I restart the runtime?

@AlexeyPechnikov (Owner)

To restart Dask without losing your current state, re-execute this cell:

# cleanup for repeatable runs
if 'client' in globals():
    client.close()
client = Client()
client
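
If a full reset is acceptable, Dask also exposes client.restart() on an existing client; as an alternative sketch, note that it restarts the workers and clears any data they hold in memory:

# Restart all workers on the current cluster; this drops any
# results currently held in distributed memory.
client.restart()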

@oguzhannysr (Author)

When I got the error I mentioned above, I tried your suggestion and ran it again, but it gave the same error. I don't understand why this section, which always worked before, now gives errors. Can you help?

INFO:distributed.scheduler:Receive client connection: Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55322
INFO:distributed.scheduler:Receive client connection: Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55358
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4dc0-0397-11ef-b3b5-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55346
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4587-0397-11ef-b3ae-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:55338
INFO:distributed.core:Event loop was unresponsive in Nanny for 8.69s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 8.69s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[... roughly 25 further near-identical "Event loop was unresponsive" INFO messages (7-23 s, Nanny and Scheduler) omitted ...]
INFO:distributed.core:Connection to tcp://127.0.0.1:55358 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
INFO:distributed.core:Event loop was unresponsive in Nanny for 30.65s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 24.25s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.13s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.scheduler:Close client connection: Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-4b0b85a4b992c9c92e509e4b19026e7b', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081215570>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0812152d0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-9e75dca11455a3812cd04b6fb1274b4e', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081215180>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214df0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-be2d503f4356c066ea6589c2861aca67', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812149a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214640>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-ebddb1e0e6c9622df7b4f88e5599a8a9', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c16ef0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c14220>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-d8f7a2cd7dbbdda3bdf448ee415bf604', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c148e0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c14610>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-29d756474573f7e9b25ce3321b7ddba6', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c146d0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c147c0>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-d3e7c4063ca67f326af952e1db795259', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c14b80>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c14d30>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': 'original-xarray-correlation-aa49928eb1a8c35b11cedd952f043584'}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
[... the same report() traceback repeats for each remaining 'key-in-memory' message ('xarray--...', 'array-...', 'mean_agg-aggregate-...', 'getitem-...') and for 'task-erred' messages on every chunk of 'getitem-b00c286a4d2c5aedc20f053ecf45334d', with only the key and the client port changing ...]
INFO:distributed.core:Event loop was unresponsive in Nanny for 6.75s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 6.74s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write [{'op': 'task-erred', 'key': ('getitem-67e075ce076bbb85dc28c17b673d3531', 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c32ce0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c32f20>}]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 6074, in send_all
c.send(*msgs)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
[... two more identical send_all() tracebacks follow for 'getitem-8dd107733eec2a4017b992a660a9cd27' and 'getitem-a90d0488e2ae55b85f016009c3cf21c7', after which the same 'task-erred' and 'key-in-memory' report() tracebacks repeat for the remaining keys and chunks until the log ends ...]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
INFO:distributed.core:Connection to tcp://127.0.0.1:55322 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 14, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0825970a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0825970d0>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': 'original-xarray-correlation-aa49928eb1a8c35b11cedd952f043584'}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--41b80a94332ef20c3ed709e58f7500a6', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--8f3d587a87441f5ef9a64fb660fcac10', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--0a86185bfa68c433e1d1106944d7421b', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--2381badfd2c9e4e472a4bc0bcea480af', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('getitem-3dff0c0b1297f41c4c840dab18997453', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--9081f3a009580b2bdbdf45ab29f4a636', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('array-a46f6f15799e48fad8fde929682ebaf2', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--a84c523e18bdf364a6a8c89a63f8001b', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('mean_agg-aggregate-5ba091f58b50c5152b3a23f84a632321', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--03a074e697c65c1dcfa5108783844155', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--66835645657a33f1cee70013a61950ae', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('array-497117a73dfd25c61cca2e435389967a', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'key-in-memory', 'key': ('xarray--9b575535d46ac4471b0cd6e970213457', 0, 0)}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 12, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe082597130>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe082597160>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 15, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b52710>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b53d00>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 13, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe083c15d20>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe083c15390>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 0, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe081b32ad0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081b31e40>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 11, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0812152a0>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe081214f10>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346> already closed.
CRITICAL:distributed.scheduler:Closed comm <BatchedSend: closed> while trying to write {'op': 'task-erred', 'key': ('getitem-b00c286a4d2c5aedc20f053ecf45334d', 10, 0, 0), 'exception': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5270>, 'traceback': <distributed.protocol.serialize.Serialized object at 0x7fe0839b5450>}
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/scheduler.py", line 5669, in report
c.send(msg)
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 156, in send
raise CommClosedError(f"Comm {self.comm!r} already closed.")
distributed.comm.core.CommClosedError: Comm <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338> already closed.
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4587-0397-11ef-b3ae-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:37204
INFO:distributed.scheduler:Receive client connection: Client-worker-aefbf145-0397-11ef-b3a9-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:37188
INFO:distributed.scheduler:Receive client connection: Client-worker-aefb4dc0-0397-11ef-b3b5-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:38194
INFO:distributed.scheduler:Receive client connection: Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:38186
INFO:distributed.core:Event loop was unresponsive in Nanny for 22.08s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Connection to tcp://127.0.0.1:55346 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-aefb4dc0-0397-11ef-b3b5-0242ac1c000c
INFO:distributed.core:Connection to tcp://127.0.0.1:55338 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-aefb4587-0397-11ef-b3ae-0242ac1c000c
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55322>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 262, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 297, in write
raise StreamClosedError()
tornado.iostream.StreamClosedError: Stream is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 307, in write
convert_stream_closed_error(self, e)
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55346>: Stream is closed
INFO:distributed.core:Event loop was unresponsive in Nanny for 22.02s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Nanny for 22.02s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35395 remote=tcp://127.0.0.1:55338>
[the same CommClosedError traceback as above]
INFO:distributed.core:Event loop was unresponsive in Nanny for 15.45s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.core:Event loop was unresponsive in Scheduler for 15.45s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
INFO:distributed.scheduler:Close client connection: Client-worker-aefae76f-0397-11ef-b3b3-0242ac1c000c

CancelledError Traceback (most recent call last)
in <cell line: 3>()
1 disp_subsett2 = disp_sbas_finish.rio.write_crs("epsg:4326", inplace=False)
2 disp_subsett2.rio.set_spatial_dims('lon', 'lat', inplace=True)
----> 3 disp_subsett2.rio.to_raster(f'disp_sbas.tiff')

12 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
2231 else:
2232 raise exception.with_traceback(traceback)
-> 2233 raise exc
2234 if errors == "skip":
2235 bad_keys.add(key)

CancelledError: ('getitem-b01734d57e58971e18e041a264b58a25', 0, 0)

@AlexeyPechnikov
Owner

In case it still does not work for you, you need to check your installed Python libraries. The tested library versions are listed in the PyGMTSAR Dockerfile https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile and can be installed as

pip3 install \
    adjustText==1.0.4 \
    asf_search==7.0.4 \
    dask==2024.1.1 \
    distributed==2024.1.1 \
    geopandas==0.14.3 \
    h5netcdf==1.3.0 \
    h5py==3.10.0 \
    imageio==2.31.5 \
    ipywidgets==8.1.1 \
    joblib==1.3.2 \
    matplotlib==3.8.0 \
    nc-time-axis==1.4.1 \
    numba==0.57.1 \
    numpy==1.24.4 \
    pandas==2.2.1 \
    remotezip==0.12.2 \
    rioxarray==0.15.1 \
    scikit-learn==1.3.1 \
    scipy==1.11.4 \
    seaborn==0.13.0 \
    shapely==2.0.3 \
    statsmodels==0.14.0 \
    tqdm==4.66.1 \
    xarray==2024.2.0 \
    xmltodict==0.13.0 \
    pygmtsar

After that, restart your Jupyter kernel and re-run the notebook to check.

@oguzhannysr
Author

I'm working on Colab; should I try this anyway?

@AlexeyPechnikov
Owner

No, on Google Colab just check that you are using a "High RAM" instance.

@oguzhannysr
Author

I am using High RAM; should I turn off this feature?

@oguzhannysr
Author

I turned off the High RAM feature, but the error still persists.

@AlexeyPechnikov
Owner

High RAM is better; there is no need to disable it. OK, you can also try to export single-band rasters (one per date).

@oguzhannysr
Author

This feature was working very well; why is it not working now? Also, how can I save the dates one by one from xarray?

@AlexeyPechnikov
Owner

Maybe changes you made in your notebook or updates to Google Colab's installed libraries are causing the reproducibility issues. For consistent execution, you might want to check my examples, which are updated in response to changes in Google Colab, or use the PyGMTSAR Docker image. To export a single-date raster as a single-band GeoTIFF, use disp_subset[0], disp_subset.isel(date=0), or disp_subset.sel(date=...). Also, pay attention to the PyGMTSAR export functions, which are well optimized for most use cases.
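For example, a minimal per-date export sketch using rioxarray (variable names follow this thread; assuming disp_subset has (date, lat, lon) dimensions in EPSG:4326):

import rioxarray  # registers the .rio accessor on xarray objects

for date in disp_subset.date.values:
    # one date -> a single-band 2-D array
    da = disp_subset.sel(date=date)
    da = da.rio.write_crs('epsg:4326', inplace=False)
    da.rio.set_spatial_dims('lon', 'lat', inplace=True)
    # each date becomes its own single-band GeoTIFF
    da.rio.to_raster(f'disp_{str(date)[:10]}.tif')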

@oguzhannysr
Author

Thanks. What is your most current Colab notebook?

@AlexeyPechnikov
Owner

All PyGMTSAR public Google Colab notebooks are up-to-date.

@oguzhannysr
Author

Alexey, I examined your notebooks, but I could not find the code where you save the displacement maps the way I do. How can I do that? Or am I getting an error due to a PyGMTSAR update? I will share access to my Colab notebook with your e-mail address, if you deem it appropriate.

@AlexeyPechnikov
Owner

I cannot debug and support your own code for free. Use the PyGMTSAR export functions as I mentioned above (see https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/pygmtsar/pygmtsar/Stack_export.py), or you need to pay for my work on your special requirements. But what is actually your reason to reinvent the GeoTIFF export function already available in PyGMTSAR?

@oguzhannysr
Author

[screenshot]
Thank you, Alexey, but I'm stuck like this again :(

@oguzhannysr
Author

[screenshot]
@AlexeyPechnikov, I got this error while trying a different notebook; how can I get past it?

@AlexeyPechnikov
Owner

It means your wavelength choice does not make sense. The filter size spans thousands of kilometers, even though the full size of a Sentinel-1 scene is much smaller.

@oguzhannysr
Author

[screenshot] Thank you, Alexey, but I'm stuck like this again :(

@AlexeyPechnikov It is very important for me to get past this problem.

@AlexeyPechnikov
Owner

The progress indicator is blue, so it is currently calculating. It's possible that your disp_sbas_finish variable is defined in a way that requires too much RAM to process. Be aware that exporting to NetCDF produces a single large file, which can be quite huge. As discussed above, exporting to GeoTIFF generates a set of files and can be much more efficient. You don't need to switch between functions; stick with the selected one and adjust your code if problems arise. Also, restarting the Dask scheduler can help resolve issues that occurred before the execution of the selected cell.
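A minimal scheduler-restart sketch (assuming the notebook created a local dask.distributed client earlier; the variable name client is an assumption):

from dask.distributed import Client

client = Client()  # assumption: a local cluster was started like this earlier in the notebook
client.restart()   # restarts the workers and clears all in-memory task results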

@oguzhannysr
Author

I tried using the GeoTIFF export you mentioned above to save the dates as separate .tif files one by one, but I still got the same error.

@AlexeyPechnikov
Owner

There is no error in your recent screenshot; it is working.

@oguzhannysr
Author

There are no errors in your recent screenshot; it is working.

[screenshot]

@oguzhannysr
Author

INFO:distributed.scheduler:Receive client connection: Client-worker-842d6f0e-1103-11ef-b26f-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:39142
INFO:distributed.scheduler:Receive client connection: Client-worker-8630719a-1103-11ef-b272-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:39160
INFO:distributed.scheduler:Receive client connection: Client-worker-8638e296-1103-11ef-b267-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:39170
INFO:distributed.scheduler:Receive client connection: Client-worker-862be432-1103-11ef-b26d-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:39154
INFO:distributed.core:Event loop was unresponsive in Nanny for 16.16s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[the same "Event loop was unresponsive" message repeats many times for the Nanny and Scheduler with delays of 15-32s]
WARNING:distributed.utils_perf:full garbage collections took 19% CPU time recently (threshold: 10%)
INFO:distributed.core:Connection to tcp://127.0.0.1:39170 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-8638e296-1103-11ef-b267-0242ac1c000c
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:43693 remote=tcp://127.0.0.1:39170>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/distributed/batched.py", line 115, in _background_send
nbytes = yield coro
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 767, in run
value = future.result()
File "/usr/local/lib/python3.10/dist-packages/distributed/comm/tcp.py", line 262, in write
raise CommClosedError()
distributed.comm.core.CommClosedError
WARNING:distributed.utils_perf:full garbage collections took 19% CPU time recently (threshold: 10%)
[the garbage-collection warning and the "Event loop was unresponsive" messages repeat several more times]
INFO:distributed.core:Connection to tcp://127.0.0.1:39154 has been closed.
INFO:distributed.scheduler:Remove client Client-worker-862be432-1103-11ef-b26d-0242ac1c000c
INFO:distributed.scheduler:Close client connection: Client-worker-8638e296-1103-11ef-b267-0242ac1c000c
INFO:distributed.batched:Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:43693 remote=tcp://127.0.0.1:39154>
[the same CommClosedError traceback as above]
INFO:distributed.scheduler:Close client connection: Client-worker-862be432-1103-11ef-b26d-0242ac1c000c
INFO:distributed.scheduler:Receive client connection: Client-worker-862be432-1103-11ef-b26d-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46956

CancelledError Traceback (most recent call last)
in <cell line: 2>()
6 filename = f"result/data_{date}.tif"
7 # save the subset as GeoTIFF
----> 8 subset.rio.to_raster(filename, driver='GTiff', crs='EPSG:4326')
9 print(f"{filename} saved successfully.")

17 frames
/usr/local/lib/python3.10/dist-packages/distributed/client.py in _gather()
2231 else:
2232 raise exception.with_traceback(traceback)
-> 2233 raise exc
2234 if errors == "skip":
2235 bad_keys.add(key)

CancelledError: ('getitem-08f5ccff885124bb3aed4c18f43f0f97', 48, 0, 0)

@oguzhannysr
Author

@AlexeyPechnikov However, my analysis covered a very small area over just one year. I don't understand why the data size is a problem, because I had been running this now-problematic code for two months.

@AlexeyPechnikov
Owner

There are no errors in the log, and the processing can continue. The CancelledError simply indicates that one of the tasks was cancelled and will be re-executed automatically. The message 'INFO:distributed.core:Event loop was unresponsive in Nanny for 32.10s.' indicates that the processing tasks are large and require substantial RAM. You should check the sizes of your processing grids. Also, subset.rio.to_raster(filename, driver='GTiff', crs='EPSG:4326') is not part of the PyGMTSAR code but your own, and it requires the complete data cube to be materialized, which is impractical for large grids. Use PyGMTSAR functions or your own well-optimized code, because straightforward solutions do not work for large datasets. You could check how the 'Lake Sarez Landslides, Tajikistan' and 'Golden Valley, CA.' examples process large data stacks efficiently.
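A quick way to inspect the grid sizes mentioned here (a sketch; the disp_sbas_finish name follows this thread):

# dimensions, shape, and the in-memory size of the full data cube
print(disp_sbas_finish.dims, disp_sbas_finish.shape)
print('full cube size, GiB:', disp_sbas_finish.nbytes / 1024**3)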

@oguzhannysr
Author

[screenshot]
This is the data I want to save.

@oguzhannysr
Author

[screenshot]

@AlexeyPechnikov
Owner

As I mentioned above, you start memory-intensive computations with disp_sbas_finish. PyGMTSAR follows delayed-computation principles, so your variable may be not the actual output data but rather a recipe to compute it. Stacking a lot of computation on the data without materializing it only works for small datasets. Try materializing the data on disk first and then export it afterward.
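A minimal sketch of that idea (assuming disp_sbas_finish is a lazy (date, lat, lon) DataArray; the file names are placeholders):

import xarray as xr
import rioxarray  # registers the .rio accessor

# materialize the delayed computation on disk once...
disp_sbas_finish.to_netcdf('disp_sbas.nc')

# ...then reopen the materialized data and export it cheaply
disp = xr.open_dataarray('disp_sbas.nc')
disp = disp.rio.write_crs('epsg:4326', inplace=False)
disp.rio.set_spatial_dims('lon', 'lat', inplace=True)
disp.rio.to_raster('disp_sbas.tiff')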

@oguzhannysr
Author

[screenshot]
@AlexeyPechnikov Alexey, thank you very much for your help; the notebooks you mentioned worked faster and better. Now I have a different question. I decomposed the LOS data into vertical and east-west components and wrote them as GeoTIFFs. However, the results are written per pair. My expectation was that they would be like the LOS outputs on the left of the screenshot, that is, a single .tif file for each date.

@AlexeyPechnikov
Owner

To compute the displacements between scenes (dates), you need to apply the least-squares solution:

disp = sbas.lstsq(unwrap.phase - trend - turbo, corr)

It seems you have an inconsistency between your disp_sbas_finish, defined for interferograms, and the least-squares processed LOS results.

@oguzhannysr
Author

@AlexeyPechnikov Should I apply this to the LOS values? In the image, disp_sbas_finish is the deformation in the LOS direction, ew is the east-west deformation, and ud is the vertical deformation. Should I apply the formula you mentioned to disp_sbas_finish, i.e. the deformation in the LOS direction, or to ew and ud?

@oguzhannysr
Author

[screenshot]
You have already done this step in the notebook; do I need to do it again?

@AlexeyPechnikov
Owner

Commonly, we apply least-squares processing to the phase values and convert them to LOS, east-west, and vertical displacements later. In the notebook, the least-squares processing and the LOS projection calculation are merged into a single command; you need to split them.
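A minimal sketch of the split (sbas.lstsq is quoted above; the LOS-conversion helper name los_displacement_mm is taken from the public PyGMTSAR examples, so verify it in your version; omit turbo if you don't compute it):

# 1) least-squares inversion on the (detrended) phase
disp_phase = sbas.lstsq(unwrap.phase - trend, corr)

# 2) convert the inverted phase to LOS displacement afterwards
disp_los_mm = sbas.los_displacement_mm(disp_phase)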

@oguzhannysr
Author

I didn't fully understand. Also, I couldn't find the turbo variable in the notebook; where do I get it from?

@AlexeyPechnikov
Owner

Do not apply the east-west or vertical projection to your disp_sbas variable because it is already the LOS projection. See the 'CENTRAL Türkiye Mw 7.8 & 7.5 Earthquakes Co-Seismic Interferogram, 2023' example for the functions' usage. If you don't calculate the turbulent atmosphere correction turbo, just omit it.

@oguzhannysr
Author

[screenshot]
I opened the notebook you mentioned and changed only the initial variables for my own area, but even though I tried twice, I always get stuck here. What should I do?

@AlexeyPechnikov
Owner

Change the 300,000-meter (300 km) filter size to a reasonable value.

@oguzhannysr
Author

oguzhannysr commented May 15, 2024

What does this value represent? If I change this value myself to fit the array size of 79, will it negatively affect the results? ValueError: The overlapping depth 507 is larger than your array 79.

@oguzhannysr
Author

Do not apply the east-west or vertical projection to your disp_sbas variable because it is already the LOS projection. See the 'CENTRAL Türkiye Mw 7.8 & 7.5 Earthquakes Co-Seismic Interferogram, 2023' example for the functions' usage. If you don't calculate the turbulent atmosphere correction turbo, just omit it.

[screenshot]
@AlexeyPechnikov, is it right for me to do this?

@oguzhannysr
Author

[five screenshots]

@AlexeyPechnikov, do you see any abnormalities in the images?

@oguzhannysr
Author

Do not apply the east-west or vertical projection to your disp_sbas variable because it is already the LOS projection. See the 'CENTRAL Türkiye Mw 7.8 & 7.5 Earthquakes Co-Seismic Interferogram, 2023' example for the functions' usage. If you don't calculate the turbulent atmosphere correction turbo, just omit it.

Also, how can I calculate the turbo variable you mentioned here? Can you show me an example line?

@AlexeyPechnikov
Owner

Your interferograms look affected by strong atmospheric noise. Try to clean them up (detrend). While I don't know your ground truth, and it could potentially be valid surface deformation, I'm doubtful.

I think I haven't shared public examples with turbulence correction. You can use Gaussian filtering as in the 'Imperial Valley SBAS Analysis, 2015' notebook.
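For illustration only, a generic sketch of the idea with scipy rather than the PyGMTSAR helpers (all values and the wavelength-to-sigma mapping are assumptions; the Imperial Valley notebook shows the library's own approach):

import numpy as np
from scipy.ndimage import gaussian_filter

phase = np.random.rand(200, 200)  # stand-in for one 2-D unwrapped phase grid
pixel_m = 60.0                    # assumed grid spacing, metres
wavelength_m = 8000.0             # assumed filter wavelength, metres

# estimate the long-wavelength (turbulent atmosphere) component and subtract it
sigma_pixels = wavelength_m / pixel_m / 2  # crude wavelength -> sigma mapping (assumption)
turbo = gaussian_filter(phase, sigma=sigma_pixels)
phase_corrected = phase - turbo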

@oguzhannysr
Author

[two screenshots]
@AlexeyPechnikov, even though I change the wavelength, the expected array size also changes. How can I choose an appropriate wavelength?

@AlexeyPechnikov
Owner

For your area, with a side of about 10 km, you selected a filter wavelength that is too long. What is the reason? You should compare this filter wavelength and area size with the example in the 'Imperial Valley SBAS Analysis, 2015' notebook. And, by the way, the ValueError in your screenshot contains the exact information about the maximum possible wavelength for your array size.
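A back-of-the-envelope check, with every number assumed for illustration: the filter's pixel footprint must fit inside the grid, so the usable wavelength is bounded by the grid extent:

n_pixels = 79    # grid size along one axis (the value from the ValueError above)
pixel_m = 120.0  # assumed grid spacing, metres
extent_m = n_pixels * pixel_m

# a wavelength longer than the grid extent cannot work; pick something well below it
print(f"grid extent ~{extent_m:.0f} m; choose a filter wavelength well below that")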
