# Physics-informed NeuralODE for Post-disaster Mobility Recovery (KDD 2024 Research Track)

This repo contains the code and data for the following paper:

Jiahao Li, Huandong Wang, Xinlei Chen\*. Physics-informed NeuralODE for Post-disaster Mobility Recovery. KDD 2024 (Research Track).

## Environment setup and running the code

The main environment requirements are listed in `requirements.txt`.

To run training and testing and obtain the experimental results, run:

```
cd CDGON-KDD24
python main.py
```

## About the data

The mobility data from August 1 to September 10, 2019 for Florida (FL), Georgia (GA), and South Carolina (SC) are collected in `./data`.

All data are organized as tensors of shape `[T, N, N]`, representing T population mobility matrices, where the diagonal elements are intra-regional population flows and the off-diagonal elements are inter-regional population flows.
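For illustration, here is a minimal sketch of how one of these tensors could be inspected with NumPy; the file name `FL.npy` and the `.npy` format are assumptions for this example and may not match the actual layout of `./data`.

```python
import numpy as np

# Hypothetical example: the file name "FL.npy" and the .npy format are
# assumptions for illustration, not a documented part of this repository.
mobility = np.load("./data/FL.npy")           # expected shape [T, N, N]
T, N, _ = mobility.shape

# Diagonal entries of each [N, N] matrix: intra-regional population flows.
intra_flows = np.einsum("tii->ti", mobility)  # shape [T, N]

# Off-diagonal entries: inter-regional population flows (diagonal zeroed out).
inter_flows = mobility * (1.0 - np.eye(N))    # shape [T, N, N]

# Total inter-regional flow per time step.
print(T, N, intra_flows.shape, inter_flows.sum(axis=(1, 2)).shape)
```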

## Supplementary materials

The supplementary materials below include a proof of convergence of the key formula $\frac{\mathrm{d} r_i(t)}{\mathrm{d} t} = \alpha \frac{r_i(t)}{\overline{r_i}}[\overline{r_i} - r_i(t)]$, performance evaluation results for STGCN and CDGON, and hyper-parameter experimental results.

**Theorem:** In the formula $\frac{\mathrm{d} r_i(t)}{\mathrm{d} t} = \alpha \frac{r_i(t)}{\overline{r_i}}[\overline{r_i} - r_i(t)]$, $r_i(t)$ converges to $\overline{r_i}$ as $t \to \infty$ rather than oscillating perpetually around $\overline{r_i}$.

**Proof:**

The differential equation $\frac{\mathrm{d} r_i(t)}{\mathrm{d} t} = \alpha \frac{r_i(t)}{\overline{r_i}}[\overline{r_i} - r_i(t)]$ can be rewritten as

$\overline{r_i}\,\mathrm{d} r_i(t) = \alpha r_i(t)[\overline{r_i} - r_i(t)]\,\mathrm{d} t$.

This equation is separable:

$\frac{1}{\alpha r_i(t)[\overline{r_i} - r_i(t)]}\mathrm{d} r_i(t) = \frac{1}{\overline{r_i}}\mathrm{d} t$.

Integrating both sides, using the partial-fraction decomposition $\frac{1}{r_i(t)[\overline{r_i} - r_i(t)]} = \frac{1}{\overline{r_i}}\left[\frac{1}{r_i(t)} + \frac{1}{\overline{r_i} - r_i(t)}\right]$, gives

$\frac{r_i(t)}{\overline{r_i} - r_i(t)} = Ce^{\alpha t}$,

where $C$ is an integration constant determined by the initial condition $r_i(0)$. Solving for $r_i(t)$ yields

$r_i(t) = \frac{\overline{r_i}}{1+C'e^{-\alpha t}}$, where $C' = 1/C$.

For $\alpha > 0$, $e^{-\alpha t} \to 0$ as $t \to \infty$, so

$\lim_{t \to \infty} r_i(t) = \lim_{t \to \infty}\frac{\overline{r_i}}{1+C'e^{-\alpha t}}=\overline{r_i}$,

which proves the theorem. $\blacksquare$
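As a quick numerical sanity check of the theorem (not part of the original materials), the sketch below integrates the ODE with a simple forward-Euler scheme and compares the result with the closed-form solution; the values of $\alpha$, $\overline{r_i}$, and $r_i(0)$ are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative values only; alpha, r_bar, and r0 are not taken from the paper.
alpha, r_bar, r0 = 0.5, 100.0, 10.0
dt, steps = 0.01, 5000                 # integrate up to t = 50

# Forward-Euler integration of dr/dt = alpha * (r / r_bar) * (r_bar - r).
r = r0
for _ in range(steps):
    r += dt * alpha * (r / r_bar) * (r_bar - r)

# Closed-form solution r(t) = r_bar / (1 + C' * exp(-alpha * t)),
# with C' fixed by the initial condition: C' = r_bar / r0 - 1.
t_end = dt * steps
c_prime = r_bar / r0 - 1.0
r_exact = r_bar / (1.0 + c_prime * np.exp(-alpha * t_end))

print(f"Euler: {r:.6f}  closed form: {r_exact:.6f}  r_bar: {r_bar}")
# Both approach r_bar = 100 monotonically; there is no oscillation.
```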

### Comparison of Performance Evaluation between STGCN and CDGON

| Experiment | Metric | STGCN | CDGON |
| --- | --- | --- | --- |
| Performance Evaluation in FL | MAE | 62417.8359 | 59767.4805 |
| | R2 | 0.9909 | 0.9948 |
| | NRMSE | 0.0954 | 0.0724 |
| Performance Evaluation in GA | MAE | 7082.8677 | 2013.2821 |
| | R2 | 0.9734 | 0.9977 |
| | NRMSE | 0.1631 | 0.0475 |
| Performance Evaluation in SC | MAE | 18133.7500 | 9040.6785 |
| | R2 | 0.9759 | 0.9941 |
| | NRMSE | 0.1554 | 0.0771 |
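For reference, a minimal sketch of how MAE, R2, and NRMSE could be computed with NumPy; normalizing the RMSE by the range of the ground truth is an assumption here, since NRMSE conventions vary and the paper's exact definition is not restated in this README.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the ground truth (an assumption;
    other normalizations, e.g. by the mean, are also common)."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# Toy usage with synthetic [T, N, N] mobility-like data (illustration only).
rng = np.random.default_rng(0)
y_true = rng.uniform(0.0, 1e5, size=(10, 5, 5))
y_pred = y_true + rng.normal(0.0, 1e3, size=y_true.shape)
print(mae(y_true, y_pred), r2(y_true, y_pred), nrmse(y_true, y_pred))
```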

### Hyper-parameter experimental results

Hyper-parameter settings in CDGON:

- Embedding dimension: 48
- Edge loss weight $\lambda$: 100
- Learning rate: 0.003
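For convenience, the same defaults gathered into a plain Python dict; the name `DEFAULT_HPARAMS` and the key names are hypothetical and not taken from `main.py`.

```python
# Hypothetical container for the defaults listed above; the dict name and
# key names are illustrative and are not taken from main.py.
DEFAULT_HPARAMS = {
    "embedding_dim": 48,       # embedding dimension
    "edge_loss_weight": 100,   # lambda, weight of the edge loss term
    "learning_rate": 0.003,
}
```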

Experiment results for different embedding dimensions; the other parameter settings are the same as in the original paper.

| Experiment | Metric | 16 | 32 | 48 | 64 | 80 |
| --- | --- | --- | --- | --- | --- | --- |
| Performance Evaluation in FL | MAE | 65796.6484 | 31556.1035 | 59767.4805 | 38900.8594 | 16794.2266 |
| | R2 | 0.9817 | 0.9947 | 0.9948 | 0.9949 | 0.9993 |
| | NRMSE | 0.1353 | 0.0728 | 0.0724 | 0.0711 | 0.0265 |
| Performance Evaluation in GA | MAE | 8596.8066 | 2476.3357 | 2013.2821 | 6726.749 | 2004.8026 |
| | R2 | 0.9 | 0.9965 | 0.9977 | 0.9649 | 0.9976 |
| | NRMSE | 0.3162 | 0.0588 | 0.0475 | 0.1873 | 0.0493 |
| Performance Evaluation in SC | MAE | 15904.1963 | 20676.6699 | 9040.6758 | 13369.793 | 35273.7461 |
| | R2 | 0.9814 | 0.9683 | 0.9941 | 0.9821 | 0.9332 |
| | NRMSE | 0.1363 | 0.178 | 0.0771 | 0.134 | 0.2584 |
| Generalization FL -> GA | MAE | 8991.2666 | 5983.022 | 5433.7192 | 3936.6899 | 12559.5605 |
| | R2 | 0.9442 | 0.9704 | 0.9831 | 0.9852 | 0.8773 |
| | NRMSE | 0.2363 | 0.1722 | 0.1301 | 0.1215 | 0.3502 |
| Generalization FL -> SC | MAE | 22449.4219 | 15979.0742 | 13609.5889 | 13573.167 | 32977.6367 |
| | R2 | 0.9525 | 0.9736 | 0.9773 | 0.9797 | 0.8973 |
| | NRMSE | 0.218 | 0.1625 | 0.1508 | 0.1426 | 0.3204 |
| Generalization GA -> FL | MAE | 44399.207 | 46987.5703 | 48276.1992 | 40045.4766 | 57892.6836 |
| | R2 | 0.9901 | 0.9919 | 0.9901 | 0.9922 | 0.9863 |
| | NRMSE | 0.0997 | 0.0898 | 0.0997 | 0.0881 | 0.1171 |
| Generalization GA -> SC | MAE | 14188.7959 | 16384.1875 | 14315.9375 | 12567.7197 | 13666.5508 |
| | R2 | 0.9721 | 0.9732 | 0.982 | 0.9847 | 0.9831 |
| | NRMSE | 0.167 | 0.1637 | 0.1341 | 0.1235 | 0.13 |
| Generalization SC -> FL | MAE | 37216.1172 | 42591.7539 | 73204.6719 | 53571.0898 | 74730.8906 |
| | R2 | 0.9926 | 0.9916 | 0.9801 | 0.988 | 0.9782 |
| | NRMSE | 0.0859 | 0.0916 | 0.1412 | 0.1095 | 0.1478 |
| Generalization SC -> GA | MAE | 4730.644 | 3934.5408 | 8921.1035 | 3830.97 | 11805.2217 |
| | R2 | 0.9785 | 0.9875 | 0.9687 | 0.9891 | 0.9456 |
| | NRMSE | 0.1467 | 0.1117 | 0.177 | 0.1042 | 0.2331 |

Experiment results for different values of $\lambda$; the other parameter settings are the same as in the original paper.

| Experiment | Metric | 10 | 50 | 100 | 500 | 1000 |
| --- | --- | --- | --- | --- | --- | --- |
| Performance Evaluation in FL | MAE | 32522.3301 | 40861.3164 | 59767.4805 | 35012.9688 | 64905.2461 |
| | R2 | 0.994 | 0.9912 | 0.9948 | 0.9953 | 0.9866 |
| | NRMSE | 0.0772 | 0.0939 | 0.0724 | 0.0686 | 0.116 |
| Performance Evaluation in GA | MAE | 43049.1211 | 4385.7549 | 2013.2821 | 7148.8594 | 5631.6104 |
| | R2 | 0.4618 | 0.9816 | 0.9977 | 0.9399 | 0.9876 |
| | NRMSE | 0.7336 | 0.1357 | 0.0475 | 0.2452 | 0.1113 |
| Performance Evaluation in SC | MAE | 21663.9062 | 22532.2617 | 9040.6758 | 16925.8652 | 38515.3086 |
| | R2 | 0.9648 | 0.9658 | 0.9941 | 0.9674 | 0.8788 |
| | NRMSE | 0.1875 | 0.1848 | 0.0771 | 0.1804 | 0.3481 |
| Generalization FL -> GA | MAE | 7073.2612 | 7787.6982 | 5433.7192 | 4014.4309 | 8322.3545 |
| | R2 | 0.9646 | 0.9603 | 0.9831 | 0.9835 | 0.9553 |
| | NRMSE | 0.1882 | 0.1993 | 0.1301 | 0.1283 | 0.2114 |
| Generalization FL -> SC | MAE | 17733.6484 | 18547.4883 | 13609.5889 | 14024.291 | 21716.9141 |
| | R2 | 0.9692 | 0.9665 | 0.9773 | 0.9771 | 0.9559 |
| | NRMSE | 0.1756 | 0.1832 | 0.1508 | 0.1513 | 0.2099 |
| Generalization GA -> FL | MAE | 44205.918 | 45013.5977 | 48276.1992 | 50217.8906 | 52088.3398 |
| | R2 | 0.9913 | 0.9912 | 0.9901 | 0.9889 | 0.9904 |
| | NRMSE | 0.0932 | 0.0937 | 0.0997 | 0.1053 | 0.0979 |
| Generalization GA -> SC | MAE | 12889.4541 | 13916.4736 | 14315.9375 | 13024.3271 | 12746.8926 |
| | R2 | 0.9849 | 0.9823 | 0.982 | 0.986 | 0.986 |
| | NRMSE | 0.1227 | 0.1329 | 0.1341 | 0.1185 | 0.1184 |
| Generalization SC -> FL | MAE | 67278.0703 | 80161.0156 | 73204.6719 | 54324.0898 | 69783.5703 |
| | R2 | 0.9834 | 0.9756 | 0.9801 | 0.9888 | 0.9717 |
| | NRMSE | 0.1288 | 0.1561 | 0.1412 | 0.1058 | 0.1681 |
| Generalization SC -> GA | MAE | 5476.9692 | 14294.9346 | 8921.1035 | 3708.6938 | 9658.623 |
| | R2 | 0.9864 | 0.9228 | 0.9687 | 0.9922 | 0.9241 |
| | NRMSE | 0.1166 | 0.2779 | 0.177 | 0.0881 | 0.2756 |

Experiment results for different learning rates; the other parameter settings are the same as in the original paper.

| Experiment | Metric | 0.0001 | 0.0005 | 0.001 | 0.005 | 0.01 |
| --- | --- | --- | --- | --- | --- | --- |
| Performance Evaluation in FL | MAE | 529004.1875 | 34693.8633 | 33831.0703 | 36524.457 | 60011.1562 |
| | R2 | 0.1695 | 0.9975 | 0.9976 | 0.9984 | 0.9942 |
| | NRMSE | 0.9113 | 0.0496 | 0.0488 | 0.0396 | 0.076 |
| Performance Evaluation in GA | MAE | 37850.4648 | 4462.7021 | 5688.2329 | 2161.2673 | 11902.7266 |
| | R2 | 0.003 | 0.9876 | 0.9777 | 0.9972 | 0.9387 |
| | NRMSE | 0.9985 | 0.1112 | 0.1495 | 0.0526 | 0.2476 |
| Performance Evaluation in SC | MAE | 58461.9492 | 15060.959 | 18775.9277 | 32302.1719 | 19255.9629 |
| | R2 | 0.7281 | 0.9858 | 0.9748 | 0.9269 | 0.9672 |
| | NRMSE | 0.5214 | 0.1194 | 0.1586 | 0.2703 | 0.1812 |
| Generalization FL -> GA | MAE | 46940.0625 | 39384.4102 | 6600.5337 | 4510.082 | 5471.5781 |
| | R2 | -0.3108 | 0.0531 | 0.9146 | 0.9828 | 0.9798 |
| | NRMSE | 1.1449 | 0.9731 | 0.2923 | 0.1311 | 0.1421 |
| Generalization FL -> SC | MAE | 151936.8438 | 131516.125 | 19109.916 | 17394.168 | 15692.9873 |
| | R2 | -0.6068 | -0.2168 | 0.9596 | 0.9638 | 0.9707 |
| | NRMSE | 1.2676 | 1.1031 | 0.201 | 0.1904 | 0.1712 |
| Generalization GA -> FL | MAE | 437491.4062 | 44095.7656 | 44387.4062 | 53430.0625 | 68644.3203 |
| | R2 | 0.2801 | 0.9919 | 0.9919 | 0.9883 | 0.9839 |
| | NRMSE | 0.8485 | 0.0901 | 0.0897 | 0.1083 | 0.1271 |
| Generalization GA -> SC | MAE | 110432.5 | 14388.5625 | 13461.3564 | 13739.5166 | 14401.3418 |
| | R2 | 0.1196 | 0.9806 | 0.9821 | 0.9843 | 0.9798 |
| | NRMSE | 0.9383 | 0.1392 | 0.1339 | 0.1254 | 0.142 |
| Generalization SC -> FL | MAE | 424390.75 | 47684.0977 | 90427.2734 | 84410.2734 | 83852.9453 |
| | R2 | 0.3208 | 0.9903 | 0.9352 | 0.968 | 0.9719 |
| | NRMSE | 0.8241 | 0.0984 | 0.2547 | 0.1788 | 0.1676 |
| Generalization SC -> GA | MAE | 29812.2109 | 7731.0142 | 5788.4355 | 6306.5957 | 7316.2769 |
| | R2 | 0.4087 | 0.9602 | 0.9809 | 0.97 | 0.9542 |
| | NRMSE | 0.769 | 0.1995 | 0.1382 | 0.1732 | 0.214 |
