
Feat/nf evaluation #891

Open · marcopeix wants to merge 17 commits into main

Conversation

marcopeix
Contributor

Script to evaluate all NeuralForecast (NF) models on benchmark datasets. The table needs to be updated with metrics as results are obtained.

Main changes:

  • Remove "alias" in models.py and simply use the model name instead
  • Specify the ds column as int or datetime in datasets.py
  • Add a README and an environment.yml file for reproducibility and to show results


CLAassistant commented Feb 15, 2024

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
2 out of 3 committers have signed the CLA.

✅ cchallu
✅ marcopeix
❌ Marco Peixeiro


Marco Peixeiro seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

VascoSch92 left a comment


Instead of passing the dataset name as an argument, could you use a config file (YAML, TOML, or whatever)?

In this way, if you want to add another dataset you only have to change the config, not three scripts ;-)
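Something along these lines could work — a minimal sketch, assuming a hypothetical datasets.yaml whose entries and fields (horizon, freq) are illustrative rather than taken from the PR:

```python
# Hypothetical datasets.yaml (names and fields are illustrative, not part of the PR):
#
#   ETTm2:
#     horizon: 96
#     freq: "15min"
#   ECL:
#     horizon: 720
#     freq: "H"

import yaml


def load_dataset_config(path: str, name: str) -> dict:
    """Read the section for `name` from a YAML config file."""
    with open(path) as f:
        config = yaml.safe_load(f)
    return config[name]


# Usage: the evaluation scripts look up everything they need from the config,
# so adding a new dataset only means adding a new entry to datasets.yaml.
# cfg = load_dataset_config("datasets.yaml", "ETTm2")
# horizon, freq = cfg["horizon"], cfg["freq"]
```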

from datasetsforecast.long_horizon import LongHorizon


def get_dataset(name):


The name of the method is not correct: you are returning not only the dataset but also other parameters.
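For instance, the rename could make the extra return values explicit. A sketch only — the extra values returned here (horizon and frequency) are assumptions based on the comment, not the PR's actual signature:

```python
from datasetsforecast.long_horizon import LongHorizon

# Hypothetical lookup table; the values are illustrative.
DATASET_PARAMS = {'ETTm2': (96, '15min')}


def get_dataset_with_params(name):
    """Return the dataset together with the parameters the caller needs
    (horizon and frequency are assumed here; the real extras may differ)."""
    Y_df, *_ = LongHorizon.load(directory='./data', group=name)
    horizon, freq = DATASET_PARAMS[name]
    return Y_df, horizon, freq
```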

@@ -0,0 +1,30 @@
from neuralforecast.auto import *


I would import just what you need, not everything.
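For example, restricting the import to the classes the script actually instantiates — the exact list below is a guess; adjust it to whatever get_model uses:

```python
# Import only the classes the script uses instead of a wildcard import.
from neuralforecast.auto import AutoNHITS, AutoTFT, AutoPatchTST
from neuralforecast.losses.pytorch import HuberLoss
```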


def get_model(model_name, horizon, num_samples):
    """Returns the model class given the model name.
    """


nit: docstring on one line, i.e., """Returns the model class given the model name."""

        'AutoPatchTST': AutoPatchTST(config=None, h=horizon, loss=HuberLoss(), num_samples=num_samples)
    }

    return model_dict[model_name]


You could just create the dictionary at module level and import it, instead of creating it inside a method and then calling the method.
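Because the entries need horizon and num_samples, one way to move the dictionary to module level is to register the classes and construct the model at the call site. A sketch under those assumptions, not the PR's code:

```python
from neuralforecast.auto import AutoNHITS, AutoTFT, AutoPatchTST
from neuralforecast.losses.pytorch import HuberLoss

# Module-level registry that other scripts can import directly.
MODEL_CLASSES = {
    'AutoNHITS': AutoNHITS,
    'AutoTFT': AutoTFT,
    'AutoPatchTST': AutoPatchTST,
}


def build_model(model_name, horizon, num_samples):
    """Instantiate the requested Auto model with the shared settings."""
    model_cls = MODEL_CLASSES[model_name]
    return model_cls(config=None, h=horizon, loss=HuberLoss(), num_samples=num_samples)
```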

os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"


def main(args):


I would call it run_experiments.

        'MSE': [model_mse],
        'sMAPE': [model_smape],
        'MAPE': [model_mape],
        'time': [elapsed_time]


'elapsed_time' instead of 'time'?

    results_path = f'./results/{args.dataset}'
    os.makedirs(results_path, exist_ok=True)

    metrics_df = pd.DataFrame(metrics)


For every model you are creating a separate pandas DataFrame. What if you loop over every model and create a single DataFrame with all the results? It would be much easier to compare the different models.
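A rough sketch of the looped version — run_and_score and MODEL_NAMES are placeholders standing in for the PR's own fit/predict/metric code, not real names from the branch:

```python
import os

import pandas as pd

MODEL_NAMES = ['AutoNHITS', 'AutoTFT', 'AutoPatchTST']  # placeholder list


def run_and_score(model_name, args):
    """Placeholder for the PR's existing fit/predict/metric code.
    Should return (mse, smape, mape, elapsed_time) for one model."""
    raise NotImplementedError


def evaluate_all_models(args):
    """Evaluate every model on the dataset and collect one row per model."""
    rows = []
    for model_name in MODEL_NAMES:
        mse, smape, mape, elapsed_time = run_and_score(model_name, args)
        rows.append({
            'model': model_name,
            'MSE': mse,
            'sMAPE': smape,
            'MAPE': mape,
            'elapsed_time': elapsed_time,
        })

    results_path = f'./results/{args.dataset}'
    os.makedirs(results_path, exist_ok=True)

    # One DataFrame covering every model makes side-by-side comparison trivial.
    metrics_df = pd.DataFrame(rows)
    metrics_df.to_csv(f'{results_path}/metrics.csv', index=False)
    return metrics_df
```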

