We want to make it easy for users to get their best VersionedModel from their published paths, so we initially included the following static method prototype:
```python
@staticmethod
def get_best(paths: list[str],
             metric: str = 'val_loss',
             comparator: Callable[[Any, Any], Any] = min) -> \
        'VersionedModel':
    """Returns an instance of the best VersionedModel from the given paths.

    :param paths: A list of the VersionedModel paths to check. Each path may
        be on the local filesystem or remote, independent of the other paths.
        An S3 path should be a URL of the form "s3://bucket-name/path/to/dir".
    :param metric: The name of the metric in the model's training history to
        use as the basis of comparison for determining the best model. This
        metric is often the validation loss.
    :param comparator: The function to use to compare model metric values. It
        takes two model metric values and returns the more desirable of the
        two. For example, min will return the model with the lowest metric
        value, say, validation loss.
    """
```
However, it's not clear whether we have enough information to choose for the user which of their models is the best. Should we pick the model based on the metric score in the last epoch? In the epoch that scores best according to the comparator? Should we just load all of the models and evaluate them on the validation dataset? This could be a helpful feature, but we can't make assumptions on the user's behalf. In the first release, the user can choose for themselves how to compare models.
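As a sketch of what that choice looks like on the user's side, the snippet below picks the model path whose final-epoch metric value the comparator prefers. It is only an illustration: the `load_history` callable is a hypothetical stand-in for however the caller loads a model's training history, not part of the VersionedModel API, and it hard-codes the "last epoch" policy that the library itself deliberately avoids choosing.

```python
from typing import Any, Callable


def pick_best_path(
        paths: list[str],
        load_history: Callable[[str], dict[str, list[float]]],
        metric: str = 'val_loss',
        comparator: Callable[[Any, Any], Any] = min) -> str:
    """Returns the path whose final-epoch metric value the comparator prefers.

    load_history is supplied by the caller and maps a model path to its
    training history (metric name -> per-epoch values); it stands in for
    whatever loading mechanism the user actually has available.
    """
    best_path = paths[0]
    best_value = load_history(best_path)[metric][-1]
    for path in paths[1:]:
        value = load_history(path)[metric][-1]
        # The comparator returns the more desirable of the two metric values,
        # e.g., min keeps the lower validation loss.
        if comparator(value, best_value) == value:
            best_path, best_value = path, value
    return best_path
```

Whether to compare on the last epoch, the best epoch, or a fresh evaluation pass is then entirely the caller's decision, which is exactly the flexibility the first release leaves open.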
Also included were the following test function prototypes:
```python
def test_get_best_model_gets_min_val_loss() -> None:
    """Tests that get_best returns the versioned model with the minimum
    validation loss."""
    # TODO
    assert False


def test_get_best_model_custom_comparator() -> None:
    """Tests that get_best returns the versioned model with the maximum
    validation loss when max is supplied as the custom comparator."""
    # TODO
    assert False


def test_get_best_model_custom_metric() -> None:
    """Tests that get_best returns the versioned model with the minimum
    performance on the custom metric when one is supplied."""
    # TODO
    assert False
```