Include kwargs in the evaluator's wrappers #200

Open
boriero opened this issue Jun 28, 2022 · 0 comments
Labels: bug (Something isn't working), enhancement (New feature or request)

boriero commented Jun 28, 2022

Instructions

  • Include kwargs in the evaluator's functions

from

def precision_evaluator(test_data: pd.DataFrame,
                        threshold: float = 0.5,
                        prediction_column: str = "prediction",
                        target_column: str = "target",
                        eval_name: str = None) -> EvalReturnType:

    eval_fn = generic_sklearn_evaluator("precision_evaluator__", precision_score)
    eval_data = test_data.assign(**{prediction_column: (test_data[prediction_column] > threshold).astype(int)})
    return eval_fn(eval_data, prediction_column, target_column, eval_name)

to

def precision_evaluator(
    test_data: pd.DataFrame,
    threshold: float = 0.5,
    prediction_column: str = "prediction",
    target_column: str = "target",
    eval_name: str = None,
    **kwargs,
) -> EvalReturnType:   

    eval_fn = generic_sklearn_evaluator("precision_evaluator__", precision_score)
    eval_data = test_data.assign(**{prediction_column: (test_data[prediction_column] > threshold).astype(int)})
    return eval_fn(eval_data, prediction_column, target_column, eval_name, **kwargs)

Describe the feature and the current state.

  • Evaluators are wrapped by a function that does not accept **kwargs, so one cannot use any parametrization other than the defaults.

Will this change a current behavior? How?

  • One will be able, as my project requires, to get precision and recall per label rather than an average over labels, which can only be achieved by setting the proper sklearn parameter, as in the call below. Furthermore, with only this change, any kind of extra parametrization of the evaluators becomes possible (a runnable illustration follows the call).

precision_evaluator(target_column=target, average=None, labels=[0, 1])
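
For illustration, here is a minimal, self-contained sketch of what that parametrization does at the sklearn level (the toy data is made up for this example; precision_score is the real sklearn.metrics function):

import numpy as np
from sklearn.metrics import precision_score

# Toy labels, for illustration only.
y_true = np.array([0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 1])

# Default behaviour: a single averaged score (binary average, positive class 1).
print(precision_score(y_true, y_pred))                               # 0.75

# With average=None and labels, one precision value per label is returned.
print(precision_score(y_true, y_pred, average=None, labels=[0, 1]))  # [0.5  0.75]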

Extra information

  • Given the structure of the generic evaluator generic_sklearn_evaluator, it seems to me that supporting **kwargs was the intention from the beginning, but the kwargs were left out of the individual evaluators' wrappers, as can be read in its definition (a sketch of the full flow follows the snippet):
def generic_sklearn_evaluator(name_prefix: str, sklearn_metric: Callable[..., float]) -> UncurriedEvalFnType:
    """
    Returns an evaluator build from a metric from sklearn.metrics
    Parameters
    ----------
    name_prefix: str
        The default name of the evaluator will be name_prefix + target_column.
    sklearn_metric: Callable
        Metric function from sklearn.metrics. It should take as parameters y_true, y_score, kwargs.
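
For context, the snippet below is not fklearn's actual implementation but a minimal, self-contained sketch (make_generic_evaluator and the plain-dict return type are illustrative assumptions) of how a generic evaluator that forwards **kwargs to the sklearn metric makes the small change in each wrapper sufficient:

from typing import Callable, Dict
import pandas as pd
from sklearn.metrics import precision_score

def make_generic_evaluator(name_prefix: str, sklearn_metric: Callable[..., float]) -> Callable:
    # Hypothetical stand-in for generic_sklearn_evaluator: the inner function
    # accepts **kwargs and hands them straight to the sklearn metric.
    def eval_fn(test_data: pd.DataFrame,
                prediction_column: str = "prediction",
                target_column: str = "target",
                eval_name: str = None,
                **kwargs) -> Dict:
        score = sklearn_metric(test_data[target_column], test_data[prediction_column], **kwargs)
        return {eval_name or name_prefix + target_column: score}
    return eval_fn

def precision_evaluator(test_data: pd.DataFrame,
                        threshold: float = 0.5,
                        prediction_column: str = "prediction",
                        target_column: str = "target",
                        eval_name: str = None,
                        **kwargs) -> Dict:
    # The requested change: accept **kwargs here and forward them,
    # so they reach sklearn's precision_score unchanged.
    eval_fn = make_generic_evaluator("precision_evaluator__", precision_score)
    eval_data = test_data.assign(**{prediction_column: (test_data[prediction_column] > threshold).astype(int)})
    return eval_fn(eval_data, prediction_column, target_column, eval_name, **kwargs)

df = pd.DataFrame({"prediction": [0.2, 0.8, 0.7, 0.4], "target": [0, 1, 1, 1]})
print(precision_evaluator(df, average=None, labels=[0, 1]))
# {'precision_evaluator__target': array([0.5, 1. ])}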
