Backward compatibility policy #241

Open

albertcthomas opened this issue Jun 24, 2020 · 7 comments

@albertcthomas
Collaborator

Regarding #236 but also more generally:

  1. What's the policy regarding backward compatibility with the ramp-kits? Should any change remain compatible with the kits in ramp-kits, or should any change made in rampwf come with corresponding updates to the ramp-kits so that they keep working with the suggested change?

Now that ramp-workflow is on PyPI, would it be possible to require the kits in ramp-kits to use a specific version of ramp-workflow and other dependencies? The kits that are not in the ramp-workflow repo are difficult to maintain, whereas the ones in tests/kits/ are easy to maintain as part of the tests.

  2. What's the difference between the kits in tests/kits/ and the ones in ramp-kits but not in tests/kits/?
@agramfort
Contributor

agramfort commented Jun 24, 2020 via email

@kegl
Contributor

kegl commented Jun 27, 2020

@agramfort : what would you like to have for the scorers? If the goal is to be able to use sklearn metrics directly, it would be relatively easy to have a generic score_type factory that receives an sklearn scorer as input (when initialized in problem.py) and wraps it into a ramp scorer. You would get the best of both worlds.

I don't think we could completely drop the functionality we added (e.g. the precision used for display, allowing lower-the-better scorers), plus it's nice to keep the possibility of writing a custom score_function that receives Prediction objects when we have complex predictions and scorers.
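A minimal sketch of what such a factory could look like, assuming a ramp score type only needs a name, a display precision, an is_lower_the_better flag and a `__call__(y_true, y_pred)`; `make_score_type` and its arguments are hypothetical names, not existing rampwf API:

```python
import numpy as np
from sklearn.metrics import accuracy_score, mean_squared_error


def make_score_type(metric, name, precision=3, greater_is_better=True,
                    needs_class_labels=False):
    """Wrap a plain sklearn metric metric(y_true, y_pred) into a
    ramp-style score type object (sketch only)."""

    class _WrappedScoreType:
        def __init__(self):
            self.name = name
            self.precision = precision
            self.is_lower_the_better = not greater_is_better

        def __call__(self, y_true, y_pred):
            # Classification metrics such as accuracy expect class indices,
            # while y_pred coming from predict_proba is a 2D probability array.
            if needs_class_labels and np.ndim(y_pred) > 1:
                y_pred = np.argmax(y_pred, axis=1)
            return metric(y_true, y_pred)

    return _WrappedScoreType()


# Hypothetical use in problem.py:
score_types = [
    make_score_type(accuracy_score, name='acc', needs_class_labels=True),
    make_score_type(mean_squared_error, name='mse', greater_is_better=False),
]
```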

@agramfort
Contributor

agramfort commented Jun 29, 2020 via email

@kegl
Contributor

kegl commented Jul 14, 2020

@agramfort is there an automatic way in sklearn to determine what input a given scorer requires? E.g. raw y_pred as for RMSE, or class indices as for accuracy (returned by predict, or computed from y_proba). If not, we'll need two or three different wrappers that the user would need to choose from. Any other suggestion on how to deal with this?
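For illustration, here is how the input contracts already differ between plain sklearn metrics (illustrative values only):

```python
import numpy as np
from sklearn.metrics import accuracy_score, log_loss, mean_squared_error

y_true = np.array([0, 1, 1])
y_proba = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6]])  # what predict_proba returns
y_labels = y_proba.argmax(axis=1)                          # what predict returns

accuracy_score(y_true, y_labels)            # expects class indices
log_loss(y_true, y_proba)                   # expects probabilities
mean_squared_error([1.2, 0.3], [1.0, 0.5])  # regression: raw y_pred
```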

@agramfort
Contributor

agramfort commented Jul 14, 2020 via email

@kegl
Contributor

kegl commented Jul 20, 2020

OK, I see. This is something we would do on the RAMP side too: wrap sklearn scorers into a RAMP scorer. But it seems that the "user" needs to provide the information about the sklearn scorer (e.g. what input it requires); it cannot be determined automatically, right? I mean: there is no catalogue (dict) in sklearn from which the greater_is_better and needs_proba parameters can be read out, right?
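For what it's worth, these flags are something the user already supplies when building an sklearn scorer with make_scorer, and as far as I know they are not exposed through any public lookup afterwards, so a ramp wrapper would most likely have to ask for the same arguments next to the metric. A small sketch of the existing sklearn side:

```python
from sklearn.metrics import make_scorer, log_loss, accuracy_score

# The user states greater_is_better / needs_proba explicitly when building
# an sklearn scorer; a ramp factory could simply take the same arguments.
neg_log_loss_scorer = make_scorer(log_loss, greater_is_better=False,
                                  needs_proba=True)
accuracy_scorer = make_scorer(accuracy_score)

# Note: make_scorer returns a callable scorer(estimator, X, y),
# not a metric(y_true, y_pred), so the flags are consumed internally
# rather than exposed for reading back.
```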

@agramfort
Contributor

agramfort commented Jul 21, 2020 via email
