The area under the ROC curve (AUROC, often just AUC) summarizes how good our model is with a single value. The AUC of a random model is 0.5, while for an ideal one it is 1.
In other words, AUC can be interpreted as the probability that a randomly selected positive example has a greater score than a randomly selected negative example.
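This probabilistic interpretation can be checked numerically: repeatedly draw a random positive and a random negative example and count how often the positive one gets the higher score. The sketch below uses made-up synthetic labels and scores (not data from the course) to compare this Monte Carlo estimate against sklearn's exact computation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic illustration data: positives tend to score higher than negatives
np.random.seed(1)
n = 10_000
y = np.random.randint(0, 2, size=n)  # binary labels
scores = np.where(
    y == 1,
    np.random.uniform(0.3, 1.0, size=n),  # positive-class scores
    np.random.uniform(0.0, 0.7, size=n),  # negative-class scores
)

# Monte Carlo estimate: P(score of random positive > score of random negative)
pos = scores[y == 1]
neg = scores[y == 0]
pos_idx = np.random.randint(0, len(pos), size=50_000)
neg_idx = np.random.randint(0, len(neg), size=50_000)
auc_estimate = (pos[pos_idx] > neg[neg_idx]).mean()

print(auc_estimate)            # close to the exact value below
print(roc_auc_score(y, scores))
```

With 50,000 random pairs the estimate typically agrees with `roc_auc_score` to about two decimal places.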
Classes and methods:
auc(x, y)
- sklearn.metrics function for calculating the area under the curve defined by the x and y arrays. For ROC curves, x is the false positive rate and y the true positive rate.

roc_auc_score(y_true, y_score)
- sklearn.metrics function for calculating the area under the ROC curve directly from the true labels y_true and the predicted scores y_score.

randint(x, y, size=z)
- np.random function for generating random integers from the "discrete uniform" distribution, from x (inclusive) to y (exclusive), of size z.
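The two sklearn functions above give the same number by different routes: `auc` integrates an already-computed ROC curve, while `roc_auc_score` works straight from labels and scores. A minimal sketch with made-up toy data (not from the course notebook):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc, roc_auc_score

# Hypothetical toy labels and predicted scores
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])

# Route 1: build the ROC curve, then integrate it with auc
fpr, tpr, _ = roc_curve(y_true, y_score)
print(auc(fpr, tpr))  # → 0.875

# Route 2: compute the AUC directly from labels and scores
print(roc_auc_score(y_true, y_score))  # → 0.875, same value
```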
The entire code of this project is available in this Jupyter notebook.
Add notes from the video (PRs are welcome)
The notes are written by the community. If you see an error here, please create a PR with a fix.