```
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.870
```
Lecture 13
Duke University
STA 113 - Fall 2023
- Finish visualizing decision boundaries for classification models
- Define sensitivity, specificity, and ROC curves
ae-11-spam
Ultimate goal: Recreate the following visualization.
Reminder of instructions for getting started with application exercises:
(repo name will be suffixed with your GitHub name).

Sensitivity is the true positive rate: the probability of a positive prediction, given that the observation is positive.
Specificity is the true negative rate: the probability of a negative prediction, given that the observation is negative.
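The two definitions above can be computed directly from the counts in a 2×2 confusion matrix. A minimal sketch (the counts below are made up for illustration, not from the lecture's spam data):

```python
def sensitivity(tp, fn):
    # True positive rate: P(positive prediction | observed positive)
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: P(negative prediction | observed negative)
    return tn / (tn + fp)

# Hypothetical spam-filter counts: 40 spam emails caught, 10 missed,
# 45 legitimate emails passed through, 5 wrongly flagged.
print(sensitivity(tp=40, fn=10))  # 0.8
print(specificity(tn=45, fp=5))   # 0.9
```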
The plot we created earlier displays sensitivity and specificity for a single decision boundary.
An alternative display visualizes the sensitivity and specificity achieved at every possible decision boundary.
A receiver operating characteristic (ROC) curve plots the true positive rate (sensitivity) vs. the false positive rate (1 - specificity) across all possible decision boundaries.
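One way to see where the curve comes from is to sweep the decision boundary over the predicted probabilities: each threshold yields one (false positive rate, true positive rate) point. A sketch with made-up predictions (the course itself uses R/tidymodels; this is a language-agnostic illustration):

```python
# Hypothetical predicted probabilities of spam and observed labels (1 = spam).
probs  = [0.9, 0.8, 0.7, 0.55, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,    0,   1,   0,   0]

def roc_points(probs, labels):
    pos = sum(labels)            # number of observed positives
    neg = len(labels) - pos      # number of observed negatives
    pts = [(0.0, 0.0)]           # strictest boundary: predict nothing positive
    tp = fp = 0
    # Lowering the boundary past each observation (in order of decreasing
    # probability) adds it to the "predicted positive" set.
    for p, y in sorted(zip(probs, labels), reverse=True):
        if y == 1:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))  # (1 - specificity, sensitivity)
    return pts

for fpr, tpr in roc_points(probs, labels):
    print(fpr, tpr)
```

The curve always runs from (0, 0), where every email is predicted non-spam, to (1, 1), where every email is predicted spam.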
Do you think a better model has a large or small area under the ROC curve?
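A single-number summary of the curve is the area under it (AUC): a classifier that guesses at random traces the diagonal and gets AUC = 0.5, while a perfect classifier reaches AUC = 1. A minimal sketch using the trapezoid rule over hypothetical ROC points (not the lecture's actual curve):

```python
def auc(points):
    # Trapezoid rule over consecutive (fpr, tpr) points, assumed sorted
    # from (0, 0) to (1, 1).
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Made-up ROC points for illustration.
pts = [(0.0, 0.0), (0.0, 0.5), (0.25, 0.75), (0.5, 1.0), (1.0, 1.0)]
print(auc(pts))  # 0.875
```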