The ROC (Receiver Operating Characteristic) curve is a plot that shows how a binary classifier performs across all possible confidence thresholds. The y-axis is the True Positive Rate (recall), and the x-axis is the False Positive Rate (1 minus specificity). Each point on the curve corresponds to a different threshold setting. A perfect classifier hugs the top-left corner (catches everything, no false alarms), while a random coin-flip classifier follows the diagonal.
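The threshold sweep described above can be sketched directly: for each candidate threshold, count true and false positives and record one (FPR, TPR) point. This is a minimal illustration using hypothetical labels and scores, not any particular library's implementation.

```python
import numpy as np

# Hypothetical labels and model confidence scores for illustration.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.7])

def roc_points(y_true, scores):
    """Sweep every distinct score as a threshold; return (FPR, TPR) pairs."""
    points = []
    for t in sorted(set(scores), reverse=True):
        pred = scores >= t                      # predict positive above threshold
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        tpr = tp / np.sum(y_true == 1)          # true positive rate (recall)
        fpr = fp / np.sum(y_true == 0)          # false positive rate (1 - specificity)
        points.append((fpr, tpr))
    return points

for fpr, tpr in roc_points(y_true, scores):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Lowering the threshold moves along the curve from the bottom-left (predict nothing positive) toward the top-right (predict everything positive); the final point is always (1.0, 1.0).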
The Area Under the ROC Curve (AUC-ROC) collapses the curve into a single number between 0 and 1: a value of 0.5 corresponds to random guessing, 1.0 to perfect separation, and values below 0.5 indicate a classifier whose ranking is systematically inverted. It measures how well the model separates positive from negative examples overall, regardless of any specific threshold choice. This makes it useful for comparing models, but it can be misleading on heavily imbalanced datasets: a model might achieve 0.95 AUC-ROC while still missing most positive examples if the positive class is rare.
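The threshold-free nature of AUC-ROC follows from its probabilistic interpretation: it equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch of that pairwise-ranking computation, with hypothetical data:

```python
import numpy as np

# Hypothetical labels and scores: the model separates classes imperfectly.
y_true = np.array([0, 0, 0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.6, 0.5, 0.9])

def auc_roc(y_true, scores):
    """AUC = probability a random positive outranks a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count correctly ordered (positive, negative) pairs; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_roc(y_true, scores))  # 7 of 8 pairs ranked correctly -> 0.875
```

This O(n²) pairwise form is fine for illustration; production libraries compute the same quantity from the sorted scores.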
In computer vision, ROC curves are used for binary tasks like defect detection (defect vs. no defect), medical screening (tumor vs. normal), and quality pass/fail classification. Best practice is to report ROC curves alongside precision-recall curves, especially when the positive class is rare. The precision-recall curve is more informative in imbalanced settings because it focuses on the minority class performance that ROC can obscure.
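The gap between the two views can be made concrete with a hypothetical imbalanced dataset (the numbers below are constructed for illustration): 1000 negatives and 10 positives, where a single operating point looks excellent in ROC terms yet poor in precision terms.

```python
import numpy as np

# Hypothetical imbalanced data: 1000 negatives spread over [0, 1],
# 10 positives all scoring 0.95.
neg_scores = np.linspace(0.0, 1.0, 1000)
pos_scores = np.full(10, 0.95)
scores = np.concatenate([neg_scores, pos_scores])
y_true = np.concatenate([np.zeros(1000, dtype=int), np.ones(10, dtype=int)])

t = 0.95
pred = scores >= t
tp = np.sum(pred & (y_true == 1))
fp = np.sum(pred & (y_true == 0))
fn = np.sum(~pred & (y_true == 1))

tpr = tp / (tp + fn)              # ROC y-axis: perfect recall
fpr = fp / np.sum(y_true == 0)    # ROC x-axis: looks tiny (5%)
precision = tp / (tp + fp)        # PR view: most flagged items are false alarms
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  precision={precision:.2f}")
```

At this threshold the ROC point (FPR 0.05, TPR 1.0) hugs the top-left corner, yet only about one in six flagged examples is a true positive, because the 5% false positive rate applies to a negative class 100 times larger than the positive one. The precision-recall curve surfaces this immediately.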
