What are the two methods used for calibration in Supervised Learning?

The two methods used for predicting good probabilities in Supervised Learning are

  • Platt Scaling
  • Isotonic Regression
    Both methods were designed for binary classification; extending them to the multi-class setting is not trivial.

In supervised learning, calibration refers to the process of mapping a model’s raw output scores to predicted probabilities that better reflect the true likelihood of the outcomes. The two commonly used methods for calibration in supervised learning are:

  1. Platt Scaling: Platt scaling, named after John Platt, is a technique typically used for binary classification models. It fits a logistic regression model to the output scores of the classifier and uses the logistic function to convert these scores into probabilities. Platt scaling requires a separate calibration dataset or uses cross-validation on the training data.
  2. Isotonic Regression: Isotonic regression is a non-parametric approach to calibration. It fits a piecewise-constant, non-decreasing function to the output scores of the classifier, ensuring monotonicity. Because it makes no assumption about the shape of the miscalibration (e.g. sigmoidal), it is more flexible than Platt scaling, but it also needs more calibration data to avoid overfitting. Like Platt scaling, it is defined for binary problems and is typically extended to multi-class settings by calibrating each class separately (one-vs-rest).
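To make the two methods concrete, here is a minimal sketch of each in plain NumPy. The `platt_scale` function fits the sigmoid sigmoid(a·s + b) by simple gradient descent on log loss (a stand-in for Platt's original regularized Newton fit, which also smooths the targets), and `isotonic_fit` implements the standard Pool Adjacent Violators (PAV) algorithm. Function names and hyperparameters here are illustrative, not from any particular library:

```python
import numpy as np

def platt_scale(scores, labels, lr=0.01, n_iter=5000):
    """Fit P(y=1|s) = sigmoid(a*s + b) by gradient descent on log loss.
    A minimal stand-in for Platt's regularized Newton method."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                      # d(log loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

def isotonic_fit(scores, labels):
    """Pool Adjacent Violators: fit a piecewise-constant,
    non-decreasing function to the labels ordered by score."""
    order = np.argsort(scores)
    y = labels[order].astype(float)
    blocks = []                                # each block: [mean value, weight]
    for v in y:
        blocks.append([v, 1.0])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    # expand blocks back to per-sample calibrated probabilities
    fitted = np.concatenate([[v] * int(w) for v, w in blocks])
    out = np.empty_like(fitted)
    out[order] = fitted
    return out
```

In practice you would fit either method on a held-out calibration set (or via cross-validation, as noted above) rather than on the training data, since calibrating on the same data the classifier was trained on reinforces its overconfidence.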

In summary, the correct answer is Platt Scaling and Isotonic Regression. These methods refine the output probabilities of classifiers, making them more reliable for decision-making tasks, particularly in applications where well-calibrated probabilities are crucial, such as risk assessment or medical diagnostics.
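A quick way to judge whether probabilities are well calibrated (before or after applying either method) is a binned reliability check: group predictions into probability bins and compare the mean predicted probability in each bin with the observed frequency of positives. The sketch below is a minimal, illustrative version of that idea; the function name and bin count are assumptions, not a library API:

```python
import numpy as np

def reliability_bins(probs, labels, n_bins=5):
    """For each probability bin, return (mean predicted prob,
    observed positive rate, count). Well-calibrated predictions
    have the first two roughly equal in every populated bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # make the last bin closed on the right so p == 1.0 is counted
        mask = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if mask.any():
            rows.append((probs[mask].mean(), labels[mask].mean(), int(mask.sum())))
    return rows
```

Plotting observed frequency against mean predicted probability per bin gives the familiar reliability diagram; a calibrated model tracks the diagonal.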