In the context of machine learning, a false negative occurs when a model incorrectly predicts a negative outcome (i.e., classifies an instance as negative) when the true outcome is positive. In other words, it is a type of error where the model fails to identify a relevant pattern or signal that is actually present in the data.
To elaborate (a short code sketch tallying these four counts follows the list):
- True Positive (TP): The model correctly predicts a positive outcome.
- False Negative (FN): The model incorrectly predicts a negative outcome when the true outcome is positive.
- False Positive (FP): The model incorrectly predicts a positive outcome when the true outcome is negative.
- True Negative (TN): The model correctly predicts a negative outcome.
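As a minimal sketch of how these four counts are typically tallied, here is a toy example using scikit-learn's `confusion_matrix`; the labels and predictions are made up purely for illustration:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]

# For binary labels {0, 1}, scikit-learn lays the matrix out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}  FN={fn}  FP={fp}  TN={tn}")  # here: TP=2  FN=2  FP=1  TN=3
```

In this toy data, the two false negatives are the positive instances (label 1) that the model scored as negative.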
False negatives are particularly important in scenarios where missing a positive instance carries significant costs, such as medical diagnosis, where failing to detect a disease can have serious consequences. The balance between false positives and false negatives is often managed by adjusting the model's decision threshold and by choosing evaluation metrics suited to the task, such as precision, recall (which directly penalizes false negatives, since recall = TP / (TP + FN)), the F1 score, or the area under the ROC curve (AUC-ROC).
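To illustrate the threshold trade-off, here is a hedged sketch (again assuming scikit-learn, with made-up predicted probabilities): lowering the decision threshold converts some false negatives into true positives, raising recall, at the cost of extra false positives, which lowers precision.

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical predicted probabilities for the positive class and true labels
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.45, 0.4, 0.8, 0.3, 0.6, 0.35, 0.1]

for threshold in (0.5, 0.3):
    # Lowering the threshold turns more borderline cases into predicted positives,
    # trading false negatives for false positives.
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
```

With this toy data, dropping the threshold from 0.5 to 0.3 raises recall from 0.50 to 1.00 (no false negatives remain) while precision falls from about 0.67 to 0.57. Which side of that trade-off to favor depends on the relative cost of missing a positive versus raising a false alarm.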