Which machine learning algorithm is known as the lazy learner and why is it called so?

The machine learning algorithm known as the “lazy learner” is k-Nearest Neighbors (k-NN). It is called lazy because it does not learn a discriminative function or any model parameters from the training data; instead, it memorizes the training dataset and computes distances on the fly each time it needs to classify a new instance.

Here’s why it’s called a lazy learner:

  1. No Training Phase: Unlike other algorithms such as Support Vector Machines or Decision Trees, k-NN doesn’t have a distinct training phase where it learns a model. Instead, it stores all of the training instances and uses them directly during the prediction phase.
  2. Deferred Generalization: In k-NN, generalization to new data points is done only when a prediction is required. When a new instance needs to be classified, k-NN simply looks at the k nearest neighbors in the training data and assigns the class label based on the majority class among them. It doesn’t derive any explicit generalization rules or models from the training data.
  3. Computation at Prediction Time: Since k-NN does no processing during the training phase, all of the computation happens at prediction time. In effect, k-NN “defers” the work of learning until it is actually needed, which is what makes it “lazy”; the sketch below makes this behavior concrete.
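
To make the distinction concrete, here is a minimal from-scratch sketch of a lazy classifier (the class name `LazyKNN` and the choice of Euclidean distance are illustrative, not from any standard library): `fit` merely memorizes the data, and every distance computation is deferred to `predict`.

```python
import numpy as np
from collections import Counter

class LazyKNN:
    """Minimal k-NN sketch: 'training' is just memorization."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # The entire "training phase": store the dataset verbatim.
        self.X_train = np.asarray(X, dtype=float)
        self.y_train = np.asarray(y)
        return self

    def predict(self, X):
        # All real work is deferred to prediction time.
        preds = []
        for x in np.asarray(X, dtype=float):
            # Euclidean distance from the query to every training point.
            dists = np.linalg.norm(self.X_train - x, axis=1)
            # Indices of the k nearest training points.
            nearest = np.argsort(dists)[: self.k]
            # Majority vote among their labels.
            preds.append(Counter(self.y_train[nearest]).most_common(1)[0][0])
        return np.array(preds)

# Usage: "training" is instant; the work happens in predict().
clf = LazyKNN(k=1).fit([[0, 0], [1, 1], [5, 5]], ["a", "a", "b"])
print(clf.predict([[0.5, 0.5], [4, 4]]))  # -> ['a' 'b']
```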

Despite its simplicity and lazy nature, k-NN can be quite effective, especially in low-dimensional spaces or when the decision boundary is highly irregular. However, it can be computationally expensive on large datasets, because each prediction requires computing the distance from the query instance to every training instance, a cost that grows linearly with the size of the training set.
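
One way to see where that cost lands is to time `fit` against `predict` (a sketch assuming scikit-learn is installed; `algorithm="brute"` forces pure memorization at fit time, and the exact timings will vary by machine):

```python
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50_000, 20))    # 50k training points, 20 features
y_train = rng.integers(0, 2, size=50_000)  # binary labels
X_query = rng.normal(size=(1_000, 20))     # 1k query points

# With algorithm="brute", fit() only stores the data; no index is built.
knn = KNeighborsClassifier(n_neighbors=5, algorithm="brute")

t0 = time.perf_counter()
knn.fit(X_train, y_train)
t1 = time.perf_counter()
knn.predict(X_query)
t2 = time.perf_counter()

print(f"fit:     {t1 - t0:.4f}s")  # near-instant: just memorization
print(f"predict: {t2 - t1:.4f}s")  # dominated by 50,000 x 1,000 distance computations
```

In practice, tree-based indexes (KD-trees, ball trees) or approximate nearest-neighbor methods shift some of this work back to fit time, trading a heavier “training” step for faster queries.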