Instance-based learning algorithms are also referred to as lazy learning algorithms because they delay the induction or generalization process until classification is performed.
Instance-based learning algorithms are sometimes referred to as “lazy learning” algorithms because they don’t involve a traditional learning phase where the model is trained on the entire dataset to build a generalized representation. Instead, they lazily memorize the training dataset and make predictions based on similarity measures between new instances (queries) and the instances stored in the training dataset.
Here’s why they’re called “lazy”:
- No explicit training phase: Traditional machine learning algorithms require an explicit training phase where the model is fitted to the training data. In contrast, instance-based learning algorithms defer processing the training data until a query is made.
- Minimal pre-processing: Lazy learning algorithms typically require minimal pre-processing of the data. They directly use the training data as the model.
- Computationally deferred: Instead of immediately processing the training data to build a model, lazy learning algorithms defer computation until prediction time. They compute the model or decision surface “on-demand” when a prediction is requested.
- Storage-intensive: Lazy learning algorithms store the entire training dataset, or a representative subset of it, in memory. This can be memory-intensive for large datasets, and because similarities to the stored instances must be computed at query time, prediction tends to be slower than with an eagerly trained model.
So, in summary, instance-based learning algorithms are called “lazy” because they postpone processing of the training data until a query is made, as opposed to eagerly building a model during a training phase.
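To make the contrast concrete, here is a minimal sketch of a lazy 1-nearest-neighbour classifier, assuming NumPy is available and Euclidean distance as the similarity measure (the class name, data, and labels are hypothetical, for illustration only). Note that "fitting" merely stores the training data, and all distance computation is deferred to prediction time.

```python
import numpy as np

class LazyNearestNeighbor:
    """Minimal 1-nearest-neighbour classifier: "training" only stores the data."""

    def fit(self, X, y):
        # No generalization happens here; the stored training set itself is the model.
        self.X_train = np.asarray(X, dtype=float)
        self.y_train = np.asarray(y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        predictions = []
        for query in X:
            # Computation is deferred to query time: compute the Euclidean
            # distance from the query to every stored training instance.
            distances = np.linalg.norm(self.X_train - query, axis=1)
            nearest = np.argmin(distances)
            predictions.append(self.y_train[nearest])
        return np.array(predictions)


# Example usage with a toy 2-D dataset (hypothetical values).
X_train = [[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]]
y_train = ["A", "A", "B", "B"]

model = LazyNearestNeighbor().fit(X_train, y_train)
print(model.predict([[1.1, 0.9], [5.1, 5.0]]))  # -> ['A' 'B']
```

Because no work is done in `fit`, all of the cost of measuring similarity against the stored instances is paid each time `predict` is called, which is exactly the deferred, "on-demand" computation described above.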