SVM algorithms offer practical advantages in terms of complexity. First, it is worth clarifying that both logistic regression and SVM can form non-linear decision surfaces, since both can be coupled with the kernel trick. So if logistic regression can also be kernelized, why use SVM?
● In practice, SVM is found to perform better in most cases.
● SVM is computationally cheaper, roughly O(N^2 * K), where K is the number of support vectors (the points that lie on the class margin), whereas kernel logistic regression is O(N^3).
● The SVM classifier depends only on a subset of the training points. Because we maximize the distance between the closest points of the two classes (the margin), only that subset matters, unlike logistic regression, where every training point influences the decision boundary (see the sketch after this list).
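The following is a minimal sketch of that contrast, assuming scikit-learn; the toy dataset (make_moons) and hyperparameters are illustrative choices, not a definitive benchmark. It pits an RBF-kernel SVM against plain (un-kernelized) logistic regression on data that no straight line can separate, and also prints how many support vectors the SVM actually relies on.

```python
# Sketch: kernel SVM vs. plain logistic regression on non-linear toy data.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-circles: not separable by a straight line.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF-kernel SVM can carve out a curved decision boundary.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# Plain logistic regression is restricted to a linear boundary here.
logreg = LogisticRegression().fit(X_train, y_train)

print("kernel SVM accuracy:          ", svm.score(X_test, y_test))
print("logistic regression accuracy: ", logreg.score(X_test, y_test))
print("number of support vectors:    ", svm.support_vectors_.shape[0])
```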
Kernel SVM, or Kernel Support Vector Machine, is a variant of the Support Vector Machine (SVM) algorithm that employs a kernel function to implicitly map the input data into a higher-dimensional space. SVMs are primarily used for classification and regression tasks.
The basic idea behind SVMs is to find a hyperplane that best separates the data into different classes. In cases where the data is not linearly separable in its original feature space, a kernel function is employed to map the data into a higher-dimensional space where a hyperplane can effectively separate the classes.
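A small worked example of this mapping, as a hedged sketch: for a degree-2 polynomial kernel, the kernel value computed in the original 2-D space equals the ordinary dot product after an explicit feature map phi into a 6-dimensional space. The function names phi and poly_kernel are illustrative, not part of any library.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map for a 2-D point (x1, x2)."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(x, z):
    """Degree-2 polynomial kernel evaluated directly in the original space."""
    return (np.dot(x, z) + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# The kernel gives the inner product in the mapped space without ever
# constructing phi(x) explicitly -- this is the "kernel trick".
print(np.dot(phi(x), phi(z)))   # inner product after explicit mapping -> 4.0
print(poly_kernel(x, z))        # same value via the kernel function   -> 4.0
```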
The kernel function allows SVMs to handle complex relationships and capture non-linear decision boundaries. Common kernel functions include:
- Linear Kernel: Suitable for linearly separable data.
- Polynomial Kernel: Suitable for data with polynomial relationships.
- Radial Basis Function (RBF) or Gaussian Kernel: Widely used for capturing non-linear patterns. It is often the default choice for SVM.
- Sigmoid Kernel: Suitable for data with sigmoidal (S-shaped) relationships.
By using these kernel functions, SVMs can effectively handle non-linear relationships between features, making them a powerful tool for various machine learning tasks.
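As a brief illustration of how these kernels are selected in practice, here is a hedged sketch assuming scikit-learn's SVC; the synthetic dataset and hyperparameters (degree, gamma) are placeholder assumptions, and in a real project they would be tuned for the task at hand.

```python
# Sketch: comparing the kernels listed above with cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

kernels = {
    "linear":  SVC(kernel="linear"),
    "poly":    SVC(kernel="poly", degree=3),      # polynomial relationships
    "rbf":     SVC(kernel="rbf", gamma="scale"),  # common default for non-linear data
    "sigmoid": SVC(kernel="sigmoid"),
}

for name, clf in kernels.items():
    # Feature scaling matters for distance-based kernels such as RBF.
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:8s} mean CV accuracy: {scores.mean():.3f}")
```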