
Python Book of Recipes #3 - Savory Classification Recipe: Spot-Checking Algorithms for a Flavorful Machine Learning Dish

Boško Savić, Senior Software Developer
29.05.2023.

In the field of machine learning, there is a famous saying: "No free lunch." This phrase emphasizes the need to carefully evaluate and compare different algorithms to find the most suitable one for a specific task. In this recipe, we will spot-check several classification algorithms using cross-validation to assess their performance.

I will skip, or better yet, leave the steps that come before spot-checking algorithms for other recipes.

Those steps include summarizing the data (descriptive statistics, data visualization) and preparing the data (data cleaning, feature selection, data transforms).

For our recipe, we will use the Sonar, Mines vs. Rocks dataset. The problem is to predict whether an object is metal or rock from sonar return data. For more information, you can visit the link.

Ingredients:

  • Data: X_train (training features), Y_train (training labels)
  • Algorithms: Logistic Regression (LR), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbors (KNN), Decision Tree (CART), Naive Bayes (NB), Support Vector Machines (SVM)

Step 1. Import the necessary libraries and modules.

  • Import the required algorithms: LogisticRegression, LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis, KNeighborsClassifier, DecisionTreeClassifier, GaussianNB, and SVC (a minimal import sketch follows below).
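
A minimal import block is sketched below, assuming the algorithms come from scikit-learn and the data is loaded with pandas (both are assumptions about the original setup):

```python
# Data loading and model-selection utilities (pandas and scikit-learn assumed)
from pandas import read_csv
from sklearn.model_selection import train_test_split, KFold, cross_val_score

# The seven classification algorithms used in this recipe
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
```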

Step 2. Read the dataset from CSV, convert it to a NumPy array, and split it into training and validation sets.

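The original post shows this step as a screenshot. A minimal sketch of what it might look like is given below; the file name, the 80/20 split, and the random seed are assumptions rather than values taken from the original:

```python
# Load the Sonar dataset: 60 numeric sonar readings per row, followed by the
# class label ('M' for metal/mine, 'R' for rock). The file name is an assumption.
dataset = read_csv('sonar.all-data.csv', header=None)

# Convert to a NumPy array and separate features from labels
array = dataset.values
X = array[:, 0:60].astype(float)
Y = array[:, 60]

# Hold out 20% of the rows as a validation set (split size and seed are assumptions)
X_train, X_validation, Y_train, Y_validation = train_test_split(
    X, Y, test_size=0.20, random_state=7)
```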

Step 3. Define the models.

Create a dictionary called "models" to store the algorithms with their corresponding names (a minimal sketch follows the list):

  • LR: Logistic Regression
  • LDA: Linear Discriminant Analysis
  • QDA: Quadratic Discriminant Analysis
  • KNN: K-Nearest Neighbors
  • CART: Decision Tree
  • NB: Naive Bayes
  • SVM: Support Vector Machines
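
A minimal version of that dictionary, assuming scikit-learn's default hyperparameters for every model, might look like this:

```python
# Map short names to untuned scikit-learn estimators (defaults are an assumption)
models = {
    'LR': LogisticRegression(solver='liblinear'),  # liblinear suits small datasets; an assumption
    'LDA': LinearDiscriminantAnalysis(),
    'QDA': QuadraticDiscriminantAnalysis(),
    'KNN': KNeighborsClassifier(),
    'CART': DecisionTreeClassifier(),
    'NB': GaussianNB(),
    'SVM': SVC(),
}
```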

I also leave ensemble algorithms, such as Random Forest, Extra Trees, Gradient Boosting, and XGBoost, for other recipes. As Albert Einstein said, "Everything should be made as simple as possible, but not simpler."

Step 4. Conduct a spot-check evaluation (a sketch follows below). The results suggest that LR, KNN, and SVM are candidates for our problem, so we could tune these algorithms to get the best results. Before jumping to tuning, however, we should step back and experiment with other options: try a different validation split size, standardize or normalize the dataset and re-run the evaluation, remove unnecessary features, create new features, and so on.

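The screenshots in the original post show the evaluation loop and its output. A minimal sketch of a cross-validated spot check is given below; the 10 folds, the random seed, and accuracy as the scoring metric are assumptions:

```python
results = []
names = []
for name, model in models.items():
    # 10-fold cross-validation on the training data (fold count and seed are assumptions)
    kfold = KFold(n_splits=10, shuffle=True, random_state=7)
    cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')
    results.append(cv_results)
    names.append(name)
    print(f'{name}: {cv_results.mean():.3f} ({cv_results.std():.3f})')

# Compare the spread of scores across algorithms with a box plot
import matplotlib.pyplot as plt
plt.boxplot(results)
plt.xticks(range(1, len(names) + 1), names)
plt.title('Algorithm Comparison')
plt.show()
```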

The full code can be found at the link.

Conclusion

Spot-checking different classification algorithms is an essential step in machine learning. By comparing the performance of various algorithms using cross-validation, we can gain insights into their strengths and weaknesses. Remember, "No free lunch" reminds us that there is no one-size-fits-all algorithm, and careful evaluation is crucial to find the best approach for a specific problem.

Enjoy experimenting with different algorithms and discovering the most suitable one for your classification task!

Bonus

Logistic Regression (LR):

Pros:

  • Simplicity and interpretability.
  • Efficient computation.
  • Works well with linearly separable classes.

Cons:

  • May not perform well when the data has complex relationships or non-linear decision boundaries.
  • Assumes linearity between features and the log-odds of the target variable.

Linear Discriminant Analysis (LDA):

Pros:

  • Efficient computation.
  • Can handle multiple classes.
  • Assumes linear decision boundaries but allows for flexibility in feature space.
  • Provides probabilistic outputs.

Cons:

  • Assumes that the classes have the same covariance matrix.
  • May not perform well when the data has non-linear decision boundaries.

Quadratic Discriminant Analysis (QDA):

Pros:

  • More flexible than LDA as it allows for different covariance matrices for each class.
  • Can capture non-linear decision boundaries more accurately.
  • Provides probabilistic outputs.

Cons:

  • Requires more data to estimate the covariance matrices accurately.
  • Can be computationally expensive for high-dimensional datasets.
  • Prone to overfitting when the number of features is large.

K-Nearest Neighbors (KNN):

Pros:

  • Non-parametric method that can handle complex relationships and non-linear decision boundaries.
  • Easy to understand and implement.
  • No assumptions about the underlying data distribution.

Cons:

  • Can be computationally expensive for large datasets or high-dimensional feature spaces.
  • Sensitive to the choice of distance metric and the number of neighbors (k).
  • Requires careful preprocessing and normalization of features.

Decision Tree (CART):

Pros:

  • Easy to interpret and visualize.
  • Can handle both numerical and categorical features.
  • Can capture non-linear relationships and interactions between features.

Cons:

  • Prone to overfitting, especially when the tree is deep.
  • Can be sensitive to small variations in the data.
  • Not suitable for datasets with high dimensionality and sparse features.

Naive Bayes (NB):

Pros:

  • Simple and computationally efficient.
  • Performs well with high-dimensional data.
  • Robust to irrelevant features.
  • Can handle both numerical and categorical features.

Cons:

  • Assumes independence between features, which is often not true in real-world scenarios.
  • Can be overly simplistic and result in suboptimal performance.
  • May struggle with rare classes or classes with imbalanced prior probabilities.

Support Vector Machines (SVM):

Pros:

  • Effective in high-dimensional spaces and with complex decision boundaries.
  • Works well with small-to-medium-sized datasets.
  • Can handle both linear and non-linear relationships using kernel functions.

Cons:

  • Can be sensitive to the choice of the kernel function and its parameters.
  • Computationally expensive for large datasets.
  • Requires careful preprocessing and feature scaling.
  • Interpreting the resulting model and understanding the learned decision boundaries can be challenging.

Remember that the performance and suitability of each algorithm can vary depending on the specific dataset and problem at hand. It's always recommended to experiment with different algorithms and evaluate their performance using appropriate evaluation metrics and validation techniques.
