
Confusion Matrix Tutorial

Introduction

In the field of Data Science, evaluating the performance of a classification model is crucial. One of the most effective tools for this purpose is the Confusion Matrix. It provides a detailed breakdown of the model's performance by comparing the actual and predicted classifications.

What is a Confusion Matrix?

A Confusion Matrix is a table used to describe the performance of a classification model on a set of test data for which the true values are known. It allows you to see how many instances were correctly and incorrectly classified for each class.

Structure of a Confusion Matrix

A Confusion Matrix is typically a square matrix of size n x n, where n is the number of classes. Each cell in the matrix represents the count of instances that fall into the corresponding category. Here's a simple example for a binary classification problem:

Actual vs Predicted

                Predicted: No           Predicted: Yes
Actual: No      TN (True Negative)      FP (False Positive)
Actual: Yes     FN (False Negative)     TP (True Positive)
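To make the four cells concrete, here is a minimal sketch (the function name is illustrative) that tallies TN, FP, FN, and TP directly from binary labels, where 0 means "No" and 1 means "Yes":

```python
def binary_confusion_counts(y_true, y_pred):
    """Count TN, FP, FN, TP for binary labels (0 = No, 1 = Yes)."""
    tn = fp = fn = tp = 0
    for actual, predicted in zip(y_true, y_pred):
        if actual == 0 and predicted == 0:
            tn += 1          # correctly predicted "No"
        elif actual == 0 and predicted == 1:
            fp += 1          # predicted "Yes" but actually "No"
        elif actual == 1 and predicted == 0:
            fn += 1          # predicted "No" but actually "Yes"
        else:
            tp += 1          # correctly predicted "Yes"
    return tn, fp, fn, tp
```

Each pair of (actual, predicted) labels lands in exactly one of the four cells, so the counts always sum to the total number of instances.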

Key Metrics Derived from a Confusion Matrix

From the Confusion Matrix, we can derive several key metrics to evaluate the model's performance:

  • Accuracy: The ratio of correctly predicted instances to the total instances. Accuracy = (TP + TN) / (TP + TN + FP + FN)
  • Precision: The ratio of correctly predicted positive instances to the total predicted positives. Precision = TP / (TP + FP)
  • Recall (Sensitivity): The ratio of correctly predicted positive instances to all actual positives. Recall = TP / (TP + FN)
  • F1 Score: The harmonic mean of Precision and Recall. F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
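As a quick sanity check, these formulas can be applied to the counts from the worked example later in this tutorial (TP = 3, TN = 5, FP = 1, FN = 1):

```python
# Counts taken from the worked binary-classification example in this tutorial
tp, tn, fp, fn = 3, 5, 1, 1

accuracy = (tp + tn) / (tp + tn + fp + fn)          # (3 + 5) / 10 = 0.80
precision = tp / (tp + fp)                          # 3 / 4 = 0.75
recall = tp / (tp + fn)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.75

print(f"Accuracy:  {accuracy:.2f}")
print(f"Precision: {precision:.2f}")
print(f"Recall:    {recall:.2f}")
print(f"F1 Score:  {f1:.2f}")
```

These hand-computed values match the accuracy and the class-1 precision, recall, and f1-score rows of the classification report shown in the example below.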

Example: Confusion Matrix in Python

Let's see an example of how to create and interpret a Confusion Matrix using Python and the scikit-learn (sklearn) library.

Code:

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# True labels
y_true = [0, 1, 0, 1, 0, 1, 0, 0, 0, 1]

# Predicted labels
y_pred = [0, 0, 0, 1, 0, 1, 0, 1, 0, 1]

# Creating the confusion matrix
cm = confusion_matrix(y_true, y_pred)

print("Confusion Matrix:")
print(cm)

# Generating a classification report
print("\nClassification Report:")
print(classification_report(y_true, y_pred))

Output:

Confusion Matrix:
[[5 1]
 [1 3]]

Classification Report:
              precision    recall  f1-score   support

           0       0.83      0.83      0.83         6
           1       0.75      0.75      0.75         4

    accuracy                           0.80        10
   macro avg       0.79      0.79      0.79        10
weighted avg       0.80      0.80      0.80        10

Conclusion

The Confusion Matrix is a powerful tool for evaluating the performance of classification models. By understanding the structure and derived metrics, you can gain deep insights into your model's strengths and weaknesses. This tutorial has provided a comprehensive overview, including practical examples, to help you effectively utilize the Confusion Matrix in your Data Science projects.