Backpropagation Tutorial

Introduction

Backpropagation, short for "backward propagation of errors," is a fundamental algorithm used in training artificial neural networks. In supervised learning it computes the gradient of the loss function with respect to every weight in the network; gradient descent then uses these gradients to adjust the connection weights and reduce the model's error.

How Backpropagation Works

Backpropagation involves two main phases:

  • Forward Pass: The input data is passed through the network layer by layer to compute the output predictions.
  • Backward Pass: The error (difference between predicted and actual output) is propagated back through the network, adjusting the weights to minimize this error.

Mathematical Foundation

To understand backpropagation, it is essential to grasp the gradient descent optimization algorithm. The core idea is to minimize the loss function by iteratively moving in the direction of steepest descent as defined by the negative gradient.
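
To see gradient descent in isolation before applying it to a network, here is a minimal sketch in plain Python; the one-variable loss f(w) = (w - 3)^2, the starting point, and the learning rate are illustrative choices, not values prescribed by backpropagation.

# Gradient descent on a one-dimensional loss: f(w) = (w - 3)^2
# Its derivative is f'(w) = 2 * (w - 3), so the minimum is at w = 3.
w = 0.0                 # arbitrary starting point
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)             # derivative of the loss at the current w
    w = w - learning_rate * gradient   # step against the gradient

print(w)  # converges toward 3, the minimizer of f

Backpropagation extends this same idea: the gradient of the loss is computed for every weight and bias in the network via the chain rule, and each parameter is nudged against its gradient.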

Example: Consider a simple neural network with one hidden layer, trained with the Mean Squared Error (MSE) loss. For a single layer, one training step proceeds as follows (a small numeric sketch follows the steps):

1. Compute the output:

output = activation(weights * input + bias)

2. Calculate the error:

error = actual_output - predicted_output

3. Compute the gradient of the loss with respect to the weights. By the chain rule it combines the error, the derivative of the activation, and the input; the minus sign comes from differentiating the MSE with respect to the prediction:

gradient = -error * activation_derivative(output) * input

4. Update the weights by moving against the gradient:

weights = weights - learning_rate * gradient
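
As a concrete illustration, the sketch below runs these four steps once for a single sigmoid neuron; all the specific numbers (input, weights, bias, target, learning rate) are made up for the example and have no special significance.

import numpy as np

# Sigmoid activation used in this illustration
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, 1.0])     # input
w = np.array([0.2, -0.4])    # weights
b = 0.1                      # bias
target = 1.0                 # actual output
learning_rate = 0.5

# 1. Forward pass
output = sigmoid(np.dot(w, x) + b)

# 2. Error
error = target - output

# 3. Gradient of the MSE loss with respect to the weights and bias (chain rule);
#    the derivative of the sigmoid is output * (1 - output)
gradient_w = -error * output * (1 - output) * x
gradient_b = -error * output * (1 - output)

# 4. Gradient-descent update
w = w - learning_rate * gradient_w
b = b - learning_rate * gradient_b

print("output:", output)
print("updated weights:", w, "updated bias:", b)

Repeating this update over many examples and epochs, and extending it layer by layer with the chain rule, is what the full algorithm described next does.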

Detailed Steps in Backpropagation

The backpropagation algorithm follows these steps:

  1. Initialize weights and biases randomly.
  2. For each training example, perform a forward pass to compute the output.
  3. Calculate the error using the loss function.
  4. Propagate the error back through the network layers:
    • Compute the gradient of the loss function with respect to the weights and biases.
    • Adjust the weights and biases using the gradients.
  5. Repeat steps 2-4 for a fixed number of epochs or until convergence.

Example Implementation in Python

Here is a simple example of implementing backpropagation in Python using NumPy; it trains a small network with one hidden layer (two hidden neurons) to learn the XOR function:

import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, written in terms of the sigmoid's output
# (the argument x is expected to be sigmoid(z), not z itself)
def sigmoid_derivative(x):
    return x * (1 - x)

# Input dataset: the four XOR input pairs
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
# Output dataset: the corresponding XOR targets
outputs = np.array([[0], [1], [1], [0]])

# Initialize weights and biases
input_layer_neurons = inputs.shape[1]
hidden_layer_neurons = 2
output_layer_neurons = 1

# Random weights and biases
hidden_weights = np.random.uniform(size=(input_layer_neurons, hidden_layer_neurons))
hidden_bias = np.random.uniform(size=(1, hidden_layer_neurons))
output_weights = np.random.uniform(size=(hidden_layer_neurons, output_layer_neurons))
output_bias = np.random.uniform(size=(1, output_layer_neurons))

# Training the neural network
epochs = 10000
learning_rate = 0.1

for _ in range(epochs):
    # Forward pass
    hidden_layer_activation = np.dot(inputs, hidden_weights)
    hidden_layer_activation += hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_activation)

    output_layer_activation = np.dot(hidden_layer_output, output_weights)
    output_layer_activation += output_bias
    predicted_output = sigmoid(output_layer_activation)

    # Backward pass: compute the error and the delta at the output layer
    error = outputs - predicted_output
    d_predicted_output = error * sigmoid_derivative(predicted_output)

    # Propagate the delta back to get each hidden neuron's error and delta
    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)

    # Updating weights and biases (error = actual - predicted, so adding
    # these terms steps the parameters down the gradient of the loss)
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * learning_rate
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * learning_rate
    hidden_weights += inputs.T.dot(d_hidden_layer) * learning_rate
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * learning_rate

print("Final predicted output:")
print(predicted_output)
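
When training finishes, the predicted values should lie close to the XOR targets [0, 1, 1, 0]; the exact numbers depend on the random initialization, and with such a small network an unlucky initialization can occasionally stall short of a good solution, in which case re-running the script usually helps.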
                

Conclusion

Backpropagation is a powerful algorithm for training neural networks. By iteratively adjusting weights and biases with gradient descent, it minimizes the loss and improves the model’s accuracy. Understanding the intricacies of backpropagation is crucial for anyone looking to delve deeper into machine learning and neural networks.