SGD Optimizer Tutorial

Introduction to SGD Optimizer

Stochastic Gradient Descent (SGD) is an iterative method for optimizing an objective function with suitable smoothness properties. It is commonly used in machine learning and deep learning to minimize the loss function by updating model parameters.

Unlike traditional (full-batch) gradient descent, which uses the entire dataset to compute each gradient, SGD updates the model parameters using a single data point or a small mini-batch. Each update is therefore much cheaper to compute, and training often progresses faster in practice on large datasets, at the cost of noisier gradient estimates.
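
To make the difference concrete, here is a minimal NumPy sketch (not Keras code; the linear least-squares model, learning rate, and batch size are illustrative choices, not recommendations) contrasting one full-batch update with one mini-batch update:

import numpy as np

# Toy data: 1,000 samples, 20 features, for a linear model y ≈ X @ w
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.normal(size=1000)
w = np.zeros(20)
eta = 0.01  # learning rate

# Full-batch gradient descent: one update per pass over all 1,000 samples
grad_full = 2 * X.T @ (X @ w - y) / len(X)
w_batch = w - eta * grad_full

# Stochastic (mini-batch) gradient descent: many cheap updates per pass,
# each based on a small random subset of the data
idx = rng.choice(len(X), size=32, replace=False)  # one mini-batch of 32 examples
grad_mini = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
w_sgd = w - eta * grad_mini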

Mathematical Foundation

The basic formula for updating the model parameters in SGD is as follows (a short worked example in code appears after the symbol definitions):

θ_(t+1) = θ_t − η · ∇L(θ_t, x_i, y_i)

Where:

  • θ_t: model parameters at iteration t
  • η: learning rate
  • ∇L: gradient of the loss function with respect to the parameters
  • (x_i, y_i): a single training example
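
The update rule translates directly into code. Below is a minimal sketch, assuming a squared-error loss on a linear model (all names and values are illustrative), that applies one SGD step using a single training example:

import numpy as np

theta = np.array([0.5, -0.2])   # θ_t: current parameters
eta = 0.1                       # η: learning rate

x_i = np.array([1.0, 2.0])      # a single training example (x_i, y_i)
y_i = 3.0

# For a squared-error loss L = (theta·x_i - y_i)^2, the gradient is 2*(theta·x_i - y_i)*x_i
grad = 2 * (theta @ x_i - y_i) * x_i

# SGD update: θ_(t+1) = θ_t - η * ∇L(θ_t, x_i, y_i)
theta_next = theta - eta * grad
print(theta_next)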

Implementing SGD in Keras

Keras provides a straightforward implementation of the SGD optimizer. To use it, import the optimizer class and pass an instance to your model's compile method.

Example code to implement SGD in Keras:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Creating a simple model
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

# Compiling the model with SGD optimizer
sgd = SGD(learning_rate=0.01)  # learning rate set to 0.01 (older Keras versions used the lr argument name)
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])
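
Once compiled, the model is trained with fit as usual. The snippet below is a quick smoke test on synthetic data whose shape matches the input dimension of 20 used above; in practice you would substitute your real training data:

import numpy as np

# Synthetic data matching the model's input dimension of 20 (illustrative only)
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000, 1))

model.fit(X, y, epochs=5, batch_size=32)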
                    

Parameters of SGD Optimizer

The SGD optimizer has several parameters that you can tune to improve performance (a configuration sketch follows the list below):

  • learning_rate: Controls the step size at each iteration while moving toward a minimum of the loss function (older Keras versions called this argument lr).
  • momentum: Helps accelerate SGD in the relevant direction and dampens oscillations. It is set to 0 by default.
  • decay: Learning rate decay applied after each update; this is a legacy argument, and recent Keras versions use learning rate schedules instead.
  • nesterov: Whether to apply Nesterov momentum, which is a variant of momentum that can lead to faster convergence.
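
As a sketch of how these options fit together, the snippet below configures SGD with momentum and Nesterov acceleration and uses an ExponentialDecay learning-rate schedule in place of the legacy decay argument (the specific values are illustrative, not recommendations):

from keras.optimizers import SGD
from keras.optimizers.schedules import ExponentialDecay

# Decay the learning rate by 4% every 1,000 update steps
lr_schedule = ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=1000,
    decay_rate=0.96)

sgd = SGD(learning_rate=lr_schedule,  # accepts a float or a schedule
          momentum=0.9,               # accelerate along consistent directions
          nesterov=True)              # apply Nesterov momentum

model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])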

Example of Training a Model with SGD

Here is an example of how to train a model using the SGD optimizer with a simple dataset:

Example code to train a model using SGD:

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.utils import to_categorical

# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((60000, 28 * 28)).astype('float32') / 255
X_test = X_test.reshape((10000, 28 * 28)).astype('float32') / 255

# Convert labels to one-hot encoding
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)

# Create a model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(28 * 28,)))
model.add(Dense(10, activation='softmax'))

# Compile the model with SGD
sgd = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_test, y_test))
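
After training, you can check how well the model generalizes by evaluating it on the held-out test set (this continues the example above):

test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {test_acc:.4f}")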
                    

Conclusion

The SGD optimizer is a powerful tool in the field of machine learning and deep learning. By understanding its mechanics and how to implement it in frameworks like Keras, you can significantly improve the training of your models. Experimenting with different parameters can yield better results depending on the specific dataset and model architecture.