Python Advanced - Deep Learning with PyTorch

Implementing deep learning models using PyTorch in Python

PyTorch is an open-source deep learning framework that provides a flexible and efficient platform for building and training neural networks. This tutorial shows how to implement, train, and evaluate deep learning models with PyTorch in Python.

Key Points:

  • PyTorch is an open-source deep learning framework.
  • It provides a flexible and efficient platform for building and training deep learning models.
  • PyTorch integrates well with Python for research and production applications.

Installing PyTorch

To use PyTorch, install it with pip, along with torchvision, which supplies the datasets, transforms, and pre-trained models used below:


pip install torch torchvision
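
After installation, a quick sanity check confirms that PyTorch imports correctly and reports whether a CUDA-capable GPU is visible:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is detected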

Creating a Neural Network

Here is an example of a simple fully connected neural network for classifying 28x28 MNIST images, built by subclassing nn.Module:


import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Define the neural network class
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)
    
    def forward(self, x):
        x = x.view(-1, 784)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Initialize the network, loss function, and optimizer
net = SimpleNN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
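
Before training, running a dummy batch through the network is a cheap way to confirm that the layer dimensions line up. A minimal sketch (the batch size of 4 is an arbitrary choice):

# Forward a dummy batch of four 28x28 grayscale images
dummy = torch.randn(4, 1, 28, 28)
logits = net(dummy)
print(logits.shape)  # expected: torch.Size([4, 10])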

Preparing the Data

Here is an example of loading and normalizing the MNIST dataset with torchvision transforms and wrapping it in DataLoaders:


# Define the transform
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])

# Load the training and test datasets
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=32, shuffle=False)
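
Pulling a single batch from the loader is a useful check that the transforms produce the expected tensor shapes. A minimal sketch:

# Inspect one batch from the training loader
images, labels = next(iter(trainloader))
print(images.shape)  # expected: torch.Size([32, 1, 28, 28])
print(labels.shape)  # expected: torch.Size([32])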

Training the Model

Here is an example of training the neural network:


# Train the network
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        
        # Zero the parameter gradients
        optimizer.zero_grad()
        
        # Forward pass
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        
        # Backward pass and optimize
        loss.backward()
        optimizer.step()
        
        # Print statistics
        running_loss += loss.item()
        if i % 100 == 99:
            print(f'Epoch {epoch + 1}, Batch {i + 1}, Loss: {running_loss / 100:.3f}')
            running_loss = 0.0

print('Finished Training')
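
A common refinement is to decay the learning rate as training progresses. Here is a minimal sketch using PyTorch's StepLR scheduler; the step_size and gamma values are arbitrary choices, not part of the example above:

# Reduce the learning rate by a factor of 10 every 5 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(10):
    for inputs, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # advance the schedule once per epoch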

Evaluating the Model

Here is an example of evaluating the trained model:


correct = 0
total = 0

# Switch to evaluation mode and disable gradient calculation
net.eval()
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy of the network on the 10000 test images: {100 * correct / total:.2f}%')
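
Overall accuracy can hide weak classes, so a per-digit breakdown is often worth printing as well. A minimal sketch reusing net and testloader from above:

# Accuracy broken down by digit class
class_correct = [0] * 10
class_total = [0] * 10

with torch.no_grad():
    for images, labels in testloader:
        _, predicted = torch.max(net(images), 1)
        for label, pred in zip(labels, predicted):
            class_total[label.item()] += 1
            class_correct[label.item()] += int(pred.item() == label.item())

for digit in range(10):
    print(f'Digit {digit}: {100 * class_correct[digit] / class_total[digit]:.2f}%')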

Saving and Loading Models

Here is an example of saving and loading a PyTorch model:


# Save the model
torch.save(net.state_dict(), 'simple_nn.pth')

# Load the model
net = SimpleNN()
net.load_state_dict(torch.load('simple_nn.pth'))
net.eval()  # switch to evaluation mode before running inference
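
To resume training rather than just run inference, it is common to save a full checkpoint that also captures the optimizer state and the current epoch. A minimal sketch (the checkpoint.pth filename and the dictionary keys are arbitrary):

# Save model weights, optimizer state, and epoch together
torch.save({
    'epoch': epoch,
    'model_state_dict': net.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pth')

# Restore everything to continue training where it left off
checkpoint = torch.load('checkpoint.pth')
net.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
start_epoch = checkpoint['epoch'] + 1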

Transfer Learning

Here is an example of using transfer learning with a pre-trained model in PyTorch:


import torchvision.models as models

# Load a ResNet-18 model with pre-trained ImageNet weights
# (the weights argument replaces the deprecated pretrained=True flag)
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all layers
for param in resnet.parameters():
    param.requires_grad = False

# Replace the final fully connected layer
num_features = resnet.fc.in_features
resnet.fc = nn.Linear(num_features, 10)

# Print the modified ResNet model
print(resnet)
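
With the backbone frozen, only the new classification head needs to be optimized. A minimal sketch (note that ResNet expects 3-channel images, so grayscale data such as MNIST would need an adapted transform, e.g. transforms.Grayscale(num_output_channels=3)):

# Optimize only the parameters of the replacement final layer
optimizer = optim.SGD(resnet.fc.parameters(), lr=0.001, momentum=0.9)

# Confirm that only the new layer is trainable
trainable = [name for name, p in resnet.named_parameters() if p.requires_grad]
print(trainable)  # expected: ['fc.weight', 'fc.bias']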

Using GPUs

Here is an example of training a PyTorch model on a GPU:


# Check if GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device}')

# Move the model to the GPU
net.to(device)

# Modify the training loop to move inputs and labels to the GPU
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        
        # Zero the parameter gradients
        optimizer.zero_grad()
        
        # Forward pass
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        
        # Backward pass and optimize
        loss.backward()
        optimizer.step()
        
        # Print statistics
        running_loss += loss.item()
        if i % 100 == 99:
            print(f'Epoch {epoch + 1}, Batch {i + 1}, Loss: {running_loss / 100:.3f}')
            running_loss = 0.0

print('Finished Training')
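
Evaluation must run on the same device as the model, so the test batches are moved to the GPU as well. A minimal sketch mirroring the evaluation loop above:

# Evaluate on the selected device
net.eval()
correct, total = 0, 0

with torch.no_grad():
    for images, labels in testloader:
        images, labels = images.to(device), labels.to(device)
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Test accuracy on GPU: {100 * correct / total:.2f}%')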

Summary

In this tutorial, you learned about implementing deep learning models using PyTorch in Python. PyTorch is a powerful and flexible deep learning framework that provides tools for building, training, and evaluating deep learning models. Understanding how to create neural networks, prepare data, train models, evaluate performance, save and load models, use transfer learning, and utilize GPUs can help you leverage PyTorch for various deep learning applications.