
Demystifying Deep Learning with Python and PyTorch


Updated August 26, 2023



This tutorial guides you through the fundamentals of building a simple neural network using PyTorch, a powerful open-source machine learning framework.

Welcome to the exciting world of deep learning! In this tutorial, we’ll explore how to construct a basic neural network using PyTorch, a Python library widely recognized for its flexibility and ease of use in developing and training deep learning models.

What is a Neural Network?

Imagine a brain made up of interconnected “neurons” that learn from data. That’s essentially what a neural network is! It’s a computational model inspired by the structure of our brains, capable of learning complex patterns and relationships within data.

Neural networks consist of layers:

  • Input Layer: Receives raw data (e.g., pixel values of an image).
  • Hidden Layers: Process information through mathematical operations, extracting features and patterns.
  • Output Layer: Produces the final result (e.g., classifying an image as a cat or dog).

Each connection between neurons has a “weight” associated with it, representing its strength. During training, the network adjusts these weights to minimize errors and improve accuracy.
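
To make this concrete, a single artificial neuron just computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function. Here is a minimal sketch in plain Python (the numbers are invented purely for illustration):

import math

inputs = [0.5, 0.3, 0.2]    # values arriving from the previous layer
weights = [0.4, 0.7, -0.2]  # connection strengths, learned during training
bias = 0.1

# Weighted sum of inputs plus bias
z = sum(w * x for w, x in zip(weights, inputs)) + bias

# A sigmoid activation squashes the result into the range (0, 1)
activation = 1 / (1 + math.exp(-z))
print(activation)  # ~0.62

Training a network amounts to nudging those weights (and biases) so that outputs like this one become more accurate across the whole dataset.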

Why PyTorch?

PyTorch is a favorite among deep learning practitioners due to several key advantages:

  • Pythonic: It feels natural to Python developers, leveraging familiar syntax and data structures.
  • Dynamic Computational Graph: Allows for more flexibility during model development and debugging compared to static graph frameworks.
  • Extensive Ecosystem: A thriving community provides pre-trained models, datasets, and helpful resources.
  • GPU Acceleration: PyTorch efficiently utilizes the power of GPUs (graphics processing units) for faster training of complex models (both this and the dynamic graph are illustrated in the snippet after this list).
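
To make the last two points concrete, here is a small sketch (the tensors are random; it exists only to show the ideas): PyTorch builds the computational graph as the code runs, so ordinary Python control flow works inside a computation, and moving work onto a GPU is a one-liner when CUDA hardware is available.

import torch

# Dynamic graph: ordinary Python control flow decides what gets computed
x = torch.randn(3, requires_grad=True)
y = x * 2 if x.sum() > 0 else x * -1  # branch chosen at runtime
y.sum().backward()                    # autograd still tracks whichever path ran

# GPU acceleration: fall back to the CPU if no GPU is present
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = x.detach().to(device)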

Let’s Build a Simple Neural Network!

For this tutorial, we’ll build a neural network to classify handwritten digits from the MNIST dataset, a classic benchmark of 70,000 28x28 grayscale images of the digits 0–9 (60,000 for training, 10,000 for testing).

Step 1: Setting up PyTorch and Importing Libraries

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
  • torch: The core PyTorch library for tensor operations and building models.
  • torch.nn: Contains modules for defining neural network layers.
  • torch.optim: Provides optimization algorithms (like SGD) to update model weights during training.
  • torchvision: A package with datasets, image transformations, and pre-trained models.
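
If you don’t have these installed yet, pip install torch torchvision typically provides both. A quick sanity check (the exact versions will vary on your machine):

import torch
import torchvision

print(torch.__version__)          # e.g. '2.0.1' -- yours will likely differ
print(torchvision.__version__)
print(torch.cuda.is_available())  # True if PyTorch can see a usable GPU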

Step 2: Loading the MNIST Dataset

train_dataset = datasets.MNIST(root='./data', train=True, download=True,
                               transform=transforms.ToTensor())
test_dataset = datasets.MNIST(root='./data', train=False, download=True,
                              transform=transforms.ToTensor())

# Create data loaders for efficient batching
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1000)

We load the MNIST dataset, apply a transformation to convert the images into PyTorch tensors, and create data loaders that handle batching (and shuffle the training set each epoch) during training.
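
To confirm the loaders are wired up correctly, it helps to peek at a single batch. Each MNIST image is a 28x28 grayscale image (one channel), so with a batch size of 64 you should see shapes like these:

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28]): batch, channels, height, width
print(labels.shape)  # torch.Size([64])
print(labels[:10])   # the first ten digit labels in the batch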

Step 3: Defining the Neural Network Architecture

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 128)  # Input layer (784 pixels) to hidden layer
        self.relu = nn.ReLU()            # Activation function
        self.fc2 = nn.Linear(128, 10)   # Hidden layer to output layer (10 digits)

    def forward(self, x):
        x = x.view(-1, 784)           # Flatten input image
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

model = Net()

  • We define a class Net inheriting from nn.Module. This is our neural network model.
  • The __init__ method initializes the layers:
    • nn.Linear: Fully connected (dense) layers. 784 input neurons connect to 128 hidden neurons, and 128 hidden neurons connect to 10 output neurons.
    • nn.ReLU: The ReLU activation function introduces non-linearity, enabling the network to learn complex patterns.
  • The forward method defines how data flows through the network:
    • Flatten the input image (28x28 pixels) into a 784-dimensional vector.
    • Pass the flattened input through the first linear layer (self.fc1).
    • Apply the ReLU activation function to introduce non-linearity.
    • Pass the result through the second linear layer (self.fc2) to produce the final output: a raw score (logit) for each of the 10 digits (more on this below).
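
Note that the network outputs raw scores rather than probabilities; the CrossEntropyLoss we use in the next step applies softmax internally. If you want actual probabilities (for inspection or inference), apply softmax yourself. A quick sketch, reusing a batch from our train_loader:

import torch.nn.functional as F

images, _ = next(iter(train_loader))  # a batch of images
logits = model(images)                # raw scores, shape [64, 10]
probs = F.softmax(logits, dim=1)      # each row now sums to 1
print(probs[0])                       # the (still untrained) network's guess for the first image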

Step 4: Training the Model

criterion = nn.CrossEntropyLoss() # Loss function
optimizer = optim.SGD(model.parameters(), lr=0.01) # Optimization algorithm

for epoch in range(10):
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()  # Reset gradients
        output = model(data)   # Forward pass
        loss = criterion(output, target) # Calculate loss
        loss.backward()       # Backward pass: calculate gradients
        optimizer.step()      # Update model weights

print('Training complete!')

  • We choose the CrossEntropyLoss function to measure the difference between our model’s predictions and the actual labels (digits).
  • We use Stochastic Gradient Descent (SGD) to update the model’s weights during training.
  • The training loop iterates through the dataset in batches, performing forward and backward passes to minimize the loss function. (A version that also reports the loss follows below.)
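
The loop above trains silently, which makes it hard to tell whether the loss is actually falling. A lightly extended version of the same loop (same model, criterion, and optimizer) that reports the average loss per epoch might look like this:

for epoch in range(10):
    running_loss = 0.0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch {epoch + 1}: average loss {running_loss / len(train_loader):.4f}')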

Step 5: Evaluating the Model

model.eval()  # switch to evaluation mode (good practice, even though this model has no dropout/batchnorm)
correct = 0
total = 0

with torch.no_grad():
    for data, target in test_loader:
        output = model(data)
        _, predicted = torch.max(output, 1)  # index of the highest score = predicted digit
        total += target.size(0)
        correct += (predicted == target).sum().item()

print(f'Accuracy of the network on the {total} test images: {100 * correct / total:.2f}%')

We evaluate the model’s performance on a separate test set and calculate its accuracy.
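
Beyond the aggregate accuracy, it can be illuminating to inspect individual predictions. A quick check on the first test image (the predicted digit will depend on your training run):

image, label = test_dataset[0]          # first test image and its true label
with torch.no_grad():
    output = model(image.unsqueeze(0))  # add a batch dimension: [1, 1, 28, 28]
    prediction = output.argmax(dim=1).item()
print(f'Predicted: {prediction}, actual: {label}')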

Key Points to Remember:

  • Start Simple: Begin with basic architectures and gradually increase complexity as you gain experience.
  • Data is King: The quality and quantity of your data significantly influence model performance. Experiment with different datasets.
  • Understand Hyperparameters: Learning rate, batch size, number of epochs – these parameters fine-tune your training process. Adjust them to find optimal results (see the example below).
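
As a concrete example of that last point, swapping the optimizer or batch size is a one-line change; whether either change helps is something to verify empirically on your own runs:

# Try a different optimizer (Adam is a common first alternative to SGD)...
optimizer = optim.Adam(model.parameters(), lr=0.001)

# ...or a different batch size when building the training loader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)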
