Ch09_GAN

Deep Learning Crash Course

Early Access - Use Code PREORDER for 25% Off
by Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo, Giovanni Volpe
No Starch Press, San Francisco (CA), 2025
ISBN-13: 9781718503922
https://nostarch.com/deep-learning-crash-course


  1. Dense Neural Networks for Classification

  2. Dense Neural Networks for Regression

  3. Convolutional Neural Networks for Image Analysis

  4. Encoders–Decoders for Latent Space Manipulation

  5. U-Nets for Image Transformation

  6. Self-Supervised Learning to Exploit Symmetries

  7. Recurrent Neural Networks for Timeseries Analysis

  8. Attention and Transformers for Sequence Processing

  9. Generative Adversarial Networks for Image Synthesis
    Demonstrates generative adversarial network (GAN) training for image generation, domain translation (CycleGAN), and virtual staining in microscopy.

  • Code 9-1: Generating New MNIST Digits with a GAN
    Implements a simple Deep Convolutional GAN (DCGAN) on the MNIST dataset to generate novel handwritten digits. Illustrates how the generator maps random noise vectors into realistic images, while the discriminator learns to distinguish them from real MNIST samples. Includes visualization of loss curves and intermediate samples during training.
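    The repository's notebook is the reference implementation; as a minimal PyTorch sketch of the same idea (network sizes, latent dimension, and hyperparameters are illustrative assumptions, and a random tensor stands in for a real MNIST batch), a DCGAN training step looks roughly like this:

    ```python
    import torch
    import torch.nn as nn

    LATENT_DIM = 100  # size of the random noise vector (illustrative choice)

    class Generator(nn.Module):
        """Maps a noise vector to a 1x28x28 image via transposed convolutions."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # project the noise to a 7x7 feature map, then upsample twice
                nn.ConvTranspose2d(LATENT_DIM, 128, 7, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),  # 14x14
                nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # 28x28, pixels in [-1, 1]
            )

        def forward(self, z):
            return self.net(z.view(-1, LATENT_DIM, 1, 1))

    class Discriminator(nn.Module):
        """Classifies 1x28x28 images as real (1) or fake (0); outputs a raw logit."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),    # 14x14
                nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),  # 7x7
                nn.Flatten(), nn.Linear(128 * 7 * 7, 1),
            )

        def forward(self, x):
            return self.net(x)

    def train_step(gen, disc, real, opt_g, opt_d):
        """One adversarial update: discriminator first, then generator."""
        loss_fn = nn.BCEWithLogitsLoss()
        b = real.size(0)
        fake = gen(torch.randn(b, LATENT_DIM))
        # discriminator: push real toward 1, fake toward 0 (fake detached
        # so this step does not update the generator)
        opt_d.zero_grad()
        d_loss = (loss_fn(disc(real), torch.ones(b, 1))
                  + loss_fn(disc(fake.detach()), torch.zeros(b, 1)))
        d_loss.backward()
        opt_d.step()
        # generator: try to fool the discriminator (fake toward 1)
        opt_g.zero_grad()
        g_loss = loss_fn(disc(fake), torch.ones(b, 1))
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()

    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    real_batch = torch.randn(4, 1, 28, 28)  # stand-in for a normalized MNIST batch
    d_loss, g_loss = train_step(gen, disc, real_batch, opt_g, opt_d)
    ```

    In the full notebook, `real_batch` comes from the MNIST loader and this step runs for many epochs while the loss curves and intermediate samples are plotted.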

  • Code 9-A: Generating MNIST Digits On Demand with a Conditional GAN
    Extends the basic MNIST GAN to a conditional GAN (cGAN), enabling you to specify which digit to generate. Shows how to incorporate class labels into both generator and discriminator by concatenating embedding vectors or feature maps, resulting in targeted digit generation (for example, only 7s).
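    The key change from the unconditional GAN is small; a hedged sketch of the label-conditioning mechanism (layer sizes and embedding dimension are assumptions, and fully connected layers are used here for brevity):

    ```python
    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Generator conditioned on a digit label: a learned label embedding
        is concatenated with the noise vector before the image is synthesized."""
        def __init__(self, latent_dim=100, n_classes=10, embed_dim=10):
            super().__init__()
            self.latent_dim = latent_dim
            self.embed = nn.Embedding(n_classes, embed_dim)  # one vector per digit
            self.net = nn.Sequential(
                nn.Linear(latent_dim + embed_dim, 256), nn.ReLU(),
                nn.Linear(256, 28 * 28), nn.Tanh(),
            )

        def forward(self, z, labels):
            x = torch.cat([z, self.embed(labels)], dim=1)  # condition by concatenation
            return self.net(x).view(-1, 1, 28, 28)

    class ConditionalDiscriminator(nn.Module):
        """Discriminator that also sees the label, so it judges (image, label) pairs."""
        def __init__(self, n_classes=10, embed_dim=10):
            super().__init__()
            self.embed = nn.Embedding(n_classes, embed_dim)
            self.net = nn.Sequential(
                nn.Linear(28 * 28 + embed_dim, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
            )

        def forward(self, x, labels):
            return self.net(torch.cat([x.flatten(1), self.embed(labels)], dim=1))

    # Generating "only 7s" after training: fix the label, vary the noise.
    gen = ConditionalGenerator()
    labels = torch.full((16,), 7, dtype=torch.long)
    sevens = gen(torch.randn(16, gen.latent_dim), labels)
    ```

    The training loop is unchanged except that real labels (and sampled fake labels) are passed to both networks alongside the images.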

  • Code 9-B: Virtually Staining a Biological Tissue with a Conditional GAN
    Applies cGANs to transform brightfield images of human motor neurons into virtually stained fluorescence images—without using invasive chemical stains. Demonstrates how to train on paired brightfield and fluorescence images (13 z-planes to 3 fluorescence channels) and produce consistent neuron and nucleus stains. Enables faster, less-destructive microscopy in biomedical studies.
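    The essential structure, stripped of the notebook's full architecture, is an image-to-image generator judged on (input, output) pairs plus a pixel-wise reconstruction term. A minimal sketch, assuming PyTorch and toy-sized networks (the 13-in/3-out channel counts come from the description above; everything else is illustrative):

    ```python
    import torch
    import torch.nn as nn

    IN_PLANES, OUT_CHANNELS = 13, 3  # 13 brightfield z-planes in, 3 fluorescence channels out

    class StainingGenerator(nn.Module):
        """Image-to-image generator: brightfield stack -> virtual fluorescence stains."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(IN_PLANES, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, OUT_CHANNELS, 3, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            return self.net(x)

    class PairDiscriminator(nn.Module):
        """Judges (input stack, stain) pairs, as in a paired conditional GAN;
        outputs a grid of patch-wise real/fake logits."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(IN_PLANES + OUT_CHANNELS, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 1, 4, 2, 1),
            )

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=1))

    def generator_loss(disc, x, fake, real, l1_weight=100.0):
        """Adversarial term plus an L1 term that keeps the predicted stain
        close to the measured fluorescence target."""
        bce = nn.BCEWithLogitsLoss()
        logits = disc(x, fake)
        adv = bce(logits, torch.ones_like(logits))
        return adv + l1_weight * nn.functional.l1_loss(fake, real)

    gen, disc = StainingGenerator(), PairDiscriminator()
    stack = torch.randn(2, IN_PLANES, 64, 64)   # stand-in for a brightfield z-stack
    target = torch.randn(2, OUT_CHANNELS, 64, 64)  # stand-in for the fluorescence target
    loss = generator_loss(disc, stack, gen(stack), target)
    ```

    The L1 term is what makes paired training work: the adversarial loss alone encourages plausible-looking stains, while the reconstruction loss ties them to the specific ground-truth fluorescence image.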

  • Code 9-C: Converting Microscopy Images with a CycleGAN
    Shows how CycleGAN can handle unpaired images in two domains (e.g., holographic vs. brightfield micrographs). The model learns a forward generator and backward generator with cycle consistency, ensuring that a transformed image can be mapped back to the original domain. Illustrates conversion between holograms and brightfield images even though paired training samples do not exist.
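    The cycle-consistency idea can be sketched in a few lines, assuming PyTorch; the generators here are deliberately tiny placeholders for the notebook's real architectures, and the domain names in the comments follow the description above:

    ```python
    import torch
    import torch.nn as nn

    def make_generator(channels=1):
        """Tiny image-to-image generator; CycleGAN uses one per translation direction."""
        return nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    g_ab = make_generator()  # domain A (e.g., holography) -> domain B (brightfield)
    g_ba = make_generator()  # domain B -> domain A

    def cycle_loss(a, b):
        """Cycle consistency: A -> B -> A and B -> A -> B should both
        reconstruct the original images, even without paired samples."""
        l1 = nn.functional.l1_loss
        return l1(g_ba(g_ab(a)), a) + l1(g_ab(g_ba(b)), b)

    a = torch.randn(2, 1, 64, 64)  # unpaired batch from domain A
    b = torch.randn(2, 1, 64, 64)  # unpaired batch from domain B
    loss = cycle_loss(a, b)
    ```

    In the full model this term is added to the two adversarial losses (one discriminator per domain); it is the cycle term that substitutes for paired supervision.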

  10. Diffusion Models for Data Representation and Exploration

  11. Graph Neural Networks for Relational Data Analysis

  12. Active Learning for Continuous Learning

  13. Reinforcement Learning for Strategy Optimization

  14. Reservoir Computing for Predicting Chaos