
ABSTRACT

Generative Adversarial Networks (GANs) are a class of deep learning models that have
shown remarkable capabilities in generating realistic synthetic data. Introduced by Ian
Goodfellow in 2014, GANs work through a unique adversarial setup involving two neural
networks: a Generator and a Discriminator. The Generator aims to create synthetic data
that is indistinguishable from real data, while the Discriminator attempts to differentiate
between real and generated data. The two networks are trained simultaneously in a zero-
sum game where the Generator learns to improve its outputs, and the Discriminator
improves its ability to distinguish real from fake data.
This project focuses on the implementation and training of a basic GAN model using the
MNIST dataset of handwritten digits. The Generator in the model is designed to take
random noise as input and generate images that resemble handwritten digits. The
Discriminator, on the other hand, is tasked with classifying these images as either real
(from the MNIST dataset) or fake (produced by the Generator).
In order to improve the stability of the training process, several techniques were
incorporated into the model architecture, such as label smoothing and batch
normalization. Label smoothing keeps the Discriminator from becoming overly confident
in its real/fake predictions, which would otherwise starve the Generator of useful
gradients, while batch normalization regularizes the Generator's intermediate
activations, improving both convergence speed and generalization. Training was
performed with the Adam optimizer, which is known for its efficiency and effectiveness
in training deep learning models.

TABLE OF CONTENTS

1. Certificate
2. Abstract
3. Table of Contents
4. Chapter 1: Introduction
5. Chapter 2: Literature Survey
6. Chapter 3: Proposed Methodology
7. Chapter 4: Experimental Results
8. Chapter 5: Conclusion
9. References

Chapter 1: Introduction
1.1 Introduction
Generative Adversarial Networks (GANs) have emerged as one of the most powerful
and intriguing innovations in the field of deep learning. Introduced by Ian Goodfellow
in 2014, GANs have revolutionized the way synthetic data is generated. They consist
of two neural networks—the Generator and the Discriminator—engaged in a two-
player game: the Generator creates data that mimics real-world samples, while the
Discriminator tries to distinguish between real and synthetic data. Through this
adversarial training process, both networks iteratively improve, leading to the
generation of realistic data that is often indistinguishable from the actual data.
The strength of GANs lies in their ability to generate high-quality, realistic data
without the need for paired input-output datasets. Their impact spans various
domains such as computer vision, text-to-image synthesis, style transfer, medical
image enhancement, and artistic content generation. However, training GANs is a
complex task, often plagued with challenges such as mode collapse, training
instability, and gradient vanishing.
This project implements a basic GAN using PyTorch with the goal of generating
synthetic images of handwritten digits from the MNIST dataset. This dataset, a
staple in the machine learning community, contains 28x28 grayscale images of digits
ranging from 0 to 9. The project emphasizes both qualitative and quantitative
evaluation of the generated outputs and introduces stabilization techniques like label
smoothing and batch normalization to enhance performance.
1.2 Need
In many real-world applications, access to high-quality labeled data can be limited
due to cost, privacy, or availability constraints. Generative models, especially GANs,
offer a solution by creating synthetic data that can augment or even replace real
datasets in training machine learning models.
The need for this project is rooted in the following:
 Understanding GANs from the ground up by implementing and
experimenting with the architecture.
 Generating realistic synthetic data for use in applications like data
augmentation or image simulation.

 Exploring stabilization techniques to improve GAN training, which is
notoriously unstable.
 Creating a foundation for more complex generative models like DCGANs,
WGANs, and CycleGANs.
For beginners and practitioners alike, understanding how to build and train GANs on
datasets like MNIST provides critical insights into adversarial learning, optimization
challenges, and generative modeling strategies.
1.3 Motivation
The motivation behind this project is twofold: academic learning and practical skill-
building. GANs represent a cutting-edge advancement in artificial intelligence and
understanding their functioning equips one with a powerful tool in the deep learning
arsenal. By implementing a GAN from scratch using PyTorch, this project allows for
a hands-on experience that bridges theoretical concepts with practical execution.
Specific motivations include:
 Curiosity in generative AI: With growing trends in AI-generated art, text, and
images, understanding GANs opens doors to innovation and creativity.
 Hands-on learning of PyTorch: Building a GAN provides exposure to model
definition, custom training loops, and visualization of outputs using one of the
most widely used deep learning frameworks.
 Career relevance: GANs are increasingly applied in industry sectors such as
fashion, healthcare, and entertainment. This project acts as a stepping stone
toward mastering generative models for real-world use cases.
 Academic value: The MNIST-based GAN implementation sets a solid
foundation for advanced research or coursework in AI and deep learning.
This project also serves as an inspiration to explore deeper GAN variants and apply
them to more complex datasets and domains.

Chapter 2: Literature Survey

2.1 Focused Research Survey (F.R.S)


Generative Adversarial Networks (GANs), introduced by Ian Goodfellow et al. in 2014,
have significantly influenced the landscape of generative modeling. The core idea
revolves around two neural networks—the Generator and the Discriminator—engaged in
a minimax game. The Generator tries to create data that mimics the real distribution,
while the Discriminator works to differentiate real data from fake data.
Key contributions from seminal works include:
Goodfellow et al. (2014): Proposed the foundational GAN framework, showcasing its
potential in generating synthetic data without explicit supervision.
Radford et al. (2015): Introduced Deep Convolutional GANs (DCGANs), employing
convolutional layers for stable training and higher-quality image outputs.
Mirza and Osindero (2014): Developed Conditional GANs (cGANs) to enable label-
conditioned generation.
Arjovsky et al. (2017): Proposed Wasserstein GANs (WGANs) to improve training
stability through the Earth Mover’s Distance.
Karras et al. (2018): Introduced Progressive Growing GANs for generating high-
resolution images in a staged training process.
This focused survey sets the foundation for a more detailed analysis of training
challenges, architectural enhancements, and practical applications of GANs.

2.2 Deep Literature Study


Foundations of GANs
GANs function through adversarial training involving two networks:
Generator (G): Takes random noise as input and generates synthetic data.
Discriminator (D): Evaluates whether a given sample is real (from dataset) or fake (from
Generator).
The training objective follows a min-max optimization:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
This iterative competition improves both models over time.
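The following is a minimal sketch of how these two expectation terms map onto binary cross-entropy in PyTorch, assuming disc and gen are modules like those defined in Chapter 4. Note that in practice the Generator is usually trained to maximize log D(G(z)) (the non-saturating loss) rather than to minimize log(1 − D(G(z))), because the latter yields weak gradients early in training:

import torch
import torch.nn as nn

criterion = nn.BCELoss()

def discriminator_loss(disc, real, fake):
    pred_real = disc(real)
    pred_fake = disc(fake.detach())
    # E_x[log D(x)]: push predictions on real data toward 1
    loss_real = criterion(pred_real, torch.ones_like(pred_real))
    # E_z[log(1 - D(G(z)))]: push predictions on fake data toward 0
    loss_fake = criterion(pred_fake, torch.zeros_like(pred_fake))
    return loss_real + loss_fake

def generator_loss(disc, fake):
    pred = disc(fake)
    # Non-saturating variant: maximize log D(G(z)) via target 1
    return criterion(pred, torch.ones_like(pred))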
Challenges in GAN Training
Mode Collapse: Generator outputs limited diversity. Solution: Minibatch discrimination
(Salimans et al., 2016).
Vanishing Gradients: When the Discriminator becomes too strong, the Generator receives
negligible gradient updates. Solution: WGAN with the Wasserstein loss (a sketch follows this list).
Hyperparameter Sensitivity: Performance varies drastically with small changes in learning
rate, optimizer, and architecture.
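As a concrete illustration of the WGAN remedy mentioned above, here is a minimal sketch of the critic and generator losses from Arjovsky et al. (2017), assuming critic is a discriminator-like network without a final Sigmoid; weight clipping is the original paper's crude way of keeping the critic approximately 1-Lipschitz:

import torch

def critic_loss(critic, real, fake):
    # Critic maximizes E[critic(real)] - E[critic(fake)];
    # we minimize the negated difference
    return -(critic(real).mean() - critic(fake.detach()).mean())

def wgan_generator_loss(critic, fake):
    # Generator maximizes E[critic(fake)]
    return -critic(fake).mean()

def clip_critic_weights(critic, c=0.01):
    # Weight clipping from the original WGAN paper
    for p in critic.parameters():
        p.data.clamp_(-c, c)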
Advancements in GAN Architectures
DCGAN (Radford et al., 2015): Leveraged convolutional layers, batch normalization, and
ReLU activations for improved performance.
Conditional GAN (Mirza and Osindero, 2014): Enabled targeted generation using class
labels or other inputs.
Wasserstein GAN (Arjovsky et al., 2017): Addressed gradient vanishing issues by
modifying the loss function.
Progressive GAN (Karras et al., 2018): Increased image resolution progressively, aiding
in stable high-quality image synthesis.
Applications of GANs
Image Generation: Human face generation (e.g., StyleGAN).
Image-to-Image Translation: Pix2Pix, CycleGAN for converting sketches to real images,
colorization, etc.
Super Resolution: SRGAN enhances low-resolution images.
Data Augmentation: Useful in domains like medical imaging with limited data.
Text-to-Image Synthesis: GANs transform textual descriptions into photorealistic images.

2.3 Research Gaps / Conclusion from Literature Survey


While GANs have demonstrated remarkable capabilities, several research gaps persist:
Training Instability: Despite WGAN and progressive training techniques, GANs remain
difficult to train, often requiring manual tuning and early stopping mechanisms.
Mode Diversity: Current architectures sometimes struggle to capture the full diversity of
the data distribution (mode collapse).
Evaluation Metrics: Assessing the quality of generated samples still lacks a universally
accepted quantitative measure; metrics like the Inception Score and FID are imperfect (an FID sketch follows this list).
Scalability: Applying GANs to high-resolution or multi-modal data remains resource-
intensive and computationally demanding.
Ethical Concerns: The ability of GANs to generate photorealistic fake content raises
issues in misinformation and data misuse.
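To make the evaluation-metric gap concrete, the following is a hedged sketch of computing FID with the torchmetrics package (an assumption; it is not used in this project). FID compares Inception-network features of real and generated images; the metric expects 3-channel uint8 inputs, so grayscale MNIST tensors in [-1, 1] are rescaled and channel-repeated first. real_batch and fake_batch are hypothetical tensors of shape (N, 1, 28, 28):

import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def to_uint8_rgb(imgs):
    # imgs: (N, 1, 28, 28) in [-1, 1] -> (N, 3, 28, 28) uint8
    imgs = ((imgs + 1) / 2 * 255).clamp(0, 255).to(torch.uint8)
    return imgs.repeat(1, 3, 1, 1)

fid = FrechetInceptionDistance(feature=2048)
fid.update(to_uint8_rgb(real_batch), real=True)   # real_batch: hypothetical
fid.update(to_uint8_rgb(fake_batch), real=False)  # fake_batch: hypothetical
print(fid.compute())  # lower FID indicates closer distributions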
Conclusion:
The literature shows a continuous evolution in GAN architectures and training strategies
to overcome core challenges. While the foundational ideas are solid, ongoing research is
needed to improve stability, interpretability, and evaluation, making GANs more reliable
for broader real-world adoption.

Chapter 3: Proposed Methodology


3.1 Understanding GANs
Generative Adversarial Networks (GANs) consist of two adversarial models: the
Generator and the Discriminator. The Generator creates synthetic data from random
noise, aiming to mimic the real data distribution. The Discriminator evaluates the
authenticity of the input image, determining whether it is real or generated. Both networks
are trained in a minimax game where the Generator tries to fool the Discriminator, while
the Discriminator improves its ability to detect fake images. The goal is to reach a state
where the Generator produces highly realistic images indistinguishable from real ones.

3.2 Data Collection and Preprocessing


The MNIST dataset, comprising 28x28 grayscale images of handwritten digits (0–9), was
used due to its simplicity and structure. The preprocessing steps involved:
Normalization: Scaling pixel values from [0, 255] to [-1, 1] to aid training convergence (see the snippet after this list).
Shaping: Reshaping each image to fit the network input shape.
(Optional) Data Augmentation: While not used for MNIST, this can be included for more
complex datasets to enhance variability.
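The snippet below shows the exact preprocessing pipeline used in Chapter 4: ToTensor scales pixels to [0, 1], and Normalize((0.5,), (0.5,)) then maps them to [-1, 1], matching the Tanh output range of the Generator.

import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.ToTensor(),                 # [0, 255] -> [0.0, 1.0]
    transforms.Normalize((0.5,), (0.5,)),  # [0.0, 1.0] -> [-1.0, 1.0]
])
dataset = datasets.MNIST(root="dataset/", transform=transform, download=True)
loader = DataLoader(dataset, batch_size=64, shuffle=True)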

3.3 GAN Model Architecture


The architecture includes:
Generator:
Input: Random noise vector (latent space).
Layers: Dense → BatchNorm → ReLU → Reshape → Conv2DTranspose layers → Tanh
activation (a convolutional sketch follows this subsection).
Output: Image similar in structure to MNIST digits.
Discriminator:
Input: Image (real or fake).
Layers: Conv2D → LeakyReLU → BatchNorm → Flatten → Dense → Sigmoid.
Output: Binary classification (real/fake).
Loss Function:
Binary Cross-Entropy loss is used for both Generator and Discriminator:
Discriminator: Real = 1, Fake = 0
Generator: Encouraged to fool the Discriminator (output close to 1)
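The layer sequence described above is convolutional (DCGAN-style), whereas the experiment in Chapter 4 uses a simpler fully connected variant. The following is a minimal sketch of the convolutional Generator as described, with illustrative (assumed) layer sizes chosen to produce 28x28 outputs:

import torch.nn as nn

class ConvGenerator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        # Dense -> BatchNorm -> ReLU, then reshape to a 7x7 feature map
        self.fc = nn.Sequential(
            nn.Linear(z_dim, 128 * 7 * 7),
            nn.BatchNorm1d(128 * 7 * 7),
            nn.ReLU(),
        )
        # Two transposed convolutions upsample 7x7 -> 14x14 -> 28x28
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching the normalized data
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 7, 7)
        return self.deconv(x)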
3.4 Training the GAN Model
Training proceeds in two alternating steps:
Step 1: Train the Discriminator using both real images and fake images generated by the
Generator.
Step 2: Train the Generator to improve its capability to fool the Discriminator.
Key aspects:
Optimizer: Adam with adaptive learning rates for both networks (configuration shown after this list).
Epochs: Typically trained for 100–200 epochs.
Batch Size: A size like 128 balances speed and performance.
Adversarial Training: Continuous feedback loop between Generator and Discriminator
for mutual improvement.
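The optimizer configuration below matches the Chapter 4 experiment (Adam, lr = 1e-4, default betas), with disc and gen assumed to be the networks defined there. A common alternative from the DCGAN paper is lr = 2e-4 with betas=(0.5, 0.999), which often further stabilizes adversarial training:

import torch.optim as optim

# Configuration used in Chapter 4; disc and gen are assumed defined
opt_disc = optim.Adam(disc.parameters(), lr=1e-4)
opt_gen = optim.Adam(gen.parameters(), lr=1e-4)
# Common DCGAN-style alternative:
# opt_gen = optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))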
3.5 Evaluation of the GAN Model
Visual Inspection: Periodic image generation to observe realism and diversity.
Loss Curves: Tracking loss values for both networks over epochs to monitor stability and
convergence.
Overfitting Detection: Regular evaluation ensures generalization and prevents
overfitting.
Model Checkpointing: Saving model weights during training for recovery and
evaluation.
3.6 Results and Visualization
Generated Samples: Visualization of synthetic digits at various training stages to track
Generator progress.
Loss Graphs: Displaying Generator and Discriminator loss over time to evaluate training
dynamics.
Future Scope: Further work may involve experimenting with advanced GAN variants
(e.g., DCGAN, WGAN) or larger datasets.

Chapter 4: Experimental Results
CODE

# ===============================
# ✅ FINAL GAN PROJECT CODE
# ===============================

# 🔧 SETUP (remove old logs, start TensorBoard)
!rm -rf runs
!pip install -q tensorboard
%load_ext tensorboard
%tensorboard --logdir runs

# ===============================
# 🧠 IMPORTS
# ===============================
import os

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# ===============================
# 🧠 MODEL DEFINITIONS
# ===============================
class Discriminator(nn.Module):
    """Maps a flattened 28x28 image to a real/fake probability."""
    def __init__(self, img_dim):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(img_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.model(x)


class Generator(nn.Module):
    """Maps a latent noise vector to a flattened 28x28 image in [-1, 1]."""
    def __init__(self, z_dim, img_dim):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(z_dim, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.BatchNorm1d(512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, img_dim),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.model(x)

# ===============================
# ⚙️ HYPERPARAMETERS
# ===============================
device = "cuda" if torch.cuda.is_available() else "cpu"
lr = 1e-4
z_dim = 64
image_dim = 28 * 28
batch_size = 64
num_epochs = 100

# ===============================
# 📦 DATASET & LOADER
# ===============================
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),  # scale pixels to [-1, 1]
])
dataset = datasets.MNIST(root="dataset/", transform=transform, download=True)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

# ===============================
# 🧠 INITIALIZE MODELS & OPTIMIZERS
# ===============================
disc = Discriminator(image_dim).to(device)
gen = Generator(z_dim, image_dim).to(device)
opt_disc = optim.Adam(disc.parameters(), lr=lr)
opt_gen = optim.Adam(gen.parameters(), lr=lr)
criterion = nn.BCELoss()

fixed_noise = torch.randn((batch_size, z_dim)).to(device)  # for consistent progress snapshots
writer_fake = SummaryWriter("runs/GAN_MNIST/fake")
writer_real = SummaryWriter("runs/GAN_MNIST/real")
step = 0
G_losses = []
D_losses = []

# Directories for generated samples and model checkpoints
os.makedirs("samples", exist_ok=True)
os.makedirs("models", exist_ok=True)

# ===============================
# 🔁 TRAINING LOOP
# ===============================
for epoch in range(num_epochs):
    for batch_idx, (real, _) in enumerate(loader):
        real = real.view(-1, image_dim).to(device)
        cur_batch_size = real.size(0)

        # ========== Train Discriminator ==========
        noise = torch.randn(cur_batch_size, z_dim).to(device)
        fake = gen(noise)
        disc_real = disc(real).view(-1)
        # One-sided label smoothing: real targets are 0.9 instead of 1.0
        lossD_real = criterion(disc_real, torch.ones_like(disc_real) * 0.9)
        disc_fake = disc(fake.detach()).view(-1)
        lossD_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
        lossD = (lossD_real + lossD_fake) / 2
        disc.zero_grad()
        lossD.backward()
        opt_disc.step()

        # ========== Train Generator ==========
        output = disc(fake).view(-1)
        lossG = criterion(output, torch.ones_like(output))  # fool the Discriminator
        gen.zero_grad()
        lossG.backward()
        opt_gen.step()

        # Record losses for the plots below
        G_losses.append(lossG.item())
        D_losses.append(lossD.item())

        # ========== Logging & Visualization ==========
        if batch_idx % 100 == 0:
            print(
                f"Epoch [{epoch+1}/{num_epochs}] Batch {batch_idx}/{len(loader)} "
                f"Loss D: {lossD:.4f}, Loss G: {lossG:.4f}"
            )
            with torch.no_grad():
                fake_imgs = gen(fixed_noise).reshape(-1, 1, 28, 28)
                real_imgs = real.reshape(-1, 1, 28, 28)
                img_grid_fake = torchvision.utils.make_grid(fake_imgs, normalize=True)
                img_grid_real = torchvision.utils.make_grid(real_imgs, normalize=True)
                writer_fake.add_image("Fake Images", img_grid_fake, global_step=step)
                writer_real.add_image("Real Images", img_grid_real, global_step=step)
            step += 1

    # Save a model checkpoint and a sample grid every 10 epochs
    if (epoch + 1) % 10 == 0:
        torch.save(gen.state_dict(), f"models/generator_epoch_{epoch+1}.pth")
        with torch.no_grad():
            sample = gen(fixed_noise).reshape(-1, 1, 28, 28)
        torchvision.utils.save_image(sample, f"samples/fake_epoch_{epoch+1}.png", normalize=True)

# ===============================
# 📊 PLOT LOSS CURVES
# ===============================
plt.figure(figsize=(10, 5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="G")
plt.plot(D_losses, label="D")
plt.xlabel("Iterations")
plt.ylabel("Loss")
plt.legend()
plt.savefig("samples/loss_curve.png")
plt.show()
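A short follow-up sketch for reusing the saved checkpoints: it reloads the final Generator weights produced by the loop above (file names follow the saving scheme in the listing) and renders a grid of new digits.

# Reload the final checkpoint saved by the training loop and sample digits
gen = Generator(z_dim, image_dim).to(device)
gen.load_state_dict(torch.load(f"models/generator_epoch_{num_epochs}.pth", map_location=device))
gen.eval()  # use BatchNorm running statistics for inference

with torch.no_grad():
    z = torch.randn(16, z_dim).to(device)
    samples = gen(z).reshape(-1, 1, 28, 28)
torchvision.utils.save_image(samples, "samples/final_grid.png", normalize=True, nrow=4)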

Output

[Figures: grids of generated digits at successive training epochs, the corresponding TensorBoard image logs, and the Generator/Discriminator loss curves.]
Chapter 5: Conclusion

5.1 Project Summary:


This project successfully implemented a basic Generative Adversarial Network (GAN) to
generate realistic synthetic images using the MNIST dataset. The primary objective of
understanding the working of GANs and the dynamics between the Generator and
Discriminator networks through adversarial training was achieved. The model was able to
produce synthetic handwritten digits that visually resembled the real samples, validating
the effectiveness of the approach.
5.2 Key Learnings and Future Scope:
This work highlights the vast potential of GANs in areas such as data augmentation,
creative content generation, and enhancement of machine learning models using
synthetic data. However, despite their capabilities, GANs present challenges like training
instability and mode collapse. Future improvements will focus on stabilizing the training
process and experimenting with advanced GAN architectures, including Conditional
GANs and StyleGAN. This project forms a strong foundation for deeper exploration and
real-world application of GANs in various AI domains.

REFERENCES
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. arXiv preprint arXiv:1406.2661, 2014. https://arxiv.org/abs/1406.2661

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv preprint arXiv:1511.06434, 2015. https://arxiv.org/abs/1511.06434

PyTorch Documentation: Neural Networks (torch.nn). https://pytorch.org/docs/stable/nn.html

MNIST Handwritten Digit Dataset. http://yann.lecun.com/exdb/mnist/

TensorBoard: Visualizing Learning. https://www.tensorflow.org/tensorboard

Towards Data Science. A Beginner's Guide to GANs with Code in PyTorch. https://towardsdatascience.com/a-beginners-guide-to-generative-adversarial-networks-gans-with-pytorch-code-79a36549f0c4

Erik Linder-Norén. PyTorch-GAN (GitHub repository). https://github.com/eriklindernoren/PyTorch-GAN

