AutoEncoders and GANs
• Data-specific & lossy
• Hyperparameters: code size, number of layers, number of nodes per layer & loss function
Hyperparameters
• Code size
• Number of layers
• Number of nodes per layer
• Loss function
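To make these concrete, here is a minimal sketch (the layer sizes are illustrative assumptions, not values from the slides) showing where each hyperparameter appears in a Keras model:

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
h = Dense(128, activation='relu')(inputs)      # number of nodes per layer
code = Dense(32, activation='relu')(h)         # code size: width of the bottleneck
h = Dense(128, activation='relu')(code)        # number of layers: encoder/decoder depth
outputs = Dense(784, activation='sigmoid')(h)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')  # loss function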
Applications
• Data Compression
• Feature Extraction
• Image Generation
• Image Colourisation
• Watermark Removal
AutoEncoder: Code Segment for Data Compression
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist

encoding_dim = 32  # 32 floats -> compression factor of 24.5 (784 / 32), assuming the input is 784 floats

# Encoder: 784 -> 32, Decoder: 32 -> 784
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

# Standalone encoder and decoder models
encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Load MNIST, scale pixels to [0, 1], and flatten each 28x28 image to a 784-vector
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)

# Train the autoencoder to reconstruct its own input
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

# Visualise reconstructions: originals on the top row, reconstructions below
decoded_imgs = autoencoder.predict(x_test)
n = 10  # number of test digits to display
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
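Since the standalone encoder and decoder were built above, the compression step itself can be made explicit; a brief usage sketch (shapes shown for the 10,000-image MNIST test set):

encoded_imgs = encoder.predict(x_test)        # (10000, 32): the compressed codes
decoded_imgs = decoder.predict(encoded_imgs)  # (10000, 784): images rebuilt from the codes alone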
Output: original test digits (top row) and their reconstructions (bottom row)
Types of AutoEncoders
• Sparse AutoEncoders
• Convolution AutoEncoders
• Denoising AutoEncoders (used for image denoising and dimensionality reduction)
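As a minimal sketch of the denoising variant (the noise level is an illustrative assumption; x_train is the flattened MNIST data from the code segment above), the model is trained to map noise-corrupted inputs back to the clean originals:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

noise_factor = 0.5  # illustrative corruption strength
x_train_noisy = np.clip(
    x_train + noise_factor * np.random.normal(size=x_train.shape), 0., 1.)

inputs = Input(shape=(784,))
code = Dense(32, activation='relu')(inputs)
outputs = Dense(784, activation='sigmoid')(code)
denoiser = Model(inputs, outputs)
denoiser.compile(optimizer='adam', loss='binary_crossentropy')

# Noisy images in, clean images out
denoiser.fit(x_train_noisy, x_train, epochs=10, batch_size=256)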
Convolution Operation
Transpose Convolution
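The original slides illustrate these two operations with figures; as a substitute, a small Keras shape check (filter counts are illustrative) shows that a strided convolution downsamples while a transpose convolution upsamples back:

from keras.layers import Input, Conv2D, Conv2DTranspose
from keras.models import Model

x = Input(shape=(28, 28, 1))
down = Conv2D(16, (3, 3), strides=2, padding='same')(x)           # 28x28 -> 14x14
up = Conv2DTranspose(1, (3, 3), strides=2, padding='same')(down)  # 14x14 -> 28x28
Model(x, up).summary()  # prints the spatial sizes above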
AutoEncoders to Colour Gray-Scale Images
• An autoencoder is a neural network that is trained to attempt to copy its input to its output.
• Encoder: transforms the input data into a lower-dimensional representation, h = f(x), where h is the hidden layer (the code).
• Decoder: recovers the input from the low-dimensional representation, r = g(h).
• The goal of the convolutional layers is to extract potential features by applying convolutional filtering.
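A minimal sketch of this idea (the architecture sizes are assumptions, not the slides' exact model): a convolutional encoder over the one-channel grayscale input and a decoder that emits three colour channels.

from keras.layers import Input, Conv2D, Conv2DTranspose
from keras.models import Model

gray_in = Input(shape=(32, 32, 1))  # grayscale input
h = Conv2D(32, (3, 3), strides=2, activation='relu', padding='same')(gray_in)
h = Conv2D(64, (3, 3), strides=2, activation='relu', padding='same')(h)          # encoder: extract features
h = Conv2DTranspose(64, (3, 3), strides=2, activation='relu', padding='same')(h)
h = Conv2DTranspose(32, (3, 3), strides=2, activation='relu', padding='same')(h)
rgb_out = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(h)             # decoder: 3 colour channels

colorizer = Model(gray_in, rgb_out)
colorizer.compile(optimizer='adam', loss='mse')
# Train with (grayscale, colour) image pairs: colorizer.fit(x_gray, x_rgb, ...)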
Example
Test Images and Results
Generative Adversarial Networks (GANs)
Introduction
Generative modeling is an unsupervised learning approach.
It was developed and introduced by Ian J. Goodfellow in 2014.
It automatically discovers and learns patterns in input data.
The model can then be used to generate new examples that resemble the original dataset.
Generative: learns a generative model, which describes how data is generated in terms of a probabilistic model.
Adversarial: the training of the model is done in an adversarial setting.
Networks: deep neural networks are used as the artificial intelligence (AI) algorithms for training.
Cont.,
• Generator: it is trained to generate new data; in computer vision, for example, it generates new images from existing real-world images.
• Discriminator: it compares the generated images with real-world examples and classifies them as real or fake.
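This two-player game is commonly written as the minimax objective from Goodfellow et al. (2014), where the discriminator D maximises the value function and the generator G minimises it:

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]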
Types of GAN
1) DCGAN: Deep Convolutional Generative Adversarial Network
2) Conditional GAN (CGAN) and Unconditional GAN
3) Least Squares GAN (LSGAN)
4) CycleGAN
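To show what makes a GAN "conditional" (item 2 above), a sketch of the standard conditioning mechanism (layer and embedding sizes are illustrative assumptions): the class label is embedded and concatenated with the latent vector, so generation can be steered per class.

from keras.layers import Input, Dense, Embedding, Flatten, Concatenate
from keras.models import Model

latent_dim, n_classes = 100, 10  # illustrative sizes

z = Input(shape=(latent_dim,))                   # noise vector
label = Input(shape=(1,))                        # class label to condition on
le = Flatten()(Embedding(n_classes, 50)(label))  # learned 50-d label embedding
merged = Concatenate()([z, le])                  # generator sees noise + label
h = Dense(256, activation='relu')(merged)
img = Dense(28 * 28, activation='tanh')(h)

cond_generator = Model([z, label], img)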
Fake Images Generated (after 10, 90, and 200 epochs)
from math import sqrt
from numpy import zeros, ones, expand_dims, asarray
from numpy.random import randn, randint
import numpy as np
from matplotlib import pyplot
from keras.datasets import fashion_mnist
from keras.models import Model, load_model
from keras.layers import Input, Dense, Reshape, Flatten
from keras.layers import Conv2D, Conv2DTranspose, Concatenate
from keras.layers import LeakyReLU, Dropout, Embedding
from keras.layers import BatchNormalization, Activation
from keras import initializers
from keras.initializers import RandomNormal
from keras.optimizers import Adam, RMSprop, SGD

# Load Fashion-MNIST and scale pixels to [-1, 1] to match the generator's tanh output
(X_train, _), (_, _) = fashion_mnist.load_data()
X_train = X_train.astype(np.float32) / 127.5 - 1
X_train = np.expand_dims(X_train, axis=3)
print(X_train.shape)

# Sample points from the latent space as generator input
def generate_latent_points(latent_dim, n_samples):
    x_input = randn(latent_dim * n_samples)
    z_input = x_input.reshape(n_samples, latent_dim)
    return z_input

# Use the generator to create fake images, with class label 0 ("fake")
def generate_fake_samples(generator, latent_dim, n_samples):
    z_input = generate_latent_points(latent_dim, n_samples)
    images = generator.predict(z_input)
    y = zeros((n_samples, 1))
    return images, y
# Periodically plot a 10x10 grid of generated samples and save the generator
def summarize_performance(step, g_model, latent_dim, n_samples=100):
    X, _ = generate_fake_samples(g_model, latent_dim, n_samples)
    X = (X + 1) / 2.0  # rescale from [-1, 1] back to [0, 1]
    for i in range(100):
        pyplot.subplot(10, 10, 1 + i)
        pyplot.axis('off')
        pyplot.imshow(X[i, :, :, 0], cmap='gray_r')
    filename2 = 'model_%04d.h5' % (step + 1)
    g_model.save(filename2)
    print('>Saved: %s' % filename2)

# Plot generated examples in a square grid (n_examples must be a perfect square)
def save_plot(examples, n_examples):
    side = int(sqrt(n_examples))
    for i in range(n_examples):
        pyplot.subplot(side, side, 1 + i)
        pyplot.axis('off')
        pyplot.imshow(examples[i, :, :, 0], cmap='gray_r')
    pyplot.show()
# Discriminator: MLP that classifies 28x28 images as real (1) or fake (0)
def define_discriminator(in_shape=(28, 28, 1)):
    init = RandomNormal(stddev=0.02)
    in_image = Input(shape=in_shape)
    fe = Flatten()(in_image)
    fe = Dense(1024)(fe)
    fe = LeakyReLU(alpha=0.2)(fe)
    fe = Dropout(0.3)(fe)
    fe = Dense(512)(fe)
    fe = LeakyReLU(alpha=0.2)(fe)
    fe = Dropout(0.3)(fe)
    fe = Dense(256)(fe)
    fe = LeakyReLU(alpha=0.2)(fe)
    fe = Dropout(0.3)(fe)
    out = Dense(1, activation='sigmoid')(fe)
    model = Model(in_image, out)
    opt = Adam(lr=0.0002, beta_1=0.5)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    return model

# Generator: maps a latent vector to a 28x28 image in [-1, 1]
def define_generator(latent_dim):
    init = RandomNormal(stddev=0.02)
    in_lat = Input(shape=(latent_dim,))
    gen = Dense(256, kernel_initializer=init)(in_lat)
    gen = LeakyReLU(alpha=0.2)(gen)
    gen = Dense(512, kernel_initializer=init)(gen)
    gen = LeakyReLU(alpha=0.2)(gen)
    gen = Dense(1024, kernel_initializer=init)(gen)
    gen = LeakyReLU(alpha=0.2)(gen)
    gen = Dense(28 * 28 * 1, kernel_initializer=init)(gen)
    out_layer = Activation('tanh')(gen)
    out_layer = Reshape((28, 28, 1))(out_layer)
    model = Model(in_lat, out_layer)
    return model

generator = define_generator(100)

# Combined GAN: freeze the discriminator and train the generator through it
def define_gan(g_model, d_model):
    d_model.trainable = False
    gan_output = d_model(g_model.output)
    model = Model(g_model.input, gan_output)
    opt = Adam(lr=0.0002, beta_1=0.5)
    model.compile(loss='binary_crossentropy', optimizer=opt)
    return model
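The slides end before the training loop; below is a minimal sketch of the usual alternating procedure (batch size and step counts are illustrative assumptions) using the helpers defined above.

# Assemble the models
discriminator = define_discriminator()
gan_model = define_gan(generator, discriminator)

latent_dim, n_steps, batch = 100, 10000, 64
half = batch // 2

for step in range(n_steps):
    # 1) Train the discriminator on half a batch of real and half a batch of fake images
    idx = randint(0, X_train.shape[0], half)
    X_real, y_real = X_train[idx], ones((half, 1))
    X_fake, y_fake = generate_fake_samples(generator, latent_dim, half)
    discriminator.train_on_batch(X_real, y_real)
    discriminator.train_on_batch(X_fake, y_fake)

    # 2) Train the generator through the frozen discriminator, with inverted labels
    z = generate_latent_points(latent_dim, batch)
    gan_model.train_on_batch(z, ones((batch, 1)))

    if (step + 1) % 1000 == 0:
        summarize_performance(step, generator, latent_dim)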