
Autoencoders Tutorial: A Beginner’s Guide to Autoencoders


Last updated on May 14, 2020 | 14.3K Views

Sayantini
A Data Science Enthusiast with in-hand skills in programming languages such as...

Artificial Intelligence encircles a wide range of technologies and techniques that enable computer systems to solve problems like data compression, which is used in computer vision, computer networks, computer architecture, and many other fields. Autoencoders are unsupervised neural networks that use machine learning to do this compression for us. This Autoencoders Tutorial will provide you with a complete insight into autoencoders in the following sequence:

What are Autoencoders?
The need for Autoencoders
Applications of Autoencoders
Architecture of Autoencoders
Properties & Hyperparameters
Types of Autoencoders
Data Compression using Autoencoders (Demo)

Let’s begin with the most fundamental and essential question: what are autoencoders?

What are Autoencoders?


An autoencoder neural network is an unsupervised machine learning algorithm that applies backpropagation, setting the target values to be equal to the inputs. Autoencoders are used to reduce the size of our inputs into a smaller representation. If anyone needs the original data, they can reconstruct it from the compressed data.

We have a similar machine learning algorithm, i.e. PCA, which does the same task. So you might be thinking: why do we need Autoencoders then? Let’s continue this Autoencoders Tutorial and find out the reason behind using Autoencoders.

Autoencoders Tutorial: Its Emergence


Autoencoders are preferred over PCA because:

An autoencoder can learn non-linear transformations with a non-linear activation function and multiple layers.
It doesn’t have to learn dense layers. It can use convolutional layers to learn, which is better for video, image and series data.
It is more efficient to learn several layers with an autoencoder rather than learn one huge transformation with PCA.
An autoencoder provides a representation of each layer as the output.
It can make use of pre-trained layers from another model to apply transfer learning to enhance the encoder/decoder.

Now let’s have a look at a few Industrial Applications of Autoencoders.

Applications of Autoencoders


Image Coloring

Autoencoders are used for converting any black and white picture into a colored image. Depending on what is in the picture, it is possible to tell what the color should be.

Feature Variation

It extracts only the required features of an image and generates the output by removing any noise or unnecessary interruption.

Dimensionality Reduction

The reconstructed image is the same as our input but with reduced dimensions. It helps in providing a similar image with a reduced number of pixel values.

Denoising Image

The input seen by the autoencoder is not the raw input but a stochastically corrupted version. A denoising autoencoder is thus trained to reconstruct the original input from the noisy version (a minimal sketch of this corruption step follows this list).

Watermark Removal

It is also used for removing watermarks from images or to remove any object while filming a video or a movie.
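
As promised above, here is a minimal sketch of the corruption step for a denoising setup. This snippet is not part of the original post; the 0.5 noise level is an assumed value, and the commented-out fit call refers to the autoencoder model built in the demo at the end of this tutorial.

import numpy as np
from keras.datasets import mnist

# Load MNIST, scale pixels to [0, 1] and flatten the 28x28 images to 784-length vectors
(x_train, _), (_, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), 784))

# Stochastically corrupt the input with Gaussian noise (0.5 is an assumed noise level)
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)

# A denoising autoencoder is then fit to map the noisy version back to the clean original,
# e.g. with the `autoencoder` model built in the demo at the end of this tutorial:
# autoencoder.fit(x_train_noisy, x_train, epochs=50, batch_size=256)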

Now that you have an idea of the different industrial applications of Autoencoders, let’s continue our Autoencoders Tutorial Blog and understand the complex architecture of Autoencoders.

Architecture of Autoencoders
An Autoencoder consists of three layers:

1. Encoder
2. Code
3. Decoder


Encoder: This part of the network compresses the input into a latent space representation. The encoder layer encodes the input image as a compressed representation in a reduced dimension. The compressed image is a distorted version of the original image.

Code: This part of the network represents the compressed input which is fed to the decoder.

Decoder: This layer decodes the encoded image back to the original dimension. The decoded image is a lossy reconstruction of the original image, and it is reconstructed from the latent space representation.

The layer between the encoder and decoder, i.e. the code, is also known as the Bottleneck. This is a well-designed approach to decide which aspects of the observed data are relevant information and which aspects can be discarded. It does this by balancing two criteria:

Compactness of representation, measured as the compressibility.
Retention of some behaviourally relevant variables from the input.
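
As a minimal sketch of this encoder-code-decoder layout (not from the original post; the 784-dimensional input, the 32-node code and the choice of activations are assumptions borrowed from the demo later in this tutorial):

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))                           # original input
code = Dense(32, activation='relu')(input_img)            # Encoder output = the Code (bottleneck)
reconstruction = Dense(784, activation='sigmoid')(code)   # Decoder: back to the original dimension

autoencoder = Model(input_img, reconstruction)
autoencoder.summary()                                     # shows the 784 -> 32 -> 784 structure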

Now that you have an idea of the architecture of an Autoencoder, let’s continue our Autoencoders Tutorial and understand the different properties and the hyperparameters involved while training Autoencoders.


Properties and Hyperparameters

Properties of Autoencoders:

Data-specific: Autoencoders are only able to compress data similar to what they have been trained on.

Lossy: The decompressed outputs will be degraded compared to the original inputs.


Learned automatically from examples: It is easy to train specialized instances of the algorithm that will perform well on a specific type of input.

Hyperparameters of Autoencoders:

There are 4 hyperparameters that we need to set before training an autoencoder:

Code size: It represents the number of nodes in the middle layer. A smaller size results in more compression.
Number of layers: The autoencoder can consist of as many layers as we want.
Number of nodes per layer: The number of nodes per layer decreases with each subsequent layer of the encoder, and increases back in the decoder. The decoder is symmetric to the encoder in terms of the layer structure.
Loss function: We either use mean squared error or binary cross-entropy. If the input values are in the range [0, 1] then we typically use cross-entropy; otherwise, we use mean squared error.
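
To tie the four hyperparameters together, here is a hedged sketch (the code size of 32, the 128-node intermediate layers and the symmetric layout are illustrative assumptions, not values prescribed by this blog):

from keras.layers import Input, Dense
from keras.models import Model

code_size = 32                                   # Code size: nodes in the middle layer

inp = Input(shape=(784,))
h = Dense(128, activation='relu')(inp)           # encoder: node count decreases per layer
code = Dense(code_size, activation='relu')(h)    # the bottleneck / code layer
h = Dense(128, activation='relu')(code)          # decoder mirrors the encoder symmetrically
out = Dense(784, activation='sigmoid')(h)

autoencoder = Model(inp, out)
# Loss function: inputs in [0, 1] -> binary cross-entropy; otherwise mean squared error
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')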

Now that you know the properties and hyperparameters involved in training Autoencoders, let’s move forward with our Autoencoders Tutorial and understand the different types of autoencoders and how they differ from each other.

Types of Autoencoders
Convolution Autoencoders

Autoencoders in their traditional formulation do not take into account the fact that a signal can be seen as a sum of other signals. Convolutional Autoencoders use the convolution operator to exploit this observation. They learn to encode the input as a set of simple signals and then try to reconstruct the input from them, modifying the geometry or the reflectance of the image.

Use cases of CAE:

Image Reconstruction
Image Colorization
Latent space clustering
Generating higher-resolution images
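
Below is a minimal convolutional autoencoder sketch for 28x28 grayscale images, written in the same Keras style as the demo later in this tutorial; the filter counts and depths are illustrative assumptions:

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

input_img = Input(shape=(28, 28, 1))

# Encoder: convolution + pooling compress the image into a small feature map
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)            # 7x7x8 code

# Decoder: convolution + upsampling reconstruct the original resolution
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

conv_autoencoder = Model(input_img, decoded)
conv_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')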

Sparse Autoencoders

Sparse autoencoders offer us an alternative method for introducing an information bottleneck without requiring a reduction in the number of nodes at our hidden layers. Instead, we’ll construct our loss function such that we penalize activations within a layer.
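
A minimal sketch of this idea (the 128-node hidden layer and the L1 strength of 1e-5 are assumed values) keeps a wide hidden layer but attaches an L1 activity regularizer to it, so most activations are pushed towards zero:

from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

input_img = Input(shape=(784,))
# The hidden layer keeps a generous 128 nodes, but its activations are
# penalized with an L1 term added to the loss, encouraging sparsity.
encoded = Dense(128, activation='relu',
                activity_regularizer=regularizers.l1(1e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)

sparse_autoencoder = Model(input_img, decoded)
sparse_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')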



Deep Autoencoders

The extension of the simple Autoencoder is the Deep Autoencoder. The first layer of the Deep Autoencoder is used for first-order features in the raw input. The second layer is used for second-order features corresponding to patterns in the appearance of first-order features. Deeper layers of the Deep Autoencoder tend to learn even higher-order features.

A deep autoencoder is composed of two symmetrical deep-belief networks:

1. The first four or five shallow layers, representing the encoding half of the net.
2. The second set of four or five layers, making up the decoding half.
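
In practice the same idea is often realized as a plain stacked autoencoder trained end-to-end rather than with RBM pre-training; the following is a minimal sketch with an assumed symmetric 784-128-64-32-64-128-784 layout, not the deep-belief-network formulation described above:

from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))

# Encoding half: successive layers learn increasingly higher-order features
x = Dense(128, activation='relu')(input_img)
x = Dense(64, activation='relu')(x)
encoded = Dense(32, activation='relu')(x)

# Decoding half: mirrors the encoder back to the input dimension
x = Dense(64, activation='relu')(encoded)
x = Dense(128, activation='relu')(x)
decoded = Dense(784, activation='sigmoid')(x)

deep_autoencoder = Model(input_img, decoded)
deep_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')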

Use cases of Deep Autoencoders

Image Search
Data Compression
Topic Modeling & Information Retrieval (IR)

Contractive Autoencoders

A contractive autoencoder is an unsupervised deep learning technique that helps a neural network encode unlabeled training data. This is accomplished by constructing a loss term which penalizes large derivatives of our hidden layer activations with respect to the input training examples, essentially penalizing instances where a small change in the input leads to a large change in the encoding space.
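
A hedged sketch of such a loss term follows, assuming a single sigmoid encoder layer (so that the Jacobian of the hidden activations has the closed form dh/dx = h(1-h) * W) and an assumed penalty weight lam = 1e-4; this is an illustration, not a full training script:

import tensorflow as tf

def contractive_loss(x, x_hat, h, encoder_weights, lam=1e-4):
    """Reconstruction error plus the squared Frobenius norm of the encoder
    Jacobian dh/dx, assuming h comes from a single sigmoid Dense layer
    with weight matrix `encoder_weights` of shape (input_dim, hidden_dim)."""
    recon = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=1))   # per-example SSE, averaged
    dh = h * (1.0 - h)                                          # sigmoid derivative, shape (batch, hidden)
    w_sq = tf.reduce_sum(tf.square(encoder_weights), axis=0)    # column norms of W, shape (hidden,)
    frob = tf.reduce_mean(tf.reduce_sum(tf.square(dh) * w_sq, axis=1))    # mean ||dh/dx||_F^2
    return recon + lam * frob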


Now that you have an idea of what Autoencoders are, their different types and their properties, let’s move ahead with our Autoencoders Tutorial and understand a simple implementation using TensorFlow in Python.


Data Compression using Autoencoders (Demo)

Let’s import the required libraries

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.datasets import mnist
import matplotlib.pyplot as plt

Declaration of Hidden Layers and Variables

# this is the size of our encoded representations
encoding_dim = 32  # 32 floats -> compression factor of 24.5, assuming the input is 784 floats

# this is our input placeholder
input_img = Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
# this model maps an input to its encoded representation
encoder = Model(input_img, encoded)
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))
# configure our model to use a per-pixel binary crossentropy loss, and the Adadelta optimizer
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

Preparing the input data (MNIST Dataset)



(x_train, _), (x_test, _) = mnist.load_data()
# normalize all values between 0 and 1 and flatten the 28x28 images into vectors of size 784
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)

Training Autoencoders for 50 epochs

autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
# encode and decode some digits
# note that we take them from the *test* set
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)

Visualizing the reconstructed inputs and the encoded representations using Matplotlib

n = 20  # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Input: the original MNIST test digits (image).


Output: the reconstructed digits produced by the autoencoder (image).

Now with this, we come to the end of this Autoencoders Tutorial. I hope you guys enjoyed this article and understood the power of TensorFlow, and how easy it is to compress and decompress images. So, if you have read this, you are no longer a newbie to Autoencoders. Try out these examples and let me know if there are any challenges you are facing while deploying the code.

Autoencoders Tutorial using TensorFlow | Edureka


This video provides you with a brief introduction to autoencoders and how they compress unsupervised data.


Now that you have understood the basics of Autoencoders, check out the AI and Deep Learning With TensorFlow course by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. This Certification Training is curated by industry professionals as per industry requirements and demands. You will master concepts such as the SoftMax function, Autoencoder Neural Networks, and Restricted Boltzmann Machines (RBM), and work with libraries like Keras & TFLearn. Got a question for us? Please mention it in the comments section of “Autoencoders Tutorial” and we will get back to you.
