Autoencoder NPTEL Presentation
• An autoencoder maps the input through a bottleneck: h = f(x) (encoder), x̂ = g(h) (decoder).
• Where:
• - x is the input
• - h is the latent code
• - x̂ is the reconstructed output
• - Loss: Mean Squared Error (MSE), L(x, x̂) = ‖x − x̂‖²
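The encode/decode loop above can be sketched end to end in plain NumPy. This is a minimal illustration, not part of the original slides: the layer sizes, learning rate, and toy data are arbitrary choices.

```python
import numpy as np

# Minimal autoencoder sketch: one tanh hidden layer as the bottleneck
# code, a linear decoder, and MSE minimized by plain gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # 200 samples, 8 features (toy data)
W1 = rng.normal(scale=0.1, size=(8, 3))   # encoder weights: 8 -> 3
W2 = rng.normal(scale=0.1, size=(3, 8))   # decoder weights: 3 -> 8

losses = []
lr = 0.05
for _ in range(300):
    H = np.tanh(X @ W1)                   # latent code h = f(x)
    Xhat = H @ W2                         # reconstruction x_hat = g(h)
    err = Xhat - X                        # proportional to dL/dXhat
    losses.append(np.mean(err ** 2))      # MSE loss
    gW2 = H.T @ err / len(X)              # backprop through decoder
    gH = err @ W2.T
    gW1 = X.T @ (gH * (1 - H ** 2)) / len(X)  # tanh' = 1 - tanh^2
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the 3-unit bottleneck is narrower than the 8-dimensional input, the network cannot copy x through and must learn a compressed code; the reconstruction error drops over training.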
Types of Autoencoders
• - Undercomplete Autoencoder
• - Sparse Autoencoder
• - Denoising Autoencoder
• - Variational Autoencoder (VAE)
• - Convolutional Autoencoder
Applications of Autoencoders
• - Dimensionality Reduction
• - Image Denoising
• - Anomaly Detection
• - Pretraining for Deep Networks
• - Data Generation (VAEs)
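Anomaly detection, from the list above, works by scoring each point with its reconstruction error: inputs far from the training manifold reconstruct poorly. A minimal sketch, using the closed-form linear case (top principal components via SVD) in place of a trained network; the synthetic data is made up for illustration.

```python
import numpy as np

# Anomaly detection by reconstruction error (sketch). The optimal linear
# autoencoder spans the top principal components, so we fit via SVD
# instead of training; a learned model would be scored the same way.
rng = np.random.default_rng(1)
# "Normal" data lies near a 2-D plane inside 10-D space.
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:2].T                              # 10 x 2 encode/decode weights

def recon_error(x):
    code = (x - mean) @ V                 # encode to 2-D
    xhat = code @ V.T + mean              # decode back to 10-D
    return float(np.sum((x - xhat) ** 2))

typical = recon_error(normal[0])
outlier = recon_error(5 * rng.normal(size=10))   # off-manifold point
print(f"typical: {typical:.4f}  outlier: {outlier:.1f}")
```

In practice a threshold on the reconstruction error (chosen on held-out normal data) separates inliers from anomalies.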
Example – Image Denoising
• - Input: Noisy image
• - Autoencoder learns to output clean image
• - Learns to remove noise via reconstruction
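The key detail in the denoising setup is the loss pairing: the network receives the corrupted input but is scored against the clean original, so simply copying the input is penalized. A small sketch of that objective (the noise level and array shapes here are arbitrary):

```python
import numpy as np

# Denoising objective (sketch): corrupt the input, keep the clean target.
rng = np.random.default_rng(2)
clean = rng.normal(size=(100, 16))                   # stand-in for clean images
noisy = clean + 0.3 * rng.normal(size=clean.shape)   # Gaussian corruption

def denoising_loss(model, noisy, clean):
    # MSE against the CLEAN target, not the noisy input.
    return float(np.mean((model(noisy) - clean) ** 2))

def identity(x):          # a model that just copies its input...
    return x

loss = denoising_loss(identity, noisy, clean)
print(f"identity-model loss: {loss:.3f}")  # ...still pays roughly the noise power (~0.3**2)
```

A trained denoising autoencoder drives this loss below the noise floor by mapping noisy inputs back toward the data manifold.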
Comparison with PCA
Feature              | PCA  | Autoencoder
---------------------|------|----------------
Linear               | Yes  | Not necessarily
Interpretability     | High | Low
Complexity           | Low  | High
Nonlinear capability | No   | Yes
Advanced Topic – VAEs
• - VAEs learn a distribution over latent space.
• - Enable data generation via sampling.
• - Combine encoding with probabilistic models.
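Sampling from the learned latent distribution is usually done with the reparameterization trick: the encoder outputs a mean and log-variance, and z = mu + sigma * eps keeps the draw differentiable in mu and sigma. A sketch with hypothetical encoder outputs (the mu/log_var values below are made up):

```python
import numpy as np

# Reparameterization trick (sketch): z ~ N(mu, diag(exp(log_var))),
# written as a deterministic function of (mu, log_var) plus N(0, I) noise.
rng = np.random.default_rng(3)
mu = np.array([0.5, -1.0])        # hypothetical encoder outputs
log_var = np.array([0.0, -2.0])

eps = rng.normal(size=(10_000, 2))          # noise ~ N(0, I)
z = mu + np.exp(0.5 * log_var) * eps        # differentiable w.r.t. mu, log_var

# KL(N(mu, sigma^2) || N(0, 1)) per batch element: the regularizer that,
# added to the reconstruction loss, keeps the latent space well-behaved.
kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
print(z.mean(axis=0), z.std(axis=0), round(kl, 3))
```

The empirical mean and standard deviation of z match mu and exp(0.5 * log_var), confirming the samples follow the intended distribution while gradients can still flow to the encoder outputs.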
Limitations of Autoencoders
• - May memorize inputs if over-parameterized
• - Sensitive to hyperparameters
• - Reconstruction may not be meaningful
• - Limited generalization if not trained well
Summary
• - Autoencoders learn to encode and decode data.
• - Useful in many ML tasks including generation and compression.
• - Several variants exist to suit different tasks.
• - Based on neural networks, often unsupervised.