Neural Networks Notes
Neural networks are computational models loosely inspired by the brain's structure, used to solve complex problems such as classification and prediction.
The perceptron, introduced by Frank Rosenblatt, is a simple model for binary classification.
Example: A perceptron can classify whether an email is spam or not based on inputs like the subject line and sender.
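Below is a minimal perceptron training sketch in NumPy; the two-feature spam encoding (keyword and link counts) and the toy labels are made up for illustration.

```python
import numpy as np

# Toy data: x = [num_spam_keywords, num_links], y = 1 for spam, 0 for not spam.
X = np.array([[3, 5], [0, 1], [4, 2], [1, 0]])
y = np.array([1, 0, 1, 0])

w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1

for _ in range(20):                        # a few passes over the data
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0  # step activation
        w += lr * (yi - pred) * xi         # Rosenblatt update rule
        b += lr * (yi - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # predictions on training data
```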
Neurons perform a weighted summation of their inputs and apply an activation function like sigmoid or tanh.
Example: Predicting house prices using inputs like area, location, and number of bedrooms.
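A single neuron is just a weighted sum followed by a nonlinearity; in this sketch the feature values and weights are invented, and sigmoid is used as the activation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: weighted sum of inputs followed by a nonlinearity.
x = np.array([120.0, 3.0, 2.0])   # e.g. area (m^2), bedrooms, bathrooms
w = np.array([0.02, 0.5, 0.3])    # illustrative weights
b = -1.0

z = w @ x + b        # weighted summation
a = sigmoid(z)       # activation
print(z, a)
```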
These networks pass information forward through multiple layers, allowing them to model complex patterns.
Example: Image recognition in which raw pixel data is processed through layers to identify objects.
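A forward pass through a small feed-forward network might look like the sketch below; the layer sizes and the random input (standing in for a flattened 8x8 image) are arbitrary.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Forward pass: raw inputs -> hidden layer 1 -> hidden layer 2 -> class scores.
rng = np.random.default_rng(0)
x = rng.random(64)                   # e.g. a flattened 8x8 image
W1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = rng.normal(size=(16, 32)), np.zeros(16)
W3, b3 = rng.normal(size=(10, 16)), np.zeros(10)

h1 = relu(W1 @ x + b1)   # first hidden layer
h2 = relu(W2 @ h1 + b2)  # second hidden layer
scores = W3 @ h2 + b3    # one score per class
print(scores.argmax())   # predicted class index
```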
5. Back-propagation Learning
Back-propagation minimizes the loss by propagating the output error backward through the network, using the chain rule to compute each layer's gradient and adjust its weights.
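A minimal sketch of one back-propagation step for a single-hidden-layer network with squared-error loss; the data and layer sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((8, 4)); y = rng.random((8, 1))     # toy batch
W1, W2 = rng.normal(size=(4, 5)), rng.normal(size=(5, 1))
lr = 0.1

# forward pass
H = sigmoid(X @ W1)
out = H @ W2
err = out - y                        # dLoss/dout for 0.5*(out - y)^2

# backward pass: propagate the error toward the input, layer by layer
dW2 = H.T @ err / len(X)
dH = err @ W2.T * H * (1 - H)        # chain rule through the sigmoid
dW1 = X.T @ dH / len(X)

W1 -= lr * dW1                       # adjust weights against the gradient
W2 -= lr * dW2
```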
6. Empirical Risk Minimization
Empirical risk minimization aims to minimize the average error on the training data, while the bias-variance tradeoff determines how well that training performance generalizes to unseen data.
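As a reference, the empirical risk is the average loss over the n training examples, for some per-example loss L (e.g. squared error):

```latex
\hat{R}(f) = \frac{1}{n} \sum_{i=1}^{n} L\bigl(f(x_i),\, y_i\bigr)
```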
7. Regularization
- Hidden Units: Activation functions like tanh and ReLU introduce non-linearity.
Example: ReLU activation can be used in hidden layers to model complex relationships.
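For concreteness, here are the two nonlinearities as plain NumPy functions, evaluated on a few sample points:

```python
import numpy as np

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

z = np.linspace(-2, 2, 5)
print(tanh(z))   # saturates at -1 and 1
print(relu(z))   # zero for negative inputs, identity otherwise
```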
1. Introduction to Deep Networks
Deep Neural Networks (DNNs) have multiple hidden layers, allowing them to model more complex data.
2. Greedy Layer-wise Training
Training the network one layer at a time simplifies optimization and provides an efficient weight initialization.
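A rough sketch of greedy layer-wise pretraining, where each layer is trained as a small autoencoder on the previous layer's outputs; the linear decoder, learning rate, and epoch count are all illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layers(X, layer_sizes, lr=0.1, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    weights, inputs = [], X
    for size in layer_sizes:
        W = rng.normal(0, 0.1, (inputs.shape[1], size))   # encoder
        V = rng.normal(0, 0.1, (size, inputs.shape[1]))   # decoder, discarded later
        for _ in range(epochs):
            H = sigmoid(inputs @ W)          # encode
            R = H @ V                        # decode (linear reconstruction)
            err = R - inputs                 # reconstruction error
            dV = H.T @ err / len(inputs)     # gradients of squared loss
            dH = err @ V.T * H * (1 - H)
            dW = inputs.T @ dH / len(inputs)
            W -= lr * dW
            V -= lr * dV
        weights.append(W)
        inputs = sigmoid(inputs @ W)         # representation fed to next layer
    return weights

weights = pretrain_layers(np.random.default_rng(1).random((20, 6)), [5, 4])
```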
3. Optimization Methods
Example: The Adam optimizer is widely used in NLP tasks for faster convergence.
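A single Adam update step can be written directly from the update rule; the hyperparameter defaults below are the commonly cited ones, and the quadratic toy loss is just for demonstration.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2         # second-moment estimate
    m_hat = m / (1 - b1**t)                 # bias correction
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 4):
    grad = 2 * w                            # gradient of a toy loss ||w||^2
    w, m, v = adam_step(w, grad, m, v, t)
print(w)
```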
4. Regularization Techniques
1. Introduction to CNN
CNNs are specialized for processing structured grid data like images.
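The core operation is sliding a small filter over the image; here is a bare-bones sketch (most libraries implement cross-correlation and call it convolution, as done below):

```python
import numpy as np

# Valid 2D convolution of a single-channel image with one filter.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)
edge_filter = np.array([[1.0, 0.0, -1.0]] * 3)  # crude vertical-edge detector
print(conv2d(image, edge_filter))
```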
2. Deep CNNs
Architectures like LeNet (for digit recognition) and AlexNet (for ImageNet classification) demonstrated how stacking more convolutional layers improves recognition accuracy.
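For illustration, a LeNet-style layer stack can be written in a few lines of PyTorch; the sizes below follow the classic 32x32 grayscale setup and are a sketch, not the exact original architecture.

```python
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),            # 10 digit classes
)
```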
3. Training CNNs
Proper weight initialization, batch normalization, and optimization methods ensure better training.
Example: Using a pre-trained convnet like ResNet to classify new images with transfer learning.
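A transfer-learning sketch along those lines (assuming torchvision >= 0.13 for the weights API, and a hypothetical 5-class target task):

```python
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on ImageNet
for p in model.parameters():
    p.requires_grad = False                        # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 5)      # new trainable head
# ...then train only model.fc on the new dataset.
```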
1. Sequence Modeling
2. Backpropagation Through Time (BPTT)
BPTT extends backpropagation to sequential data by unrolling the RNN over time.
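Unrolling makes the shared weights explicit: the same matrices are applied at every time step, and BPTT runs the chain rule backward through the stored states. A NumPy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (8, 3))    # input -> hidden
Wh = rng.normal(0, 0.1, (8, 8))    # hidden -> hidden (shared across steps)
b = np.zeros(8)

xs = rng.random((5, 3))            # sequence of 5 inputs
h = np.zeros(8)
states = []
for x in xs:                       # the "unrolled" loop over time
    h = np.tanh(Wx @ x + Wh @ h + b)
    states.append(h)               # BPTT needs these states for gradients
print(states[-1])
```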
3. Advanced Variants
- LSTM: Solves the vanishing gradient problem with gates controlling information flow.
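One LSTM time step, sketched in NumPy with made-up sizes (bias terms omitted for brevity), showing how the gates control what is kept, written, and exposed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_h = 3, 4
Wf, Wi, Wo, Wc = (rng.normal(0, 0.1, (n_h, n_in + n_h)) for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z)              # forget gate
    i = sigmoid(Wi @ z)              # input gate
    o = sigmoid(Wo @ z)              # output gate
    c = f * c + i * np.tanh(Wc @ z)  # additive cell update eases gradient flow
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.random(n_in), h, c)
print(h)
```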
1. Autoencoders