MLT Unit-4
Machine Learning Techniques Unit 4 Notes
Uploaded by guptaraman600
MOST IMPORTANT QUESTIONS: MACHINE LEARNING (AKTU), MODULE 4, PART I

Unit IV syllabus: ARTIFICIAL NEURAL NETWORKS: Perceptrons, multilayer perceptrons, gradient descent and the delta rule, multilayer networks, derivation of the backpropagation algorithm, generalization, unsupervised learning (SOM algorithm and its variants). DEEP LEARNING: Introduction, concept of a convolutional neural network, types of layers (convolutional layers, activation function, pooling, fully connected), concept of convolution (1D and 2D) layers, training of the network, case studies of CNNs, e.g. diabetic retinopathy, building a smart speaker, self-driving cars.

Ques 1) What is a perceptron?

Perceptrons are the building blocks of neural networks. The perceptron is a supervised learning algorithm for binary classification. It consists of four parts:
1. Input values (one input layer)
2. Weights and bias
3. Net sum
4. Activation function

[Figure: a perceptron; inputs x are weighted by w, summed, and passed through an activation function]

It works as follows:
a. All the inputs x are multiplied by their weights w.
b. All the multiplied values are added together; this is called the weighted sum.
c. The weighted sum is applied to the chosen activation function.

Weights show the strength of the particular node. A bias value allows you to shift the activation function curve up or down. In short, activation functions are used to map the input to the required range, such as (0, 1) or (-1, 1).

The perceptron is usually used to classify data into two parts. Therefore, it is also known as a linear binary classifier.

Ques 2) What is gradient descent? (2021-22, 2M)

Gradient descent is an optimization algorithm commonly used to train machine learning models and neural networks by finding a local minimum/maximum of a given function. This method is commonly used in machine learning (ML) and deep learning (DL) to minimize a cost/loss function.
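The perceptron steps above (multiply by weights, sum, apply an activation) can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original notes; the AND-gate weights at the end are hand-picked for the example.

```python
# A minimal perceptron sketch: weighted sum plus bias, then a step activation.
def step(z):
    """Activation function: maps the net sum to a binary class label."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    """a. multiply inputs by weights; b. add them up (plus bias); c. activate."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return step(z)

# Example: with hand-picked weights, the perceptron computes logical AND.
w, b = [1.0, 1.0], -1.5
print([perceptron([a, c], w, b) for a in (0, 1) for c in (0, 1)])  # [0, 0, 0, 1]
```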
Ques 3) What is the delta rule in gradient descent? (2021-22, 2M)

In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It is a special case of the more general backpropagation algorithm.

Backpropagation

Backpropagation is used for the training of neural networks. The backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule, or gradient descent.

In an artificial neural network, the values of the weights and biases are randomly initialized. Due to this random initialization, the neural network will initially make errors in producing the correct output. We need to reduce these error values as much as possible. So, for reducing these error values, we need a mechanism that compares the desired output of the neural network with the network's actual output and adjusts the weights and biases such that the output gets closer to the desired output after each iteration. For this, we train the network such that it propagates the error backwards and updates the weights and biases. This is the concept of the backpropagation algorithm.

Backpropagation is short for "backward propagation of errors." It is a standard method of training artificial neural networks.

Backpropagation algorithm:
Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using true weights W; the weights are usually chosen randomly.
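The delta rule update for a single linear neuron, w_i <- w_i + lr * (t - o) * x_i, can be sketched as follows. This is a minimal illustration, not from the notes; the training data, learning rate, and epoch count are our own choices.

```python
# Hypothetical sketch of the delta rule: stochastic gradient descent on the
# squared error of a single linear neuron with two inputs and a bias.
def train_delta(samples, targets, lr=0.1, epochs=200):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            o = w[0] * x[0] + w[1] * x[1] + b   # actual output of the neuron
            err = t - o                          # desired output - actual output
            # Delta rule: w_i <- w_i + lr * (t - o) * x_i
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the linear target y = x0 + 2*x1 from four points.
X = [(0, 0), (1, 0), (0, 1), (1, 1)]
y = [0, 1, 2, 3]
w, b = train_delta(X, y)
```

After training, the learned weights settle close to (1, 2) with a bias near 0, matching the target function.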
Step 3: Calculate the output of each neuron, from the input layer through the hidden layer(s) to the output layer.
Step 4: Calculate the error in the outputs:
    Backpropagation error = actual output - desired output
Step 5: From the output layer, go back to the hidden layer(s) and adjust the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved.

Why do we need backpropagation?
The most prominent advantages of backpropagation are:
+ Backpropagation is fast, simple, and easy to program.
+ It is a flexible method, as it does not require prior knowledge about the network.
+ It is a standard method that generally works well.
+ It does not need any special mention of the features of the function to be learned.

Types of backpropagation networks
The two types of backpropagation networks are:
+ Static backpropagation
+ Recurrent backpropagation

Competitive learning
The output neurons of a neural network compete among themselves to become active. Several output neurons may be active, but in competitive learning only a single output neuron is active at one time.

Ques 6) Describe the Kohonen self-organizing map and its algorithm. (2020-21, 10M)

Self-Organizing Map (Kohonen Map, or SOM)
A SOM follows an unsupervised learning approach and trains its network through a competitive learning algorithm. SOMs are used for clustering and mapping (or dimensionality reduction): they map multidimensional data onto a lower-dimensional space, which allows people to reduce complex problems to a form that is easy to interpret.

A SOM has two layers: one is the input layer and the other is the output layer. [Figure: architecture of a self-organizing map with two clusters and n input features per sample]

SOM algorithm:
Step 1: Initialize the weights. The values of the weights are initialized randomly.
Step 2: Take a sample training input vector from the input layer.
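Steps 1-6 above can be sketched for a one-hidden-layer network with sigmoid units. This is a toy illustration, not the notes' own code; the network size, learning rate, epoch count, and XOR training data are our own choices.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Steps 1-2: inputs arrive; weights are chosen randomly.
n_in, n_hid = 2, 3
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    """Step 3: propagate the input through the hidden layer to the output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
lr = 0.5
for _ in range(2000):                     # Step 6: repeat
    for x, t in data:
        h, o = forward(x)
        # Step 4: output error, scaled by the sigmoid derivative
        delta_o = (o - t) * o * (1 - o)
        # Step 5: propagate the error back and adjust weights and biases
        delta_h = [delta_o * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        W2 = [W2[j] - lr * delta_o * h[j] for j in range(n_hid)]
        b2 -= lr * delta_o
        for j in range(n_hid):
            W1[j] = [W1[j][i] - lr * delta_h[j] * x[i] for i in range(n_in)]
            b1[j] -= lr * delta_h[j]
err_after = total_error()                 # the error has been reduced
```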
Step 3: Find the best matching unit (winning neuron). To determine the best matching unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector. The node with the weight vector closest to the input vector is tagged as the winning neuron:

    distance = sqrt( sum_i (x_i - w_i)^2 )

Step 4: Find the new weights between the input vector sample and the winning output neuron:

    new weights = old weights + learning rate * (input vector - old weights)

Step 5: Repeat steps 2 to 4 until the weight updates are negligible, i.e. the new weights are similar to the old weights, or the feature map stops changing.

Convolutional Neural Networks (CNNs)
Convolutional neural networks are specially designed to work with images. An image consists of pixels, and in deep learning, images are represented as arrays of pixel values.

There are three main types of layers in a CNN:
* Convolutional layers
* Pooling layers
* Fully connected (dense) layers
In addition, activation layers are added after each convolutional layer and each fully connected layer.

There are four main types of operations in a CNN: the convolution operation, the pooling operation, the flatten operation, and the classification (or other relevant) operation.

Convolutional layers and the convolution operation: The first layer in a CNN is a convolutional layer. It takes the images as input and begins to process them. There are three elements in the convolutional layer: the input image, the filter(s), and the feature map.

[Figure: a 3x3 image section convolved with a filter to produce one feature-map entry]

Filter: This is also called a kernel or feature detector.
Image section: The size of each image section should be equal to the size of the filter(s) we choose. The number of image sections depends on the stride.
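Steps 1-5 of the SOM algorithm can be sketched as follows. This is a toy illustration with two output nodes and made-up 2-D data; note that a full SOM would also update the winner's neighbours on the map, which this winner-take-all sketch omits.

```python
import math
import random

# Step 1: randomly initialize the weights (two output nodes, two features).
random.seed(1)
n_nodes, n_features = 2, 2
weights = [[random.random() for _ in range(n_features)] for _ in range(n_nodes)]

def bmu(x):
    """Step 3: the winning neuron is the node whose weight vector has the
    smallest Euclidean distance to the input vector."""
    dists = [math.dist(x, w) for w in weights]
    return dists.index(min(dists))

# Made-up data forming two obvious clusters near (0,0) and (1,1).
data = [(0.1, 0.1), (0.2, 0.0), (0.9, 0.8), (1.0, 0.9)]
lr = 0.3
for _ in range(100):                      # Step 5: repeat until stable
    for x in data:                        # Step 2: take a sample input vector
        j = bmu(x)
        # Step 4: new = old + learning_rate * (input - old), winner only
        weights[j] = [w + lr * (xi - w) for w, xi in zip(weights[j], x)]
```

After training, inputs near the two clusters map to different winning neurons, i.e. the map has learned the two clusters.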
Feature map: The feature map stores the outputs of the different convolution operations between the different image sections and the filter(s).

Stride: The number of steps (pixels) by which we shift the filter over the input image is called the stride.

Padding: Padding adds additional pixels with zero values to each side of the image. This helps to get a feature map of the same size as the input.

Pooling layers and the pooling operation
Pooling layers are the second type of layer used in a CNN, and there can be multiple pooling layers in a network. Each convolutional layer is followed by a pooling layer, so convolutional and pooling layers are used together. A pooling layer reduces the dimensionality (number of pixels) of the output returned from the previous convolutional layer. There are three elements in the pooling layer: the feature map, the filter, and the pooled feature map.

There are two types of pooling operations:
* Max pooling: take the maximum value in the area where the filter is applied.
* Average pooling: take the average of the values in the area where the filter is applied.

Flattening: Then, we can flatten a pooled feature map that contains multiple channels.

Fully connected (dense) layers
These are the final layers in a CNN. Their input is the previous, flattened layer. There can be multiple fully connected layers, and an activation function is used in each. The final layer performs the classification (or other relevant) task: it classifies the detected features in the image into a class label.

[Figure: overall CNN architecture, from convolution and pooling through flattening to the fully connected layers]

Worked example (2020-21, 10M): Apply the given filter to the given input matrix, then apply max pooling.

[Figure: the given input matrix and filter]
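The convolution operation, stride, and feature map described above can be sketched as follows. This is a minimal illustration with our own 4x4 input and 2x2 filter; like most CNN libraries, it actually computes a cross-correlation (the filter is not flipped).

```python
# A minimal 2-D convolution sketch: slide the filter over the input with the
# given stride; each image section is multiplied element-wise with the filter
# and summed to produce one entry of the feature map.
def conv2d(image, kernel, stride=1):
    k = len(kernel)
    out_size = (len(image) - k) // stride + 1
    fmap = []
    for r in range(out_size):
        row = []
        for c in range(out_size):
            s = sum(image[r * stride + i][c * stride + j] * kernel[i][j]
                    for i in range(k) for j in range(k))
            row.append(s)
        fmap.append(row)
    return fmap

image = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(image, kernel))  # [[2, 2, 0], [0, 2, 2], [0, 1, 2]]
```

A 4x4 input with a 2x2 filter and stride 1 yields a 3x3 feature map; raising the stride to 2 shrinks it to 2x2, which shows why the number of image sections depends on the stride.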
Given: a 3x3 filter.

Step 1: Construct the feature map. The size of the kernel (filter) is 3x3, hence the size of each image section is also 3x3. Slide the filter across the input matrix one stride at a time; for each image section, multiply it element-wise with the filter and sum the products to obtain one entry of the feature map.

[Worked figures: each 3x3 image section of the input matrix is multiplied element-wise with the filter and summed, filling in the feature map entry by entry]

Step 2: Max pooling. As the filter size for pooling is not mentioned, I will assume a 2x2 filter. Take the maximum value in each region where the filter is applied to obtain the pooled feature map.

[Worked figure: max pooling with a 2x2 filter applied to the feature map]
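The max-pooling step can be sketched in the same style. The 4x4 feature-map values below are our own (the scanned matrices are illegible), with a 2x2 filter and stride 2 as assumed above.

```python
# Max pooling: take the maximum value in each region covered by the filter.
def max_pool(fmap, size=2, stride=2):
    out = []
    for r in range(0, len(fmap) - size + 1, stride):
        row = []
        for c in range(0, len(fmap[0]) - size + 1, stride):
            row.append(max(fmap[r + i][c + j]
                           for i in range(size) for j in range(size)))
        out.append(row)
    return out

fmap = [[4, 4, 3, 3],
        [1, 2, 3, 0],
        [1, 2, 3, 0],
        [0, 3, 2, 1]]
print(max_pool(fmap))  # [[4, 3], [3, 3]]
```

Each 2x2 block of the feature map collapses to its maximum, halving both dimensions of the output.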