MLT Unit-4: Machine Learning Techniques, Unit 4 Notes
MOST IMPORTANT QUESTIONS: MACHINE LEARNING (AKTU) - ENGINEER BEL
MODULE 4, PART I: ARTIFICIAL NEURAL NETWORKS

Syllabus: Perceptrons, Multilayer perceptron, Gradient descent and the Delta rule, Multilayer networks, Derivation of the Backpropagation algorithm, Generalization, Unsupervised learning - SOM algorithm and its variants; DEEP LEARNING - Introduction, concept of convolutional neural networks, Types of layers (Convolutional layers, Activation function, Pooling, Fully connected), Concept of convolution (1D and 2D) layers, Training of the network, Case studies of CNNs, e.g. on Diabetic Retinopathy, building a smart speaker, self-driving cars, etc.

Ques 1) What is a perceptron?

Perceptrons are the building blocks of neural networks. The perceptron is a supervised learning algorithm for binary classifiers. A perceptron consists of 4 parts:

1. Input values (one input layer)
2. Weights and bias
3. Net sum
4. Activation function

It works as follows:

a. All the inputs x are multiplied by their weights w.
b. Add all the multiplied values; the result is called the weighted sum.
c. Apply the weighted sum to the appropriate activation function.

Weights show the strength of a particular node. A bias value allows you to shift the activation function curve up or down. In short, activation functions are used to map the input to the required range of values, such as (0, 1) or (-1, 1).

A perceptron is usually used to classify data into two parts. Therefore, it is also known as a Linear Binary Classifier.

Ques 2) What is gradient descent? (2021-22, 2M)

Gradient descent is an optimization algorithm commonly used to train machine learning models and neural networks by finding a local minimum/maximum of a given function. This method is commonly used in machine learning (ML) and deep learning (DL) to minimize a cost/loss function.

Ques 3) What is the gradient descent delta rule? (2021-22, 2M)

In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It is a special case of the more general backpropagation algorithm.

Ques 4) Explain the backpropagation algorithm.

Backpropagation is used for the training of neural networks. The backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule, or gradient descent. In an artificial neural network, the values of the weights and biases are randomly initialized. Due to this random initialization, the neural network initially makes errors in producing the correct output. We need to reduce these error values as much as possible. To do so, we need a mechanism that compares the desired output of the neural network with the network's actual output and adjusts the weights and biases so that the output gets closer to the desired output after each iteration. For this, we train the network such that it propagates the error backwards and updates the weights and biases. This is the concept of the backpropagation algorithm.
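To make the delta rule concrete, the following is a minimal NumPy sketch of gradient descent weight updates for a single linear neuron. The training data (an AND gate), the learning rate, and the epoch count are illustrative assumptions, not taken from the notes.

    import numpy as np

    # Delta rule (Widrow-Hoff) sketch for one linear neuron; data and
    # hyperparameters below are assumed for illustration.
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # inputs
    t = np.array([0.0, 0.0, 0.0, 1.0])                              # desired outputs (AND)

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=2)  # weights, randomly initialized
    b = 0.0                            # bias
    lr = 0.1                           # learning rate

    for epoch in range(100):
        for x, target in zip(X, t):
            y = np.dot(w, x) + b       # net sum (linear activation)
            error = target - y         # delta rule error term
            w += lr * error * x        # gradient descent step on the weights
            b += lr * error            # gradient descent step on the bias

    print(w, b)  # least-squares fit to the AND targets

Each update moves the weights a small step against the error gradient; backpropagation generalizes exactly this update to multilayer networks.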
Backpropagation is short for "backward propagation of errors." It is a standard method of training artificial neural networks.

Backpropagation algorithm:

Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using the current weights W. Weights are usually chosen randomly.
Step 3: Calculate the output of each neuron from the input layer, through the hidden layer(s), to the output layer.
Step 4: Calculate the error in the outputs:
Error = Actual Output - Desired Output
Step 5: From the output layer, go back to the hidden layer(s) and adjust the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved.

Why do we need backpropagation? Its most prominent advantages are:

+ Backpropagation is fast, simple, and easy to program.
+ It is a flexible method, as it does not require prior knowledge about the network.
+ It is a standard method that generally works well.
+ It does not need any special mention of the features of the function to be learned.

Types of backpropagation networks. The two types are:

+ Static backpropagation
+ Recurrent backpropagation

Ques 5) What is competitive learning?

In competitive learning, the output neurons of a neural network compete among themselves to become active. Several output neurons may be active in other schemes, but in competitive learning only a single output neuron is active at any one time.

Ques 6) Describe the Kohonen self-organizing map and its algorithm. (2020-21, 10M)

A Self-Organizing Map (Kohonen Map, or SOM) follows an unsupervised learning approach and trains its network through a competitive learning algorithm. SOM is used for clustering and mapping (dimensionality reduction), mapping multidimensional data onto a lower-dimensional space, which allows people to reduce complex problems for easy interpretation.

A SOM has two layers: the input layer and the output layer. (The scanned notes show the architecture of a Self-Organizing Map with two output clusters and n input features per sample; the diagram is not reproduced here.)

SOM algorithm:

Step 1: Initialize the weights. The values of the weights are randomly initialized.
Step 2: Take a sample training input vector from the input layer.
Step 3: Calculate the Best Matching Unit (winning neuron). To determine the best matching unit, one method is to iterate through all the nodes and calculate the Euclidean distance between each node's weight vector and the current input vector. The node with the weight vector closest to the input vector is tagged as the winning neuron:

Distance = sqrt( sum over i of (x_i - w_i)^2 )

Step 4: Find the new weights between the input vector sample and the winning output neuron:

New Weights = Old Weights + Learning Rate x (Input Vector - Old Weights)

Step 5: Repeat steps 2 to 4 until the weight updates are negligible, that is, until the new weights are similar to the old weights or the feature map stops changing.
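As a hedged illustration of Steps 1 to 4 above, the following NumPy sketch finds the best matching unit by Euclidean distance and applies the weight-update formula. The map size, sample data, and learning rate are assumed for the example, and neighborhood shrinking is omitted to keep the focus on the steps listed in the notes.

    import numpy as np

    # One SOM training pass (illustrative; sizes and data are assumed).
    rng = np.random.default_rng(0)
    n_features, n_nodes = 3, 4
    weights = rng.random((n_nodes, n_features))   # Step 1: random weight init
    data = rng.random((10, n_features))           # sample training vectors
    lr = 0.5                                      # learning rate

    for x in data:                                # Step 2: take a sample input
        # Step 3: BMU = node whose weight vector is closest (Euclidean distance)
        distances = np.linalg.norm(weights - x, axis=1)
        bmu = np.argmin(distances)
        # Step 4: New Weights = Old Weights + lr * (Input Vector - Old Weights)
        weights[bmu] += lr * (x - weights[bmu])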
Ques 7) Explain the concept of a convolutional neural network and the types of layers in it.

Convolutional Neural Networks (CNNs) are specially designed to work with images. An image consists of pixels, and in deep learning, images are represented as arrays of pixel values.

There are three main types of layers in a CNN:

+ Convolutional layers
+ Pooling layers
+ Fully connected (dense) layers

In addition, activation layers are added after each convolutional layer and fully connected layer.

There are four main types of operations in a CNN: the convolution operation, the pooling operation, the flatten operation, and the classification (or other relevant) operation.

Convolutional layers and the convolution operation: The first layer in a CNN is a convolutional layer. It takes the images as input and begins to process them. There are three elements in the convolutional layer: the input image, the filter(s), and the feature map.

Filter: This is also called a kernel or feature detector.
Image section: The size of the image section should be equal to the size of the filter(s) we choose. The number of image sections depends on the stride.
Feature map: The feature map stores the outputs of the convolution operations between the different image sections and the filter(s).

The number of steps (pixels) by which we shift the filter over the input image is called the stride. Padding adds additional pixels with zero values to each side of the image; this helps to get a feature map of the same size as the input.

Pooling layers and the pooling operation: Pooling layers are the second type of layer used in a CNN. There can be multiple pooling layers in a CNN, and each convolutional layer is followed by a pooling layer, so convolution and pooling layers are used together. Pooling reduces the dimensionality (number of pixels) of the output returned from the previous convolutional layer. There are three elements in the pooling layer: the feature map, the filter, and the pooled feature map.

There are two types of pooling operations:

+ Max pooling: take the maximum value in the area where the filter is applied.
+ Average pooling: take the average of the values in the area where the filter is applied.

Then, we can flatten a pooled feature map that contains multiple channels.

Fully connected (dense) layers: These are the final layers in a CNN. Their input is the previous flattened layer, and there can be multiple fully connected layers. The final layer does the classification (or other relevant) task. An activation function is used in each fully connected layer. The fully connected layers classify the detected features in the image into a class label. (The scanned notes include a diagram of the overall CNN architecture here, which is not reproduced.)

Ques 8) Worked example: the convolution operation. (2020-21, 10M)

Step 1: Construct the convolution matrix. The input matrix is convolved with the given filter at a stride of 1 (given). Since the size of the kernel (filter) is 3x3, the size of each image section is also 3x3. Each 3x3 image section is multiplied elementwise with the filter and the products are summed to give one entry of the feature map; the filter is then shifted by the stride and the process is repeated over the whole input. [The binary input matrix, the filter, and the intermediate arithmetic in the scanned notes are too garbled to reproduce faithfully.]
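Because the matrices in the scanned worked example are illegible, here is a minimal NumPy sketch of the same convolution operation on an assumed binary 5x5 input with an assumed 3x3 filter at stride 1; the values are illustrative only, not the ones from the notes.

    import numpy as np

    # 2D convolution operation as described above: slide a 3x3 filter over the
    # input at stride 1, multiply each image section elementwise, and sum.
    image = np.array([[1, 1, 1, 0, 0],
                      [0, 1, 1, 1, 0],
                      [0, 0, 1, 1, 1],
                      [0, 0, 1, 1, 0],
                      [0, 1, 1, 0, 0]])   # assumed binary input matrix
    kernel = np.array([[1, 0, 1],
                       [0, 1, 0],
                       [1, 0, 1]])        # assumed 3x3 filter
    stride = 1

    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w), dtype=int)

    for i in range(out_h):
        for j in range(out_w):
            # Image section: same size as the filter, shifted by the stride
            section = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            # Elementwise multiply and sum -> one entry of the feature map
            feature_map[i, j] = np.sum(section * kernel)

    print(feature_map)  # 3x3 feature map for a 5x5 input, 3x3 filter, stride 1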
4 {0 |" I Image suet” foun oily toy dal Sela OX) +1Ko+ (Ko+ [Vt] oO; yt|= IKO+ IXL+OKLt 2 yg V jo '}1fo LXt + Oxt + Lxo (© Scanned with OKEN Scanner Le Matrix: (© Scanned with OKEN Scanner As Stride is not mentioned > max Pooling so T will assume Filter = 2*2 44 3 {3 uals |g [2| 3/3 |¢ |\ i 4 |2 |3 |o 4 | |3 |2 |4 © scanned with OKEN Scanner
