COURSE HANDOUT

School: School of Computing Dept.: CSE

Course Code : 22AI102005


Course Title : ARTIFICIAL NEURAL NETWORKS

Course Credit Structure : L  T  P  S  C
                          3  -  2  -  4
Semester : B. Tech VI Semester

Contact Hours : 45 Hours


Instructor : Mr K Arun Kumar
Instructor's Email : arunkumar.k@mbu.asia
Office Hours : All days
Academic Year : 2024-25
Date of Issue : 15–12–24

Course Code     Course Title
22AI102003      ARTIFICIAL NEURAL NETWORKS
Pre-Requisite : -

COURSE DESCRIPTION: Fundamentals of neural learning, ANN architectures, perceptron networks, ADALINE, radial basis functions, recurrent Hopfield networks, ART networks, and applications of ANN.

COURSE OUTCOMES: After successful completion of the course, students will be able to:

CO1  Understand the fundamentals of neural learning and the concepts of probability density estimation.
CO2  Construct/design single- or multi-layer neural networks for solving real-time problems.
CO3  Use appropriate error functions and apply suitable optimization techniques to improve the performance of the model.
CO4  Solve bias-variance tradeoff issues using techniques including regularization and pruning algorithms.
CO5  Work independently or in a team to solve problems with effective communication.
CO-PO-PSO Mapping Table:

Course Outcomes             | PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 | PSO1 PSO2 PSO3 PSO4
CO1                         |  3   1   -   -   -   -   -   -   -   -    -    -   |  -    -    3    -
CO2                         |  3   3   2   -   -   -   -   -   -   -    -    -   |  -    -    3    -
CO3                         |  3   3   2   -   -   -   -   -   -   -    -    -   |  -    -    3    -
CO4                         |  3   3   2   -   -   -   -   -   -   -    -    -   |  -    -    3    -
CO5                         |  -   -   -   -   -   3   -   -   -   -    -    -   |  3    -    -    -
Course Correlation Mapping  |  3   3   2   -   -   3   -   -   -   -    -    -   |  3    -    3    -

Correlation Level: 3 - High, 2 - Medium, 1 - Low

COURSE CONTENT

Module 1 INTRODUCTION (09 Periods)


Fundamentals of Neural Learning - Classification and regression, Pre-processing and feature
extraction, the curse of dimensionality, Polynomial curve fitting, Model complexity, Multivariate
non-linear functions, Bayes' theorem, Decision boundaries, Minimizing risk; Probability Density
Estimation- Parametric methods, Maximum likelihood, Bayesian inference
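
For orientation, the Bayes' theorem used here for classification takes its standard class-posterior form (notation assumed, not taken from the course materials):

\[
P(C_k \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid C_k)\, P(C_k)}{p(\mathbf{x})},
\qquad
p(\mathbf{x}) = \sum_j p(\mathbf{x} \mid C_j)\, P(C_j)
\]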

Module 2 SINGLE LAYER NETWORKS (09 Periods)


Linear discriminant functions, Linear separability, Generalized linear discriminants, Least-squares techniques, The perceptron, Fisher's linear discriminant.
Gradient-based Strategies: Learning Rate Decay, Momentum-Based Learning, Parameter-Specific Learning Rates, Cliffs and Higher-Order Instability, Gradient Clipping, Second-Order Derivatives, Polyak Averaging, Local and Spurious Minima
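
As a concrete reference for the decay and momentum topics above, two common learning-rate schedules and the standard momentum update, in assumed notation:

\[
\eta_t = \frac{\eta_0}{1 + t/\tau} \;\;\text{(inverse decay)},
\qquad
\eta_t = \eta_0\, e^{-t/\tau} \;\;\text{(exponential decay)},
\qquad
v_{t+1} = \mu v_t - \eta_t \nabla E(w_t),\quad w_{t+1} = w_t + v_{t+1}
\]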

Module 3 MULTI-LAYER PERCEPTRON & RADIAL BASIS FUNCTIONS (08 Periods)

Multi-Layer Perceptron - Feed-forward network mappings, Threshold units, Sigmoidal units,
Weight-space symmetries, Error back-propagation.
Radial Basis Functions - Radial basis function networks, Network training.
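
For reference, error back-propagation reduces to the standard recursive delta rule (assumed notation: a_j is unit j's pre-activation, h its nonlinearity, z_i the input from unit i):

\[
\delta_j = h'(a_j) \sum_k w_{kj}\, \delta_k,
\qquad
\frac{\partial E}{\partial w_{ji}} = \delta_j\, z_i
\]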

Module 4 ERROR FUNCTIONS (10 Periods)


Sum-of-squares error, Minkowski error, Input-dependent variance, Modelling conditional
distributions, Estimating posterior probabilities, Sum-of-squares for classification, Cross-entropy
for two classes, Multiple independent attributes, Cross-entropy for multiple classes, Entropy;
Parameter Optimization Algorithms - Gradient descent.
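
For reference, the two central error functions of this module take their standard forms (notation assumed):

\[
E_{\mathrm{SSE}} = \frac{1}{2} \sum_n \sum_k \bigl( y_k(\mathbf{x}^n) - t_k^n \bigr)^2,
\qquad
E_{\mathrm{CE}} = -\sum_n \bigl[ t^n \ln y^n + (1 - t^n) \ln (1 - y^n) \bigr]
\]
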
Module 5 LEARNING AND GENERALIZATION (09 Periods)
Bias and variance, Regularization, Training with noise, Soft weight sharing, Growing and pruning
algorithms, Committees of networks, Mixtures of experts.
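
The bias and variance topics rest on the standard decomposition of the expected squared error (notation assumed):

\[
\mathbb{E}\bigl[(y(\mathbf{x}) - t)^2\bigr]
= \underbrace{\bigl(\mathbb{E}[y(\mathbf{x})] - \mathbb{E}[t \mid \mathbf{x}]\bigr)^2}_{(\mathrm{bias})^2}
+ \underbrace{\mathbb{E}\bigl[\bigl(y(\mathbf{x}) - \mathbb{E}[y(\mathbf{x})]\bigr)^2\bigr]}_{\mathrm{variance}}
+ \text{noise}
\]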

Total Periods: 45
EXPERIENTIAL LEARNING

1. Classify spectral remote sensing data using Ordinary Least Squares and estimate the confusion matrix.
2. Fit, evaluate, and make predictions with the Linear Discriminant Analysis model; also tune the model's hyperparameters.
3. Implement the Perceptron Algorithm for a binary classification task and plot the decision boundary (a minimal sketch follows this list).
4. Build an employee churn prediction model using a multi-layer perceptron and evaluate the model's performance with different learning rates.
5. Analyze the efficiency of a radial basis function network for a classification task and assess its convergence ability (see the sketch after this list).
6. Evaluate the cross-entropy error function and compare its performance with the log loss (see the sketch after this list).
7. Analyze the efficiency of L1 and L2 regularization techniques in resolving overfitting issues in a multi-layer neural network model.
8. Investigate the efficiency of SGD in terms of convergence speed and accuracy with different learning rate and momentum values (see the sketch after this list).
9. Using stacked generalization, an ensemble method, build a new model to best combine the predictions from multiple existing models.
10. Using a pruning algorithm, reduce the complexity of a trained neural network.
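
The following is a minimal sketch of task 3, assuming NumPy and scikit-learn's make_blobs for toy data; the dataset, learning rate, and epoch count are illustrative choices, not prescribed by the course.

import numpy as np
from sklearn.datasets import make_blobs

# Toy two-class data (illustrative; the task's own dataset may differ).
X, y = make_blobs(n_samples=100, centers=2, random_state=0)
y = np.where(y == 0, -1, 1)             # perceptron targets in {-1, +1}

w = np.zeros(X.shape[1])                # weight vector
b = 0.0                                 # bias
eta = 0.1                               # learning rate (illustrative)

for epoch in range(20):
    for xi, ti in zip(X, y):
        if ti * (np.dot(w, xi) + b) <= 0:   # misclassified point
            w += eta * ti * xi              # perceptron update rule
            b += eta * ti

# The decision boundary is the line w[0]*x1 + w[1]*x2 + b = 0, which can
# be plotted with matplotlib as x2 = -(w[0]*x1 + b) / w[1].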
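
A minimal sketch of task 5, assuming a common two-stage fit (k-means for the centres, least squares for the output weights) and scikit-learn's make_moons for toy data; the number of centres and the basis width are illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Stage 1: place Gaussian basis centres with k-means (10 is illustrative).
centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = 1.0                                          # basis width (illustrative)

def design(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))            # Gaussian activations

# Stage 2: solve for the output weights by least squares.
Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training accuracy:", np.mean((Phi @ w > 0.5) == y))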
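
A minimal sketch of task 6: the two-class cross-entropy of Module 4 computed directly and checked against scikit-learn's log_loss, which computes the same quantity averaged over samples; the labels and probabilities are illustrative.

import numpy as np
from sklearn.metrics import log_loss

t = np.array([1, 0, 1, 1, 0])               # true labels
y = np.array([0.9, 0.2, 0.7, 0.6, 0.1])     # predicted P(class 1)

# E = -sum_n [ t_n ln y_n + (1 - t_n) ln(1 - y_n) ]
E = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))
print(E / len(t))                           # mean cross-entropy
print(log_loss(t, y))                       # same value from scikit-learn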
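
A minimal sketch of task 8: plain SGD with momentum on a linear least-squares problem, swept over a few illustrative learning-rate and momentum settings; the data and the setting grid are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

for eta, mu in [(0.01, 0.0), (0.01, 0.9), (0.1, 0.9)]:
    w, v = np.zeros(3), np.zeros(3)              # weights and velocity
    for step in range(500):
        i = rng.integers(len(X))                 # one random sample per step
        grad = 2 * (X[i] @ w - y[i]) * X[i]      # gradient of squared error
        v = mu * v - eta * grad                  # momentum update
        w = w + v
    print(f"eta={eta}, momentum={mu}, MSE={np.mean((X @ w - y) ** 2):.4f}")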

RESOURCES

TEXT BOOKS:
1. Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.

REFERENCE BOOKS:
1. Simon Haykin, Neural Networks and Learning Machines, Pearson, 2016.

SOFTWARE/TOOLS:
1. Python
2. MATLAB

VIDEO LECTURES:
1. https://www.youtube.com/watch?v=Zxs-f4HsTDk
2. https://www.youtube.com/watch?v=VQ1O-pSPX20
3. https://www.youtube.com/watch?v=zCwEdKy2OJI

WEB RESOURCES:
1. https://www.ai.rug.nl/minds/uploads/LN_NN_RUG.pdf
2. https://ocw.mit.edu/courses/9-641j-introduction-to-neural-networks-spring-2005/pages/lecture-notes/
COURSE DELIVERY SCHEDULE:

S.No. | Topic | Contact Hours | CO Mapping | Pedagogy | Resources

MODULE-1: INTRODUCTION TO NEURAL LEARNING
1.  | Introduction to Fundamentals of Neural Learning | 01 | CO1 | PPT, Open Discussion Session | TB2, RB1
2.  | Classification and regression | 01 | CO1 | Blended Learning | TB2, RB1
3.  | Pre-processing and feature extraction | 01 | CO1 | Comparative study | TB2, RB1
4.  | The curse of dimensionality | 01 | CO1 | PPT and Open Discussion Session | TB2, RB1
5.  | Polynomial curve fitting, Model complexity | 01 | CO1 | PPT | TB2, RB1
6.  | Multivariate non-linear functions | 01 | CO1 | PPT and Open Discussion Session | TB2, RB1
7.  | Bayes' theorem, Decision boundaries | 01 | CO1 | PPT and Open Discussion Session | TB2, RB1
8.  | Probability Density Estimation - Parametric methods | 01 | CO1 | PPT | TB2, RB1
9.  | Maximum likelihood, Bayesian inference | 01 | CO1 | PPT and Open Discussion Session | TB2, RB1

MODULE-2: SINGLE LAYER NETWORKS
10. | Linear discriminant functions, Linear separability | 01 | CO2 | PPT and Live demo | TB2, RB1, WR1, VL1
11. | Generalized linear discriminants | 01 | CO2 | PPT and Live demo | TB2, RB1, WR2, VL2, VL3
12. | Least-squares techniques | 01 | CO2 | PPT and Live demo | TB2, WR2
13. | The perceptron, Fisher's linear discriminant | 01 | CO2 | PPT and Live demo | TB2, WR3
14. | Gradient-based Strategies | 01 | CO2 | PPT and Live demo | TB2, RB1, WR2, VL2
15. | Learning Rate Decay, Momentum-Based Learning | 01 | CO2 | PPT and Live demo | TB2, RB1
16. | Parameter-Specific Learning Rates | 01 | CO2 | PPT and Live demo | TB2, RB1
17. | Cliffs and Higher-Order Instability, Gradient Clipping | 01 | CO2 | PPT and Live demo | TB2, RB1
18. | Second-Order Derivatives, Polyak Averaging, Local and Spurious Minima | 01 | CO2 | PPT and Live demo | TB2, RB1

MODULE-3: MULTI-LAYER PERCEPTRON & RADIAL BASIS FUNCTIONS
19. | Multi-Layer Perceptron, Feed-forward network mappings | 01 | CO3 | Blended Learning | TB2, VL2
20. | Support vector machines | 01 | CO3 | PPT and Live demo | TB2, RB1, VL2
21. | Threshold units, Sigmoidal units | 01 | CO3 | PPT and Live demo | TB1, RB1, VL2
22. | Weight-space symmetries | 01 | CO3 | PPT and Live demo | TB1, WR5
23. | Error back-propagation | 01 | CO3 | PPT and Live demo | TB2, WR5
24. | Radial Basis Functions | 01 | CO3 | PPT and Live demo | TB2, WR5
25. | Radial basis function networks | 01 | CO3 | PPT and Live demo | TB2, WR4
26. | Network training | 01 | CO3 | PPT and Live demo | TB2, WR4

MODULE-4: ERROR FUNCTIONS
27. | Sum-of-squares error, Minkowski error | 01 | CO4 | PPT, Open Discussion Session | TB2, RB2, WR4, VL2
28. | Input-dependent variance | 01 | CO4 | Blended Learning | TB2, RB1, WR4
29. | Modelling conditional distributions | 01 | CO4 | PPT and Comparative Study | TB2, RB3, WR4
30. | Estimating posterior probabilities | 01 | CO4 | PPT and Open Discussion Session | TB2, RB2, WR4
31. | Sum-of-squares for classification | 01 | CO4 | Blended Learning | TB2, RB2, WR4
32. | Cross-entropy for two classes | 01 | CO4 | PPT and Case Study | TB2, RB2, WR4
33. | Multiple independent attributes | 01 | CO4 | PPT and Open Discussion | TB1, VL6, WR3
34. | Cross-entropy for multiple classes | 01 | CO4 | PPT and Case Study | TB2, RB2, WR4
35. | Parameter Optimization Algorithms | 01 | CO4 | PPT and Open Discussion Session | TB2, RB2, WR4
36. | Gradient descent | 01 | CO4 | PPT and Open Discussion | TB1, VL6, WR3

MODULE-5: LEARNING AND GENERALIZATION
37. | Bias | 01 | CO5 | PPT and Open Discussion | TB1, VL5, WR3
38. | Variance | 01 | CO5 | PPT and Open Discussion | TB1, VL6, WR3
39. | Regularization | 01 | CO5 | PPT and Case Study | TB1, VL7, WR3
40. | Training with noise | 01 | CO5 | PPT and Case Study | TB1, VL7, WR3
41. | Soft weight sharing | 01 | CO5 | PPT and Case Study | TB1, VL7, WR3
42. | Growing algorithms | 01 | CO5 | PPT and Open Discussion | TB1, VL6, WR3
43. | Pruning algorithms | 01 | CO5 | PPT and Open Discussion | TB1, VL6, WR3
44. | Committees of networks | 01 | CO5 | PPT and Open Discussion | TB1, VL6, WR3
45. | Mixtures of experts | 01 | CO5 | PPT and Live demo | TB2, VL5, WR3
COURSE EVALUATION:

S.No. | Evaluation Type | Contents | Duration in Minutes | Max. Marks | Marks for Evaluation | Scaling Factor
1. | Mid Term Exam-1 | Modules I & II | 90 | 50 | 30 | 30* (for both mid term exams)
2. | Mid Term Exam-2 | Modules III, IV & V | 90 | 50 | 30 |
3. | End Term Exam | All Modules | 180 | 100 | 50 | 50
4. | Experiential Learning | Content specified above | - | 20 | -NA- | 20

Total Marks: 100

* For a total of 30 marks, 80% of the better of the two CIATs and 20% of the other are added and finalized; any fraction is rounded up to the next integer.

K ARUN KUMAR
Signature of the Course Instructor Signature of the Chairperson BOS
