NNDL Question Bank MID 1
Subjective Questions
1. Describe the basic structure of an ANN and explain the roles of the input, hidden, and
output layers.
2. What is forward propagation in ANNs? Explain it using the weighted-sum
equation.
3. Describe the various types of activation functions in ANNs, and explain where each is
commonly used.
4. Compare and contrast different types of ANNs, such as Feedforward Neural Networks,
Recurrent Neural Networks, and Convolutional Neural Networks.
5. Explain the architecture and working of a Perceptron, including the Perceptron model
equation.
6. Describe the Adaptive Linear Neuron (ADALINE) and how it differs from the
Perceptron.
7. Explain the concept and algorithm of Back-Propagation Networks and their
importance in training neural networks.
8. What is an Associative Memory Network, and how does it differ from other types of
neural networks?
9. Explain the Bidirectional Associative Memory (BAM) network and its significance.
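For revision, the weighted sum and the Perceptron model asked about above can be sketched in a few lines of Python. This is an illustrative snippet, not prescribed course code; the hand-picked weights implement logical AND rather than being learned.

```python
import numpy as np

def forward(x, W, b, activation):
    """One forward-propagation step: weighted sum z = W.x + b, then activation."""
    z = W @ x + b
    return activation(z)

def step(z):
    """Perceptron's hard-threshold activation: 1 if z >= 0, else 0."""
    return np.where(z >= 0, 1, 0)

# Toy perceptron computing logical AND (weights chosen by hand, not learned).
W = np.array([[1.0, 1.0]])
b = np.array([-1.5])
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = forward(np.array(x, dtype=float), W, b, step)
    print(x, "->", int(y[0]))  # prints the AND truth table
```

Only the (1, 1) input yields a weighted sum above the threshold, so only that pattern activates the output neuron.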
Objective Questions
1. In a neural network, the connections between neurons are called ________. (weights)
2. A single layer perceptron can solve only ________ problems. (linearly separable)
3. ________ is the process of passing data through the network to make predictions.
(Forward propagation)
4. The ________ function in a neuron decides whether the neuron should activate or not.
(activation)
5. Backpropagation uses ________ to calculate the gradient of the loss function. (chain
rule)
6. The ________ function is commonly used as an activation function in binary
classification tasks. (sigmoid)
7. ________ is a basic model of ANN that can classify input data into two categories.
(Perceptron)
8. The ________ function maps input values to a range between 0 and 1. (sigmoid)
9. The process of adjusting weights in a neural network based on the error is known as
________. (backpropagation)
10. The ________ activation function is commonly used in hidden layers for non-linearity.
(ReLU)
11. The ________ function is used in the output layer for multi-class classification.
(softmax)
12. The Adaptive Linear Neuron (ADALINE) minimizes error using the ________ cost
function. (mean squared error)
13. In ADALINE, the activation function is ________, which makes it different from
perceptrons. (linear)
14. A ________ network stores patterns and retrieves them based on input patterns, similar
to human memory. (associative memory)
15. The perceptron training algorithm uses ________ to update weights based on
classification error. (gradient descent)
16. In supervised learning, the ________ layer is where the model's predictions are
compared against the target labels during training. (output)
17. In a Hopfield network, stored patterns are known as ________. (attractors)
18. The backpropagation algorithm adjusts weights in the direction of the ________
gradient to reduce error. (negative)
19. ________ is a type of associative memory network that stores pattern pairs and retrieves
them in both directions. (Bidirectional Associative Memory or BAM)
20. The technique used to prevent a neural network from learning irrelevant details and
overfitting is called ________. (regularization)
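The three activation functions that appear in the blanks above (sigmoid, ReLU, softmax) can be checked numerically with a short sketch; this is a study aid, not part of the question bank itself.

```python
import numpy as np

def sigmoid(z):
    # Maps any real input into (0, 1); common for binary-classification outputs.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Rectified Linear Unit; the usual non-linearity in hidden layers.
    return np.maximum(0.0, z)

def softmax(z):
    # Normalizes a score vector into a probability distribution for
    # multi-class output layers (shift by max for numerical stability).
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(0.0))                 # 0.5
print(relu(z))                      # [0. 0. 3.]
print(round(softmax(z).sum(), 6))   # 1.0
```

Note how softmax outputs always sum to 1, which is why it fits multi-class output layers, while sigmoid squashes each value independently into (0, 1).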
Subjective Questions
1. Define unsupervised learning and discuss its importance in neural networks.
2. Explain Maxnet, its architecture, and how it applies the winner-takes-all strategy.
3. What is a Hamming Network, and how is it used for binary pattern recognition?
4. Describe Kohonen Self-Organizing Maps (SOMs) and explain how they preserve
topological properties.
5. Describe the types of ART Networks, including ART1 and ART2, and their
applications.
6. Explain the correlation layer and competitive layer in Hamming Networks with an
example.
7. Describe the role of inhibitory weights in Maxnet and their importance in network
functionality.
8. What are the key differences between competitive learning and supervised learning?
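The Maxnet winner-takes-all and inhibitory-weight questions above can be illustrated with a minimal sketch. It follows the standard Maxnet formulation (self-excitation 1, mutual inhibition -eps with 0 < eps < 1/m); the specific eps and activations here are illustrative choices, not from the course material.

```python
import numpy as np

def maxnet(a, eps=0.15, max_iters=100):
    """Maxnet winner-takes-all: each unit excites itself and inhibits the
    others with weight -eps; iterate until one activation stays positive."""
    a = np.asarray(a, dtype=float)
    for _ in range(max_iters):
        # f(x) = max(0, x) applied to self-activation minus lateral inhibition
        a_new = np.maximum(0.0, a - eps * (a.sum() - a))
        if np.count_nonzero(a_new) <= 1:
            return a_new
        a = a_new
    return a

winners = maxnet([0.2, 0.4, 0.6, 0.8])
print(np.argmax(winners))  # 3: the unit with the largest initial activation wins
```

Each iteration the inhibitory weights (-eps) pull every activation down in proportion to the total activity of the other units, so weaker units are driven to zero first and only the strongest survives.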
Subjective Questions
1. What is deep learning, and how does it differ from traditional machine learning?
2. What are the key characteristics of deep learning that contribute to its performance on
large datasets?
Objective Questions