4 - Mcq-Ann-Ann-Quiz - Selected
AI Neural Networks
MCQ
This section focuses on "Neural Networks" in Artificial Intelligence. These Multiple Choice
Questions (MCQs) should be practiced to improve the AI skills required for various interviews
(campus interviews, walk-in interviews, company interviews), placements, entrance exams and
other competitive examinations.
A. 2
B. 3
C. 4
D. 5
A. FeedForward ANN
B. FeedBack ANN
C. Both A and B
D. None of the Above
A. Linear Functions
B. Nonlinear Functions
C. Discrete Functions
D. Exponential Functions
A. node value
B. Weight
C. neurons
D. axons
6. What is the full form of ANNs?
A. unidirectional
B. bidirectional
C. multidirectional
D. All of the above
A. Unsupervised Learning
B. Reinforcement Learning
C. Supreme Learning
D. Supervised Learning
A. Automotive
B. Aerospace
C. Electronics
D. All of the above
11. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the
constant of proportionality being equal to 2. The inputs are 4, 3, 2 and 1 respectively. What
will be the output?
A. 30
B. 40
C. 50
D. 60
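A quick check of the arithmetic in question 11, assuming the usual reading of a linear transfer function as output = k × (weighted sum of inputs):

```python
# Question 11: linear transfer function with proportionality constant k = 2.
weights = [1, 2, 3, 4]
inputs = [4, 3, 2, 1]
k = 2

weighted_sum = sum(w * x for w, x in zip(weights, inputs))  # 1*4 + 2*3 + 3*2 + 4*1 = 20
output = k * weighted_sum                                   # 2 * 20 = 40
print(output)  # 40 -> option B
```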
13. The network that involves backward links from output to the input and hidden layers is called
A. Pattern Recognition
B. Classification
C. Clustering
D. All of these
16. A neural network model is said to be inspired by the human brain. The neural network
consists of many neurons; each neuron takes an input, processes it and gives an output. The
following notation describes the diagrammatic representation of such a neuron:
x1, x2, …, xN: the inputs to the neuron. These can either be the actual observations from the
input layer or an intermediate value from one of the hidden layers.
w1, w2, …, wN: the weight of each input.
bi: the bias units, constant values added to the input of the activation function corresponding
to each weight. A bias works like an intercept term.
a: the activation of the neuron, which can be represented as
a = f(w1·x1 + b1 + w2·x2 + b2 + … + wN·xN + bN), i.e. the bias-augmented weighted sum of the
inputs passed through the activation function f.
y: the output of the neuron.
Considering the above notations, will a line equation (y = mx + c) fall into the category of a
neuron?
A. Yes
B. No
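A single-input neuron with a linear (identity) activation computes y = w·x + b, which is exactly the line equation with w = m and b = c. A minimal sketch, with illustrative values for m and c:

```python
def linear_neuron(x, w, b):
    # One input, one weight, one bias, identity activation: y = w*x + b.
    return w * x + b

m, c = 2.0, 1.0                       # illustrative slope and intercept
print(linear_neuron(3.0, w=m, b=c))   # 7.0, the same as m*3 + c
```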
18. Let us assume we implement an AND function with a single neuron. Below is a tabular
representation of an AND function:
X1 X2 X1 AND X2
0 0 0
0 1 0
1 0 0
1 1 1
(Hint: For which values of w1, w2 and b does our neuron implement an AND function?)
A. Bias = -1.5, w1 = 1, w2 = 1
B. Bias = 1.5, w1 = 2, w2 = 2
D. None of these
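A small sketch checking option A, assuming a linear threshold (step) activation that outputs 1 when the weighted sum plus bias is non-negative:

```python
def and_neuron(x1, x2, w1=1, w2=1, bias=-1.5):
    # Linear threshold unit: fire (output 1) when w1*x1 + w2*x2 + bias >= 0.
    return 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, and_neuron(x1, x2))
# Prints the AND truth table: only the input (1, 1) produces 1.
```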
19. In a neural network, knowing the weight and bias of each neuron is the most important
step. If you can somehow get the correct value of weight and bias for each neuron, you can
approximate any function. What would be the best way to approach this?
B. Search every possible combination of weights and biases till you get the best value
C. Iteratively check, after assigning a value, how far you are from the best values, and
slightly change the assigned values to make them better
D. None of these
20. What are the steps for using a gradient descent algorithm?
1. Calculate error between the actual value and the predicted value
A. 1, 2, 3, 4, 5
B. 5, 4, 3, 2, 1
C. 3, 2, 1, 5, 4
D. 4, 3, 1, 5, 2
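The options order the steps of the usual gradient-descent loop: initialize the parameters, make a prediction, measure the error, nudge the parameters to reduce it, and repeat. A minimal one-parameter sketch, with an illustrative loss function and learning rate:

```python
# Gradient descent on the illustrative loss f(w) = (w - 3)**2, minimized at w = 3.
w = 0.0                   # initialize the parameter
learning_rate = 0.1

for epoch in range(100):
    error = (w - 3) ** 2              # error between actual and predicted value
    gradient = 2 * (w - 3)            # direction in which the error grows
    w -= learning_rate * gradient     # change the parameter to reduce the error

print(round(w, 3))  # close to 3.0 after repeated updates
```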
21. Suppose you have inputs as x, y, and z with values -2, 5, and -4 respectively. You have a
neuron ‘q’ and neuron ‘f’ with functions:
q=x+y
f=q*z
(HINT: To calculate gradient, you must find (df/dx), (df/dy) and (df/dz))
A. (-3,4,4)
B. (4,4,3)
C. (-4,-4,3)
D. (3,-4,-4)
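Working question 21 with the chain rule: df/dz = q = 3, and df/dx = df/dy = df/dq · 1 = z = -4, so the gradient (df/dx, df/dy, df/dz) is (-4, -4, 3), i.e. option C. The same computation in code:

```python
x, y, z = -2, 5, -4

# Forward pass
q = x + y            # 3
f = q * z            # -12

# Backward pass (chain rule)
df_dq = z            # -4
df_dz = q            #  3
df_dx = df_dq * 1    # dq/dx = 1, so -4
df_dy = df_dq * 1    # dq/dy = 1, so -4

print(df_dx, df_dy, df_dz)  # -4 -4 3 -> option C
```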
22. While training a neural network, you notice that the loss does not decrease during the
first few epochs.
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. Any of these
23. Which of the following is true about model capacity (where model capacity means the
ability of a neural network to approximate complex functions)?
D. None of these
24. If you increase the number of hidden layers in a Multi-Layer Perceptron, the
classification error on test data always decreases. True or False?
A. True
B. False
25. You are building a neural network in which a layer gets its input from the previous layer
as well as from itself.
D. None of these
A. 1, 2, 3, 4
B. 4, 3, 2, 1
C. 3, 1, 2, 4
D. 1, 4, 3, 2
27. The below graph shows the accuracy of a trained 3-layer convolutional neural network vs
the number of parameters (i.e. number of feature kernels).
The trend suggests that as you increase the width of a neural network, the accuracy increases
up to a certain threshold value and then starts decreasing. What could be a possible reason for
this trend?
A. Even if the number of kernels increases, only a few of them are used for prediction
B. As the number of kernels increases, the predictive power of the neural network decreases
C. As the number of kernels increases, they start to correlate with each other, which in turn
encourages overfitting
D. None of these
D. Both A and B
29. In a neural network, which of the following techniques is used to deal with overfitting?
A. Dropout
B. Regularization
C. Batch Normalization
D. All of these
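As one concrete example of these techniques, here is a minimal sketch of inverted dropout applied to a layer's activations with NumPy; the keep probability and activation values are illustrative:

```python
import numpy as np

def dropout(activations, keep_prob=0.8, training=True):
    # Inverted dropout: randomly zero activations during training and rescale
    # the survivors so the expected activation stays the same at test time.
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob

a = np.array([0.5, 1.2, -0.3, 2.0])
print(dropout(a))                  # some entries zeroed, the rest scaled by 1/0.8
print(dropout(a, training=False))  # unchanged at inference time
```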
Can this equation be represented by a neural network with a single hidden layer and a linear
threshold?
A. Yes
B. No
31. Which of the following statements is the best description of early stopping?
A. Train the network until a local minimum in the error function is reached
B. Simulate the network on a test dataset after every epoch of training. Stop training when the
generalization error starts to increase
C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training
converges more quickly
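A schematic of the procedure described in option B; train_one_epoch and validation_error are hypothetical callables standing in for the training step and the held-out-error measurement:

```python
def train_with_early_stopping(model, train_one_epoch, validation_error,
                              max_epochs=100, patience=5):
    # Stop training once the held-out (generalization) error stops improving.
    best_error = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)              # one pass over the training data
        error = validation_error(model)     # estimate of the generalization error
        if error < best_error:
            best_error = error
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                       # generalization error has started to increase
    return model
```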
33. When a pooling layer is added to a convolutional neural network, translation invariance is
preserved. True or False?
A. True
B. False
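A small NumPy illustration of the translation invariance that pooling provides for shifts that stay within a pooling window; the feature map values are illustrative:

```python
import numpy as np

def max_pool_2x2(x):
    # Non-overlapping 2x2 max pooling on a 2D array with even dimensions.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[9, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 7, 0]])
b = np.roll(a, shift=1, axis=1)   # shift the features one pixel to the right

print(max_pool_2x2(a))            # [[9 0] [0 7]]
print(max_pool_2x2(b))            # same pooled output despite the shift
```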
34. The graph represents the gradient flow, per training epoch, of a four-hidden-layer neural
network trained with the sigmoid activation function. The neural network suffers from the
vanishing gradient problem.
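A back-of-the-envelope illustration of why stacked sigmoid layers produce vanishing gradients: the derivative sigma'(x) = sigma(x)·(1 - sigma(x)) is at most 0.25, so the product of four such factors is already tiny. The pre-activation values are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)   # at most 0.25, reached at x = 0

# Product of activation derivatives met while backpropagating through
# four sigmoid hidden layers.
pre_activations = [0.0, 1.0, 2.0, 3.0]
grad_product = 1.0
for x in pre_activations:
    grad_product *= sigmoid_grad(x)

print(grad_product)   # roughly 2e-4: very little gradient reaches the early layers
```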
35. For a classification task, instead of random weight initializations in a neural network, we
set all the weights to zero. Which of the following statements is true?
A. There will not be any problem and the neural network will train properly
B. The neural network will train but all the neurons will end up recognizing the same thing
C. The neural network will not train as there is no net gradient change
D. None of these
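A quick sketch of the symmetry problem behind question 35: with all weights set to zero, every unit in a layer receives the same input, computes the same activation and gets the same gradient, so the units stay identical throughout training. The layer sizes and input values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_hidden = np.zeros((2, 3))      # 2 hidden units, 3 inputs, all weights zero
x = np.array([0.4, -1.0, 2.0])

h = sigmoid(W_hidden @ x)
print(h)                         # [0.5 0.5] -> both units produce identical outputs

# Identical inputs, weights and activations imply identical gradients, so every
# update keeps the units equal: they end up recognizing the same thing, which is
# the symmetry that random initialization is meant to break.
```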
36. For an image recognition problem (recognizing a cat in a photo), which architecture of
neural network would be better suited to solve the problem?
37. What are the factors to consider when selecting the depth of a neural network?
A. 1, 2, 4, 5
B. 2, 3, 4, 5
C. 1, 3, 4, 5
D. All of these
38. Consider the scenario. The problem you are trying to solve has a small amount of data.
Fortunately, you have a pre-trained neural network that was trained on a similar problem.
Which of the following methodologies would you choose to make use of this pre-trained
network?
B. Assess on every layer how the model performs and only select a few of them
D. Freeze all the layers except the last, re-train the last layer
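One common way to realize option D, sketched with PyTorch; pretrained_model is a hypothetical stand-in for the network trained on the similar problem, and the layer sizes are illustrative:

```python
import torch.nn as nn

# Hypothetical stand-in for a network pretrained on a similar problem.
pretrained_model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)

# Freeze every parameter of the pretrained network...
for param in pretrained_model.parameters():
    param.requires_grad = False

# ...then replace the last layer with a fresh one for the new task; its new
# parameters are trainable by default, so only this layer is re-trained.
pretrained_model[-1] = nn.Linear(32, 5)

trainable = [p for p in pretrained_model.parameters() if p.requires_grad]
print(len(trainable))   # 2: just the new layer's weight and bias
```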
A. Sales forecasting
B. Data validation
C. Risk management
D. All of these
41. Which of the following models is used for learning?
A. Decision trees
B. Neural networks
C. Propositional and FOL rules
D. All of these
a. Supervised learning
b. Unsupervised learning
c. Active learning
d. Reinforcement learning