

AI Neural Networks
MCQ

This section focuses on "Neural Networks" in Artificial Intelligence. These Multiple Choice
Questions (MCQs) should be practiced to improve the AI skills required for various interviews
(campus interviews, walk-in interviews, company interviews), placements, entrance exams and
other competitive examinations.

1. How many types of Artificial Neural Networks are there?

A. 2
B. 3
C. 4
D. 5

2. In which ANN, loops are allowed?

A. FeedForward ANN
B. FeedBack ANN
C. Both A and B
D. None of the Above

3. Neural Networks are complex ________ with many parameters.

A. Linear Functions
B. Nonlinear Functions
C. Discrete Functions
D. Exponential Functions

4. Which of the following is not a promise of artificial neural networks?

A. It can explain results


B. It can survive the failure of some nodes
C. It has inherent parallelism
D. It can handle noise

5. The output at each node is called ________.

A. node value
B. Weight
C. neurons
D. axons
6. What is the full form of ANNs?

A. Artificial Neural Node


B. AI Neural Networks

C. Artificial Neural Networks


D. Artificial Neural numbers

7. In a FeedForward ANN, information flow is ________.

A. unidirectional
B. bidirectional
C. multidirectional
D. All of the above

8. Which of the following is not a machine learning strategy in ANNs?

A. Unsupervised Learning
B. Reinforcement Learning
C. Supreme Learning
D. Supervised Learning

9. Which of the following is an application of Neural Networks?

A. Automotive
B. Aerospace
C. Electronics
D. All of the above

10. What is perceptron?

A. a single layer feed-forward neural network with pre-processing


B. an auto-associative neural network
C. a double layer auto-associative neural network
D. a neural network that contains feedback

11. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the
constant of proportionality being equal to 2. The inputs are 4, 3, 2 and 1 respectively. What
will be the output?

A. 30
B. 40
C. 50
D. 60
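The arithmetic can be checked directly: the weighted sum is 1·4 + 2·3 + 3·2 + 4·1 = 20, and the linear transfer function scales it by the constant of proportionality 2. A quick sketch in Python:

```python
weights = [1, 2, 3, 4]
inputs = [4, 3, 2, 1]
k = 2  # constant of proportionality of the linear transfer function

weighted_sum = sum(w * x for w, x in zip(weights, inputs))  # 4 + 6 + 6 + 4 = 20
output = k * weighted_sum
print(output)  # 40 -> option B
```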

12. What is backpropagation?

A. It is another name given to the curvy function in the perceptron


B. It is the transmission of error back through the network to adjust the inputs
C. It is the transmission of error back through the network to allow weights to be adjusted
so that the network can learn
D. None of the Above

13. The network that involves backward links from the output to the input and hidden layers is
called ________.

A. Self organizing map


B. Perceptrons
C. Recurrent neural network
D. Multi layered perceptron

14. ANN is composed of a large number of highly interconnected processing elements
(neurons) working in unison to solve problems.
a) True
b) False

15. Artificial neural networks are used for

a) Pattern Recognition
b) Classification
c) Clustering
d) All of these

16. A neural network model is said to be inspired by the human brain.

The neural network consists of many neurons; each neuron takes an input, processes it, and
gives an output. Here’s a diagrammatic representation of a real neuron.

Which of the following statement(s) correctly represents a real neuron?

A. A neuron has a single input and a single output only

B. A neuron has multiple inputs but a single output only

C. A neuron has a single input but multiple outputs

D. A neuron has multiple inputs and multiple outputs



E. All of the above statements are valid

17. Below is a mathematical representation of a neuron.

The different components of the neuron are denoted as:

x1, x2, …, xN: the inputs to the neuron. These can either be the actual observations
from the input layer or an intermediate value from one of the hidden layers.
w1, w2, …, wN: the weight of each input.
bi: the bias unit, a constant value added to the input of the activation
function corresponding to each weight. It works like an intercept term.
a: the activation of the neuron, which can be represented as a = w1·x1 + w2·x2 + … + wN·xN + bi
y: the output of the neuron

Considering the above notations, will a line equation (y = mx + c) fall into the category of a
neuron?

A. Yes

B. No

A single neuron with no non-linearity can be considered as a linear regression function.
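The point of the answer above fits in a few lines: a single-input neuron with weight m, bias c, and an identity (linear) activation computes exactly y = mx + c. A minimal sketch, with illustrative values for m and c:

```python
def neuron(x, weight, bias):
    a = weight * x + bias   # activation: weighted input plus bias
    return a                # identity (linear) activation, so y = a

m, c = 3.0, -1.0            # illustrative slope and intercept
print(neuron(2.0, m, c))    # 5.0, i.e. y = m*2 + c
```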

18. Let us assume we implement an AND function with a single neuron. Below is a tabular
representation of an AND function:

X1   X2   X1 AND X2
0    0    0
0    1    0
1    0    0
1    1    1

The activation function of our neuron is a threshold: f(a) = 1 if a > 0, else 0.

What would be the weights and bias?

(Hint: For which values of w1, w2 and b does our neuron implement an AND function?)

A. Bias = -1.5, w1 = 1, w2 = 1

B. Bias = 1.5, w1 = 2, w2 = 2

C. Bias = 1, w1 = 1.5, w2 = 1.5

D. None of these
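Option A can be verified by plugging every row of the truth table into the neuron. The sketch below assumes a step activation f(a) = 1 if a > 0, else 0, which is the form consistent with the given answer choices:

```python
def and_neuron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    a = w1 * x1 + w2 * x2 + bias
    return 1 if a > 0 else 0   # assumed step activation

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", and_neuron(x1, x2))
# only (1, 1) produces 1, matching the AND truth table
```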

19. In a neural network, knowing the weight and bias of each neuron is the most important
step. If you can somehow get the correct value of weight and bias for each neuron, you can
approximate any function. What would be the best way to approach this?

A. Assign random values and pray to God they are correct

B. Search every possible combination of weights and biases till you get the best value

C. Iteratively check how far you are from the best values after assigning a value, and
slightly change the assigned values to make them better

D. None of these

20. What are the steps for using a gradient descent algorithm?

1. Calculate error between the actual value and the predicted value

2. Reiterate until you find the best weights of network


3. Pass an input through the network and get values from output layer
4. Initialize random weight and bias
5. Go to each neuron that contributes to the error and change its respective values to
reduce the error

A. 1, 2, 3, 4, 5

B. 5, 4, 3, 2, 1

C. 3, 2, 1, 5, 4

D. 4, 3, 1, 5, 2
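The ordering in option D (4, 3, 1, 5, 2) can be sketched as a minimal gradient-descent loop. The one-parameter model y = w·x and the data point below are illustrative, not from the question:

```python
x_in, y_true = 2.0, 6.0   # single training point; target weight is 3.0

w = 0.5                   # step 4: initialize a (random-ish) weight
lr = 0.1
for _ in range(100):      # step 2: reiterate until weights are good
    y_pred = w * x_in     # step 3: pass the input through the network
    error = y_pred - y_true        # step 1: error vs the actual value
    w -= lr * error * x_in         # step 5: adjust the contributing weight
print(round(w, 3))  # 3.0
```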

21. Suppose you have inputs as x, y, and z with values -2, 5, and -4 respectively. You have a
neuron ‘q’ and neuron ‘f’ with functions:

q=x+y

f=q*z

Graphical representation of the functions is as follows:

What is the gradient of F with respect to x, y, and z?

(HINT: To calculate gradient, you must find (df/dx), (df/dy) and (df/dz))

A. (-3,4,4)

B. (4,4,3)

C. (-4,-4,3)

D. (3,-4,-4)
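The answer can be checked numerically with finite differences. Analytically, df/dx = z, df/dy = z, and df/dz = q = x + y:

```python
# q = x + y, f = q * z, evaluated at x = -2, y = 5, z = -4
x, y, z = -2.0, 5.0, -4.0

def f(x, y, z):
    return (x + y) * z

h = 1e-6  # small step for the finite-difference approximation
dfdx = (f(x + h, y, z) - f(x, y, z)) / h
dfdy = (f(x, y + h, z) - f(x, y, z)) / h
dfdz = (f(x, y, z + h) - f(x, y, z)) / h
print(round(dfdx), round(dfdy), round(dfdz))  # -4 -4 3 -> option C
```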

22. In training a neural network, you notice that the loss does not decrease in the first few
epochs.

The reasons for this could be:

1. The learning rate is low


2. Regularization parameter is high
3. Stuck at local minima

What according to you are the probable reasons?

A. 1 and 2

B. 2 and 3

C. 1 and 3

D. Any of these

23. Which of the following is true about model capacity (where model capacity means the
ability of a neural network to approximate complex functions)?

A. As the number of hidden layers increases, model capacity increases

B. As the dropout ratio increases, model capacity increases

C. As the learning rate increases, model capacity increases

D. None of these

24. If you increase the number of hidden layers in a Multi Layer Perceptron, the
classification error on test data always decreases. True or False?

A. True

B. False

25. You are building a neural network that gets input from the previous layer as well as
from itself.

Which of the following architecture has feedback connections?

A. Recurrent Neural network

B. Convolutional Neural Network

C. Restricted Boltzmann Machine

D. None of these

26. What is the sequence of the following tasks in a perceptron?

1. Initialize weights of perceptron randomly


2. Go to the next batch of dataset
3. If the prediction does not match the output, change the weights
4. For a sample input, compute an output

A. 1, 2, 3, 4

B. 4, 3, 2, 1

C. 3, 1, 2, 4
D. 1, 4, 3, 2
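The order in option D (1, 4, 3, 2) can be sketched as a toy perceptron loop. The OR dataset and the unit-step update rule below are illustrative assumptions, not from the question:

```python
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]   # step 1: random weights
b = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR function

for _ in range(20):                             # step 2: next pass over the batch
    for x, target in data:
        s = w[0] * x[0] + w[1] * x[1] + b
        pred = 1 if s > 0 else 0                # step 4: compute an output
        if pred != target:                      # step 3: on mismatch,
            w = [wi + (target - pred) * xi for wi, xi in zip(w, x)]
            b += target - pred                  #         change the weights

preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # [0, 1, 1, 1]: the perceptron has learned OR
```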

27. The below graph shows the accuracy of a trained 3-layer convolutional neural network vs
the number of parameters (i.e. number of feature kernels).

The trend suggests that as you increase the width of a neural network, the accuracy increases
till a certain threshold value, and then starts decreasing.

What could be the possible reason for this decrease?

A. Even if the number of kernels increases, only a few of them are used for prediction

B. As the number of kernels increases, the predictive power of the neural network decreases

C. As the number of kernels increases, they start to correlate with each other, which in turn
leads to overfitting

D. None of these

28. In which neural net architecture does weight sharing occur?

A. Convolutional neural Network

B. Recurrent Neural Network


C. Fully Connected Neural Network

D. Both A and B

29. In a neural network, which of the following techniques is used to deal with overfitting?

A. Dropout

B. Regularization

C. Batch Normalization

D. All of these
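Dropout (option A) is easy to sketch: at training time each activation is zeroed with probability p and the survivors are rescaled by 1/(1 − p), the usual inverted-dropout convention. The input values below are illustrative:

```python
import random

def dropout(activations, p_drop, rng):
    """Inverted dropout: zero with probability p_drop, rescale survivors."""
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(42)  # seeded so the mask is reproducible
out = dropout([1.0, 2.0, 3.0, 4.0], p_drop=0.5, rng=rng)
print(out)  # [0.0, 4.0, 6.0, 8.0] with this seed
```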

30. Y = ax^2 + bx + c (a polynomial equation of degree 2)

Can this equation be represented by a neural network with a single hidden layer and a linear
threshold?

A. Yes

B. No

31. Which of the following statements is the best description of early stopping?

A. Train the network until a local minimum in the error function is reached

B. Simulate the network on a test dataset after every epoch of training. Stop training when the
generalization error starts to increase

C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training
converges more quickly

D. A faster version of backpropagation, such as the 'Quickprop' algorithm
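Option B can be sketched as a loop over epochs that tracks the validation error and stops as soon as it starts to rise. The per-epoch error values below are made up for illustration:

```python
val_errors = [0.9, 0.6, 0.45, 0.4, 0.42, 0.5, 0.6]  # validation error per epoch

best_epoch, best_err = -1, float("inf")
for epoch, err in enumerate(val_errors):
    if err < best_err:
        best_epoch, best_err = epoch, err   # still improving: keep these weights
    else:
        break                               # generalization error rising: stop
print(best_epoch)  # 3: the epoch with the lowest validation error
```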

32. What if we use a learning rate that’s too large?

A. Network will converge

B. Network will not converge

33. When a pooling layer is added to a convolutional neural network, translation invariance is
preserved. True or False?

A. True

B. False
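Max pooling is one way to see the translation invariance the question refers to: a small shift of the input can leave the pooled output unchanged. A toy 1-D sketch:

```python
def max_pool_1d(xs, size=2):
    # non-overlapping windows; keep the strongest response in each
    return [max(xs[i:i + size]) for i in range(0, len(xs), size)]

a = [0, 9, 0, 0]   # a feature detected at position 1
b = [9, 0, 0, 0]   # the same feature shifted left by one
print(max_pool_1d(a), max_pool_1d(b))  # both pool to [9, 0]
```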

34. The graph represents the gradient flow, per epoch of training, of a four-hidden-layer
neural network trained using the sigmoid activation function. The neural network suffers from
the vanishing gradient problem.

Which of the following statements is true?

A. Hidden layer 1 corresponds to D, Hidden layer 2 corresponds to C, Hidden layer 3


corresponds to B and Hidden layer 4 corresponds to A

B. Hidden layer 1 corresponds to A, Hidden layer 2 corresponds to B, Hidden layer 3


corresponds to C and Hidden layer 4 corresponds to D

35. For a classification task, instead of random weight initialization in a neural network, we
set all the weights to zero. Which of the following statements is true?

A. There will not be any problem and the neural network will train properly

B. The neural network will train but all the neurons will end up recognizing the same thing

C. The neural network will not train as there is no net gradient change

D. None of these

36. For an image recognition problem (recognizing a cat in a photo), which architecture of
neural network would be better suited to solve the problem?

A. Multi Layer Perceptron

B. Convolutional Neural Network

C. Recurrent Neural network


D. Perceptron

37. What are the factors used to select the depth of a neural network?

1. Type of neural network (e.g. MLP, CNN, etc.)

2. Input data
3. Computational power, i.e. hardware and software capabilities
4. Learning rate
5. The output function to map

A. 1, 2, 4, 5

B. 2, 3, 4, 5

C. 1, 3, 4, 5

D. All of these

38. Consider this scenario: the problem you are trying to solve has a small amount of data.
Fortunately, you have a pre-trained neural network that was trained on a similar problem.
Which of the following methodologies would you choose to make use of this pre-trained
network?

A. Re-train the model for the new dataset

B. Assess on every layer how the model performs and only select a few of them

C. Fine tune the last couple of layers only

D. Freeze all the layers except the last, re-train the last layer

39. Which of the following is true for neural networks?

A. It has set of nodes and connections


B. Each node computes its weighted input
C. Node could be in excited state or non-excited state
D. All of these

40. Which of the following is an application of Neural Networks (NN)?

A. Sales forecasting
B. Data validation
C. Risk management
D. All of these
41. Which of the following models is used for learning?

A. Decision trees
B. Neural networks
C. Propositional and FOL rules
D. All of these

42. An automated vehicle is an example of



a. Supervised learning
b. Unsupervised learning
c. Active learning
d. Reinforcement learning
