UNIT 4 (MCQS)
Natural Language Processing with Sequence Models and Neural Networks for Sentiment Analysis
1. Which type of neural network is best suited for sentiment analysis in text?
A. Convolutional Neural Network (CNN)
B. Recurrent Neural Network (RNN)
C. Fully Connected Network
D. Decision Tree
Answer: B. Recurrent Neural Network (RNN)
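To make the answer concrete, here is a minimal sketch of an RNN sentiment classifier in PyTorch; the class name, layer sizes, and single sigmoid output are illustrative assumptions, not a prescribed architecture:

```python
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    """Embedding -> RNN -> sigmoid score for binary sentiment."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        _, h_n = self.rnn(x)                           # final hidden state: (1, batch, hidden_dim)
        return torch.sigmoid(self.fc(h_n.squeeze(0)))  # positive-class probability in (0, 1)
```

The final hidden state summarizes the whole sequence, so a single sigmoid unit on top of it suffices for a positive/negative decision.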
2. Which activation function is commonly used in the output layer for binary sentiment classification tasks?
A. ReLU
B. Tanh
C. Sigmoid
D. Softmax
Answer: C. Sigmoid
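The sigmoid squashes a raw score into (0, 1), which is read directly as the probability of the positive class in binary classification (softmax is the multi-class counterpart). A quick numeric check with hypothetical logit values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(2.0))   # ~0.881 -> read as "positive" with high confidence
print(sigmoid(-1.0))  # ~0.269 -> leans "negative"
```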
3. Which of the following embedding techniques is used to convert words into vector representations for NLP tasks?
A. One-hot encoding
B. Word2Vec
C. Binary encoding
D. Frequency encoding
Answer: B. Word2Vec
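As a sketch of how Word2Vec embeddings are fit in practice, the gensim library trains vectors on a tokenized corpus; the toy sentences and hyperparameters below are assumptions for illustration (gensim 4.x API):

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["the", "movie", "was", "great"],
    ["the", "film", "was", "terrible"],
]

# vector_size is the embedding dimension (gensim 4.x argument name).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=20)

vec = model.wv["movie"]   # dense 50-dimensional vector for "movie"
print(vec.shape)          # (50,)
```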
4. What is the purpose of using padding in sequence models for sentiment analysis?
A. To introduce randomness
B. To equalize sequence lengths
C. To improve model performance
D. To make predictions accurate
Answer: B. To equalize sequence lengths
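A short padding sketch, assuming PyTorch's pad_sequence utility and arbitrary token ids: sequences of unequal length are right-padded so they stack into one rectangular batch tensor:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Token-id sequences of unequal length.
seqs = [torch.tensor([4, 7, 2]), torch.tensor([9, 1]), torch.tensor([5])]

# Right-pad with 0 so the batch stacks into one rectangular tensor.
batch = pad_sequence(seqs, batch_first=True, padding_value=0)
print(batch)
# tensor([[4, 7, 2],
#         [9, 1, 0],
#         [5, 0, 0]])
```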
5. Which of the following architectures is commonly used for language modeling?
A. CNN
B. RNN
C. SVM
D. KNN
Answer: B. RNN
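For contrast with the sentiment classifier above, a language model emits a distribution over the vocabulary at every time step rather than a single score. A PyTorch sketch (sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Embedding -> RNN -> per-step logits over the vocabulary."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):                 # (batch, seq_len)
        out, _ = self.rnn(self.embed(token_ids))  # one hidden state per step
        return self.fc(out)                       # (batch, seq_len, vocab_size) logits
```

Unlike the sentiment model, which keeps only the final hidden state, the language model uses every step's hidden state to predict the next token.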
6. The formula h_t = σ(W·x_t + U·h_{t−1} + b) in RNNs computes:
A. Output vector
B. Input sequence
C. Next hidden state
D. Initial state
Answer: C. Next hidden state
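A worked version of that recurrence in NumPy, with tanh standing in for the generic σ and randomly initialized weights purely for illustration:

```python
import numpy as np

def rnn_step(x_t, h_prev, W, U, b):
    """One recurrence step: h_t = tanh(W @ x_t + U @ h_prev + b)."""
    return np.tanh(W @ x_t + U @ h_prev + b)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W = rng.normal(size=(hidden_dim, input_dim))   # input-to-hidden weights
U = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                       # initial hidden state
for x_t in rng.normal(size=(5, input_dim)):    # a length-5 input sequence
    h = rnn_step(x_t, h, W, U, b)              # each step mixes new input with memory
print(h.shape)                                 # (4,)
```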
7. Which neural network component in an LSTM prevents the vanishing gradient problem?
A. Tanh activation
B. Cell state
C. Bias
D. Learning rate
Answer: B. Cell state
8. NER models using LSTMs are popular because they can:
A. Process data quickly
B. Retain contextual information over long sequences
C. Require no training
D. Be trained with minimal data
Answer: B. Retain contextual information over long sequences
9. A Named Entity Recognition system using an LSTM generally outputs:
A. A probability distribution over possible entities
B. A single label for the whole sequence
C. A random guess
D. Only positive sentiment
Answer: A. A probability distribution over possible entities
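A minimal PyTorch sketch of such a tagger (the tag inventory, e.g. BIO labels, and all sizes are assumptions): the LSTM output at each position is mapped to a softmax distribution over entity tags, one per token:

```python
import torch
import torch.nn as nn

class LSTMTagger(nn.Module):
    """Embedding -> LSTM -> softmax over entity tags, one distribution per token."""
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids):                   # (batch, seq_len)
        out, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, hidden_dim)
        return torch.softmax(self.fc(out), dim=-1)  # (batch, seq_len, num_tags)
```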
10. In a simple RNN, the hidden state at each time step is calculated based on:
A. Only the current input
B. The previous hidden state and the current input
C. The next hidden state
D. Only the previous hidden state
Answer: B. The previous hidden state and the current input
11. In an RNN, the activation function used to calculate the hidden state is typically:
A. Softmax
B. ReLU
C. Tanh or Sigmoid
D. Linear
Answer: C. Tanh or Sigmoid
12. Which activation function is typically used in the forget, input, and output gates of an LSTM?
A. ReLU
B. Softmax
C. Sigmoid
D. Tanh
Answer: C. Sigmoid
13. Which component in an LSTM cell is responsible for addressing the vanishing gradient problem?
A. Forget gate
B. Output gate
C. Cell state
D. Dropout
Answer: C. Cell state
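The two LSTM questions above come together in one NumPy step sketch (random weights, illustrative shapes): the forget, input, and output gates all use the sigmoid so they act as soft 0-to-1 valves, and the cell state is updated additively, which is what lets gradients flow across long sequences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, Wi, Wo, Wc, bf, bi, bo, bc):
    """One LSTM step with sigmoid-gated, additively updated cell state."""
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(Wf @ z + bf)        # forget gate: what to keep from c_prev
    i = sigmoid(Wi @ z + bi)        # input gate: what to write
    o = sigmoid(Wo @ z + bo)        # output gate: what to expose
    c_tilde = np.tanh(Wc @ z + bc)  # candidate cell values
    c = f * c_prev + i * c_tilde    # additive update -> gradients survive long spans
    h = o * np.tanh(c)              # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_h = 3, 4
Wf, Wi, Wo, Wc = (rng.normal(size=(n_h, n_in + n_h)) for _ in range(4))
bf = bi = bo = bc = np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_h), np.zeros(n_h),
                 Wf, Wi, Wo, Wc, bf, bi, bo, bc)
```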
14. The candidate hidden state h̃_t in GRUs is influenced by which gate?
A. Reset gate
B. Update gate
C. Output gate
D. Forget gate
Answer: A. Reset gate
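A NumPy sketch of one GRU step (random weights, one common gating convention) showing that the reset gate r scales the previous hidden state inside the candidate h̃_t, while the update gate z blends old and candidate states:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wr, Wz, Wh, br, bz, bh):
    """One GRU step; the reset gate r scales h_prev inside the candidate."""
    z_in = np.concatenate([x_t, h_prev])
    r = sigmoid(Wr @ z_in + br)   # reset gate
    z = sigmoid(Wz @ z_in + bz)   # update gate
    h_tilde = np.tanh(Wh @ np.concatenate([x_t, r * h_prev]) + bh)  # candidate state
    return (1 - z) * h_prev + z * h_tilde  # blend old state with candidate

rng = np.random.default_rng(0)
n_in, n_h = 3, 4
Wr, Wz, Wh = (rng.normal(size=(n_h, n_in + n_h)) for _ in range(3))
br = bz = bh = np.zeros(n_h)
h = gru_step(rng.normal(size=n_in), np.zeros(n_h), Wr, Wz, Wh, br, bz, bh)
print(h.shape)  # (4,)
```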