
UNIT 4

Suggested MCQs

1. Natural Language Processing with Sequence Models and Neural Networks for
Sentiment Analysis

1. In NLP, a sequence model aims to model dependencies between:


A. Individual letters
B. Entire documents
C. Sequential words
D. Sentiment scores

Answer: C. Sequential words

2. Which type of neural network is best suited for sentiment analysis in text?
A. Convolutional Neural Network (CNN)
B. Recurrent Neural Network (RNN)
C. Fully Connected Network
D. Decision Tree

Answer: B. Recurrent Neural Network (RNN)

3. In a binary sentiment classification task, the output layer typically has:


A. 1 unit
B. 2 units
C. 3 units
D. Variable units depending on input size

Answer: A. 1 unit

4. Which activation function is commonly used in the output layer for sentiment
classification tasks?
A. ReLU
B. Tanh
C. Sigmoid
D. Softmax

Answer: C. Sigmoid
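
For illustration, a minimal sketch of such a classifier in PyTorch (the vocabulary size, layer sizes, and input batch are illustrative assumptions, not part of the questions): the output layer is a single unit passed through a sigmoid, giving the probability of positive sentiment.

```python
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    """Toy binary sentiment classifier: embedding -> RNN -> single sigmoid unit."""
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)        # 1 output unit for binary sentiment

    def forward(self, token_ids):
        x = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        _, h_last = self.rnn(x)                    # final hidden state: (1, batch, hidden_dim)
        logit = self.out(h_last.squeeze(0))        # (batch, 1)
        return torch.sigmoid(logit)                # probability of positive sentiment

# Example usage with a batch of two padded token-ID sequences
batch = torch.randint(0, 10000, (2, 20))
probs = SentimentRNN()(batch)                      # values in (0, 1)
```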

5. Which of the following algorithms is commonly used to train neural networks in sentiment analysis tasks?
A. Backpropagation
B. Forward propagation
C. Decision Trees
D. Linear Regression

Answer: A. Backpropagation

6. The vanishing gradient problem often occurs in RNNs when:


A. Gradients get too large
B. Gradients become too small over time
C. The model has too few layers
D. We use batch normalization

Answer: B. Gradients become too small over time

7. Sentiment analysis often uses which type of loss function?


A. Mean Squared Error
B. Cross-Entropy Loss
C. Hinge Loss
D. KL Divergence

Answer: B. Cross-Entropy Loss
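
As a worked illustration (a sketch in NumPy; the labels and predicted probabilities below are made up), binary cross-entropy penalises confident wrong predictions heavily:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy for 0/1 labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)         # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])                    # gold sentiment labels
y_pred = np.array([0.9, 0.2, 0.7, 0.4])            # model probabilities
print(binary_cross_entropy(y_true, y_pred))        # ~0.40
```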

8. Which of the following embedding techniques is used to convert words into vector
representations for NLP tasks?
A. One-hot encoding
B. Word2Vec
C. Binary encoding
D. Frequency encoding

Answer: B. Word2Vec
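
A minimal sketch of training such vectors with the gensim library (assuming gensim 4.x; the toy corpus and parameter values are illustrative only, and real training needs far more text):

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens
sentences = [
    ["the", "movie", "was", "great"],
    ["the", "film", "was", "terrible"],
    ["i", "loved", "the", "movie"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)  # skip-gram

vec = model.wv["movie"]                           # 50-dimensional dense vector
similar = model.wv.most_similar("movie", topn=2)  # nearest neighbours in vector space
```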

9. What is the purpose of using padding in sequence models for sentiment analysis?
A. To introduce randomness
B. To equalize sequence lengths
C. To improve model performance
D. To make predictions accurate

Answer: B. To equalize sequence lengths
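
Padding can be done by hand before batching, as in this sketch (the pad ID 0 and the example sequences are assumptions):

```python
def pad_sequences(seqs, pad_id=0):
    """Right-pad variable-length token-ID sequences to the longest length in the batch."""
    max_len = max(len(s) for s in seqs)
    return [s + [pad_id] * (max_len - len(s)) for s in seqs]

batch = [[4, 17, 9], [12, 3], [8, 21, 6, 2, 5]]
print(pad_sequences(batch))
# [[4, 17, 9, 0, 0], [12, 3, 0, 0, 0], [8, 21, 6, 2, 5]]
```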

10. In sentiment analysis, a positive sentiment is typically represented by:


A. The number 0
B. The number 1
C. -1
D. Any non-zero value

Answer: B. The number 1


2. Recurrent Neural Networks for Language Modeling

11. Language modeling aims to:


A. Generate new sentences
B. Predict the next word in a sequence
C. Classify text sentiment
D. Summarize text

Answer: B. Predict the next word in a sequence

12. Which of the following architectures is commonly used for language modeling?
A. CNN
B. RNN
C. SVM
D. KNN

Answer: B. RNN

13. The hidden state in RNNs is:


A. A random variable
B. A static vector
C. Updated at each time step
D. Irrelevant for predictions

Answer: C. Updated at each time step

14. GRU is an RNN variant that helps with:


A. Handling images
B. Decreasing training time
C. Addressing the vanishing gradient problem
D. Increasing the number of parameters

Answer: C. Addressing the vanishing gradient problem

15. The formula h_t = σ(W·x_t + U·h_{t−1} + b) in RNNs computes:
A. Output vector
B. Input sequence
C. Next hidden state
D. Initial state

Answer: C. Next hidden state
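
The same update can be written as a single NumPy step (a sketch; the dimensions and the choice of tanh as the nonlinearity σ are assumptions):

```python
import numpy as np

def rnn_step(x_t, h_prev, W, U, b):
    """Next hidden state: h_t = tanh(W·x_t + U·h_{t-1} + b)."""
    return np.tanh(W @ x_t + U @ h_prev + b)

input_dim, hidden_dim = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(hidden_dim, input_dim))       # input-to-hidden weights
U = rng.normal(size=(hidden_dim, hidden_dim))      # hidden-to-hidden weights
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                           # initial hidden state
for x_t in rng.normal(size=(5, input_dim)):        # five time steps of input
    h = rnn_step(x_t, h, W, U, b)                  # hidden state is updated at each step
```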


16. In the context of RNNs, "sequential data" refers to:
A. Randomly arranged data points
B. Data with temporal or ordered structure
C. Independent data points
D. Unstructured text

Answer: B. Data with temporal or ordered structure

17. Which gate in an LSTM cell helps retain long-term dependencies?


A. Input gate
B. Output gate
C. Forget gate
D. Memory gate

Answer: C. Forget gate

18. GRU differs from LSTM in that it:


A. Uses fewer gates
B. Has more memory capacity
C. Is slower to train
D. Requires more data

Answer: A. Uses fewer gates

19. The purpose of a next-word generator model is to:


A. Classify documents
B. Predict the subsequent word in a text sequence
C. Detect sentiment
D. Label entities in text

Answer: B. Predict the subsequent word in a text sequence

20. In NLP, a "language model" assigns a probability to a:


A. Sentence or sequence of words
B. Single character
C. Random number
D. Topic label

Answer: A. Sentence or sequence of words

3. Named Entity Recognition (NER) Using LSTMs

21. Named Entity Recognition (NER) is the process of identifying:


A. The sentiment of a text
B. Specific entities like names, locations, and dates in text
C. The language of a document
D. Keywords in a document

Answer: B. Specific entities like names, locations, and dates in text

22. Which neural network component in LSTM prevents the vanishing gradient
problem?
A. Tanh activation
B. Cell state
C. Bias
D. Learning rate

Answer: B. Cell state

23. The purpose of an LSTM’s output gate is to:


A. Remove irrelevant information
B. Store the cell state
C. Control the output at each time step
D. Initialize hidden states

Answer: C. Control the output at each time step

24. An LSTM cell’s forget gate decides:


A. What information to discard
B. Which information to add
C. The final output
D. The learning rate

Answer: A. What information to discard

25. NER models using LSTMs are popular because they can:
A. Process data quickly
B. Retain contextual information over long sequences
C. Require no training
D. Be trained with minimal data

Answer: B. Retain contextual information over long sequences

26. In an NER system, "John" in "John bought a car" is classified as:


A. Organization
B. Location
C. Person
D. Date

Answer: C. Person

27. A Named Entity Recognition system using an LSTM generally outputs:
A. A probability distribution over possible entities
B. A single label for the whole sequence
C. A random guess
D. Only positive sentiment

Answer: A. A probability distribution over possible entities

28. Which model is often used for Named Entity Recognition?


A. Decision Trees
B. Naive Bayes
C. LSTM
D. CNN

Answer: C. LSTM

29. In a language model, "bi-LSTM" refers to an LSTM that:


A. Processes data only forwards
B. Processes data forwards and backwards
C. Ignores the input sequence
D. Requires more memory

Answer: B. Processes data forwards and backwards
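
A minimal bi-LSTM tagger sketch in PyTorch (the tag set, vocabulary size, and layer sizes are illustrative assumptions): the LSTM reads the sentence forwards and backwards, and each token receives a probability distribution over entity tags, as asked in the NER questions above.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Toy NER tagger: embedding -> bidirectional LSTM -> per-token softmax over tags."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)         # forwards and backwards passes
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # 2x hidden size: both directions

    def forward(self, token_ids):
        x = self.embed(token_ids)                       # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)                             # (batch, seq_len, 2*hidden_dim)
        return torch.softmax(self.out(h), dim=-1)       # tag probabilities per token

tags = ["O", "PERSON", "LOCATION", "ORGANIZATION", "DATE"]
probs = BiLSTMTagger()(torch.randint(0, 5000, (1, 6)))  # shape (1, 6, len(tags))
```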

30. In LSTMs, the "cell state" is responsible for:


A. Storing short-term information
B. Maintaining long-term dependencies
C. Generating random data
D. Decreasing model accuracy

Answer: B. Maintaining long-term dependencies

31. In a simple RNN, the hidden state at each time step is calculated based on:
A. Only the current input
B. The previous hidden state and the current input
C. The next hidden state
D. Only the previous hidden state

 Answer: B. The previous hidden state and the current input

32. The main limitation of an RNN is the issue of:


A. High accuracy
B. Vanishing and exploding gradients
C. Low computational cost
D. High interpretability

Answer: B. Vanishing and exploding gradients

33. In an RNN, the activation function used to calculate the hidden state is typically:
A. Softmax
B. ReLU
C. Tanh or Sigmoid
D. Linear

Answer: C. Tanh or Sigmoid

34. The vanishing gradient problem in RNNs is primarily caused by:


A. Short input sequences
B. Constant gradients
C. Small gradients diminishing during backpropagation
D. A high learning rate

Answer: C. Small gradients diminishing during backpropagation

35. An RNN’s performance on long sequences can be improved using:


A. More hidden layers
B. Batch normalization
C. LSTM or GRU units
D. Higher learning rates

Answer: C. LSTM or GRU units

4. Long Short-Term Memory (LSTM) Networks

36. In an LSTM cell, the forget gate is responsible for:


A. Deciding what information to discard from the cell state
B. Adding new information to the cell state
C. Generating the final output
D. Controlling the learning rate

Answer: A. Deciding what information to discard from the cell state

37. The input gate in an LSTM:


A. Controls the current hidden state
B. Decides what new information to store in the cell state
C. Ignores the cell state
D. Controls gradient flow
Answer: B. Decides what new information to store in the cell state

38. Which activation function is typically used in the forget, input, and output gates of
an LSTM?
A. ReLU
B. Softmax
C. Sigmoid
D. Tanh

Answer: C. Sigmoid

39. The cell state in an LSTM helps to:


A. Track short-term dependencies
B. Maintain long-term dependencies
C. Initialize the model
D. Update the learning rate

Answer: B. Maintain long-term dependencies

40. The output gate in an LSTM controls:


A. The information to be output from the cell state
B. The information to be added to the cell state
C. The learning rate
D. The initialization of weights

Answer: A. The information to be output from the cell state

41. Which component in an LSTM cell is responsible for addressing the vanishing
gradient problem?
A. Forget gate
B. Output gate
C. Cell state
D. Dropout

Answer: C. Cell state
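
Putting the pieces of this section together, here is a single LSTM step in NumPy (a sketch; the weight shapes and the stacked-gate layout are assumptions): the forget, input, and output gates use sigmoids, and the cell state carries information forward with only elementwise updates, which is what mitigates the vanishing gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b stack the forget, input, output, and candidate blocks."""
    z = W @ x_t + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget, input, output gates
    g = np.tanh(g)                                 # candidate cell values
    c_t = f * c_prev + i * g                       # cell state: discard, then add
    h_t = o * np.tanh(c_t)                         # output gate controls what is emitted
    return h_t, c_t

input_dim, hidden_dim = 4, 3
rng = np.random.default_rng(1)
W = rng.normal(size=(4 * hidden_dim, input_dim))
U = rng.normal(size=(4 * hidden_dim, hidden_dim))
b = np.zeros(4 * hidden_dim)
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
h, c = lstm_step(rng.normal(size=input_dim), h, c, W, U, b)
```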

5. Gated Recurrent Unit (GRU) Networks

42. GRU is a variant of RNNs designed to address:


A. Image processing issues
B. The vanishing gradient problem
C. Increase in model complexity
D. Sequential data sorting

Answer: B. The vanishing gradient problem


43. Which gates are present in a GRU?
A. Forget and Input gates
B. Reset and Update gates
C. Output and Input gates
D. Memory and Forget gates

Answer: B. Reset and Update gates

44. The purpose of the reset gate in a GRU is to:


A. Control the amount of past information to forget
B. Update the learning rate
C. Compute the output
D. Initialize the model

Answer: A. Control the amount of past information to forget

45. The update gate in a GRU decides:


A. Whether to forget the previous hidden state
B. How much of the previous hidden state to carry forward
C. The learning rate
D. The training batch size

Answer: B. How much of the previous hidden state to carry forward

46. Compared to an LSTM, a GRU has:


A. More gates
B. Fewer gates
C. No cell state
D. Both B and C

Answer: D. Both B and C

47. Which of the following is a key advantage of GRU over LSTM?


A. Less computationally intensive
B. Higher memory requirements
C. Better accuracy on image data
D. Lower prediction accuracy

Answer: A. Less computationally intensive

48. In GRUs, the hidden state h_t is calculated using:
h_t = (1 − z_t) ∗ h_{t−1} + z_t ∗ h̃_t
Here, z_t is the:
A. Reset gate output
B. Update gate output
C. Forget gate output
D. Learning rate

Answer: B. Update gate output

49. The candidate hidden state h̃_t in GRUs is influenced by which gate?
A. Reset gate
B. Update gate
C. Output gate
D. Forget gate

Answer: A. Reset gate
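
A single GRU step in NumPy ties these two questions together (a sketch; the weight shapes are assumptions and biases are omitted for brevity): the reset gate scales the previous hidden state inside the candidate, and the update gate blends the old state with that candidate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z_t, reset gate r_t, candidate state h~_t."""
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)              # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))   # candidate hidden state
    return (1 - z_t) * h_prev + z_t * h_cand           # h_t = (1 - z_t)*h_{t-1} + z_t*h~_t

input_dim, hidden_dim = 4, 3
rng = np.random.default_rng(2)
Wz, Wr, Wh = (rng.normal(size=(hidden_dim, input_dim)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(hidden_dim, hidden_dim)) for _ in range(3))
h = np.zeros(hidden_dim)
h = gru_step(rng.normal(size=input_dim), h, Wz, Uz, Wr, Ur, Wh, Uh)
```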

50. Compared to a simple RNN, the GRU can:


A. Better capture long-term dependencies
B. Reduce the need for regularization
C. Lower the vanishing gradient effect
D. Both A and C

Answer: D. Both A and C
