Experiment No - 08
Title: Design and implement an RNN for classification of temporal data, sequence-to-sequence data modeling, etc.
Theory:
Prior Concepts:
Recurrent Neural Networks (RNNs) are designed for tasks involving sequential data, where the output at each step depends on the inputs that came before it. RNNs are widely used in domains such as time-series forecasting, sequence classification, and sequence-to-sequence tasks like language translation. In an RNN, information cycles through a loop in the network, allowing it to maintain a memory of previous inputs while processing the current one.
1. Input Layer: Receives sequential data as input, such as time-series data, text, or sensor data. Each time step in the
sequence corresponds to one data point.
2. RNN Layers: Process the input data over time steps, maintaining a hidden state that carries information from
previous time steps. The hidden state is updated at each time step based on the current input and the previous
hidden state. Common RNN variants include Simple RNN, GRU, and LSTM, but here we focus on Simple RNN.
3. Fully Connected (Dense) Layers: After the RNN layers, the hidden states can be passed to one or more fully
connected layers to make final predictions.
4. Output Layer:
○ Classification Task: For temporal data classification, the output is typically a softmax layer that outputs
the probability of each class.
○ Sequence-to-Sequence Task: For tasks like language translation or time-series prediction, the output is a sequence of predictions, one per time step (e.g., the next word or the next value in the series); a minimal sketch of both output variants follows this list.
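For illustration, the two output-layer variants can be sketched in Keras as follows (the layer sizes, sequence length, and number of classes below are assumed example values, not part of this experiment's dataset):

import tensorflow as tf
from tensorflow.keras import layers, models

timesteps, features = 20, 8     # assumed input shape: 20 time steps, 8 features per step
num_classes = 10                # assumed number of classes for the classification variant

# Classification: the RNN returns only its final hidden state,
# and a softmax Dense layer maps it to class probabilities.
clf = models.Sequential([
    layers.Input(shape=(timesteps, features)),
    layers.SimpleRNN(64, return_sequences=False),
    layers.Dense(num_classes, activation='softmax'),
])

# Sequence-to-sequence: the RNN returns a hidden state at every time step,
# and a Dense layer applied per step gives one prediction per time step.
seq2seq = models.Sequential([
    layers.Input(shape=(timesteps, features)),
    layers.SimpleRNN(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),
])

clf.summary()
seq2seq.summary()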
Forward Propagation:
● The input sequence is processed step by step by the RNN: each input x_t at time step t is combined with the previous hidden state h_{t-1} to produce a new hidden state h_t (this recurrence is sketched in code after this list).
● In classification tasks, after processing the entire sequence, the final hidden state is passed through a fully
connected layer to output class probabilities.
● In sequence-to-sequence tasks, the output is a sequence of predictions generated at each time step based on the
current hidden state.
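The recurrence described above is h_t = tanh(W_xh · x_t + W_hh · h_{t-1} + b_h). A minimal NumPy sketch of this forward pass, using assumed random weights and shapes purely for illustration:

import numpy as np

def simple_rnn_forward(x_seq, W_xh, W_hh, b_h):
    # Run a Simple RNN over one sequence and return the hidden state at every step.
    h = np.zeros(W_hh.shape[0])                   # initial hidden state h_0 = 0
    states = []
    for x_t in x_seq:                             # one iteration per time step
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)  # h_t from x_t and h_{t-1}
        states.append(h)
    return np.stack(states)

# Assumed illustrative shapes: 6 time steps, 4 input features, 8 hidden units
rng = np.random.default_rng(0)
x_seq = rng.normal(size=(6, 4))
W_xh = rng.normal(size=(8, 4))
W_hh = rng.normal(size=(8, 8))
b_h = np.zeros(8)

states = simple_rnn_forward(x_seq, W_xh, W_hh, b_h)
print(states.shape)        # (6, 8): one hidden state per time step
final_state = states[-1]   # for classification, only the final state feeds the Dense layer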
Loss Function:
● Cross-Entropy Loss: Used for classification tasks to measure the difference between predicted class probabilities
and the actual class labels.
● Mean Squared Error (MSE): Used in sequence-to-sequence tasks like time-series forecasting, where the goal is
to predict continuous values.
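Both losses are available in tf.keras.losses; a quick numerical illustration with made-up labels and predictions (not experiment data):

import tensorflow as tf

# Binary cross-entropy for a two-class temporal classification task
y_true_cls = tf.constant([1.0, 0.0, 1.0])      # actual class labels
y_pred_cls = tf.constant([0.9, 0.2, 0.6])      # predicted class probabilities
bce = tf.keras.losses.BinaryCrossentropy()
print("Cross-entropy:", float(bce(y_true_cls, y_pred_cls)))

# Mean squared error for sequence-to-sequence forecasting of continuous values
y_true_seq = tf.constant([[0.5, 0.7, 0.9]])    # actual next values in the sequence
y_pred_seq = tf.constant([[0.4, 0.8, 1.1]])    # predicted next values
mse = tf.keras.losses.MeanSquaredError()
print("MSE:", float(mse(y_true_seq, y_pred_seq)))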
Backpropagation:
● Gradients of the loss function are computed with respect to the RNN weights using Backpropagation Through
Time (BPTT), a special case of backpropagation adapted for RNNs to account for the sequence structure.
● The weights are updated using optimization algorithms like Adam or Stochastic Gradient Descent (SGD) to
minimize the loss.
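Keras runs BPTT automatically inside model.fit(); the sketch below makes a single manual update step explicit with tf.GradientTape, on an assumed toy batch of random sequences:

import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed toy batch: 16 sequences, 20 time steps, 4 features, binary labels
x = tf.random.normal((16, 20, 4))
y = tf.cast(tf.random.uniform((16, 1)) > 0.5, tf.float32)

model = models.Sequential([
    layers.Input(shape=(20, 4)),
    layers.SimpleRNN(8),
    layers.Dense(1, activation='sigmoid'),
])
loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

with tf.GradientTape() as tape:
    y_pred = model(x, training=True)
    loss = loss_fn(y, y_pred)

# Gradients flow backwards through every time step of the SimpleRNN (BPTT)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("Loss for this batch:", float(loss))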
Training:
● The RNN is trained by feeding in sequences of data and computing the loss between the predicted output and the
target labels or sequences.
● Gradients are backpropagated, and weights are updated over multiple epochs to improve performance.
Activation Functions:
● Tanh or ReLU: Typically used in the hidden layers of RNNs to introduce non-linearity.
● Softmax: Used in the output layer for classification tasks, providing a probability distribution over the possible
output classes.
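A quick numerical check of these activations on arbitrary example values:

import numpy as np

z = np.array([2.0, -1.0, 0.5])               # arbitrary pre-activation values

tanh_out = np.tanh(z)                         # squashed into (-1, 1)
relu_out = np.maximum(0.0, z)                 # negative values set to zero
softmax_out = np.exp(z) / np.exp(z).sum()     # probability distribution over classes

print("tanh:   ", tanh_out)
print("ReLU:   ", relu_out)
print("softmax:", softmax_out, "sum =", softmax_out.sum())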
Code: -
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
import matplotlib.pyplot as plt

max_features = 10000  # Keep only the top 10,000 most common words
maxlen = 500  # Cut texts after this number of words (among top max_features most common words)

# Load the IMDB movie-review dataset (assumed here, consistent with the max_features/maxlen setup)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Simple RNN model for binary sequence classification
model = models.Sequential()
model.add(layers.Embedding(max_features, 32))  # 32-dim word embedding (assumed size)
model.add(layers.SimpleRNN(64, return_sequences=False))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_split=0.2)

# Plot training and validation accuracy
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')
plt.show()

# Plot training and validation loss
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')
plt.show()
                      R1            R2             R3
DOP    DOS    Conduction    File Record    Viva-Voce    Total        Signature
              (5 Marks)     (5 Marks)      (5 Marks)    (15 Marks)