Presentation Report
Manoj M
4SF21AD026
Under the Guidance of
Dr. Gurusiddayya Hiremath
Associate Professor
Department of Computer Science and Engineering
(Artificial Intelligence and Machine Learning)
SCEM, Mangaluru
Academic Year: 2024-25
Recurrent Neural Networks (RNNs)
1. Definition:
RNNs are a class of neural networks designed for sequential data. Unlike feedforward
networks, they have connections forming directed cycles, enabling them to retain
information from previous inputs.
2. Importance:
o Suited for time-series data, natural language processing (NLP), and speech
recognition.
3. Key Feature:
o A hidden state that acts as memory, carrying information from earlier time steps into each new computation (a minimal sketch follows below).
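As a concrete illustration of this memory, the sketch below unrolls the basic recurrence h_t = tanh(W_xh * x_t + W_hh * h_{t-1} + b) over a toy sequence in PyTorch; all sizes and the random data are assumptions made only for illustration.

import torch

# One recurrence step per input: the new hidden state h mixes the
# current input x_t with the previous hidden state (the "memory").
# All dimensions here are arbitrary, chosen only for illustration.
input_size, hidden_size = 4, 8
W_xh = torch.randn(input_size, hidden_size) * 0.1
W_hh = torch.randn(hidden_size, hidden_size) * 0.1
b = torch.zeros(hidden_size)

h = torch.zeros(hidden_size)             # initial memory
for x_t in torch.randn(5, input_size):   # a toy 5-step sequence
    h = torch.tanh(x_t @ W_xh + h @ W_hh + b)  # h carries past context forward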
Types of RNNs
1. One-to-One
• Description:
This is the simplest type of neural network, where there is a single input and a single
output. Although not strictly an RNN, this structure is included as the base case.
• Example Applications:
o Standard classification, e.g., image classification: one input, one predicted label (a feedforward sketch follows this section).
• Key Characteristics:
o No temporal dependencies.
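Because there is no recurrence here, the One-to-One case can be sketched as a plain feedforward network; the feature size (64) and class count (10) below are assumed values, not taken from any specific application.

import torch
import torch.nn as nn

# One-to-One: a single fixed-size input maps to a single output,
# with no memory shared between inputs. Sizes are illustrative.
model = nn.Sequential(
    nn.Linear(64, 32),   # 64 input features (assumed)
    nn.ReLU(),
    nn.Linear(32, 10),   # 10 class scores (assumed)
)

x = torch.randn(1, 64)   # one input
scores = model(x)        # one output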
2. One-to-Many
• Description:
In this structure, a single input generates a sequence of outputs. The input is processed,
and the network predicts multiple outputs over time.
• Example Applications:
o Image captioning: a single image generates a sequence of words.
o Music or poetry generation from a single seed input (see the sketch after this section).
• Key Characteristics:
o Requires the network to learn how to expand a single input into a meaningful
sequence.
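One common way to realize this expansion is to map the single input to the initial hidden state and then generate outputs step by step, feeding each output back in as the next input. In the sketch below, the sizes, the step count, and the zero vector used as a start token are all assumptions for illustration.

import torch
import torch.nn as nn

# One-to-Many sketch: one input seeds the hidden state, then the
# network unrolls to produce a sequence of outputs.
input_size, hidden_size, out_size, steps = 16, 32, 10, 5

encoder = nn.Linear(input_size, hidden_size)  # map the one input to h_0
cell = nn.RNNCell(out_size, hidden_size)      # recurrence over generated steps
readout = nn.Linear(hidden_size, out_size)    # hidden state -> output

x = torch.randn(1, input_size)                # the single input
h = torch.tanh(encoder(x))                    # initial hidden state
y = torch.zeros(1, out_size)                  # assumed start-of-sequence token
outputs = []
for _ in range(steps):
    h = cell(y, h)                            # update the memory
    y = readout(h)                            # emit the next output
    outputs.append(y)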
3. Many-to-One
• Description:
In this structure, a sequence of inputs produces a single output. The network processes
the entire input sequence and generates a consolidated result at the end.
• Example Applications:
o Sentiment analysis: a sequence of words yields a single sentiment label (a sketch follows this section).
• Key Characteristics:
o The final hidden state carries the combined information from all prior inputs.
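A minimal Many-to-One sketch, assuming arbitrary sizes and a two-class output: the RNN reads the whole sequence, and only its final hidden state feeds the classifier.

import torch
import torch.nn as nn

# Many-to-One: the final hidden state summarizes the entire sequence.
input_size, hidden_size, num_classes = 16, 32, 2

rnn = nn.RNN(input_size, hidden_size, batch_first=True)
classifier = nn.Linear(hidden_size, num_classes)

x = torch.randn(1, 20, input_size)   # batch of 1, sequence of 20 steps
_, h_n = rnn(x)                      # h_n: final hidden state, (1, 1, hidden)
logits = classifier(h_n.squeeze(0))  # one prediction for the whole sequence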
4. Many-to-Many (Two Variants)
This structure processes sequences where both input and output are sequences. It can be further
categorized into:
a) Equal-Length (Synchronized)
• Description:
The number of outputs matches the number of inputs. Each input corresponds to a
specific output.
• Example Applications:
o Video Frame Labeling: Each frame of a video is tagged with a specific label
(e.g., identifying objects in each frame).
• Key Characteristics:
o The network must maintain a strict mapping between input and output sequences (a sketch follows below).
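A minimal sketch of this synchronized variant, assuming arbitrary sizes: the RNN emits one hidden state per input step, and a shared linear layer turns each of them into a per-step label, as in frame labeling.

import torch
import torch.nn as nn

# Synchronized Many-to-Many: one output per input step.
input_size, hidden_size, num_labels = 16, 32, 5

rnn = nn.RNN(input_size, hidden_size, batch_first=True)
per_step = nn.Linear(hidden_size, num_labels)

frames = torch.randn(1, 10, input_size)  # 10 "frames" (assumed length)
out, _ = rnn(frames)                     # out: (1, 10, hidden), a state per step
labels = per_step(out)                   # (1, 10, num_labels): a label per frame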
b) Unequal-Length
• Description:
The input and output sequences differ in length. The network learns to align and process
sequences of varying sizes.
• Example Applications:
o Machine translation: an input sentence and its translation usually differ in length.
• Key Characteristics:
o Commonly realized with an encoder-decoder arrangement: the input sequence is first summarized into a fixed state, and the output sequence is then generated from that summary (a sketch follows below).
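A minimal encoder-decoder sketch for unequal lengths: the encoder compresses a 7-step input into its final hidden state, and the decoder unrolls for 4 steps from that state. The lengths, sizes, and the zero start token are assumptions for illustration.

import torch
import torch.nn as nn

# Encoder-decoder: input and output sequences have different lengths.
in_size, hid, out_size = 16, 32, 10
src_len, tgt_len = 7, 4                      # deliberately unequal lengths

encoder = nn.RNN(in_size, hid, batch_first=True)
decoder = nn.RNNCell(out_size, hid)
readout = nn.Linear(hid, out_size)

src = torch.randn(1, src_len, in_size)
_, h = encoder(src)                          # (1, 1, hid): summary of the input
h = h.squeeze(0)                             # (1, hid) for the decoder cell
y = torch.zeros(1, out_size)                 # assumed start-of-sequence token
outputs = []
for _ in range(tgt_len):                     # generate a shorter output sequence
    h = decoder(y, h)
    y = readout(h)
    outputs.append(y)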
Applications of RNNs
1. One-to-Many:
o Creative applications like music or poetry generation.
2. Many-to-One:
o Sequence classification tasks such as sentiment analysis.
3. Many-to-Many:
o Sequence labeling and transduction tasks such as video frame labeling and machine translation.
Limitations
• Vanishing and Exploding Gradients: Training RNNs on long sequences is difficult because gradients can shrink toward zero or grow without bound as they are propagated back through many time steps; a common mitigation for the exploding case, gradient clipping, is sketched below.
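Exploding gradients in particular are often mitigated by clipping the gradient norm before each optimizer step. The model, data, and loss in the sketch below are placeholders chosen only to show the recipe, not part of any specific application.

import torch
import torch.nn as nn

# Gradient clipping: rescale gradients whose norm exceeds a threshold
# before the optimizer step, preventing a single step from blowing up.
model = nn.RNN(8, 16, batch_first=True)      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(1, 50, 8)                    # a long toy sequence
out, _ = model(x)
loss = out.pow(2).mean()                     # placeholder loss

optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()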
Conclusion
The variety of RNN structures enables them to handle a wide range of sequence-based tasks effectively. However, their limitations, most notably the difficulty of capturing long-term dependencies, have motivated innovations such as LSTMs, GRUs, and attention mechanisms.
PPT SLIDES:
GitHub (CLASS ASSESSMENT):