HKBK College of Engineering Department of Computer Science and Engineering
A Technical Seminar On
“BIDIRECTIONAL ENCODER REPRESENTATIONS FROM TRANSFORMERS”
By
Sheikh Junaid Nazir (1HK16CS143)
Guided By
J. Suneetha
Associate Professor
Contents
Introduction
Related Work
Methodology/Architecture
Applications/Usage
Advantages/Disadvantages
Conclusion and Future Scope
References
Introduction
BERT (Bidirectional Encoder Representations from Transformers) is a
recent paper published by researchers at Google AI Language.
It has caused a stir in the Machine Learning community by presenting
state-of-the-art results in a wide variety of NLP tasks, including
Question Answering (SQuAD v1.1), Natural Language Inference, and
others.
BERT’s key technical innovation is applying the bidirectional training
of the Transformer, a popular attention-based model, to language
modelling. This is in contrast to previous efforts, which looked at a
text sequence either from left to right or as a shallow combination of
left-to-right and right-to-left training.
The paper’s results show that a bidirectionally trained language model
can develop a deeper sense of language context and flow than
single-direction language models.
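As a minimal sketch of this bidirectional training objective, the snippet below uses the Hugging Face transformers library (an assumption; the paper does not prescribe any particular toolkit) and the public bert-base-uncased checkpoint to predict a masked word. The prediction draws on words both before and after the mask, which is exactly what a single-direction model cannot do.

```python
# Minimal sketch: BERT's masked-language-model training lets it use context
# from BOTH sides of a masked position when predicting the missing word.
# Assumes the Hugging Face `transformers` library and the public
# "bert-base-uncased" checkpoint are available locally or downloadable.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Words to the left ("went to the") and to the right ("to buy a gallon of milk")
# of [MASK] both influence the ranked predictions printed here.
for result in fill_mask("The man went to the [MASK] to buy a gallon of milk."):
    print(f"{result['token_str']:>12}  score={result['score']:.3f}")
```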
Related Work
BERT has its origins in pre-training contextual representations,
including Semi-supervised Sequence Learning, Generative Pre-Training,
ELMo, and ULMFiT. Unlike these earlier models, BERT is a deeply
bidirectional, unsupervised language representation, pre-trained using
only a plain text corpus. Context-free models such as word2vec or GloVe
generate a single embedding for each word in the vocabulary, whereas
BERT generates a representation for each word that depends on the other
words in the sentence.
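The contrast between context-free and contextual representations can be seen directly by extracting BERT’s output vectors for the same surface word in two different sentences. The sketch below is illustrative only and assumes PyTorch and the Hugging Face transformers library; the sentences and the helper function embedding_of are hypothetical examples, not part of the paper.

```python
# Sketch of the context-free vs. contextual distinction described above.
# A context-free model (word2vec/GloVe) assigns "bank" one fixed vector;
# BERT's output vector for "bank" changes with the surrounding sentence.
# Assumes PyTorch and the Hugging Face `transformers` library.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence, word):
    """Return BERT's contextual vector for `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, 768)
    position = inputs["input_ids"][0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

river_bank = embedding_of("He sat on the bank of the river.", "bank")
money_bank = embedding_of("She deposited cash at the bank.", "bank")

# Same surface word, noticeably different vectors -> contextual representation.
similarity = torch.nn.functional.cosine_similarity(river_bank, money_bank, dim=0)
print(f"cosine similarity between the two 'bank' vectors: {similarity.item():.3f}")
```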