NLP 1
NATURAL LANGUAGE PROCESSING (NLP)
J SWAPNA
20ME1A0590
Concepts:
Reference Link:
https://youtu.be/6M5VXKLf4D4
NATURAL LANGUAGE PROCESSING (NLP):
2. Language Translation: NLP assists translation companies by providing computer-generated suggestions. It helps convert text from one language (like English) into another (like German or Mandarin). Sometimes it can even translate automatically, though the results are not always perfect.
3. Search Engines: When you start typing in a search box and it guesses what you're looking for, that's NLP at work. It also predicts which website or information you want to find.
4. Talking to Computers: NLP makes it possible for computers to understand what you're saying and follow
your instructions. This is why virtual assistants like Alexa, Siri, or Cortana can respond when you talk to them.
5. Chatbots: You might have talked to a computer program that tries to chat with you. While they might not
be perfect at having long conversations, they're helpful for simple, back-and-forth talks about specific topics,
like customer service questions.
APPLICATIONS OF NLP
A Brief History of Deep Learning for NLP:
❖ Starting in 2011, scientists at the University of Toronto and Microsoft Research made a big
breakthrough using deep learning for language. They trained a computer to understand lots
of words from spoken human speech.
❖ Then, in 2012, there was another success in Toronto with a program called AlexNet that was
excellent at understanding images. It was much better than older methods.
❖ Around 2015, the success with images started helping with language. Computers learned
how to translate languages using deep learning, and it was really accurate. This made it
possible to do translations on phones without needing a strong internet connection.
❖ In 2016 and 2017, computers using deep learning got even better at language. They
became faster and more accurate than the old ways. The rest of the chapter will explain
how they did it.
One-Hot Representations of Words:
When we want computers to understand and work with human language, one
common way is to turn each word into a vector of numbers. Imagine a big chart
with one row per word in the text and one column per word in the vocabulary.
Each row contains a single 1, in the column belonging to that word, and 0s
everywhere else. The more distinct words there are, the bigger the chart: with
100 different words in your writing, each vector has 100 positions; with 1000
different words, each vector has 1000 positions, and so on. This helps
computers process and make sense of words.
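The chart idea above can be sketched in a few lines of Python. The small vocabulary below is a made-up example for illustration, not from the notes:

```python
# Minimal sketch of one-hot word representations.
# Each word maps to a vector with a single 1 at that word's position.
vocab = ["king", "queen", "man", "woman"]  # assumed example vocabulary

def one_hot(word, vocab):
    """Return a vector of len(vocab) zeros with a 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("queen", vocab))  # [0, 1, 0, 0]
```

Note that every vector has exactly one 1 and is as long as the vocabulary, which is why one-hot representations grow quickly and carry no information about meaning.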
Reference Link:
https://youtu.be/v_4KWmkwmsU
Word Vectors
➢ Word vectors are dense, packed representations of words, unlike the sparse one-hot
codes. While one-hot codes only show which word occurs, word vectors
also capture what words mean. This extra information makes word vectors
useful in many ways.
➢ When we make word vectors, we want each word to have a special place in a
big space with many dimensions. At first, each word gets a random spot in
this space. But, by looking at the words near a specific word in real language,
we can slowly move their spots in the space to show what the words mean.
➢ Imagine a small example: we start with the first word and look at each word
around it. Right now, let's say the word "word" is our focus. The words "a,"
"know," and "shall" on the left, and "by," "company," and "the" on the right,
make up its "context." We do this for each word in our text, using a window of
three words on each side.
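The sliding window described above can be sketched in Python. The sentence is the famous line "you shall know a word by the company it keeps," which matches the context words listed in the notes; the window of three words per side is the same as in the example:

```python
# Slide a context window over a sentence, pairing each target word
# with the words up to `window` positions to its left and right.
sentence = "you shall know a word by the company it keeps".split()

def contexts(tokens, window=3):
    """Yield (target, context_words) pairs for each token."""
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]
        right = tokens[i + 1:i + 1 + window]
        yield target, left + right

for target, ctx in contexts(sentence):
    if target == "word":
        print(ctx)  # ['shall', 'know', 'a', 'by', 'the', 'company']
```

For the focus word "word," this recovers exactly the context from the notes: "a," "know," and "shall" on the left and "by," "company," and "the" on the right.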
❑ Imagine a special space called a vector space, often shown as a diagram.
This space can have lots of dimensions, each capturing a different
aspect of meaning; we call it an n-dimensional vector space. Depending on how
many words we're working with and what we're trying to do, we might
use spaces with a few, many, or even thousands of dimensions.
❑ Each word, like "king," gets its own spot in this space. In a 3-dimensional
space, a word's position is given by three coordinates: x, y, and z.
For example, if "king" is at x = 1.1, y = 2.4, and
z = 3.0, we can write its vector as [1.1, 2.4, 3.0]. This tells us where
words sit and hints at what they mean.
❑ In this space, words that are close together have similar meanings.
This makes it easier for computers to understand word meanings. This
idea is like putting related words near each other on a map, so we
can see their connections.
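One common way to measure how "close together" two word vectors are is cosine similarity. A small sketch, using the [1.1, 2.4, 3.0] coordinates for "king" from the notes; the vectors for "queen" and "apple" are invented for illustration (real word vectors are learned from text):

```python
import math

# Toy 3-D word vectors; "queen" is placed near "king", "apple" far away.
vectors = {
    "king":  [1.1, 2.4, 3.0],   # coordinates from the notes
    "queen": [1.0, 2.5, 2.9],   # assumed: similar meaning, nearby point
    "apple": [8.0, 0.3, 1.2],   # assumed: unrelated meaning, distant point
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much smaller
```

Words with similar meanings end up with similarity near 1, which is exactly the "related words near each other on a map" idea described above.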
Localist Versus Distributed Representations:
https://youtu.be/aWFllV6WsAs
Google Duplex:
https://youtu.be/D5VN56jQMWM
❑ A really impressive example of using deep learning for language was shown by Google in May
2018, when they introduced Google Duplex at their event. In the demo, Google Assistant called a
restaurant to make a reservation, and it sounded like a real person talking. The audience was
amazed because Duplex talked just like a human, complete with pauses and filler sounds.
❑ Even though this was a demonstration and not live, it showed how powerful deep learning can
be. Think about the conversation between Duplex and the person at the restaurant: Duplex had
to understand what was said, even with different accents and background noise.
❑ First, it needed to quickly recognize spoken words, even with noise and accents. Then, it had to
understand what was said and decide what to do. All of this was made possible by a
combination of advanced technology. So, it's like Google made a computer that can
understand and talk like a human, even in difficult situations.