
DEEP LEARNING AND
NATURAL LANGUAGE PROCESSING (NLP)

J SWAPNA
20ME1A0590
Concepts:

▶ Deep Learning for Natural Language Processing
▶ Deep Learning Networks Learn Representations Automatically
▶ Natural Language Processing
▶ A Brief History of Deep Learning for NLP
▶ One-Hot Representations of Words
▶ Word Vectors, Word-Vector Arithmetic
▶ Localist Versus Distributed Representations
▶ Elements of Natural Human Language
▶ Google Duplex
Deep Learning Networks Learn Representations
Automatically

▶ Deep learning can be defined as the layering of simple algorithms, called
artificial neurons, into networks of several layers.

Reference Link:
https://youtu.be/6M5VXKLf4D4
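The idea of "layering simple algorithms called artificial neurons into networks of several layers" can be sketched in a few lines of plain Python. This is a minimal illustration, not a real deep learning implementation: the weights are random, the layer sizes are arbitrary, and no training happens.

```python
import random

random.seed(0)  # fixed seed so the toy example is repeatable

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs plus a bias,
    # passed through a simple nonlinearity (ReLU here).
    return max(0.0, sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, n_neurons):
    # One layer is just several neurons applied to the same inputs,
    # each with its own (here random, untrained) weights.
    return [neuron(inputs, [random.uniform(-1, 1) for _ in inputs], 0.0)
            for _ in range(n_neurons)]

x = [0.5, -1.2, 3.0]   # toy input features
h1 = layer(x, 4)       # first layer of neurons
h2 = layer(h1, 4)      # second layer, built on the first layer's outputs
print(len(h2))         # 4: one output per neuron in the final layer
```

Stacking `layer` calls like this, where each layer consumes the previous layer's outputs, is what makes the network "deep".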
NATURAL LANGUAGE PROCESSING (NLP):

Natural language processing (NLP) is a field of
research that combines computer science,
linguistics, and artificial intelligence. It deals with
using computers to understand and work with
human language, both spoken and written, so that
computers can perform tasks on their own or help
humans complete tasks more easily. In short, it is
about teaching machines to understand and use
human language, making things simpler and more
efficient.
Applications of Natural Language Processing(NLP):
1. Sorting Documents: NLP helps organize documents, such as emails, tweets, or movie reviews, into
specific groups based on their content. For example, it can figure out whether a document is urgent,
positive in sentiment, or relevant to predicting a company's stock price.

2. Language Translation: NLP assists companies that translate languages by giving them computer-generated
suggestions. It helps convert text from one language (like English) into another (like German or
Mandarin). Sometimes it can even do this automatically, though the results are not always perfect.

3. Search Engines: You know when you start typing in a search box, and it guesses what you're looking for?
NLP helps with that. It also predicts what website or information you want to find.

4. Talking to Computers: NLP makes it possible for computers to understand what you're saying and follow
your instructions. This is why virtual assistants like Alexa, Siri, or Cortana can respond when you talk to them.

5. Chatbots: You might have talked to a computer program that tries to chat with you. While they might not
be perfect at having long conversations, they're helpful for simple, back-and-forth talks about specific topics,
like customer service questions.

A Brief History of Deep Learning for NLP:

❖ Starting in 2011, scientists at the University of Toronto and Microsoft Research made a big
breakthrough using deep learning for language: they trained a computer to recognize a large
vocabulary of words from spoken human speech.

❖ Then, in 2012, there was another success in Toronto with a program called AlexNet that was
excellent at recognizing images. It was much better than older methods.

❖ Around 2015, the success with images started helping with language. Computers learned
how to translate languages using deep learning, and it was really accurate. This made it
possible to do translations on phones without needing a strong internet connection.

❖ In 2016 and 2017, computers using deep learning got even better at language. They
became faster and more accurate than the old ways. The rest of the chapter will explain
how they did it.
One-Hot Representations of Words:

When we want computers to understand and work with human language, one
common approach is to turn words into numbers. Imagine each word as a row in a
big chart, where each column stands for one word in the vocabulary. A word's row
holds a 1 in its own column and 0 everywhere else, which is why this is called a
"one-hot" representation. If you have 100 different words in your writing, the chart
will have 100 rows; with 1,000 different words, 1,000 rows, and so on. This lets
computers process and make sense of words as numbers.

Reference Link:
https://youtu.be/v_4KWmkwmsU
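The chart described above can be sketched directly in Python. The five-word vocabulary here is made up for illustration (borrowing words from the context example later in these slides): each word's vector is as long as the vocabulary, with a single 1 marking that word's column.

```python
# Hypothetical 5-word vocabulary; a real one might have thousands of words.
vocab = ["a", "know", "shall", "word", "by"]

def one_hot(word, vocab):
    # A vector as long as the vocabulary: 1 at the word's own index,
    # 0 everywhere else -- one "hot" position per word.
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

print(one_hot("word", vocab))  # [0, 0, 0, 1, 0]
```

With 100 vocabulary words each vector would have 100 entries, which is why one-hot representations grow large and sparse as the vocabulary grows.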
Word Vectors

➢ Word vectors are dense, packed representations of words, unlike the sparse
one-hot codes. While a one-hot code only marks which word occurred, a word
vector also captures what the word means. This extra information makes word
vectors useful in various ways.
➢ When we make word vectors, we want each word to have a special place in a
big space with many dimensions. At first, each word gets a random spot in
this space. But, by looking at the words near a specific word in real language,
we can slowly move their spots in the space to show what the words mean.
➢ Imagine a small example: we start with the first word and look at each word
around it. Right now, let's say the word "word" is our focus. The words "a,"
"know," and "shall" on the left, and "by," "company," and "the" on the right,
make up its "context." We do this for each word in our text, using a window of
three words on each side.
❑ Imagine a special space called a vector space. This space can have many
dimensions, each capturing a different aspect of meaning; we call it an
n-dimensional vector space. Depending on how many words we are working
with and what we are trying to do, we might use spaces with a few, many,
or even thousands of dimensions.

❑ Each word, like "king," gets its own spot in this space. In a
3-dimensional space, that spot is given by three coordinates: x, y, and z.
For example, if "king" is at x = 1.1, y = 2.4, and z = 3.0, we can write
it as [1.1, 2.4, 3.0]. This helps us describe where words sit and what
they mean.

❑ In this space, words that are close together have similar meanings.
This makes it easier for computers to understand word meanings. This
idea is like putting related words near each other on a map, so we
can see their connections.
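The "close together means similar meaning" idea can be made concrete with distances. This toy sketch uses the [1.1, 2.4, 3.0] coordinates for "king" from the text; the other two vectors are made up purely for illustration.

```python
import math

# Toy 3-dimensional word vectors. "king" uses the coordinates from the
# text; "queen" and "apple" are invented to illustrate near vs. far.
vectors = {
    "king":  [1.1, 2.4, 3.0],
    "queen": [1.0, 2.6, 2.9],   # close to "king": related meaning
    "apple": [8.0, 0.1, 0.5],   # far from "king": unrelated meaning
}

def distance(a, b):
    # Straight-line (Euclidean) distance between two spots in the space.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(vectors["king"], vectors["queen"]))  # small
print(distance(vectors["king"], vectors["apple"]))  # large
```

In a trained model these coordinates are learned from context windows like the one described above, so that words used in similar company end up at nearby spots.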
Localist Versus Distributed Representations:

Let's compare word vectors to the one-hot representations that have
long been used in NLP. Word vectors store word meanings in a
distributed way, spreading the meaning across the many dimensions of
a space. It's as if the meaning of words is softly spread out as we
move around this space.
One-hot representations, on the other hand, are localist: like flags,
they show whether a word is there or not, but they don't capture any
further detail. Word vectors, by contrast, are far more detailed.
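The difference shows up when we compare words with a dot product. The vectors below are invented for illustration: any two distinct one-hot codes have a dot product of 0, so they say nothing about similarity, while distributed vectors overlap more for related words.

```python
def dot(a, b):
    # Dot product: a simple overlap score between two vectors.
    return sum(x * y for x, y in zip(a, b))

# Localist (one-hot): different words never overlap, so the score is
# always 0 -- one-hot codes carry no information about meaning.
one_hot_cat = [1, 0, 0]
one_hot_dog = [0, 1, 0]
print(dot(one_hot_cat, one_hot_dog))  # 0

# Distributed (dense, toy values): related words share many dimensions,
# so "cat" overlaps "dog" far more than it overlaps "car".
dense_cat = [0.9, 0.8, 0.1]
dense_dog = [0.8, 0.9, 0.2]
dense_car = [0.1, 0.0, 0.9]
print(dot(dense_cat, dense_dog))  # high overlap
print(dot(dense_cat, dense_car))  # low overlap
```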
Elements of Natural Human Language:

▪ So far we have looked at just one part of human
language: words. But words are building blocks:
they are made from smaller parts, and they come
together to form more complex pieces of language.
▪ Think of it like building with blocks: words are big
blocks that are themselves made from even smaller
pieces. We will start with these smaller parts and
then build up to the bigger ones.
Word Vector Arithmetic:
✓ A well-known example involves calculating the direction and
distance between the words "man" and "woman." This
movement through the vector space reflects the
concept of gender, and it is shown as green arrows in
the cube figure. If we follow these green arrows from any
word that refers to males (like "king" or "uncle"), we
arrive at a spot near words that refer to
females (like "queen" or "aunt").

https://youtu.be/aWFllV6WsAs
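The "gender arrow" can be demonstrated with simple vector addition and subtraction. These 2-dimensional vectors are made up so that one axis cleanly separates male from female words; real learned vectors are only approximately this tidy.

```python
# Toy 2-D vectors, invented so that the second axis encodes gender.
vectors = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [3.0, 0.0],
    "queen": [3.0, 1.0],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

# The "gender arrow": the direction that takes you from "man" to "woman".
gender = sub(vectors["woman"], vectors["man"])  # [0.0, 1.0]

# Following that same arrow from "king" lands on "queen".
result = add(vectors["king"], gender)
print(result)  # [3.0, 1.0], the spot where "queen" sits
```

This is the classic "king - man + woman ≈ queen" arithmetic; with real trained vectors the result is only *near* "queen", not exactly on it.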
Google Duplex:
https://youtu.be/D5VN56jQMWM

❑ A really impressive example of using deep learning for language was shown by Google in May
2018, when it introduced Google Duplex at its I/O conference. Google Assistant could call a
restaurant to make a reservation, and it sounded like a real person talking. The audience was
amazed because Duplex talked just like a human, with pauses and thinking sounds.
❑ Even though this was a demonstration and not live, it showed how powerful deep learning can
be. Think about the conversation between Duplex and the person at the restaurant: Duplex had
to understand what was said, even with different accents and background noise.
❑ First, it needed to quickly recognize spoken words, even with noise and accents. Then, it had to
understand what was said and decide what to do. All of this was made possible by a
combination of advanced technology. So, it's like Google made a computer that can
understand and talk like a human, even in difficult situations.
