
Seminar Paper On Deep Learning 21

The document discusses deep learning and its applications. It begins with an introduction to artificial intelligence and machine learning, then covers the basics of deep learning and how it works, including deep learning methods and neural networks. It surveys applications of deep learning such as driverless cars and voice assistants, and closes with the future of deep learning, concluding that deep learning is achieving powerful results and being applied in many areas.

Uploaded by Robel Asfaw

SEMINAR INDIVIDUAL ASSIGNMENT

NAME: TADIWOS ANTENEH    ID: 1102080

Section-B

Submitted to: Mrs. Solome

ACKNOWLEDGMENT
Abstract
Deep learning is a machine learning technique that teaches computers to do what
comes naturally to humans: learn by example. Deep learning is a key technology
behind driverless cars, enabling them to recognize a stop sign, or to distinguish a
pedestrian from a lamppost. It is the key to voice control in consumer devices like
phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of
attention lately and for good reason. It’s achieving results that were not possible
before.

Contents
Chapter 1: Introduction
    What is Artificial Intelligence?
    What is Machine Learning?
Chapter 2: The Basics of Deep Learning
Chapter 3: How Deep Learning Works
    Deep learning methods
    Deep learning neural networks
Chapter 4: Deep Learning Applications
Chapter 5: The Future of Deep Learning
Conclusion
References

Chapter 1
Introduction

Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with human-like intelligence. The goal of AI is to build a machine that can sense, remember, learn, and recognize like a real human being. The perceptron was the first machine that could sense and learn, but its learning ability is fundamentally limited. Later neural networks with multiple hidden layers could represent more complicated functions, but for a long time they lacked a good learning algorithm. The appearance of the SVM encouraged researchers for a time, since it simplified the learning procedure and performed well on many practical problems, but the SVM also ran into bottlenecks because of its shallow architecture. Deep learning is a learning method that combines deep architectures with good learning algorithms, enabling a network to learn its own features. This ability, together with efficient learning algorithms that support it, points toward a new direction for machine intelligence.
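The perceptron's limitation mentioned above is easy to demonstrate. The sketch below is illustrative code, not from the paper's references: a single perceptron learns the linearly separable AND function, but no choice of weights lets it learn XOR.

```python
# Minimal perceptron sketch. It learns the linearly separable AND function,
# but a single perceptron cannot represent XOR -- the "fundamentally limited
# learning ability" noted above.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights whenever a sample is misclassified."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

w, b = train_perceptron(inputs, [0, 0, 0, 1])    # AND: separable, learned
and_preds = [predict(w, b, *x) for x in inputs]

w2, b2 = train_perceptron(inputs, [0, 1, 1, 0])  # XOR: not separable, fails
xor_preds = [predict(w2, b2, *x) for x in inputs]
```

After training, the AND predictions match the labels exactly, while the XOR predictions cannot, no matter how long training runs.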

In statistical machine learning, a major issue is the selection of an appropriate feature space in which input instances have the properties needed to solve a particular problem. For example, in supervised learning for binary classification, it is often required that the two classes be separable by a hyperplane. When this property does not hold directly in the input space, one can map instances into an intermediate feature space where the classes are linearly separable. This intermediate space can be specified explicitly by hand-coded features, defined implicitly by a so-called kernel function, or learned automatically. In the first two cases, it is the user's responsibility to design the feature space, which can incur a huge cost in computation time or expert knowledge, especially with high-dimensional input spaces such as images.
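A tiny illustration of such a hand-coded feature map (the product feature below is an invented example, standing in for what a kernel or a learned representation would provide): XOR-labeled points are not linearly separable in the 2-D input space, but adding x1*x2 as a third coordinate makes them separable by a plane.

```python
# Sketch of an explicit, hand-coded feature map. XOR is not linearly
# separable in the 2-D input space, but in the mapped 3-D space a single
# plane separates the two classes.

def feature_map(x1, x2):
    return (x1, x2, x1 * x2)   # hand-coded third feature

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]          # XOR labels

# In the mapped space, the plane  x1 + x2 - 2*(x1*x2) > 0.5  separates the classes.
def separating_plane(z):
    x1, x2, x1x2 = z
    return 1 if x1 + x2 - 2 * x1x2 > 0.5 else 0

preds = [separating_plane(feature_map(x1, x2)) for x1, x2 in points]
```

The separating function is linear in the three mapped features, even though no line in the original two coordinates could separate the classes.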


What is Artificial Intelligence?

For years, people viewed computers as machines that could perform mathematical operations at a much quicker rate than humans: glorified calculators. Early scientists felt that computers could never simulate the human brain. Then scientists, researchers and (probably most importantly) science fiction authors started asking, "or could they?" The biggest obstacle came down to one major issue: the human mind can do things that scientists couldn't understand, much less approximate. For example, how would we write algorithms for these tasks?

1. A song comes on the radio, most listeners of music can quickly identify the
genre, maybe the artist, and probably the song.

2. An art critic sees a painting he’s never seen before, yet he could most likely
identify the era, the medium, and probably the artist.

3. A baby can recognize her mom’s face at a very early age.

The simple answer is that you can't write algorithms for these tasks. Algorithms use mathematics, and the humans who accomplish these tasks couldn't explain mathematically how they drew their conclusions. They achieve these results because they learned to do these things over time. Artificial Intelligence and Machine Learning were designed to simulate human learning on a computer. [1]


The term Artificial Intelligence (AI) has been used since the 1950s. At that time, it was viewed as a futuristic, theoretical part of computer science. Now, due to increases in computing capacity and extensive research into algorithms, AI is a viable reality. So much so that many products we use every day have some variation of AI built into them (Siri, Alexa, Snapchat facial filters, background noise filtration for phones and headphones, etc.). But what does the term mean?

Simply put, AI means programming a machine to behave like a human. In the beginning, researchers developed algorithms to try to approximate human intuition. [1]

What is Machine Learning?


Machine Learning is a branch of computer science concerned with the use of data and algorithms that enable machines to imitate human learning, so that they become capable of making predictions by learning from input examples. [2]

Around the world, computers and smart devices collect and store petabytes of data in just one day, from biomarkers to financial and environmental data. Extracting information by hand is almost unattainable due to the size and complexity of these datasets.

Humans are conditioned to operate and think in a three-dimensional world, so it is extremely hard for us to extract information from higher dimensions. This leaves us with two options: we can reduce the number of dimensions until we are able to figure things out (at the cost of throwing information out the window), or we can train machines to extract information despite the high dimensionality of the data.

The field of Machine Learning combines the computational power that modern machines offer with statistical algorithms that are capable of learning from data, in a way that helps humans understand complex concepts, make better decisions and eventually solve practical problems. [2]
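As a minimal sketch of "learning from input examples" (the data and learning rate below are invented for illustration), the loop fits the slope of a linear trend by gradient descent instead of being explicitly programmed with the answer:

```python
# Minimal illustration of learning from examples: instead of hard-coding
# the rule y = 3x, the model starts with a wrong guess for the slope and
# adjusts it to reduce its prediction error on the data.

data = [(1, 3), (2, 6), (3, 9), (4, 12)]  # samples of y = 3x

w = 0.0          # the single learnable parameter
lr = 0.01        # learning rate
for _ in range(500):
    for x, y in data:
        err = w * x - y        # prediction error on this example
        w -= lr * err * x      # gradient step on the squared error

# After training, w is close to the underlying slope of 3.
```

No rule about "multiply by three" appears anywhere in the program; the parameter is recovered purely from the examples.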


Chapter 2
The Basics of Deep Learning
Have you ever wondered how Google's Translate app is able to translate entire paragraphs from one language into another in a matter of milliseconds? How are Netflix and YouTube able to figure out our taste in movies or videos and give us appropriate recommendations? Or how self-driving cars are even possible? [3]

Figure 1 : Application of deep learning

All of this is a product of Deep Learning and Artificial Neural Networks. Both terms are defined in the following sections. Let us begin with the definition of Deep Learning.


1. What exactly is Deep Learning?

Deep Learning is a subset of Machine Learning, which is in turn a subset of Artificial Intelligence. Artificial Intelligence is a general term that refers to techniques that enable computers to mimic human behavior, and Machine Learning represents the set of algorithms, trained on data, that make this possible. [3]

Figure 2 : AI vs ML vs DL

Deep Learning, on the other hand, is a type of Machine Learning inspired by the structure of the human brain. Deep learning algorithms attempt to draw conclusions similar to those humans would reach, by continually analyzing data with a given logical structure. To achieve this, deep learning uses multi-layered structures of algorithms called neural networks. [3]

Figure 3 : A typical Neural Network

The design of the neural network is based on the structure of the human brain. Just
as we use our brains to identify patterns and classify different types of information,
neural networks can be taught to perform the same tasks on data.

The individual layers of a neural network can be thought of as filters that work from the coarse to the subtle, increasing the likelihood of detecting and outputting a correct result. [3]

The human brain works similarly. Whenever we receive new information, the brain
tries to compare it with known objects. The same concept is also used by deep
neural networks.
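The layered structure described above can be sketched in a few lines of plain Python. The weights below are made up for illustration, not trained; each layer is just a weighted sum of the previous layer's outputs followed by a nonlinearity.

```python
# Minimal forward pass through a 2-layer network. The weights here are
# invented for illustration; in practice they would be learned from data.

import math

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    """One layer: a weighted sum of the previous layer's outputs, per neuron."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Layer 1: 3 inputs -> 2 hidden neurons (a coarse "filter")
W1 = [[0.5, -0.2, 0.1],
      [0.3, 0.8, -0.5]]
b1 = [0.0, 0.1]

# Layer 2: 2 hidden -> 1 output score (a finer "filter")
W2 = [[1.0, -1.0]]
b2 = [0.0]

x = [1.0, 2.0, 3.0]
hidden = relu(dense(x, W1, b1))
score = dense(hidden, W2, b2)[0]
prob = sigmoid(score)          # squash the score into a 0..1 "likelihood"
```

Stacking more `dense` layers deepens the network; each added layer reprocesses the previous layer's output into a more abstract representation.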

Neural networks enable us to perform many tasks, such as clustering, classification


or regression. With neural networks, we can group or sort unlabeled data according
to similarities among the samples in this data. Or in the case of classification, we
can train the network on a labeled dataset in order to classify the samples in this
dataset into different categories. [3]

Artificial neural networks have unique capabilities that enable deep learning models to solve tasks that classical machine learning models struggle with.

Most recent advances in artificial intelligence are due to deep learning. Without it, we would not have self-driving cars, chatbots or personal assistants like Alexa and Siri. The Google Translate app would still be as primitive as it was a decade ago, before Google switched it to neural networks, and Netflix or YouTube would have no idea which movies or TV series we like or dislike. Behind all these technologies are neural networks. [3]

In general, neural networks can perform the same tasks as classical machine learning algorithms, but the reverse is not true.

We can even go so far as to say that today a new industrial revolution is taking
place, driven by artificial neural networks and deep learning. [3]

At the end of the day, deep learning is the best and most obvious approach to real
machine intelligence we’ve had so far.

2. Why is Deep Learning Popular these Days?

Why are deep learning and artificial neural networks so powerful and unique in today's industry? And above all, why are deep learning models more powerful than machine learning models? Let me explain.

The first advantage of deep learning over machine learning is that it removes the need for so-called feature extraction.

Long before deep learning arrived, traditional machine learning methods dominated, such as Decision Trees, SVMs, the Naïve Bayes classifier and Logistic Regression. [3]

These algorithms are also called flat algorithms, meaning they cannot normally be applied directly to raw data (such as .csv files, images or text). They require a preprocessing step called feature extraction. [3]

The result of feature extraction is a representation of the raw data that these classic machine learning algorithms can use to perform a task, for example classifying the data into several categories or classes. [3]

Feature extraction is usually quite complex and requires detailed knowledge of the problem domain. This preprocessing layer must be adapted, tested and refined over several iterations for optimal results.


On the other side are the artificial neural networks of Deep Learning. These do not
need the Feature Extraction step.

The layers are able to learn an implicit representation of the raw data directly and on their own: an increasingly abstract and compressed representation of the raw data is produced over the successive layers of the network. This compressed representation is then used to produce the result, for example the classification of the input data into different classes. [3]

Figure 4 : Feature Extraction is only required for ML Algorithms

In other words, we can also say that the feature extraction step is already part of the
process that takes place in an artificial neural network. [3]


During the training process, this step is also optimized by the neural network to
obtain the best possible abstract representation of the input data. This means that
the models of deep learning thus require little to no manual effort to perform and
optimize the feature extraction process. [3]

Let us look at a concrete example. If you want to use a machine learning model to determine whether a particular image shows a car, we humans first need to identify the unique features of a car (shape, size, windows, wheels, etc.), extract those features and give them to the algorithm as input data. [3]

The algorithm would then classify the images. That is, in machine learning, a programmer must intervene directly for the model to come to a conclusion.

In the case of a deep learning model, this step is completely unnecessary: the model recognizes the unique characteristics of a car on its own and makes correct predictions, entirely without the help of a human. [3]

In fact, this freedom from manual feature extraction applies to every task you will ever do with neural networks: give the raw data to the network, and the rest is done by the model. [3]
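To make the contrast concrete, here is a toy sketch. The 4x4 "images" and the features (brightness, left-right symmetry) are invented for illustration: a flat algorithm sees only the hand-crafted numbers, while a deep model would be fed the raw pixels directly.

```python
# Sketch of the manual feature-extraction step that classic ("flat")
# algorithms need. The tiny 4x4 "images" and the two features are invented
# stand-ins; a deep network would instead consume the raw pixels.

def extract_features(image):
    """Hand-crafted features a human designed for this toy problem."""
    pixels = [p for row in image for p in row]
    brightness = sum(pixels) / len(pixels)
    left = sum(row[0] + row[1] for row in image)
    right = sum(row[2] + row[3] for row in image)
    symmetry = abs(left - right)
    return brightness, symmetry

bright_img = [[1, 1, 1, 1]] * 4
dark_img = [[0, 0, 0, 0]] * 4

# Classic ML path: the classifier sees only these summary numbers.
bright_feats = extract_features(bright_img)
dark_feats = extract_features(dark_img)

# Deep learning path: the flattened raw pixels go in unchanged.
raw_input = [p for row in bright_img for p in row]
```

Designing `extract_features` well is exactly the expensive, domain-specific work the text describes; a deep network learns an equivalent (usually better) representation from `raw_input` on its own.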

The Era of Big Data…

The second huge advantage of deep learning, and a key part of understanding why it is becoming so popular, is that it is powered by massive amounts of data. The "Big Data Era" of technology will provide huge opportunities for new innovations in deep learning. Andrew Ng, formerly chief scientist at Baidu and one of the leaders of the Google Brain project, put it with an analogy: the deep learning models are the rocket engine, and the huge amounts of data we can feed these algorithms are the fuel. [3]

Figure 5 : Deep Learning Algorithms get better with the increasing amount of data.


Deep Learning models tend to increase their accuracy with the increasing amount
of training data, whereas traditional machine learning models such as SVM and
Naive Bayes classifiers stop improving after a saturation point. [3]

3. When to use Deep Learning or not?

1. Deep Learning outperforms other techniques when the data size is large. With small data sets, traditional Machine Learning algorithms are preferable.

2. Deep Learning techniques need high-end infrastructure to train in a reasonable time.

3. When there is a lack of domain understanding for feature introspection, Deep Learning techniques outshine others, because you have to worry less about feature engineering.

4. Deep Learning really shines on complex problems such as image classification, natural language processing and speech recognition. [3]


Chapter 3
How Deep Learning Works

Computer programs that use deep learning go through much the same process as a toddler learning to identify a dog. Each algorithm in the hierarchy applies a nonlinear transformation to its input and uses what it learns to create a statistical model as output. Iterations continue until the output has reached an acceptable level of accuracy. The number of processing layers through which data must pass is what inspired the label "deep." [4]
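That iterate-until-accurate loop can be sketched as follows. The tiny one-weight classifier, the data and the acceptance bar are all illustrative assumptions, not the method of any real system:

```python
# Sketch of "iterate until the output reaches an acceptable level of
# accuracy": each pass nudges the model, and training stops once its
# accuracy on the (invented) labeled set clears a chosen bar.

samples = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]          # "not dog" / "dog", say

w, b = -1.0, 0.5                     # deliberately bad starting model

def accuracy(w, b):
    preds = [1 if w * x + b > 0 else 0 for x in samples]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

epochs = 0
while accuracy(w, b) < 1.0:          # the acceptable-accuracy bar
    for x, y in zip(samples, labels):
        pred = 1 if w * x + b > 0 else 0
        w += 0.1 * (y - pred) * x    # correct the model on each mistake
        b += 0.1 * (y - pred)
    epochs += 1
```

The loop terminates only when the model's predictions clear the bar, which mirrors the "iterations continue until the output has reached an acceptable level of accuracy" described above.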

In traditional machine learning, the learning process is supervised, and the


programmer has to be extremely specific when telling the computer what types of
things it should be looking for to decide if an image contains a dog or does not
contain a dog. This is a laborious process called feature extraction, and the
computer's success rate depends entirely upon the programmer's ability to
accurately define a feature set for a dog. The advantage of deep learning is the
program builds the feature set by itself without supervision. Unsupervised learning
is not only faster, but it is usually more accurate.

Initially, the computer program might be provided with training data: a set of images for which a human has labeled each image "dog" or "not dog" with metatags. The program uses the information it receives from the training data to create a feature set for dogs and build a predictive model. In this case, the model the computer first creates might predict that anything in an image that has four legs and a tail should be labeled dog. Of course, the program is not aware of the labels "four legs" or "tail"; it simply looks for patterns of pixels in the digital data. With each iteration, the predictive model becomes more complex and more accurate.

Unlike the toddler, who will take weeks or even months to understand the concept
of dog, a computer program that uses deep learning algorithms can be shown a
training set and sort through millions of images, accurately identifying which
images have dogs in them within a few minutes. [4]

To achieve an acceptable level of accuracy, deep learning programs require access


to immense amounts of training data and processing power, neither of which were
easily available to programmers until the era of big data and cloud computing.
Because deep learning programming can create complex statistical models directly
from its own iterative output, it is able to create accurate predictive models from
large quantities of unlabeled, unstructured data. This is important as the internet of
things (IoT) continues to become more pervasive because most of the data humans
and machines create is unstructured and is not labeled.

Deep learning methods

Various methods can be used to create strong deep learning models. These
techniques include learning rate decay, transfer learning, training from scratch and
dropout.

Learning rate decay: The learning rate is a hyperparameter (a factor that defines the system or sets conditions for its operation prior to the learning process) that controls how much the model changes in response to the estimated error each time the model weights are updated. Learning rates that are too high may result in unstable training or the learning of a suboptimal set of weights; learning rates that are too small may produce a lengthy training process that can get stuck.

The learning rate decay method -- also called learning rate annealing or adaptive
learning rates -- is the process of adapting the learning rate to increase performance
and reduce training time. The easiest and most common adaptations of learning
rate during training include techniques to reduce the learning rate over time. [4]
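Two common schedules can be sketched as follows; the constants (initial rate, decay factor, step size) are illustrative choices, not prescribed values:

```python
# Two simple learning-rate decay schedules. The constants are invented for
# illustration; real training would tune them to the problem.

def step_decay(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))

def exponential_decay(initial_lr, epoch, k=0.05):
    """Smoothly shrink the learning rate a little every epoch."""
    return initial_lr * (1 - k) ** epoch

schedule = [round(step_decay(0.1, e), 4) for e in range(0, 30, 10)]
# epochs 0, 10, 20 under step decay -> 0.1, 0.05, 0.025
```

Either function would be called once per epoch inside the training loop, replacing a fixed learning rate with a shrinking one.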

Transfer learning: This process involves perfecting a previously trained model; it


requires an interface to the internals of a preexisting network. First, users feed the
existing network new data containing previously unknown classifications. Once
adjustments are made to the network, new tasks can be performed with more
specific categorizing abilities. This method has the advantage of requiring much
less data than others, thus reducing computation time to minutes or hours.
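A minimal sketch of the idea, with invented weights standing in for a real pretrained network: the feature layer stays frozen, and only the small output layer is retrained on the new labels.

```python
# Minimal sketch of transfer learning. The "pretrained" feature layer is
# kept frozen; only the small output head is retrained on new labeled data.
# The frozen weights are invented stand-ins for a real pretrained network.

FROZEN_W = [[1.0, -1.0], [0.5, 0.5]]   # pretend these were learned earlier

def features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in FROZEN_W]

# New task: a fresh, small labeled dataset with new classifications.
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([2.0, 0.5], 1), ([0.2, 1.5], 0)]

head = [0.0, 0.0]
bias = 0.0
for _ in range(50):
    for x, y in data:
        f = features(x)
        pred = 1 if head[0] * f[0] + head[1] * f[1] + bias > 0 else 0
        err = y - pred
        head[0] += 0.1 * err * f[0]   # only the head changes
        head[1] += 0.1 * err * f[1]
        bias += 0.1 * err

preds = [1 if head[0] * features(x)[0] + head[1] * features(x)[1] + bias > 0 else 0
         for x, _ in data]
```

Because only two weights and a bias are trained, very little data and compute are needed, which is exactly the advantage the text attributes to this method.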

Training from scratch: This method requires a developer to collect a large labeled
data set and configure a network architecture that can learn the features and model.
This technique is especially useful for new applications, as well as applications
with a large number of output categories. However, overall, it is a less common
approach, as it requires inordinate amounts of data, causing training to take days or
weeks.

Dropout: This method attempts to solve the problem of overfitting in networks with large numbers of parameters by randomly dropping units and their connections from the neural network during training. The dropout method has been shown to improve the performance of neural networks on supervised learning tasks in areas such as speech recognition, document classification and computational biology.
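A sketch of the common "inverted dropout" variant (a choice made here for illustration; the original formulation instead scales weights at test time): each unit is zeroed with probability p during training, and survivors are scaled up so the expected activation is unchanged.

```python
# Sketch of inverted dropout: during training, each unit is zeroed with
# probability p and the survivors are scaled by 1/(1-p) so the expected
# activation stays the same; at inference time the layer passes through.

import random

def dropout(activations, p, training=True, rng=random.Random(0)):
    if not training or p == 0.0:
        return list(activations)          # inference: no units dropped
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

layer = [0.5, 1.0, -0.3, 2.0, 0.8, -1.2]
train_out = dropout(layer, p=0.5)                   # some units zeroed, rest scaled x2
infer_out = dropout(layer, p=0.5, training=False)   # unchanged
```

Dropping random units forces the network not to rely on any single connection, which is how dropout combats overfitting.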

Deep learning neural networks

A type of advanced machine learning algorithm, known as an artificial neural


network, underpins most deep learning models. As a result, deep learning may
sometimes be referred to as deep neural learning or deep neural networking. [4]

Neural networks come in several different forms, including recurrent neural networks, convolutional neural networks and feedforward neural networks, and each has benefits for specific use cases. However, they all function in somewhat similar ways: data is fed in, and the model figures out for itself whether it has made the right interpretation or decision about a given data element.

Neural networks involve a trial-and-error process, so they need massive amounts of data on which to train. It is no coincidence that neural networks became popular only after most enterprises embraced big data analytics and accumulated large stores of data. Because the model's first few iterations involve somewhat educated guesses about the contents of an image or parts of speech, the data used during the training stage must be labeled so the model can see whether its guesses were accurate. This means that although many enterprises that use big data have large amounts of data, unstructured data is less helpful: a deep learning model can analyze unstructured data only once it has been trained to an acceptable level of accuracy, but it cannot train on unstructured data.

Chapter 4
Deep Learning Applications
 Fraud Detection

Fraud is a growing problem in the digital world. In 2020, consumers reported losing more than $3.3 billion to fraud to the Federal Trade Commission, nearly double the amount lost the year prior, according to an FTC report. Identity theft and imposter scams were the two most common fraud categories.

Companies like Twosense and Signifyd, however, are using deep learning to detect anomalies in a user's transactions to help prevent fraud. These companies deploy deep learning to collect data from a variety of sources, including device location, length of stride and credit card purchasing patterns, to create a unique user profile. Another company, Featurespace, works with banks to monitor real-time customer data to spot suspicious activity and alert authorities to reduce fraud. [5]
Relevant companies: Twosense, Signifyd and Featurespace
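In the same spirit (though not any of these companies' actual methods), a toy anomaly check might flag a transaction that sits far outside a user's typical spending pattern; all amounts below are invented:

```python
# Toy anomaly check, invented for illustration: flag any candidate amount
# that lies more than `z_threshold` standard deviations from the mean of
# the user's transaction history.

def flag_anomalies(history, candidates, z_threshold=3.0):
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    std = var ** 0.5
    return [amt for amt in candidates if abs(amt - mean) > z_threshold * std]

usual = [12.5, 9.99, 14.0, 11.25, 13.75, 10.5, 12.0, 9.5]   # invented amounts
suspicious = flag_anomalies(usual, [11.0, 899.0])
```

A production system would combine many such signals (location, timing, device) and learn the boundary from data rather than fix a hand-picked threshold.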

 Customer Relationship Management

Customer relationship management systems are often referred to as the “single


source of truth” for revenue teams. They contain emails, phone call records and
notes about all of the company’s current and former customers as well as its


prospects. Aggregating that information has helped revenue teams provide a better
customer experience, but the introduction of deep learning in CRM systems has
unlocked another layer of customer insights.
Deep learning is able to sift through all of the scraps of data a company collects
about its prospects to reveal trends about why customers buy, when they buy and
what keeps them around. This includes predictive lead scoring, which helps
companies identify customers they have the best chances to close; scraping data
from customer notes to make it easier to identify trends; and predictions about
customer support needs.
Relevant Companies: Salesforce, Zoho, Marketo
 

 Computer Vision

Deep learning aims to mimic the way the human mind digests information and
detects patterns, which makes it a perfect way to train vision-based AI programs.
Using deep learning models, those platforms are able to take in a series of labeled
photo sets to learn to detect objects like airplanes, faces and guns. [5]

The applications of image recognition are expansive. Neurala Brain uses an algorithm it calls Lifelong-DNN to complete manufacturing quality inspections. Others, like ZeroEyes, use deep learning to detect firearms in public places like schools and government property; when a gun is detected, the system is designed to alert police in an effort to prevent shootings. And companies like Tractable rely on deep learning to train their AI to take in images from a disaster and estimate the financial damage. [5]


Relevant companies: Neurala Brain, ZeroEyes, Tractable              


 

 Vocal AI

When it comes to recreating human speech or translating voice to text, deep learning plays an increasingly critical role. Deep learning models enable tools like Google Voice Search and Siri to take in audio, identify speech patterns and translate them into text. Then there is DeepMind's WaveNet model, which employs neural networks to take text and identify syllable patterns, inflection points and more, enabling companies like Google to train virtual assistants to sound more human. In addition, Mozilla's RNNoise project uses deep learning to identify and suppress background noise in audio files, providing users with clearer audio.
Relevant companies: Mozilla, DeepMind, Apple
 
 Natural Language Processing

The introduction of natural language processing technology has made it possible


for robots to read messages and divine meaning from them. Still, the process can
be somewhat oversimplified, failing to account for the ways that words combine
together to change the meaning or intent behind a sentence. 

Deep learning enables natural language processors to identify more complicated patterns in sentences and provide a more accurate interpretation. Companies like Gamalon use deep learning to power a chatbot that can respond to a larger volume of messages with more accurate responses. Other companies, like Strong, apply it in NLP tools that help users translate text, categorize it to mine data from collections of messages, and identify sentiment. Grammarly also uses deep learning, in combination with grammatical rules and patterns, to help users identify errors and the tone of their messages. [5]
Relevant companies: Gamalon, Strong, Grammarly

 Data Refining

When large amounts of raw data are collected, it’s hard for data scientists to
identify patterns, draw insights or do much with it. It needs to be processed. Deep
learning models are able to take that raw data and make it accessible. Companies
like Descartes Labs use a cloud-based supercomputer to refine data. Making sense
of swaths of raw data can be useful for disease control, disaster mitigation, food
security and satellite imagery.
Relevant companies: Descartes Labs, IBM

 Autonomous Vehicles

Driving is all about taking in external factors like the cars around you, street signs
and pedestrians and reacting to them safely to get from point A to B. While we’re
still a ways away from fully autonomous vehicles, deep learning has played a
crucial role in helping the technology come to fruition. It allows autonomous
vehicles to take into account where you want to go, predict what the obstacles in
your environment will do and create a safe path to get you to that location. 
For instance, Pony.ai has used deep learning to power the planning and control module within its autonomous vehicle tech, helping cars navigate eight-lane highways, sudden accidents and more. Other self-driving car companies that use deep learning to power their technology include Tesla-owned DeepScale and Waymo, a subsidiary of Google. [5]
Relevant companies: Pony.ai, Tesla, Waymo
 

 Supercomputers

While some software uses deep learning in its solution, if you want to build your
own deep learning model, you need a supercomputer. Companies like Boxx and
Nvidia have built workstations that can handle the processing power needed to
build deep learning models. NVIDIA’s DGX Station claims to be the “equivalent
of hundreds of traditional servers,” and enables users to test and tweak their
models. Boxx’s APEXX Neutrino W works with a variety of deep learning
frameworks like TensorFlow and PyTorch. Boxx's mission is to accelerate workflows
and expedite decision-making processes.  [5]
Relevant companies: Boxx, NVIDIA

 Investment modeling

Investment modeling is another industry that has benefited from deep learning.
Predicting the market requires tracking and interpreting dozens of data points from
earning call conversations to public events to stock pricing. Companies like Aiera
use an adaptive deep learning platform to provide institutional investors with real-
time analysis on individual equities, content from earnings calls and public
company events.
Relevant companies: Aiera


 E-commerce

Online shopping is now the de facto way people purchase goods, but it can still be
frustrating to scroll through dozens of pages to find the right pair of shoes that
match your style. Several e-commerce companies are turning to deep learning to
make the hunt easier. Furniture website Cora allows users to upload a photo of
their favorite furniture item and then uses computer vision magic to find similar
items. And among Clarifai’s many deep learning offerings is a tool that helps
brands with image labeling to boost SEO traffic and surface alternative products
for users when an item is out of stock. Loop54, a company that helps e-commerce
sites personalize search, also uses deep learning to identify patterns in user
behavior and anticipate shoppers' wants. [5]
Relevant companies: Cora, Clarifai, Loop54
 

 Emotional Intelligence

While computers may not be able to replicate human emotions, they are gaining a
better understanding of our moods thanks to deep learning. Patterns like a shift in
tone, a slight frown or a huff are all valuable data signals that can help AI detect
our moods.

Companies like Affectiva are using deep learning to track all of those vocal and
facial reactions to provide a nuanced understanding of our mood. Others like
Robbie AI scour photos and video footage to predict human emotions in real time.


Applications like that can be used to help companies connect emotion data to
advertising or even alert medical doctors to a patient’s emotional state. [5]
Relevant companies: Affectiva, Robbie AI
 
 Entertainment

Ever wonder how streaming platforms seem to intuit the perfect show for
you to binge-watch next? Well, you have deep learning to thank for that. Streaming
platforms aggregate tons of data on what content you choose to consume and what
you ignore. Take Netflix as an example. The streaming platform uses deep learning
to find patterns in what its viewers watch so that it can create a personalized
experience for its users. [5]
Relevant companies: Netflix
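
Netflix's actual models are deep and proprietary, but the underlying idea of finding patterns in what viewers watch can be illustrated with a tiny nearest-neighbor recommender. The users, shows and the `recommend` function below are all invented for this sketch:

```python
import math

# Toy watch matrix: rows are users, columns are shows,
# 1 means the user watched the show. (Invented data.)
shows = ["drama", "scifi", "docu", "comedy"]
watches = {
    "ann": [1, 1, 0, 0],
    "ben": [1, 1, 1, 0],
    "cat": [0, 0, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two watch vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    """Suggest a show the most similar other user watched but this user hasn't."""
    vec = watches[user]
    best = max((u for u in watches if u != user),
               key=lambda u: cosine(vec, watches[u]))
    for i, seen in enumerate(vec):
        if not seen and watches[best][i]:
            return shows[i]
    return None

print(recommend("ann"))  # prints "docu"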

  Advertising

Companies can glean a lot of information from how users interact with their
marketing. Interactions can signal intent to buy, show that a product resonates
with customers or that they want to learn more. Many marketing tech firms are using
deep learning to generate even more insights into customers. Companies like
6sense and Cognitiv use deep learning to train their software to better understand
buyers based on how they engage with an app or navigate a website. This can be
used to help businesses more accurately target potential buyers and create tailored
ad campaigns. Other firms like Dstillery use it to understand more about a
customer's consumers to help each ad campaign reach the target audience for the
product. [5]
Relevant companies: 6sense, Cognitiv, Dstillery
 

 Manufacturing

The success of a factory often hinges on machines, humans and robots working
together as efficiently as possible to produce a replicable product. When one part
of the production gets out of whack, it can come at a devastating cost to the
company. Deep learning is being used to make that process even more efficient
and eliminate those errors.

Companies like OneTrack are using it to scan factory floors for anomalies like a
teetering box or an improperly used forklift and alert workers to safety risks. The
goal is to prevent errors that can slow down production and cause harm. Then
there’s Fanuc, which uses it to train its robot to adapt to an array of tasks on a
factory floor. Energy giant General Electric also uses deep learning in its Predix
platform to track and find all possible points of failure on a factory floor. [5]
Relevant companies: OneTrack, Fanuc, General Electric

 Healthcare

Doctors can’t be with their patients 24/7, but the one thing we all almost always
carry with us is our phones. And thanks to deep learning, medical tools are able to
scour data from pictures we take and movement data to detect potential health
issues. Robbie.AI’s computer vision software uses this data, for instance, to track a
patient’s movement patterns to predict falls as well as changes in a user’s mental
state. Deep learning has also been shown to detect skin cancer from images,
according to a National Center for Biotechnology Information report. [5]
Relevant companies : Robbie AI


Chapter 5
The Future of Deep Learning

IEEE Spectrum released what might be the most important paper on deep learning,
critical of its long-term viability, and the conclusion is grim for the church of
deep learning. [6]

The author, Neil C. Thompson, is an AI researcher at MIT, so it's safe to say these
aren't the musings of an underqualified journalist; rather, this is a fair appraisal of
where deep learning is headed, not buoyed by corporate stock prices.

Early on, Thompson describes the pursuit of 100% accuracy. Say an arbitrary
model performs at 90% accuracy on some classification task. Each additional
increase in accuracy on the march toward 100% is exponentially more expensive
than the previous one, computationally speaking. Moore's law shows that over time,
processing power grows exponentially. But in spite of more efficient chips in
the future, our thirst for deep learning models will greatly outpace hardware
developments. This might seem abstract, but when considered in the context of
monetary and environmental costs, the trend becomes clear: deep learning will
never be financially or environmentally friendly. Whatever hardware developments
await, our expectations of deep learning will have already outpaced what
hardware can offer, and the costs will only increase.
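
The exponential-cost argument can be made concrete with a toy calculation. Assume, purely for illustration, that required compute follows a power law in the inverse error rate; the exponent of 4 below is an invented stand-in, not a measured value:

```python
def relative_compute(error, exponent=4.0):
    """Compute required relative to a 10% error baseline, under the
    assumed power law: compute grows as error ** (-exponent)."""
    baseline = 0.10
    return (baseline / error) ** exponent

# Halving the error rate repeatedly makes compute explode,
# even though accuracy only creeps from 90% toward 100%.
for err in (0.10, 0.05, 0.02, 0.01):
    print(f"error {err:.0%}: {relative_compute(err):,.0f}x baseline compute")
```

Under this assumed model, going from 10% to 1% error costs 10,000 times the baseline compute; the exact exponent is debatable, but any power law larger than 1 produces the same qualitative explosion.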


To contextualize this phenomenon, consider the cost of training AlphaGo (the RL-
based bot that smoked the world's best Go player). The price tag paid by
DeepMind: $35,000,000. Relatedly, reaching a 1% error rate on image
classification could be worse for the environment than one month's worth of
carbon emissions by New York City. Deep learning has already garnered negative
attention for algorithmic bias, so its status as the golden boy of AI is already in
jeopardy. It's only a matter of time before environmental impact takes an equally
loud stage and deep learning is perceived as wholly unhumanitarian. [6]

The cost of deep learning is becoming so extravagant that meta-learning and
transfer learning have become the current band-aid fix: an organization with deep
pockets trains a model, not to detect cats or dogs, people or faces, buildings or
bridges, or anything else per se, but everything. That's what meta-learning is;
subsequently, data scientists, software engineers, and machine learning engineers
retrain these models on some downstream task specific to their needs, which is
what transfer learning is. The problem with this meta-transfer learning design is
that accuracy on the downstream tasks typically isn't found to be passable,
certainly not attaining the state-of-the-art thresholds that are advertised. The
solution? Bigger meta-learning models at a higher cost, both in monetary and
environmental terms. However, the cost will outpace progress, hence the band-aid
nature of this solution.
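
The meta-then-transfer pattern can be sketched in miniature: a frozen "pretrained" feature extractor is reused as-is, and only a small new head is trained on the downstream task. Everything here (the quadratic features, the toy labels, the function names) is invented for illustration; a real pipeline would reuse a large pretrained network, not three hand-written features.

```python
import math

# Frozen "pretrained" feature extractor: a stand-in for a large
# upstream network whose weights are reused without modification.
def features(x):
    return [1.0, x, x * x]  # bias, linear and quadratic features

def train_head(data, epochs=500, lr=0.1):
    """Fit only a new logistic-regression head on top of the frozen
    features; retraining this small part is the transfer step."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            z = sum(wi * fi for wi, fi in zip(w, f))
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            # Stochastic gradient step on the logistic loss.
            w = [wi - lr * (p - y) * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    z = sum(wi * fi for wi, fi in zip(w, features(x)))
    return 1 if z > 0 else 0

# Invented downstream task: label 1 when |x| > 2, else 0.
data = [(i / 10.0, 1 if abs(i / 10.0) > 2 else 0) for i in range(-40, 41)]
w = train_head(data)
accuracy = sum(predict(w, x) == y for x, y in data) / len(data)
print(f"downstream accuracy: {accuracy:.0%}")
```

The point of the sketch is the division of labor: the expensive part (the extractor) is trained once upstream, while each downstream user only pays for the cheap head, which is exactly why downstream accuracy depends so heavily on how well the upstream features fit the new task.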


With the above facts in mind, a grim future lies ahead for the deep learning
community: Exponentially rising costs with diminishing returns to performance.
It’s a recipe for another so-called AI Winter. [6]

So where was the community headed before the deep learning resurrection?
Largely toward expert systems, where domain experts leveraged their knowledge to
develop ontologies and rule-based systems. That approach is much more efficient
computationally; however, it doesn't allow for experiential learning. One potential
way forward? Neuro-Symbolic methods, a sort of linkage between domain
knowledge and autonomous learning.

Anecdotally, I’ve never been enthused that deep learning models are black-box by
nature; with hundreds of millions of parameters, it’s very difficult to grab a given
parameter value and use it to increase our human understanding of the world
around us. However Neuro-Symbolic methods might be the missing link between
human understanding and autonomous learning. It’s far too early to tell. The only
thing we can be certain of right now — the deep learning community is in for a
reckoning. It might not be today or tomorrow, but winter is coming. [6]


Conclusion
Deep learning helps computers to derive meaningful links from a plethora of data
and make sense of unstructured data. Here, the mathematical algorithms are
combined with a lot of data and strong hardware to get qualified information. With
this method, information from digital data can be automatically extracted,
classified and analyzed.

Although deep learning has been around for several years, the trend has only really
picked up in the last three to four years. The reasons for this include, among other
things, better hardware resources, more sophisticated algorithms and optimized
neural networks. Deep learning is not a new approach but a development of the
older approach of artificial neural networks.


References

[1] Towards Data Science, "But What Exactly Is AI?," [Online]. Available:
https://towardsdatascience.com/but-what-exactly-is-ai-59454770d39b. [Accessed 09 02 2022].

[2] IBM, "Machine Learning," [Online]. Available:
https://www.ibm.com/cloud/learn/machine-learning. [Accessed 09 02 2022].

[3] A. Oppermann, "Deep Learning," [Online]. Available:
https://towardsdatascience.com/what-is-deep-learning-and-how-does-it-work-2ce44bb692ac.
[Accessed 09 02 2022].

[4] E. Burns, "Deep Learning (Deep Neural Network)," [Online]. Available:
https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network.
[Accessed 09 02 2022].

[5] M. Thomas, "Deep Learning Applications," [Online]. Available:
https://builtin.com/artificial-intelligence/deep-learning-applications. [Accessed 09 02 2022].

[6] J. Moore, "The Future of Deep Learning," [Online]. Available:
https://towardsdatascience.com/the-future-of-deep-learning-7e8574ad6ae3. [Accessed 09 02 2022].

[7] "Why Deep Learning Is Needed over Traditional Machine Learning," [Online]. Available:
https://towardsdatascience.com/why-deep-learning-is-needed-over-traditional-machine-learning-1b6a99177063.
