
NEURAL NETWORKS AS
UNIVERSAL FUNCTION
APPROXIMATORS

Presented By,
Swetha S
NEURAL NETWORKS

• A *neural network* is a computational model inspired by the human brain, consisting of interconnected layers of neurons. These networks process input data, adjust the weights of the connections between neurons, and learn patterns through training. Neural networks are widely used in AI for tasks like image recognition, language translation, and more.
STRUCTURE OF NEURAL NETWORK

• ANNs consist of interconnected nodes (called neurons or units) which process and transmit data through a series of connections. The output of one neuron may be fed as an input to a subsequent neuron, so the network can be represented by a directed graph.
• The direction within the digraph represents the information flow: each arrow from a source to a target neuron indicates that the output value of the source is used as an input value to the target, as the sketch below illustrates.
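
A minimal sketch of this directed-graph view in Python (the neuron names and values are made up for illustration, not from the slides):

# Each edge (source -> target) means the source neuron's output
# is used as one of the target neuron's inputs.
edges = {
    "n1": ["n3"],   # n1 feeds n3
    "n2": ["n3"],   # n2 feeds n3
    "n3": ["n4"],   # n3 feeds n4
}
outputs = {"n1": 0.5, "n2": -1.2}   # example outputs of the source neurons

# The inputs arriving at n3 are exactly the outputs of its source neurons:
inputs_to_n3 = [outputs[src] for src, targets in edges.items() if "n3" in targets]
print(inputs_to_n3)   # [0.5, -1.2]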
WEIGHTS

• *Weights:* Weights are like adjustable settings for each connection between neurons. They determine how much influence one neuron has on another: higher weights mean stronger influence, and adjusting these weights helps the network learn from data (see the sketch below).
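
A minimal sketch of how weights combine inputs, with made-up numbers (the variable names are ours, not from the slides):

# A neuron's pre-activation value is the weighted sum of its inputs plus a bias.
inputs  = [0.8, 0.2, -0.5]   # outputs of three source neurons
weights = [0.9, 0.1, 0.4]    # a larger weight = a stronger influence
bias    = 0.1

pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
print(pre_activation)   # 0.9*0.8 + 0.1*0.2 + 0.4*(-0.5) + 0.1 = 0.64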
ACTIVATION FUNCTION

• *Activation Function:* An activation function decides whether a neuron should be activated or not based on its input. It adds non-linearity to the network, allowing it to handle more complex patterns; essentially, it helps the network make decisions or predictions.
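
One way to see why the non-linearity matters (a sketch with arbitrary random weights, not from the slides): stacking linear layers without an activation collapses into a single linear map, so the activation function is what lets extra layers add expressive power.

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x = np.array([0.5, -1.0])

# Two linear layers with no activation...
two_linear = W2 @ (W1 @ x)
# ...equal one linear layer with the combined weight matrix:
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))   # True

# With a non-linearity in between, the collapse no longer holds:
with_tanh = W2 @ np.tanh(W1 @ x)
print(np.allclose(with_tanh, one_linear))    # False (in general)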
SIGMOID FUNCTION

• Range: 0 to 1 (binary classification)
• Output close to 1: belongs to class 1 (positive class)
• Output close to 0: belongs to class 0 (negative class)
• Equation: A(x) = 1 / (1 + e^(-x))
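
The equation translates directly into code; a minimal sketch:

import math

def sigmoid(x):
    # A(x) = 1 / (1 + e^(-x)): squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(4.0))    # ~0.982 -> close to 1, predicted class 1
print(sigmoid(-4.0))   # ~0.018 -> close to 0, predicted class 0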
TANH FUNCTION

• Range: -1 to 1
• Output close to 1: belongs to class 1 (positive class)
• Output close to -1: belongs to class 0 (negative class)
• Equation: f(x) = tanh(x) = 2 / (1 + e^(-2x)) - 1
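
A minimal sketch of the same formula in code, checked against the library tanh:

import math

def tanh_from_formula(x):
    # f(x) = 2 / (1 + e^(-2x)) - 1: a sigmoid rescaled into (-1, 1)
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

print(tanh_from_formula(1.0))   # ~0.7616
print(math.tanh(1.0))           # matches the library implementation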
FEEDFORWARD NEURAL NETWORK

• A Feedforward Neural Network (FNN) is a type of artificial neural network where connections between the nodes do not form cycles. This characteristic differentiates it from recurrent neural networks (RNNs). The network consists of an input layer, one or more hidden layers, and an output layer. Information flows in one direction, from input to output, hence the name “feedforward.”
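
A minimal sketch of one forward pass through such a network (the layer sizes and weights are arbitrary illustrations):

import numpy as np

def forward(x, W1, b1, W2, b2):
    # Input -> hidden (tanh) -> output: information flows one way, no cycles.
    h = np.tanh(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2         # output layer

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # 4 hidden units -> 1 output
print(forward(np.array([0.3, -0.7]), W1, b1, W2, b2))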
NEURAL NETWORKS AS UNIVERSAL
FUNCTION APPROXIMATORS
• The *Universal Approximation Theorem* states that a feedforward neural network with at least one hidden layer can approximate any continuous function to any desired accuracy, given enough neurons and a suitable activation function. This means neural networks can learn and model complex relationships between inputs and outputs, making them powerful tools for a wide range of tasks in AI and machine learning.
UNIVERSAL APPROXIMATION THEOREM

• The Universal Approximation Theorem, formulated by George Cybenko in 1989 and independently proven by Kurt Hornik in 1991, provides a robust mathematical foundation for understanding the capabilities of neural networks. In essence, it asserts that a neural network with a single hidden layer and a finite number of neurons can approximate any continuous function on a compact input space. This theorem serves as the cornerstone of neural networks' power as universal function approximators.
Mathematically, the Universal Approximation Theorem can be expressed as follows:

F(x) = Σᵢ₌₁ᴺ cᵢ · σ(wᵢ · x + bᵢ)

where
• F(x) = the approximated function
• N = the number of neurons in the hidden layer
• cᵢ = the output weights
• σ = the activation function
• wᵢ = the weight vector for neuron i (a row of the weight matrix)
• bᵢ = the bias term for neuron i
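
The sum above can be written almost verbatim in code. A sketch with arbitrary (untrained) parameters; the theorem says that with enough neurons and suitably chosen cᵢ, wᵢ, bᵢ, this sum can match any continuous target on a compact input space:

import numpy as np

def sigma(z):
    # sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

def F(x, c, w, b):
    # F(x) = sum over i = 1..N of c_i * sigma(w_i * x + b_i), for scalar input x
    return np.sum(c * sigma(w * x + b))

N = 5                                   # neurons in the hidden layer
rng = np.random.default_rng(2)
c, w, b = rng.normal(size=N), rng.normal(size=N), rng.normal(size=N)
print(F(0.5, c, w, b))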
THANK YOU
