5 - How Do NNs Learn
Hardcoding
This is where you tell the program specific rules and outcomes, then guide it throughout the entire process, accounting for every possible option the program will have to deal with. It is a more involved process, with more interaction between the programmer and the program.
Neural Networking
With a Network, you create the facility for the programme to understand what it needs to
do independently. You provide the inputs, state the desired outputs, and let it work its own way
from one to the other.
Example: distinguishing a dog from a cat. You code the neural network, code the architecture, point the network at a folder of pre-categorized cat and dog images, and then let it figure everything out on its own.
Goal: a network which learns on its own to distinguish dog vs cat. Instead of describing the conditions that define each animal, you simply categorize the data.
Given the categorized data, you tell the neural network to go and learn, and the network will work everything out on its own.
Once it is trained up, when you give it a new image of a cat or a dog, it will be able to identify which one it is.
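As a rough, hedged sketch of this workflow (nothing above prescribes a framework, architecture, or folder layout, so the TensorFlow/Keras calls, the layer sizes, and the data/cat, data/dog folders below are all assumptions for illustration):

import tensorflow as tf

# Point the network at the folder of labelled images; the sub-folder names
# (cat, dog) act as the categories.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)

# Code the architecture: a small network ending in one output neuron
# (0 = cat, 1 = dog).
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Tell the network to go and learn: it adjusts its own weights to reduce a cost.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Once trained, a new image passed through model.predict() is identified as
# cat or dog.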
When the input values travel along the synapses and into the neuron, an activation function is applied, and the resulting data is the output value, Ŷ.
- Y is the actual value that we want the network to produce; Ŷ (y hat) is the output value, i.e. the value predicted by the neural network. The hat is the notation that marks it as a prediction rather than the actual value.
- The perceptron was first invented in 1957 by Frank Rosenblatt. The idea was to create something that could actually learn and adjust itself.
- The input values are supplied to the perceptron (our single-neuron neural network), the activation function is applied, and we get an output, Ŷ, which we plot on a chart.
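A minimal sketch of that forward pass, with made-up inputs and weights (the activation function is not fixed above, so a sigmoid is assumed here):

import numpy as np

# Hypothetical input values travelling along the synapses.
x = np.array([6.0, 7.5, 0.8])    # made-up inputs
w = np.array([0.4, 0.2, 0.3])    # made-up weights on the synapses

def activation(z):
    # Sigmoid activation; an assumption, since the notes do not fix the choice.
    return 1.0 / (1.0 + np.exp(-z))

# Weighted sum inside the neuron, then the activation function is applied.
y_hat = activation(np.dot(w, x))  # this is the output value, Ŷ
print(y_hat)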
Comparison:
To be able to learn, the network needs to compare the output value Ŷ with the actual value Y that we want it to produce. If we plot both values on the chart, there is a bit of a difference between them, and from that difference we calculate a function called the cost function.
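The notes do not write the cost function out; one common choice in this single-output setting is the squared difference, C = ½(Ŷ − Y)², which is what the sketches below assume:

def cost(y_hat, y):
    # Squared-error cost: a common choice, assumed here because the text only
    # says "a function called the cost function".
    return 0.5 * (y_hat - y) ** 2

print(cost(0.72, 0.93))  # a disparity between Ŷ and Y gives a non-zero cost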
As long as there exists a disparity between Y and Ŷ, we will need to adjust those weights.
Once we tweak them a little, we run the network again. A new cost value will be produced, hopefully smaller than the last.
Rinse and Repeat
We need to repeat this until we scrub the cost function down to as small a number
as possible, as close to 0 as it will go.
When the output value and the actual value are almost touching, we know we have the optimal weights and can therefore proceed to the testing phase, or application phase.
Example
Say we have three input values.
- Hours of study
- Hours of sleep
- Result in a mid-semester quiz
Based on these variables we are trying to calculate the result in an upcoming exam. Let’s
say the result of the exam is 93%. That would be our actual value, Y.
We feed the variables through the weighted synapses and the neuron to calculate our
output value, Ŷ.
Then the cost function is applied, and that information is fed back in reverse through the neural network.
If there is a disparity between Y and Ŷ then the weights will be adjusted and the process
can begin all over again. Rinse and repeat until the cost function is minimized.
In this example that would mean our output value would equal the actual value of the 93%
test score.
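A minimal sketch of this rinse-and-repeat loop for a single neuron, using made-up numbers for the three inputs, the squared-error cost from above, a sigmoid activation, and plain gradient descent as the weight-adjustment rule (the adjustment rule is not specified above, so gradient descent is an assumption):

import numpy as np

# Made-up inputs: hours of study, hours of sleep, mid-semester quiz result.
x = np.array([6.0, 7.5, 0.8])
y = 0.93                       # actual exam result (93%): the value Y
w = np.array([0.1, 0.1, 0.1])  # arbitrary starting weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.1
for step in range(1000):
    y_hat = sigmoid(np.dot(w, x))        # feed the inputs forward: Ŷ
    cost = 0.5 * (y_hat - y) ** 2        # compare Ŷ with Y
    # Feed the error back and adjust the weights (gradient descent assumed).
    grad = (y_hat - y) * y_hat * (1 - y_hat) * x
    w -= learning_rate * grad

print(sigmoid(np.dot(w, x)))  # after training, Ŷ should sit very close to 0.93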
Go Bigger
What if you wanted to apply this process to an entire class? Conceptually you can picture duplicating this smaller network once per student and repeating the process.
However, in practice you do not have a number of smaller networks processing separately side by side; it is one network with one shared set of weights.
If you have thirty students, the Y / Ŷ comparison will occur thirty times, once per student, but the cost function will be applied to all of them together.
As a result, the weights are adjusted based on every student at once, and so on and so forth.
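Written out (still assuming the squared-error cost), the cost for the whole class is the sum over the individual students, C = Σᵢ ½(Ŷᵢ − Yᵢ)², and the shared weights are adjusted against that single number. A short sketch with hypothetical data for five students (thirty works identically):

import numpy as np

# Columns: hours of study, hours of sleep, mid-semester quiz result.
X = np.array([
    [6.0, 7.5, 0.80],
    [2.0, 9.0, 0.55],
    [8.0, 6.0, 0.90],
    [4.5, 7.0, 0.70],
    [7.0, 5.5, 0.85],
])
y = np.array([0.93, 0.60, 0.95, 0.75, 0.88])  # made-up exam results, Y
w = np.array([0.1, 0.1, 0.1])                 # one shared set of weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y_hat = sigmoid(X @ w)                 # one Ŷ per student, computed together
cost = np.sum(0.5 * (y_hat - y) ** 2)  # a single cost for the whole class
print(cost)
# The shared weights would then be adjusted once, based on this combined cost.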
Additional Reading
For further reading on this process, I will direct you towards "A list of cost functions used in neural networks, alongside applications", CrossValidated (2015).