Neural Net Notebook: Implementation Details + Pseudocode #617

Merged 1 commit on Aug 16, 2017
80 changes: 71 additions & 9 deletions neural_nets.ipynb
@@ -19,7 +19,9 @@
},
"outputs": [],
"source": [
"from learning import *"
"from learning import *\n",
"\n",
"from notebook import psource, pseudocode"
]
},
{
@@ -56,18 +58,18 @@
"\n",
"After that we will create our neural network in the `network` function. This function will make the necessary connections between the input layer, hidden layer and output layer. With the network ready, we will use the `BackPropagationLearner` to train the weights of our network for the examples provided in the dataset.\n",
"\n",
"The NeuralNetLearner returns the `predict` function, which can receive an example and feed-forward it into our network to generate a prediction."
"The NeuralNetLearner returns the `predict` function which, in short, can receive an example and feed-forward it into our network to generate a prediction.\n",
"\n",
"In more detail, the example values are first passed to the input layer and then they are passed through the rest of the layers. Each node calculates the dot product of its inputs and its weights, activates it and pushes it to the next layer. The final prediction is the node with the maximum value from the output layer."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%psource NeuralNetLearner"
"psource(NeuralNetLearner)"
]
},
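To make the feed-forward step concrete, here is a minimal NumPy sketch of the same idea (an illustration, not the repository's `NeuralNetLearner`; the sigmoid activation, the 2-3-2 layer sizes, and the weight initialization are assumptions of the sketch):

```python
import numpy as np

def sigmoid(x):
    # Logistic activation applied at every node in this sketch.
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(example, weights):
    """Propagate an input vector through the layers.

    weights[l][i, j] is the weight from node i in layer l
    to node j in layer l + 1.
    """
    a = np.asarray(example, dtype=float)  # activations of the input layer
    for W in weights:
        in_j = a @ W         # dot product of each node's inputs and weights
        a = sigmoid(in_j)    # activate and push to the next layer
    return a                 # activations of the output layer

# Hypothetical 2-3-2 network with small random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(2, 3)),
           rng.normal(scale=0.5, size=(3, 2))]
outputs = feed_forward([0.5, -1.0], weights)
prediction = int(np.argmax(outputs))  # output node with the maximum value
```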
{
@@ -101,6 +103,66 @@
"We can use the same technique for the weights in the input layer as well. After we have the gradients for both weights, we use gradient descent to update the weights of the network."
]
},
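As a worked version of that chain-rule step for an output-layer weight, assuming the sum-of-squared-errors loss used in the book and a differentiable activation g (a sketch of the derivation, not a quote from the text):

```latex
\begin{aligned}
E &= \tfrac{1}{2}\sum_j (y_j - a_j)^2,
   \qquad a_j = g(in_j), \qquad in_j = \textstyle\sum_i w_{i,j}\, a_i \\[4pt]
\frac{\partial E}{\partial w_{i,j}}
  &= -(y_j - a_j)\, g'(in_j)\, a_i
   \;=\; -\,a_i\, \Delta[j],
   \qquad \Delta[j] = g'(in_j)\,(y_j - a_j)
\end{aligned}
```

A gradient-descent step with learning rate α is therefore w_{i,j} ← w_{i,j} + α · a_i · Δ[j]; chaining the same rule through a hidden node gives Δ[i] = g′(in_i) Σ_j w_{i,j} Δ[j], exactly the updates that appear in the pseudocode below.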
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pseudocode"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"### AIMA3e\n",
"__function__ BACK-PROP-LEARNING(_examples_, _network_) __returns__ a neural network \n",
" __inputs__ _examples_, a set of examples, each with input vector __x__ and output vector __y__ \n",
"&emsp;&emsp;&emsp;&emsp;_network_, a multilayer network with _L_ layers, weights _w<sub>i,j</sub>_, activation function _g_ \n",
"&emsp;__local variables__: &Delta;, a vector of errors, indexed by network node \n",
"\n",
"&emsp;__repeat__ \n",
"&emsp;&emsp;&emsp;__for each__ weight _w<sub>i,j</sub>_ in _network_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;_w<sub>i,j</sub>_ &larr; a small random number \n",
"&emsp;&emsp;&emsp;__for each__ example (__x__, __y__) __in__ _examples_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;/\\* _Propagate the inputs forward to compute the outputs_ \\*/ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;__for each__ node _i_ in the input layer __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;_a<sub>i</sub>_ &larr; _x<sub>i</sub>_ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;__for__ _l_ = 2 __to__ _L_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;__for each__ node _j_ in layer _l_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;_in<sub>j</sub>_ &larr; &Sigma;<sub>_i_</sub> _w<sub>i,j</sub>_ _a<sub>i</sub>_ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;_a<sub>j</sub>_ &larr; _g_(_in<sub>j</sub>_) \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;/\\* _Propagate deltas backward from output layer to input layer_ \\*/ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;__for each__ node _j_ in the output layer __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&Delta;\\[_j_\\] &larr; _g_&prime;(_in<sub>j</sub>_) &times; (_y<sub>i</sub>_ &minus; _a<sub>j</sub>_) \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;__for__ _l_ = _L_ &minus; 1 __to__ 1 __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;__for each__ node _i_ in layer _l_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&Delta;\\[_i_\\] &larr; _g_&prime;(_in<sub>i</sub>_) &Sigma;<sub>_j_</sub> _w<sub>i,j</sub>_ &Delta;\\[_j_\\] \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;/\\* _Update every weight in network using deltas_ \\*/ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;__for each__ weight _w<sub>i,j</sub>_ in _network_ __do__ \n",
"&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;_w<sub>i,j</sub>_ &larr; _w<sub>i,j</sub>_ &plus; _&alpha;_ &times; _a<sub>i</sub>_ &times; &Delta;\\[_j_\\] \n",
" &emsp;__until__ some stopping criterion is satisfied \n",
" &emsp;__return__ _network_ \n",
"\n",
"---\n",
"__Figure ??__ The back\\-propagation algorithm for learning in multilayer networks."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pseudocode('Back-Prop-Learning')"
]
},
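For comparison with the actual `BackPropagationLearner` shown below, here is a direct Python translation of the pseudocode above (a minimal sketch: the sigmoid activation, the plain weight-matrix representation of the network, and a fixed epoch count as the stopping criterion are all assumptions, and the weights are initialized once, before the training loop):

```python
import numpy as np

def back_prop_learning(examples, layer_sizes, alpha=0.5, epochs=1000, seed=0):
    """Train weight matrices for a fully connected multilayer network.

    examples    -- iterable of (x, y) pairs of input/output vectors
    layer_sizes -- e.g. [2, 3, 2] for a 2-3-2 network
    """
    rng = np.random.default_rng(seed)
    g = lambda x: 1.0 / (1.0 + np.exp(-x))  # activation function
    g_prime = lambda a: a * (1.0 - a)       # g'(in) expressed via a = g(in)
    # w[l][i, j] connects node i in layer l to node j in layer l + 1,
    # initialized with small random numbers.
    w = [rng.normal(scale=0.1, size=(m, n))
         for m, n in zip(layer_sizes, layer_sizes[1:])]
    for _ in range(epochs):                 # "until some stopping criterion"
        for x, y in examples:
            # Propagate the inputs forward to compute the outputs.
            a = [np.asarray(x, dtype=float)]
            for W in w:
                a.append(g(a[-1] @ W))
            # Propagate deltas backward from output layer to input layer.
            deltas = [g_prime(a[-1]) * (np.asarray(y) - a[-1])]
            for l in range(len(w) - 1, 0, -1):
                deltas.insert(0, g_prime(a[l]) * (w[l] @ deltas[0]))
            # Update every weight in the network using the deltas.
            for l in range(len(w)):
                w[l] += alpha * np.outer(a[l], deltas[l])
    return w
```

The sketch omits bias weights, which the book folds in as a dummy activation a<sub>0</sub> = 1; adding them amounts to appending a constant 1 to each layer's activations. With one-hot target vectors, the prediction is the arg-max output node of the forward pass, as described earlier.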
{
"cell_type": "markdown",
"metadata": {},
@@ -112,11 +174,11 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%psource BackPropagationLearner"
"psource(BackPropagationLearner)"
]
},
{