
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression
classifier to recognize cats. This assignment will step you through how to do this with a Neural
Network mindset, and so will also hone your intuitions about deep learning.

Instructions:

Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.

You will learn to:

- Build the general architecture of a learning algorithm, including:
  - Initializing parameters
  - Calculating the cost function and its gradient
  - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.

1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.

- numpy (www.numpy.org) is the fundamental package for scientific computing with Python.
- h5py (http://www.h5py.org) is a common package to interact with a dataset that is stored in an H5 file.
- matplotlib (http://matplotlib.org) is a famous library to plot graphs in Python.
- PIL (http://www.pythonware.com/products/pil/) and scipy (https://www.scipy.org/) are used here to test your model with your own picture at the end.

In [15]: import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

%matplotlib inline

2 - Overview of the Problem set


Problem Statement: You are given a dataset ("data.h5") containing:


- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px and width = num_px).

You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

Let's get more familiar with the dataset. Load the data by running the following code.

In [16]: # Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

We added "_orig" at the end of image datasets (train and test) because we are going to preprocess
them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y
and test_set_y don't need any preprocessing).

Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can
visualize an example by running the following code. Feel free also to change the index value and
re-run to see other images.

In [17]: # Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")

y = [1], it's a 'cat' picture.

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
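One lightweight way to build that habit (a minimal sketch, not part of the assignment; the shapes are the ones this dataset produces below) is to assert the shapes you expect at each step:

import numpy as np

# Placeholder arrays with the shapes this assignment produces
X = np.zeros((12288, 209))  # flattened training set: (num_px*num_px*3, m_train)
w = np.zeros((12288, 1))    # one weight per input feature
Y = np.zeros((1, 209))      # one label per example

# Assert the shapes you expect, so mismatches fail loudly and early
assert X.shape[0] == w.shape[0], "w must have one weight per feature"
Z = np.dot(w.T, X)          # (1, 209)
assert Z.shape == Y.shape, "predictions and labels must align"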

Exercise: Find the values for:

- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)

Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].

In [18]: ### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))


print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))

Number of training examples: m_train = 209


Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)

Expected Output for m_train, m_test and num_px:

m_train 209

m_test 50

num_px 64

For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px * num_px * 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.

Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).

A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b*c*d, a) is to use:

X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
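As a quick sanity check of this trick (a minimal sketch with made-up dimensions, illustrative only):

import numpy as np

X = np.random.rand(4, 2, 2, 3)           # hypothetical (a, b, c, d) = (4, 2, 2, 3)
X_flatten = X.reshape(X.shape[0], -1).T  # collapse b, c, d; then transpose
print(X_flatten.shape)                   # (12, 4) == (b*c*d, a)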

In [19]: # Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###

print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))

train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]

Expected Output:

train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]

To represent color images, the red, green and blue channels (RGB) must be specified for each
pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.

One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).

Let's standardize our dataset.

In [20]: train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.

What you need to remember:

Common steps for pre-processing a new dataset are:

- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data (all three steps are sketched below)

3 - General Architecture of the learning algorithm

It's time to design a simple algorithm to distinguish cat images from non-cat images.

You will build a Logistic Regression, using a Neural Network mindset. The following figure explains why Logistic Regression is actually a very simple Neural Network!

Mathematical expression of the algorithm:

For one example $x^{(i)}$:

$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$

$$\hat{y}^{(i)} = a^{(i)} = \mathrm{sigmoid}(z^{(i)}) \tag{2}$$

$$\mathcal{L}(a^{(i)}, y^{(i)}) = -y^{(i)} \log(a^{(i)}) - (1 - y^{(i)}) \log(1 - a^{(i)}) \tag{3}$$

The cost is then computed by summing over all training examples:

$$J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}(a^{(i)}, y^{(i)}) \tag{6}$$
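For intuition, here is the loss of equation (3) evaluated on two made-up predictions (a minimal sketch, not part of the assignment):

import numpy as np

def loss(a, y):
    # Per-example cross-entropy loss of equation (3)
    return -y * np.log(a) - (1 - y) * np.log(1 - a)

print(loss(0.9, 1))  # confident and correct: ~0.105 (small penalty)
print(loss(0.1, 1))  # confident but wrong:   ~2.303 (large penalty)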
Key steps: In this exercise, you will carry out the following steps:

- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude

4 - Building the parts of our algorithm

The main steps for building a Neural Network are:

1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
   - Calculate current loss (forward propagation)
   - Calculate current gradient (backward propagation)
   - Update parameters (gradient descent)

You often build 1-3 separately and integrate them into one function we call model().

4.1 - Helper functions

Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $\mathrm{sigmoid}(w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().

In [21]: # GRADED FUNCTION: sigmoid

def sigmoid(z):
"""
Compute the sigmoid of z

Arguments:
z -- A scalar or numpy array of any size.

Return:
s -- sigmoid(z)
"""

### START CODE HERE ### (≈ 1 line of code)

s = 1/(1+ np.exp(-z))
### END CODE HERE ###

return s

In [22]: print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))

sigmoid([0, 2]) = [ 0.5 0.88079708]

Expected Output:

sigmoid([0, 2]): [ 0.5 0.88079708]

4.2 - Initializing parameters


Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of
zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's
documentation.

In [23]: # GRADED FUNCTION: initialize_with_zeros

def initialize_with_zeros(dim):
"""
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

Argument:
dim -- size of the w vector we want (or number of parameters in this case)

Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""

### START CODE HERE ### (≈ 1 line of code)

w, b = np.zeros((dim,1)), 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))

return w, b

In [25]: dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))

w = [[ 0.]
[ 0.]]
b = 0

Expected Output:

w: [[ 0.] [ 0.]]
b: 0

For image inputs, w will be of shape (num_px × num_px × 3, 1).


4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation
steps for learning the parameters.

Exercise: Implement a function propagate() that computes the cost function and its gradient.

Hints:

Forward Propagation:

- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, \ldots, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(a^{(i)}) + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]$

Here are the two formulas you will be using:

$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T \tag{7}$$

$$\frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)}) \tag{8}$$
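For completeness, formulas (7) and (8) follow from the chain rule applied to equation (3); this short derivation is standard for logistic regression and is included here only as a reasoning aid:

$$\frac{\partial \mathcal{L}}{\partial z^{(i)}} = a^{(i)} - y^{(i)}
\quad\Longrightarrow\quad
\frac{\partial J}{\partial w} = \frac{1}{m} \sum_{i=1}^{m} \left(a^{(i)} - y^{(i)}\right) x^{(i)} = \frac{1}{m} X (A - Y)^T,$$

and averaging the same per-example term $a^{(i)} - y^{(i)}$ directly gives $\frac{\partial J}{\partial b}$.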

In [26]: # GRADED FUNCTION: propagate

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                          # compute activation
    cost = -(1/m) * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))  # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = 1/m * np.dot(X, (A-Y).T)
    db = 1/m * np.sum(A-Y)
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost

In [27]: w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))

dw = [[ 0.99845601]
[ 2.39507239]]
db = 0.00145557813678
cost = 5.80154531939

Expected Output:

dw: [[ 0.99845601] [ 2.39507239]]
db: 0.00145557813678
cost: 5.801545319394553

4.4 - Optimization

You have initialized your parameters. You are also able to compute a cost function and its gradient. Now, you want to update the parameters using gradient descent.

Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $\theta = \theta - \alpha\, d\theta$, where $\alpha$ is the learning rate.
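As a tiny numeric illustration of this update rule (the values are made up):

theta, dtheta, alpha = 1.0, 0.5, 0.1  # made-up parameter, gradient, learning rate
theta = theta - alpha * dtheta        # one gradient descent step
print(theta)                          # 0.95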

In [28]: # GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """

    costs = []

    for i in range(num_iterations):

        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ###
        grads, cost = propagate(w, b, X, Y)
        ### END CODE HERE ###

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w - learning_rate * dw  # need to broadcast
        b = b - learning_rate * db
        ### END CODE HERE ###

        # Record the costs
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" % (i, cost))

    params = {"w": w,
              "b": b}

    grads = {"dw": dw,
             "db": db}

    return params, grads, costs

In [29]: params, grads, costs = optimize(w, b, X, Y, num_iterations = 100, learning_rate = 0.009, print_cost = False)

print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))

w = [[ 0.19033591]
[ 0.12259159]]
b = 1.92535983008
dw = [[ 0.67752042]
[ 1.41625495]]
db = 0.219194504541

Expected Output:

w: [[ 0.19033591] [ 0.12259159]]
b: 1.92535983008
dw: [[ 0.67752042] [ 1.41625495]]
db: 0.219194504541

Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:

1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this, sketched below).
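For reference, a minimal sketch of the vectorized alternative mentioned in step 2 (illustrative only; the graded cell below uses the loop form):

import numpy as np

A = np.array([[0.3, 0.7, 0.5, 0.9]])    # made-up activations of shape (1, m)
Y_prediction = (A > 0.5).astype(float)  # threshold every entry at once
print(Y_prediction)                     # [[0. 1. 0. 1.]]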

In [30]: # GRADED FUNCTION: predict

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
    ### END CODE HERE ###

    for i in range(A.shape[1]):

        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0
        ### END CODE HERE ###

    assert(Y_prediction.shape == (1, m))

    return Y_prediction

In [31]: w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))

predictions = [[ 1. 1. 0.]]

Expected Output:

predictions: [[ 1. 1. 0.]]

What to remember: You've implemented several functions that:

- Initialize (w, b)
- Optimize the loss iteratively to learn parameters (w, b):
  - computing the cost and its gradient
  - updating the parameters using gradient descent
- Use the learned (w, b) to predict the labels for a given set of examples

5 - Merge all functions into a model

You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order.

Exercise: Implement the model function. Use the following notation:

- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()

In [32]: # GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    ### START CODE HERE ###

    # initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    ### END CODE HERE ###

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train" : Y_prediction_train,
         "w" : w,
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}

    return d
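A note on the accuracy expression above: because predictions and labels are both 0/1, np.abs(Y_prediction - Y) equals 1 exactly on the misclassified examples, so its mean is the error rate. A minimal check with made-up vectors:

import numpy as np

Y_prediction = np.array([[1., 0., 1., 1.]])  # made-up predictions
Y = np.array([[1., 0., 0., 1.]])             # made-up labels: one mistake out of four

# Mean absolute difference = error rate (0.25), so accuracy = 75.0 %
print(100 - np.mean(np.abs(Y_prediction - Y)) * 100)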

Run the following cell to train your model.

In [34]: d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)

Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %

Expected Output:

Cost after iteration 0: 0.693147
⋮
Train Accuracy: 99.04306220095694 %
Test Accuracy: 70.0 %

Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!

Also, you see that the model is clearly overfitting the training data. Later in this specialization you
will learn how to reduce overfitting, for example by using regularization. Using the code below (and
changing the index variable) you can look at predictions on pictures of the test set.

In [35]: # Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")

y = 1, you predicted that it is a "cat" picture.

Let's also plot the cost function and the gradients.

In [155]: # Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()

Interpretation: You can see the cost decreasing. It shows that the parameters are being learned.
However, you see that you could train the model even more on the training set. Try to increase the
number of iterations in the cell above and rerun the cells. You might see that the training set
accuracy goes up, but the test set accuracy goes down. This is called overfitting.

6 - Further analysis (optional/ungraded exercise)

Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.

Choice of learning rate

Reminder: In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.

Let's compare the learning curve of our model with several choices of learning rates. Run the cell
below. This should take about 1 minute. Feel free also to try different values than the three we have
initialized the learning_rates variable to contain, and see what happens.

In [156]: learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print ("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
    print ('\n' + "-------------------------------------------------------" + '\n')

for i in learning_rates:
    plt.plot(np.squeeze(models[str(i)]["costs"]), label = str(models[str(i)]["learning_rate"]))

plt.ylabel('cost')
plt.xlabel('iterations')

legend = plt.legend(loc="upper center", shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()

learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %

-------------------------------------------------------

learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %

-------------------------------------------------------

learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %

-------------------------------------------------------

Interpretation:

- Different learning rates give different costs and thus different prediction results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
  - Choose the learning rate that better minimizes the cost function.
  - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)

7 - Test with your own image (optional/ungraded exercise)

Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:

1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!

In [158]: ## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg"   # change this to the name of your image file
## END CODE HERE ##

# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)

plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")

y = 0.0, your algorithm predicts a "non-cat" picture.
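Note: scipy.ndimage.imread and scipy.misc.imresize were removed in later SciPy releases, so the cell above only runs on the old course environment. A sketch of equivalent preprocessing with PIL (assuming the same images/ folder, the trained model d, and num_px from above; the file name is a placeholder) might look like:

import numpy as np
from PIL import Image

fname = "images/my_image.jpg"  # placeholder file name

# Resize to (num_px, num_px), flatten to a (num_px*num_px*3, 1) column,
# and scale to [0, 1] to match the training preprocessing
image = Image.open(fname)
my_image = np.array(image.resize((num_px, num_px))).reshape((1, num_px * num_px * 3)).T
my_image = my_image / 255.

my_predicted_image = predict(d["w"], d["b"], my_image)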

What to remember from this assignment:

1. Preprocessing the dataset is important.
2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!

Finally, if you'd like, we invite you to try different things on this Notebook. Make sure you submit before trying anything. Once you submit, things you can play with include:

- Play with the learning rate and the number of iterations
- Try different initialization methods and compare the results
- Test other preprocessings (center the data, or divide each row by its standard deviation)

Bibliography:

- http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/
- https://stats.stackexchange.com/questions/211436/why-do-we-normalize-images-by-subtracting-the-datasets-image-mean-and-not-the-c

