Boston Housing
In addition to implementing code, there will be questions that you must answer which relate to the project and
your implementation. Each section where you will answer a question is preceded by a 'Question X' header.
Carefully read each question and provide thorough answers in the following text boxes that begin with
'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the
implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In
addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and
tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is
seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary
value. This model would prove to be invaluable for someone like a real estate agent who could make use of
such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository
(https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of
the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston,
Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the
dataset:
16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored
values and have been removed.
1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been
removed.
The features 'RM' , 'LSTAT' , 'PTRATIO' , and 'MEDV' are essential. The remaining non-relevant
features have been excluded.
The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries
required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
# Success
print("Boston housing dataset has {} data points with {} variables eac
h.".format(*data.shape))
Boston housing dataset has 489 data points with 4 variables each.
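For reference, a minimal sketch of what the loading cell above might contain, assuming the preprocessed data ships as a 'housing.csv' file with the four columns listed earlier (the file name is an assumption):

# A minimal loading sketch (file name 'housing.csv' is assumed)
import numpy as np
import pandas as pd

data = pd.read_csv('housing.csv')        # columns: RM, LSTAT, PTRATIO, MEDV (assumed)
prices = data['MEDV']                    # target variable
features = data.drop('MEDV', axis=1)     # RM, LSTAT, PTRATIO

print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))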
Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and
provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental
practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the
value of houses, we will need to separate the dataset into features and the target variable. The features,
'RM' , 'LSTAT' , and 'PTRATIO' , give us quantitative information about each data point. The target
variable, 'MEDV' , will be the variable we seek to predict. These are stored in features and prices ,
respectively.
In the code cell below, you will need to implement the following:
Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV' , which is stored in
prices .
Store each calculation in their respective variable.
Using your intuition, for each of the three features above, do you think that an increase in the value of
that feature would lead to an increase or a decrease in the value of 'MEDV' ? Justify your answer for each.
Would you expect a home that has an 'RM' value (number of rooms) of 6 to be worth more or less than a
home that has an 'RM' value of 7?
Would you expect a neighborhood that has an 'LSTAT' value (percent of lower-class workers) of 15 to have
home prices worth more or less than a neighborhood that has an 'LSTAT' value of 20?
Would you expect a neighborhood that has a 'PTRATIO' value (ratio of students to teachers) of 10 to have
home prices worth more or less than a neighborhood that has a 'PTRATIO' value of 15?
Answer: A home with an 'RM' value of 6 would be worth less than a home with an 'RM' value of 7; a higher
'RM' generally means a higher 'MEDV'. A neighborhood with an 'LSTAT' value of 15 would have home prices
worth more than one with an 'LSTAT' value of 20; a higher 'LSTAT' generally means a lower 'MEDV'. A
neighborhood with a 'PTRATIO' value of 10 would have home prices worth more than one with a 'PTRATIO' of
15; a higher 'PTRATIO' generally means a lower 'MEDV'.
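To sanity-check this intuition, one could look at the sign of the correlation between each feature and 'MEDV'. A small sketch, assuming data is the DataFrame loaded earlier:

# Direction of association between each feature and the target (sign of Pearson correlation)
for col in ['RM', 'LSTAT', 'PTRATIO']:
    corr = data[col].corr(data['MEDV'])
    print("Correlation between {} and MEDV: {:+.2f}".format(col, corr))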
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a
prediction. Being able to make accurate evaluations of each model's performance through the use of these
tools and techniques helps to greatly reinforce the confidence in your predictions.
The values for R2 range from 0 to 1, which captures the percentage of squared correlation between the
predicted and actual values of the target variable. A model with an R2 of 0 is no better than a model that
always predicts the mean of the target variable, whereas a model with an R2 of 1 perfectly predicts the target
variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be
explained by the features. A model can be given a negative R2 as well, which indicates that the model is
arbitrarily worse than one that always predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
Use r2_score from sklearn.metrics to perform a performance calculation between y_true and
y_predict .
Assign the performance score to the score variable.
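A minimal sketch of the performance_metric function described above, using r2_score as instructed:

# Coefficient of determination between true and predicted values
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """Calculate and return the R2 score between y_true and y_predict."""
    score = r2_score(y_true, y_predict)
    return score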
True Value   Prediction
3.0          2.5
-0.5         0.0
2.0          2.1
7.0          7.8
4.2          5.3
Run the code cell below to use the performance_metric function and calculate this model's coefficient of
determination.
Would you consider this model to have successfully captured the variation of the target variable?
Why or why not?
Hint: The R2 score is the proportion of the variance in the dependent variable that is predictable from the
independent variable. In other words:
R2 score of 0 means that the dependent variable cannot be predicted from the independent variable.
R2 score of 1 means the dependent variable can be predicted from the independent variable.
R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable. An
R2 score of 0.40 means that 40 percent of the variance in Y is predictable from X.
Answer: Yes. The R2 score is the proportion of the variance in the dependent variable that is predictable
from the independent variable. It is calculated as the square of the correlation between the response values
and the predicted response values. An R2 value of 0.923 means that about 92% of the variance in the target
can be predicted from the independent variables.
For the code cell below, you will need to implement the following:
Use train_test_split from sklearn.model_selection to shuffle and split the features and
prices data into training and testing sets.
Split the data into 80% training and 20% testing.
Set the random_state for train_test_split to a value of your choice. This ensures results are
consistent.
Assign the train and testing splits to X_train , X_test , y_train , and y_test .
# Success
print("Training and testing split was successful.")
Hint: Think about how overfitting or underfitting is contingent upon how the data is split.
Answer: Maximizing training accuracy rewards overly complex models that overfit the training data. When we
train a model, it does its best to find patterns in the training data while minimizing the error rate, but in
doing so it can also effectively memorize the data. If we only checked the training accuracy of such a model,
we might see close to 100% accuracy with nearly zero error, yet it would be outperformed by a simpler model
on new examples, which is the hallmark of overfitting. Holding out a testing set lets us detect this.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each
graph visualizes the learning curves of the model for both training and testing as the size of the training set is
increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as
the standard deviation). The model is scored on both the training and testing sets using R2, the coefficient of
determination.
Run the code cell below and use these graphs to answer the following question.
In [31]: # Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Hint: Are the learning curves converging to particular scores? Generally speaking, the more data you have, the
better. But if your training and testing curves are converging with a score above your benchmark threshold,
would this be necessary? Think about the pros and cons of adding more training points based on if the training
and testing curves are converging.
Answer: max_depth = 1 (a high-bias scenario). Both the training score and the testing score have reached a
plateau, which means the model is unlikely to improve if we add more training points.
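The plots above come from the project's vs (visuals) helper module. A rough equivalent using scikit-learn directly might look like the sketch below; the depth, cv strategy, and train_sizes are assumptions:

# Learning curve for a shallow decision tree regressor, scored with R2
import numpy as np
from sklearn.model_selection import learning_curve, ShuffleSplit
from sklearn.tree import DecisionTreeRegressor

cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
train_sizes, train_scores, test_scores = learning_curve(
    DecisionTreeRegressor(max_depth=1), features, prices,
    train_sizes=np.linspace(0.1, 1.0, 9), cv=cv, scoring='r2')

print("Mean training scores:", train_scores.mean(axis=1))
print("Mean testing scores: ", test_scores.mean(axis=1))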
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the
training data using different maximum depths. The graph produces two complexity curves — one for training
and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote
the uncertainty in those curves, and the model is scored on both the training and validation sets using the
performance_metric function.
Run the code cell below and use this graph to answer the following two questions Q5 and Q6.
Hint: High bias is a sign of underfitting (the model is not complex enough to pick up the nuances in the data) and
high variance is a sign of overfitting (the model is memorizing the data and cannot generalize well). Think about
which model (depth 1 or 10) aligns with which part of the tradeoff.
Answer: With a maximum depth of 1, the model suffers from high bias. With a maximum depth of 10, the model
suffers from high variance: the training score is high while the testing score is low, and the gap between the
two tells us that the model fits the training data well but does not generalize.
Hint: Look at the graph above Question 5 and see where the validation scores lie for the various depths that
have been assigned to the model. Does it get better with increased depth? At what point do we get our best
validation score without overcomplicating our model? And remember, Occam's Razor states "Among competing
hypotheses, the one with the fewest assumptions should be selected."
Answer: A maximum depth of 5. At that depth the training score and the validation score do not have much of a
gap, indicating that the model is not suffering from high variance.
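As with the learning curves, the complexity curve above is drawn by the vs helper. A rough scikit-learn equivalent is sketched below; the cv strategy and depth range are assumptions, and it uses the training split as described above:

# Complexity (validation) curve over max_depth, scored with R2
from sklearn.model_selection import validation_curve, ShuffleSplit
from sklearn.tree import DecisionTreeRegressor

cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
depth_range = range(1, 11)
train_scores, valid_scores = validation_curve(
    DecisionTreeRegressor(), X_train, y_train,
    param_name='max_depth', param_range=depth_range, cv=cv, scoring='r2')

for depth, t, v in zip(depth_range, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print("max_depth={:2d}  train R2={:.3f}  validation R2={:.3f}".format(depth, t, v))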
Hint: When explaining the Grid Search technique, be sure to touch upon why it is used, what the 'grid' entails
and what the end goal of this method is. To solidify your answer, you can also give an example of a parameter in
a model that can be optimized using this approach.
Answer: Grid search is a technique for finding the right set of hyperparameters for a particular model. It
evaluates a performance metric such as mean squared error or R-squared for every combination of hyperparameter
values in a predefined grid, allowing you to choose the best combination.

The input parameters commonly passed to scikit-learn's GridSearchCV are estimator, param_grid, scoring and cv.
Estimator: an object of that type is instantiated for each grid point; it is assumed to implement the
scikit-learn estimator interface. Param_grid: a dictionary with parameter names (strings) as keys and lists of
parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each
dictionary in the list are explored; this enables searching over any sequence of parameter settings. Scoring: a
string or a scorer callable object / function with signature scorer(estimator, X, y); if None, the score method
of the estimator is used. Scoring is necessary for grid search to identify how well the model is performing; we
will use an R-squared scorer in our grid search function. Cv: a cross-validation generator or an iterable that
determines the cross-validation splitting strategy; if None is passed, the default 3-fold cross-validation is
used.

The 'grid' signifies a grid of hyperparameter values; for each combination, the model is trained and its score
is calculated on the validation data. The performance metric varies with the kind of problem: for regression
problems, R2, RMSE or MAE are generally used, and for classification problems, accuracy or ROC_AUC. A model
parameter is a configuration variable that is internal to the model and whose value can be estimated from data;
parameters are required by the model when making predictions (for example, the support vectors in a support
vector machine and the coefficients in a linear or logistic regression). A model hyperparameter is a
configuration that is external to the model and whose value cannot be estimated from data (for example, C and
sigma for support vector machines and k in k-nearest neighbors). To check each model with a specific
hyperparameter combination, we create a grid over the sets of parameter values passed to the model and then
choose the best combination on the basis of the performance metric.

Example of using grid search (a generic classification example, not the housing data):

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

est = DecisionTreeClassifier(random_state=0)
split_range = list(range(2, 20))                  # min_samples_split must be at least 2
param_grid = dict(min_samples_split=split_range)
grid = GridSearchCV(est, param_grid, cv=10, scoring='accuracy')
grid.fit(X_train, y_train)
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique?
What benefit does this technique provide for grid search when optimizing a model?
Hint: When explaining the k-fold cross validation technique, be sure to touch upon what 'k' is, how the dataset
is split into different parts for training and testing and the number of times it is run based on the 'k' value.
When thinking about how k-fold cross validation helps grid search, think about the main drawbacks of grid
search which are hinged upon using a particular subset of data for training or testing and how k-fold cv
could help alleviate that. You can refer to the docs (http://scikit-
learn.org/stable/modules/cross_validation.html#cross-validation) for your answer.
Answer: K-fold cross-validation splits the dataset into K (say 10) "folds" of equal size. Each fold acts as the
validation set exactly once and acts as part of the training set the other K-1 (9) times. This helps prevent
overfitting to a single held-out set, which can happen when hyperparameters are tweaked until the estimator
performs optimally on that one set.

For example, if we have 1000 rows of data in total and 200 rows are taken out for final testing of the model,
800 rows are kept for training. These 800 rows are divided into 10 folds; one fold of 80 rows is held out as the
validation set while the remaining 720 rows are used for training, and the process is repeated so that each fold
serves as the validation set once. When we obtain different models from the different folds, we average the
evaluation metric across all of them to get an unbiased estimate of how the model generalizes to unseen data.

When used together with grid search, we take one hyperparameter combination from the grid and keep it constant
for one full round of k-fold cross-validation: the whole splitting into k folds, training, and validating on each
fold in turn. This gives an unbiased estimate of the evaluation metric, telling us whether that combination of
hyperparameters suits the particular dataset. The process is repeated for every combination of hyperparameters in
the grid.
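A minimal sketch of the procedure described above, assuming X_train and y_train from the earlier split (the fold count and tree depth are arbitrary choices):

# 10-fold cross-validation of a decision tree regressor, scored with R2
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

kf = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeRegressor(max_depth=4), X_train, y_train,
                         cv=kf, scoring='r2')
print("Mean R2 across folds: {:.3f} (+/- {:.3f})".format(scores.mean(), scores.std()))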
In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-
validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in
Question 8, this type of cross-validation technique is just as useful! The ShuffleSplit() implementation
below will create 10 ( 'n_splits' ) shuffled sets, and for each shuffle, 20% ( 'test_size' ) of the data will
be used as the validation set. While you're working on your implementation, think about the contrasts and
similarities it has to the K-fold cross-validation technique.
For the fit_model function in the code cell below, you will need to implement the following:
params = dict(max_depth=depth_range)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
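For context, a minimal sketch of what the complete fit_model function might look like, built from the pieces named above; the exact depth range and ShuffleSplit settings are assumptions, and performance_metric is the function defined earlier:

# Grid search over max_depth for a decision tree regressor, scored with performance_metric
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV, ShuffleSplit
from sklearn.tree import DecisionTreeRegressor

def fit_model(X, y):
    """Perform grid search over 'max_depth' and return the best-scoring model."""
    # Cross-validation sets: 10 shuffled splits, 20% held out each time
    cv_sets = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)

    regressor = DecisionTreeRegressor(random_state=0)

    # Candidate depths 1 through 10
    depth_range = list(range(1, 11))
    params = dict(max_depth=depth_range)

    # Use the R2-based performance_metric defined earlier as the scoring function
    scoring_fnc = make_scorer(performance_metric)

    grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets)

    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)

    # Return the optimal model after fitting the data
    return grid.best_estimator_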
Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of
input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about
the input data are, and can respond with a prediction for the target variable. You can use these predictions to
gain information about data where the value of the target variable is unknown — such as data the model was
not trained on.
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
In [35]: # Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
Hint: The answer comes from the output of the code snippet above.
Answer: The optimal model has a maximum depth of 4. My earlier guess was a maximum depth of 5.
What price would you recommend each client sell his/her home at?
Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response. Of the
three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty
level; while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best
public schools.
Run the code block below to have your optimized model make predictions for each client's home.
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
Answer: Prices to recommend: Client 1: $403,025; Client 2: $237,478; Client 3: $931,636. These prices seem
reasonable given the values for the respective features. For Client 3 the price seems reasonable due to the
larger number of rooms and the lower student-teacher ratio. For Client 2 the price seems reasonable due to fewer
rooms and a higher student-teacher ratio. For Client 1 the price seems reasonable due to an average number of
rooms and a fairly good student-teacher ratio.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to
sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate
for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to
allow a model to adequately capture the target variable — i.e., the model is underfitted.
Run the code cell below to run the fit_model function ten times with different training and testing sets
to see how the prediction for a specific client changes with respect to the data it's trained on.
Trial 1: $391,183.33
Trial 2: $419,700.00
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $351,577.61
Trial 10: $413,700.00
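The sensitivity of the model can be summarized by the spread of these ten predictions. A small sketch, with the trial prices copied from the output above:

# Spread of the ten trial predictions (values copied from the output above)
trial_prices = [391183.33, 419700.00, 415800.00, 420622.22, 418377.27,
                411931.58, 399663.16, 407232.00, 351577.61, 413700.00]
price_range = max(trial_prices) - min(trial_prices)
print("Range in predicted prices: ${:,.2f}".format(price_range))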
Question 11 - Applicability
In a few sentences, discuss whether the constructed model should or should not be used in a real-world
setting.
Hint: Take a look at the range in prices as calculated in the code snippet above. Some questions to answer:
How relevant today is data that was collected from 1978? How important is inflation?
Are the features present in the data sufficient to describe a home? Do you think factors like quality of
appliances in the home, square feet of the plot area, presence of a pool or not, etc. should factor in?
Is the model robust enough to make consistent predictions?
Would data collected in an urban city like Boston be applicable in a rural city?
Is it fair to judge the price of an individual home based on the characteristics of the entire neighborhood?
Answer: Data collected in 1978 would not be very relevant today, as the demographics have changed. Inflation is
important because of the increasing cost of materials used in house construction and of features added to
houses. These features are not sufficient to describe a home; other factors like the quality of appliances, the
square footage of the plot, and the presence of a pool would also affect house prices. Our model is robust
enough to make reasonably consistent predictions, given that the difference in predicted prices across multiple
runs is not large. Data collected in an urban city like Boston may not be applicable in a rural city, as the
demographics differ. It is somewhat fair to judge the price of an individual home based on the characteristics
of the entire neighborhood, provided the homes in that neighborhood are quite similar.
Note: Once you have completed all of the code implementations and successfully answered
each question above, you may finalize your work by exporting the iPython Notebook as an
HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook
as your submission.