Management Science
X̄ = ΣX / n = average (mean) of the X values
Ȳ = ΣY / n = average (mean) of the Y values
b₁ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
b₀ = Ȳ − b₁X̄

Assumptions of Regression (L.I.N.E.)
- Linearity – the relationship between X and Y is linear
- Independence of errors – the error values (differences between observed and estimated values) are statistically independent
- Normality of error – the error values are normally distributed for any given value of X
- Equal variance (homoskedasticity) – the probability distribution of the errors has constant variance

Interpreting an Estimated Regression Equation
- slope (b₁) – tells us how much, and in what direction, the dependent (response) variable changes for each one-unit increase in the independent variable X
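A minimal sketch (in Python) of how the slope and intercept formulas above can be computed; the X and Y values are hypothetical, for illustration only:

```python
# Least-squares slope (b1) and intercept (b0) using the formulas above.
# X and Y are hypothetical values, for illustration only.
X = [1, 2, 3, 4, 5]
Y = [2.1, 2.9, 3.8, 5.2, 5.9]

n = len(X)
x_bar = sum(X) / n                                   # mean of X
y_bar = sum(Y) / n                                   # mean of Y

num = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))
den = sum((x - x_bar) ** 2 for x in X)
b1 = num / den                                       # slope
b0 = y_bar - b1 * x_bar                              # intercept

print(f"Estimated regression equation: Y-hat = {b0:.3f} + {b1:.3f} X")
# b1 tells how much Y changes, and in what direction, for each one-unit increase in X.
```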
WHERE (in the forecast accuracy formulas):
n = number of periods of the time series
k = number of periods at the beginning of the time series for which a forecast cannot be computed

B. Mean Squared Error (MSE) – obtained by getting the average of the squared forecast errors, eᵢ

Comparing the Three Measures
- MAE and MSE:
  - measures that avoid the problem of positive and negative forecasting errors offsetting one another
  - their size depends upon the scale of the data
  - this makes it difficult to compare different time intervals or to compare across different time series
- MAPE has the advantage of being a relative (percentage) measure, so its value does not depend on the scale of the data

ALL THREE MEASURES
- All three simply measure how well the forecasting method can forecast historical values of the time series
- The smaller the value of the accuracy measure, the more accurate the forecast

THINGS TO REMEMBER:
- Good forecast accuracy on historical data may not assure good forecasts of future values.
- Forecast accuracy gives an idea of how close the forecasted values predicted by the estimated model are to the actual values of the corresponding historical (past) data.
- MAD computes the absolute difference.
- MAPE computes errors relative to the magnitude of the data and is commonly used when comparing cases of different size or degree/amount (e.g., case 1 in millions; case 2 in tens or units).
- ** The lower the value of any of the 3 measures, the better the forecast accuracy.

Simple Forecasting Methods
1. Naïve Forecasting Method – uses the most recent past observation as the forecasted value for the next period
2. Average of All Historical Data – uses the average of all previous data as the forecasted value for the next period

Example:
Forecast the gasoline time series using the naïve method. Then, compute MAE, MSE, and MAPE. Compare the results.

Example 2: Rosco Drugs
Sales of Comfort brand headache medicine for the past 10 weeks at Rosco Drugs are shown below. If Rosco uses the naïve forecast method to forecast sales for weeks 2–10, what are the resulting MAE, MSE, and MAPE values?
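A minimal sketch of the naïve method and the three accuracy measures in Python; the weekly sales values below are hypothetical placeholders, not the actual Rosco Drugs figures:

```python
# Naive forecast: the forecast for each period is the previous period's actual value.
# MAE, MSE, and MAPE are averaged over the n - k periods that have a forecast
# (for the naive method, k = 1, since period 1 has no prior observation).
sales = [100, 104, 98, 102, 101, 99, 103, 100, 102, 101]   # illustrative data only

actual   = sales[1:]            # periods 2..n
forecast = sales[:-1]           # naive forecasts for periods 2..n
errors   = [a - f for a, f in zip(actual, forecast)]

n_forecasts = len(errors)
mae  = sum(abs(e) for e in errors) / n_forecasts
mse  = sum(e ** 2 for e in errors) / n_forecasts
mape = sum(abs(e) / a for e, a in zip(errors, actual)) / n_forecasts * 100

print(f"MAE = {mae:.2f}, MSE = {mse:.2f}, MAPE = {mape:.2f}%")
```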
Moving Averages and Exponential Smoothing
- Three forecasting methods are appropriate for a time series with a horizontal pattern:
  - Moving Averages
  - Weighted Moving Averages
  - Exponential Smoothing
- They are called smoothing methods because their objective is to smooth out random fluctuations in the time series
- Most appropriate for short-range forecasts

Moving Averages
- Uses the average of the most recent k data values in the time series as the forecast for the next period:
  Ŷₜ₊₁ = Σ(most recent k data values) / k
- Each observation in the moving average calculation receives the same weight
- "Moving" is used because every time a new observation becomes available, it replaces the oldest observation in the equation
- This results in a change in the average as new observations become available
- Somewhat like a compound average (as with compound interest)

Application of Moving Averages
1. Select the order k (the number of time series values in each average)
   1.1 A smaller value of k will track shifts in a time series more quickly than a larger value of k
   1.2 If more past observations are considered relevant, then a larger value of k is better
2. Uses several values from the recent past to develop forecasts
3. Useful for forecasting items that are relatively stable and do not display any pronounced behavior (e.g., a trend or a seasonal pattern)
4. The longer the moving average period, the smoother the data will be

Example: Moving Average
If Rosco Drugs uses a 3-period moving average to forecast sales, what are the forecasts for weeks 4–11?

Weighted Moving Averages
1. Select the number of data values to be included in the average
2. Choose the weight for each of the data values
   2.1 The more recent observations are typically given more weight than older observations
   2.2 For convenience, the weights should sum to 1
   2.3 Example: 3WMA = .2(110) + .3(115) + .5(125) = 119
   2.4 .2 + .3 + .5 = 1; 125 is the most recent observation

Example: Weighted Moving Average

Simple and weighted moving averages are effective in smoothing out fluctuations to provide stable estimates.
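A minimal sketch of both averaging methods in Python; the data values and weights are taken from the 3WMA example above, and the function names are my own labels:

```python
# Simple k-period moving average vs. weighted moving average forecasts.
def moving_average_forecast(series, k):
    """Forecast for the next period = mean of the most recent k values."""
    return sum(series[-k:]) / k

def weighted_moving_average_forecast(series, weights):
    """weights are listed oldest -> most recent and should sum to 1."""
    recent = series[-len(weights):]
    return sum(w * y for w, y in zip(weights, recent))

data = [110, 115, 125]                                 # values from the 3WMA example above
print(moving_average_forecast(data, k=3))              # (110 + 115 + 125) / 3 = 116.67
print(weighted_moving_average_forecast(data, [.2, .3, .5]))   # .2(110) + .3(115) + .5(125) = 119.0
```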
Problems of moving averages
1. Increasing n does smooth fluctuations better, but it makes the method less sensitive to real changes
2. It cannot pick up trends very well. Since they are averages, they will always stay within past levels and will not predict a change.

Exponential Smoothing

Rewritten Formula for Exponential Smoothing Forecast:
Ŷₜ₊₁ = αYₜ + (1 − α)Ŷₜ
Where:
Ŷₜ = forecast of the time series for period t
α = smoothing constant (0 ≤ α ≤ 1)
and let Ŷ₂ = Y₁ (to initiate the computations)
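A minimal sketch of this recursion in Python (equivalently, the next forecast is the previous forecast adjusted by α times the latest forecast error); the series and α = 0.3 are hypothetical, for illustration only:

```python
# Exponential smoothing recursion: Y-hat(t+1) = alpha*Y(t) + (1 - alpha)*Y-hat(t),
# initialized with Y-hat(2) = Y(1).
def exponential_smoothing_forecasts(series, alpha):
    """forecasts[i] holds Y-hat for period i+1; period 1 has no forecast."""
    forecasts = [None, series[0]]                     # Y-hat(2) = Y(1)
    for t in range(2, len(series) + 1):               # t = current period number
        forecasts.append(alpha * series[t - 1] + (1 - alpha) * forecasts[t - 1])
    return forecasts

# Illustrative use with a hypothetical series and alpha = 0.3 (not from the notes):
print(exponential_smoothing_forecasts([100, 104, 98, 102, 101], alpha=0.3))
```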