Operations Research-Reviewer
Probability – is the chance that something will happen. Probabilities are expressed as fractions or as
decimals between 0 and 1.
*When you assign a probability of 0, you mean that something can never happen; when you assign a
probability of 1, you mean that something will always happen.
Event – is one or more of the possible outcomes of doing something. If we toss a coin, getting a tail
would be an event; getting a head would be another event.
Mutually exclusive events – events are mutually exclusive if one and only one of them can take place at a time.
Example: coin toss – either heads or tails may turn up, but not both.
3 TYPES OF PROBABILITY
Classical Probability / Approach – often called a priori probability because, if we keep using orderly
examples such as fair coins, unbiased dice, or decks of cards, we can state the answer in advance
(a priori) without ever tossing a coin, rolling a die, or drawing a card. We do not even have to perform
experiments to make probability statements about fair coins, unbiased dice, or decks of cards; rather,
we can make probability statements based on logical reasoning before any experiments take place.
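In symbols, the classical probability of an event A is
    P(A) = (number of outcomes favorable to A) / (total number of equally likely possible outcomes)
For a fair coin, for example, P(head) = 1/2; for an unbiased die, P(rolling a 3) = 1/6.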
Relative Frequency of Occurrence – defines probability as the proportion of times that an event occurs
in the long run when conditions are stable, or as the observed relative frequency of an event in a very
large number of trials.
Subjective Probability – is based on the personal belief or feelings of the person who makes the
probability estimate. We can define subjective probability as the probability assigned to an event on
the basis of whatever evidence is available.
Symbols – are used to simplify the presentation of ideas.
Marginal or unconditional probability – a single probability, where only one event can take place.
Venn Diagram – a pictorial representation of probability concepts, named after the English logician John Venn; the entire sample space is represented by a rectangle, and the events in the sample space are portions of that rectangle.
*If two events are not mutually exclusive, their parts of the rectangle will overlap. If two events are
mutually exclusive, their parts of the rectangle will not overlap each other.
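These pictures correspond to the standard addition rules for probabilities:
    Mutually exclusive events:       P(A or B) = P(A) + P(B)
    Not mutually exclusive events:   P(A or B) = P(A) + P(B) - P(A and B)
For one toss of a fair coin, for example, P(head or tail) = 0.5 + 0.5 = 1.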
Independent events – are those whose probabilities are in no way affected by the occurrence of any
other events.
Statistical dependence – exists when the probability of some event is dependent upon or affected
by the occurrence of some other event.
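In symbols, the standard multiplication rules are:
    Statistical independence:   P(A and B) = P(A) × P(B), and P(B | A) = P(B)
    Statistical dependence:     P(A and B) = P(B | A) × P(A)
For example, two tosses of a fair coin are independent, so P(two heads) = 0.5 × 0.5 = 0.25.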
Revised or posterior probabilities have value in decision making. The origin of the concept of
obtaining posterior probabilities with limited information is credited to the Reverend Thomas Bayes
(1702 - 1761), and the basic formula for conditional probability under conditions of statistical
dependence is called Bayes' theorem.
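In its simplest form, Bayes' theorem revises a prior probability P(A) into a posterior probability P(A | B) after new information B is observed:
    P(A | B) = [P(B | A) × P(A)] / P(B),   where P(B) = P(B | A)P(A) + P(B | not A)P(not A)
With made-up numbers for illustration: if 1% of parts are defective, and a test flags 90% of defective parts but also 5% of good parts, then P(defective | flagged) = (0.90)(0.01) / [(0.90)(0.01) + (0.05)(0.99)] ≈ 0.15.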
Random variable – is a variable that takes on different values as a result of the outcomes of a
random experiment; can either be a discrete or continuous random variable.
A priori probability – a probability estimate prior to receiving new information.
Bayes' theorem – the basic formula for conditional probability under statistical dependence.
Bernoulli process – a process in which each trial has only two possible outcomes, where the
probability of the outcome of any trial remains fixed over time, and where the trials are statistically
independent.
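For a Bernoulli process with probability of success p on each trial, the binomial formula gives the probability of exactly r successes in n trials:
    P(r successes in n trials) = [n! / (r!(n - r)!)] × p^r × (1 - p)^(n - r)
For example, the probability of exactly 2 heads in 3 tosses of a fair coin is 3 × (0.5)^2 × (0.5)^1 = 0.375.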
Classical probability – the number of outcomes favorable to the occurrence of an event divided by
the total number of possible outcomes.
Collectively exhaustive events – a list of events that represents all the possible outcomes of an
experiment.
Conditional probability – the probability of one event occurring given that another event has
occurred.
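In symbols, the conditional probability of B given that A has occurred is
    P(B | A) = P(A and B) / P(A)
With illustrative numbers: if P(A and B) = 0.20 and P(A) = 0.40, then P(B | A) = 0.20 / 0.40 = 0.50.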
Continuous random variable – a random variable allowed to take on any value within a given
range.
Discrete probability distribution – a probability distribution in which the variable is allowed to take
on only a limited number of values.
Discrete random variable – A random variable allowed to take on only a limited number of values.
Event – one or more of the possible outcomes of doing something, or one of the outcomes of an
experiment
Marginal probability – the unconditional probability of one event occurring; the probability of a single
event.
Normal distribution – A distribution of a continuous random variable in which the curve has a single
peak, in which it is bell-shaped, in which the mean lies at the center of the distribution and the curve
is symmetrical around a vertical line erected at the mean, and in which the two tails extend indefinitely
and never touch the horizontal axis.
Poisson distribution – A discrete distribution in which the probability of the occurrence of an event
within a very small time period is very small, in which the probability that two or more such events will
occur within the same small time interval is effectively 0, and in which the probability of the
occurrence of the event within one time period is independent of where that time period is.
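In symbols, if λ is the mean number of occurrences per time period, the Poisson probability of exactly x occurrences is
    P(x) = (λ^x × e^(-λ)) / x!
For example, if λ = 2 occurrences per period, then P(0 occurrences) = e^(-2) ≈ 0.135.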
Posterior probability – A probability that has been revised after some new information was
obtained.
Probability density function – For continuous random variables, the probability that the random
variable falls within any given interval is the area under the density function in the interval in question.
Probability distribution – A list of the outcomes of an experiment with the probabilities we would
expect to see associated with these outcomes.
Random variable – a variable that takes on different values as a result of the outcomes of a random
experiment.
Relative frequency of occurrence – The proportion of times that an event occurs in the long run
when conditions are stable, or the observed relative frequency of an event in a very large number of
trials.
Standard deviation – A measure of the spread of the data around the mean.
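For a discrete probability distribution, the mean (expected value) and the standard deviation can be computed directly from the possible values and their probabilities. A minimal Python sketch, using made-up numbers for illustration:

    # Hypothetical discrete distribution: possible values and their probabilities
    values = [0, 1, 2, 3]
    probs = [0.1, 0.3, 0.4, 0.2]

    mean = sum(v * p for v, p in zip(values, probs))                    # expected value = 1.7
    variance = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))  # = 0.81
    std_dev = variance ** 0.5                                           # standard deviation = 0.9
    print(mean, std_dev)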
Standard normal probability distribution – A normal probability distribution in which the values of
the random variable are expressed in standard units.
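To convert a value x of a normal random variable to standard units, subtract the mean μ and divide by the standard deviation σ:
    z = (x - μ) / σ
With illustrative numbers: if μ = 100, σ = 15, and x = 130, then z = (130 - 100) / 15 = 2.0.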
Statistical dependence – The condition when the probability of some event is dependent upon or
affected by the occurrence of some other event.
Statistical independence – The condition when the occurrence of one event has no effect upon the
probability of the occurrence of any other event.
Subjective probability – A probability based on the personal beliefs of the person who makes the
probability estimate.
Venn diagram – A pictorial representation of probability concepts in which the sample space is a
rectangle and the events in the sample space are portions of that rectangle.
CHAPTER 3 | FORECASTING
Additive seasonal pattern – A type of time series in which the seasonal fluctuations are of constant
size, regardless of trend.
Asymptote – The limiting value of the forecasts using a damped trend. When sales reach the asymptote, the trend has effectively died out and growth stops.
Benchmark – A standard for evaluating accuracy. The naive model is often used as a benchmark.
Bias – a tendency for the forecast errors to be systematic rather than random.
Box-Jenkins – A sophisticated statistical forecasting method which attempts to fit an optimal model
to past history.
Causal method – A forecasting method which attempts to find a relationship between the variable to
be forecast and one or more other variables.
Classical decomposition – A method which attempts to separate a time series into as many as four
components: seasonality, trend, cycle, and randomness. In this chapter, only the seasonal component
was separated (in the form of seasonal indices).
Conservatism – A belief that the future will look like the past regardless of evidence to the contrary.
This is one of the major types of bias in judgmental forecasting.
Constant-level model – A model which assumes that the time series has a relatively constant
mean. The forecast is a horizontal line for any period in the future.
Customer expectations – Planned purchases by customers. These are based on formal or informal
surveys.
Damped trend – A model used for long-range forecasting in which the amount of trend declines each
period.
Exponential smoothing – A weighted moving average technique in which more weight is given to
recent data.
Exponential trend model – A model in which the amount of growth increases continuously in the
future.
Forecasting sample – The latter part of the historical data, used to measure forecast accuracy.
Forecast profile – A plot of the forecasts against time. This varies according to the type of trend and
the seasonality in the data.
Gambler's fallacy – The belief that nature will compensate for past injustices.
Least-squares method – A procedure for fitting a trend line so that the sum of the squares of the
errors (the amount that each data point differs from the line) is at a minimum.
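For a straight-line trend Y = a + bX fitted by least squares, the slope and intercept are
    b = Σ(X - X̄)(Y - Ȳ) / Σ(X - X̄)²        a = Ȳ - b(X̄)
where X̄ and Ȳ are the means of X and Y.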
Linear exponential smoothing – Exponential smoothing adjusted for a linear trend. The model
includes two components, smoothed level and trend, and two parameters.
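One common formulation of the two components and two parameters (often called Holt's method) is
    Level:    L_t = αA_t + (1 - α)(L_(t-1) + T_(t-1))
    Trend:    T_t = β(L_t - L_(t-1)) + (1 - β)T_(t-1)
    Forecast m periods ahead:   F_(t+m) = L_t + m × T_t
where A_t is the actual value in period t and α, β are the smoothing parameters.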
Linear trend – A straight-line trend. The amount of change is constant each period.
Moving average – The unweighted or weighted average of a consecutive number of data points. It
can be used as a forecast or simply as a base figure for use in seasonal adjustment of the data.
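A minimal Python sketch of a three-period unweighted moving average used as a forecast for the next period (the sales figures are made up for illustration):

    sales = [120, 135, 128, 140, 150, 145]   # hypothetical data, most recent value last
    n = 3                                    # number of periods in the moving average
    forecast = sum(sales[-n:]) / n           # (140 + 150 + 145) / 3 = 145.0
    print(forecast)                          # forecast for the next period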
Multiplicative seasonal pattern – A type of time series in which the seasonal fluctuations are
proportional in size to the data. As the trend increases, the seasonal fluctuations become larger.
Naive model – A forecasting model in which the forecast for the next period is the same as the actual
value of the time series this period. The naive model is used as a benchmark.
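In symbols, the naive forecast is F_(t+1) = A_t, where A_t is the actual value of the series in the current period t.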
Noise – Randomness in the data. The greater the noise, the more difficult it is to forecast the future
data values.
Normalization factor – A number used to adjust the seasonal indices so they sum to 4.0 (for
quarterly data) or 12.0 (for monthly data).
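With illustrative numbers: if four raw quarterly indices sum to 4.10, each index is multiplied by the normalization factor 4.0 / 4.10 ≈ 0.976 so that the adjusted indices sum to 4.0.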
Regression – A process of estimating the statistical relationship between two variables. It is usually
done by the least-squares method.
Robust – Describes a model which forecasts well on many different types of data.
Sales force composite – A sum of the judgmental forecasts made by a company's salespeople.
Seasonal index – The average seasonal fluctuation, expressed as a fraction of the average value of
the time series for the year.
Seasonalize – To put seasonality back into deseasonalized data. This is done by multiplying each
deseasonalized data point by its seasonal index.
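With illustrative numbers: if the average value of the series for the year is 100 and the first quarter averages 80, the first-quarter seasonal index is 80 / 100 = 0.80; a deseasonalized value of 110 for that quarter is then seasonalized as 110 × 0.80 = 88.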
Simple exponential smoothing – A constant-level model in which the new forecast is equal to the
last forecast plus a fraction of the error.
Smoothing parameter – A fraction of the error used to adjust the forecasts in exponential smoothing.
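In symbols, with smoothing parameter α (between 0 and 1), actual value A_t, and last forecast F_t:
    F_(t+1) = F_t + α(A_t - F_t),   equivalently   F_(t+1) = αA_t + (1 - α)F_t
With illustrative numbers: if α = 0.2, F_t = 100, and A_t = 110, then F_(t+1) = 100 + 0.2(110 - 100) = 102.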
Warm-up sample – The first part of historical data, used to compute starting values and select
model parameters.