Operation Research-Reviewer

CHAPTER 2 | A REVIEW OF PROBABILITY CONCEPTS

Probability – is the chance that something will happen. Probabilities are expressed as fractions or as
decimals between 0 and 1.

*When you assign a probability of 0, you mean that something can never happen; when you assign a
probability of 1, you mean that something will always happen.

Event – is one or more of the possible outcomes of doing something. If we toss a coin, getting a tail
would be an event; getting a head would be another event.

Experiment – is the activity that produces an event.

Sample space – is the set of all possible outcomes of an experiment.

Mutually exclusive events – events are mutually exclusive if one and only one of them can take place
at a time.
Example: coin toss – either heads or tails may turn up, but not both.

3 TYPES OF PROBABILITY

Classical Probability / Approach – often called a priori probability because, for orderly examples
such as fair coins, unbiased dice, or decks of cards, we can state the answer in advance (a priori)
without ever tossing a coin, rolling a die, or drawing a card. No experiment is needed; the
probability statement follows from logical reasoning before any trials take place.
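The a priori idea can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not from the text):

```python
from fractions import Fraction

def classical_probability(favorable, total):
    """A priori probability: favorable outcomes divided by total equally likely outcomes."""
    return Fraction(favorable, total)

# Event "roll an even number" on a fair die: 3 favorable outcomes out of 6 faces.
p_even = classical_probability(3, 6)  # 1/2, known before any die is rolled
```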

The Relative Frequency of Occurrence – defines probability as either:
– the proportion of times that an event occurs in the long run when conditions are stable, or
– the observed relative frequency of an event in a very large number of trials.
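The long-run idea can be simulated: toss a fair coin many times and watch the observed fraction of heads settle near one-half (a sketch with an assumed fixed seed for repeatability):

```python
import random

def relative_frequency(trials, seed=42):
    """Estimate P(heads) as the fraction of heads observed in many simulated tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

# With enough trials the observed relative frequency approaches 0.5.
estimate = relative_frequency(10_000)
```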

Subjective Probability – is based on the personal belief or feelings of the person who makes the
probability estimate. We can define subjective probability as the probability assigned to an event on
the basis of whatever evidence is available.
Symbols – are used to simplify the presentation of ideas.
Marginal or unconditional probability – a single probability; only one event is involved.
Venn Diagram – a pictorial representation of probability concepts, named after the English logician John Venn.

*If two events are not mutually exclusive, their parts of the rectangle will overlap. If two events are
mutually exclusive, their parts of the rectangle will not overlap each other.

Addition rule for mutually exclusive events – the probability of either A or B happening:
P(A or B) = P(A) + P(B)

Addition rule for events that are not mutually exclusive – if two events are not mutually exclusive, it is
possible for both events to occur together, so the overlap is subtracted once:
P(A or B) = P(A) + P(B) - P(AB)
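Both addition rules are one-liners in code (a minimal sketch; the card example and function names are ours):

```python
def p_a_or_b_exclusive(p_a, p_b):
    # Mutually exclusive events: P(A or B) = P(A) + P(B)
    return p_a + p_b

def p_a_or_b(p_a, p_b, p_ab):
    # Not mutually exclusive: subtract the overlap P(AB) once
    return p_a + p_b - p_ab

# Draw one card: P(ace or heart) = 4/52 + 13/52 - 1/52 = 16/52
p_ace_or_heart = p_a_or_b(4 / 52, 13 / 52, 1 / 52)
```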

Probabilities under conditions of statistical independence


When events are statistically independent, the occurrence of one event has no effect on the
probability of the occurrence of any other event.

Three Probabilities under conditions of statistical independence


1. Marginal Probability – is the simple probability of the occurrence of an event.
2. Joint Probability – the probability that two or more independent events will occur together or
in succession is the product of their marginal probabilities.
3. Conditional Probability – is the probability that a second event (B) will occur if a first event
(A) has already happened. P(B|A) = P(B)
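Under independence these three probabilities reduce to simple arithmetic (a sketch; the coin example is ours):

```python
def joint_independent(*marginals):
    # Joint probability of independent events: the product of their marginal probabilities.
    p = 1.0
    for m in marginals:
        p *= m
    return p

# Two fair coin tosses: P(heads, then heads) = 0.5 * 0.5 = 0.25.
# The conditional probability collapses under independence: P(B|A) = P(B).
p_two_heads = joint_independent(0.5, 0.5)
```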

Independent events – are those whose probabilities are in no way affected by the occurrence of any
other events.

Probabilities under conditions of statistical dependence

Statistical dependence – exists when the probability of some event is dependent upon or affected
by the occurrence of some other event.

Three Probabilities under conditions of statistical dependence


1. Marginal Probability – one and only one probability is involved.
2. Conditional Probability – more involved than marginal probabilities; will be treated first
because the concept of joint probabilities is best illustrated using conditional probabilities as a
basis.
3. Joint Probability – P(AB) = P(A|B) x P(B) ; the joint probability of events A and B equals the
probability of event A, given that event B has occurred, times the probability of event B.
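The dependent-events joint formula P(AB) = P(A|B) x P(B) can be illustrated with drawing two cards without replacement (a sketch; the example numbers are ours):

```python
def joint_dependent(p_a_given_b, p_b):
    # Statistical dependence: P(AB) = P(A|B) * P(B)
    return p_a_given_b * p_b

# Two cards drawn without replacement:
# P(both aces) = P(second ace | first ace) * P(first ace) = (3/51) * (4/52)
p_both_aces = joint_dependent(3 / 51, 4 / 52)
```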

Revised or posterior probabilities have value in decision making. The origin of the concept of
obtaining posterior probabilities with limited information is credited to the Reverend Thomas Bayes
(1702 - 1761); his result gives the basic formula for conditional probability under conditions of
statistical dependence.

Bayes’ Theorem – P(A|B) = P(AB) / P(B)
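The theorem as stated is a single division; a worked example (with hypothetical screening-test numbers of our own) shows how a prior gets revised:

```python
def posterior(p_ab, p_b):
    # Bayes' theorem as stated above: P(A|B) = P(AB) / P(B)
    return p_ab / p_b

# Hypothetical numbers: P(disease and positive test) = 0.01 * 0.9 = 0.009,
# P(positive test overall) = 0.009 + 0.99 * 0.1 = 0.108,
# so the revised probability of disease given a positive test is 0.009 / 0.108.
p_disease_given_positive = posterior(0.009, 0.108)
```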

Probability distributions – can be based on theoretical considerations or subjective assessments of
the likelihood of certain outcomes; can also be based on experience.

Two probability distribution classifications

1. Discrete probability distribution – the variable is allowed to take on only a limited number of
values.
2. Continuous probability distribution – the variable under consideration is permitted to take on any
value within a given range; we associate probabilities only with intervals, rather than with single
values of the variable under discussion.

Random variable – is a variable that takes on different values as a result of the outcomes of a
random experiment; can either be a discrete or continuous random variable.
A priori probability – a probability estimate prior to receiving new information.

Bayes' theorem – the basic formula for conditional probability under statistical dependence.

Bernoulli process – a process in which each trial has only two possible outcomes, where the
probability of the outcome of any trial remains fixed over time, and where the trials are statistically
independent.

Binomial distribution – discrete distribution of the results of an experiment known as a Bernoulli
process.
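The binomial probability of exactly k successes in n Bernoulli trials follows directly from the definition (a sketch using the standard combinatorial formula):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli trials,
    each with fixed success probability p and statistically independent."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Exactly 2 heads in 4 fair-coin tosses: C(4,2) * 0.5^2 * 0.5^2 = 0.375
p_two_of_four = binomial_pmf(2, 4, 0.5)
```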

Classical probability – the number of outcomes favorable to the occurrence of an event divided by
the total number of possible outcomes.

Collectively exhaustive events – a list of events that represents all the possible outcomes of an
experiment.

Conditional probability – the probability of one event occurring given that another event has
occurred.

Continuous probability distribution – a probability distribution in which the variable is permitted to
take on any value within a given range.

Continuous random variable – a random variable allowed to take on any value within a given
range.

Discrete probability distribution – a probability distribution in which the variable is allowed to take
on only a limited number of values.

Discrete random variable – A random variable allowed to take on only a limited number of values.

Event – one or more of the possible outcomes of doing something, or one of the outcomes of an
experiment

Expected value – a weighted average of the outcomes of an experiment.


Expected value of a random variable – a weighted average in which each possible value of the
random variable is given its probability as a weight; the long-run average value of the random
variable.
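The weighted average in this definition is a one-line sum (a sketch; the fair-die example is ours):

```python
def expected_value(values, probabilities):
    # Weighted average: each possible value weighted by its probability.
    return sum(v * p for v, p in zip(values, probabilities))

# Fair die: E(X) = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5, the long-run average roll.
e_die = expected_value([1, 2, 3, 4, 5, 6], [1 / 6] * 6)
```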

Experiment – the activity that produces an event.

Exponential distribution – a continuous probability distribution used to describe the distribution of
service times in a service facility.

Joint probability – the probability of two events occurring together or in succession.

Marginal probability – the unconditional probability of one event occurring; the probability of a single
event.

Mean – a measure of central tendency.

Mutually exclusive events – Events which cannot happen together.

Normal distribution – A distribution of a continuous random variable in which the curve has a single
peak, in which it is bell-shaped, in which the mean lies at the center of the distribution and the curve
is symmetrical around a vertical line erected at the mean, and in which the two tails extend indefinitely
and never touch the horizontal axis.

Poisson distribution – A discrete distribution in which the probability of the occurrence of an event
within a very small time period is very small, in which the probability that two or more such events will
occur within the same small time interval is effectively 0, and in which the probability of the
occurrence of the event within one time period is independent of where that time period is.
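The Poisson probability of exactly k occurrences in a period with mean rate lambda can be sketched from the standard formula (function name is ours):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k occurrences in a time period
    when the mean number of occurrences per period is lam."""
    return lam ** k * exp(-lam) / factorial(k)

# With a mean of 2 arrivals per period, P(no arrivals) = e^-2.
p_zero = poisson_pmf(0, 2.0)
```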

Posterior probability – A probability that has been revised after some new information was
obtained.

Probability – The chance that something will happen.

Probability density function – For continuous random variables, the probability that the random
variable falls within any given interval is the area under the density function in the interval in question.
Probability distribution – A list of the outcomes of an experiment with the probabilities we would
expect to see associated with these outcomes.

Random variable – a variable that takes on different values as a result of the outcomes of a random
experiment.

Relative frequency of occurrence – The proportion of times that an event occurs in the long run
when conditions are stable, or the observed relative frequency of an event in a very large number of
trials.

Sample space – The set of all possible outcomes of an experiment.

Standard deviation – A measure of the spread of the data around the mean.

Standard normal probability distribution – A normal probability distribution in which the values of
the random variable are expressed in standard units.

Standard unit – The standard deviation of a standard normal probability distribution.

Statistical dependence – The condition when the probability of some event is dependent upon or
affected by the occurrence of some other event.

Statistical independence – The condition when the occurrence of one event has no effect upon the
probability of the occurrence of any other event.

Subjective probability – A probability based on the personal beliefs of the person who makes the
probability estimate.

Venn diagram – A pictorial representation of probability concepts in which the sample space is a
rectangle and the events in the sample space are portions of that rectangle.
CHAPTER 3 | FORECASTING

Additive seasonal pattern – A type of time series in which the seasonal fluctuations are of constant
size, regardless of trend.

Asymptote – The limiting value of the forecasts using a damped trend.
Benchmark – A standard for evaluating accuracy. The naive model is often used as a benchmark.

Bias – a tendency for the forecast errors to be systematic rather than random.

Box-Jenkins – A sophisticated statistical forecasting method which attempts to fit an optimal model
to past history.

Causal method – A forecasting method which attempts to find a relationship between the variable to
be forecast and one or more other variables.

Classical decomposition – A method which attempts to separate a time series into as many as four
components: seasonality, trend, cycle, and randomness. In this chapter, only the seasonal component
was separated (in the form of seasonal indices).

Conservatism – A belief that the future will look like the past regardless of evidence to the contrary.
This is one of the major types of bias in judgmental forecasting.

Constant-level model – A model which assumes that the time series has a relatively constant
mean. The forecast is a horizontal line for any period in the future.

Customer expectations – Planned purchases by customers. These are based on formal or informal
surveys.

Damped trend – A model used for long-range forecasting in which the amount of trend declines each
period.

Decomposition – The same as classical decomposition.


Deseasonalize – To remove multiplicative seasonality by dividing each data point by the seasonal
index.

Exponential smoothing – A weighted moving average technique in which more weight is given to
recent data.

Exponential trend model – A model in which the amount of growth increases continuously in the
future.

Extrapolation – A projection of patterns in past data into the future.

Fit – To fit a forecasting model is to compute parameters and initial values.

Forecast error – The actual data minus the forecast.

Forecasting sample – The latter part of the historical data, used to measure forecast accuracy.

Forecast profile – A plot of the forecasts against time. This varies according to the type of trend and
the seasonality in the data.

Gambler's fallacy – The belief that nature will compensate for past injustices.

Judgmental forecasting – Subjective forecasting.

Jury of executive opinion – A subjective forecast prepared by one or more executives.

Least-squares method – A procedure for fitting a trend line so that the sum of the squares of the
errors (the amount that each data point differs from the line) is at a minimum.
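The least-squares slope and intercept have closed-form expressions; a minimal sketch (function name is ours):

```python
def least_squares_line(xs, ys):
    """Fit the trend line y = a + b*x so that the sum of the
    squared errors (data point minus line) is at a minimum."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx              # slope
    a = mean_y - b * mean_x    # intercept
    return a, b
```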

Linear exponential smoothing – Exponential smoothing adjusted for a linear trend. The model
includes two components, smoothed level and trend, and two parameters.

Linear trend – A straight-line trend. The amount of change is constant each period.

MAD – Mean absolute deviation, or the mean absolute error.


MAPE – Mean absolute percentage error.

MSE – Mean squared error.
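The three accuracy measures above can be computed directly (a sketch; function names are ours):

```python
def mad(actual, forecast):
    # Mean absolute deviation: the mean absolute forecast error.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # Mean absolute percentage error, expressed in percent.
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    # Mean squared error: large errors are penalized more heavily.
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
```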

Moving average – The unweighted or weighted average of a consecutive number of data points. It
can be used as a forecast or simply as a base figure for use in seasonal adjustment of the data.

Multiplicative seasonal pattern – A type of time series in which the seasonal fluctuations are
proportional in size to the data. As the trend increases, the seasonal fluctuations become larger.

Naive model – A forecasting model in which the forecast for the next period is the same as the actual
value of the time series this period. The naive model is used as a benchmark.

Noise – Randomness in the data. The greater the noise, the more difficult it is to forecast the future
data values.

Normalization factor – A number used to adjust the seasonal indices so they sum to 4.0 (for
quarterly data) or 12.0 (for monthly data).

Regression – A process of estimating the statistical relationship between two variables. It is usually
done by the least-squares method.

Robust – Describes a model which forecasts well on many different types of data.

Sales force composite – A sum of the judgmental forecasts made by a company's salespeople.

Seasonal adjustment – The same as deseasonalizing.

Seasonal index – The average seasonal fluctuation, expressed as a fraction of the average value of
the time series for the year.

Seasonalize – To put seasonality back into deseasonalized data. This is done by multiplying each
deseasonalized data point by its seasonal index.
Simple exponential smoothing – A constant-level model in which the new forecast is equal to the
last forecast plus a fraction of the error.
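The error-correction form in this definition translates directly to code (a sketch; it initializes with the first observation, one common convention, and the function name is ours):

```python
def simple_exponential_smoothing(series, alpha):
    """Constant-level model: new forecast = last forecast + alpha * (actual - last forecast).

    Returns the one-step-ahead forecasts; the last element is the
    forecast for the period after the series ends."""
    forecast = series[0]        # assumed initialization: the first observation
    forecasts = [forecast]
    for actual in series:
        forecast = forecast + alpha * (actual - forecast)
        forecasts.append(forecast)
    return forecasts
```

The smoothing parameter alpha is the fraction of the error used to adjust each forecast, matching the glossary entry below.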

Simulation – Developing a model of a process and then conducting a series of trial-and-error
experiments to predict the behavior of the process over time.

Smoothing parameter – A fraction of the error used to adjust the forecasts in exponential smoothing.

Straight-line projection – A time-series regression in which the trend is linear.

Time series – A set of historical data collected at regular time intervals.

Time-series pattern – Same as forecast profile.

Time-series regression – A least-squares regression in which the independent variable is some
function of time. It is used to predict the average rate of growth in a time series.

Trend-line analysis – The comparison of different time-series regression models.

Warm-up sample – The first part of historical data, used to compute starting values and select
model parameters.
