
Quantitative Techniques in Business

This document is a term report submitted by Talha Mudassir to Prof. Khalid Sultan Anjum on January 2nd, 2022. It discusses quantitative techniques in business, including decision science, quantitative techniques, their role in business decision making, characteristics, limitations, and applications. Quantitative techniques help managers make data-driven decisions and include mathematical models, statistics, and programming to analyze business performance, predict trends, and allocate resources effectively.


Bahauddin Zakariya University Multan

Department Of Commerce

Prof. Khalid Sultan Anjum sb.

Quantitative
Techniques in Business

 Term Report.
 Talha Mudassir
 MCME #29

Date of submission:
02/01/2022
(Topic # 01)
Quantitative Techniques In Business

Introduction:
“The discipline focused on the application of information technology for well-informed decision making.”
OR
“Anything which can be measured and to which a mathematical or statistical value can be assigned.”

 It is also known as Decision Science, Quantitative Techniques, or Operational Research.

Decision Science:
“The application of a scientific approach to solving management problems.”
Decision science is a collaborative approach involving mathematical formulae, business tactics, technological applications and behavioural sciences to help senior management make data-driven decisions.

 It helps managers make the best possible decisions.
 It includes a large number of mathematically oriented techniques.
 It is a recognized and established discipline in business.
 It is mainly used within businesses to increase efficiency and productivity.
 It is part of the daily routine of most business organizations.
 Also known as Operation Research (OR), Quantitative Techniques (QT),
Quantitative Analysis (QA) and Management Sciences (MS).

Quantitative Techniques:
Those statistical and programming techniques which support the decision-making process, especially in industry and business.

 It is the use of mathematical or statistical techniques to assess and analyze business performance (most often used in investments):
o For predicting trends.
o For determining the allocation of resources (assets).
o For managing projects.

(Management is getting the job done by others.)

“Quantitative techniques may be defined as those techniques which provide the decision maker with a systematic and powerful means of analysis, based on quantitative data, for exploring policies to achieve pre-determined goals.”
Role of Quantitative Techniques in Business Decision Making:
1. They provide tools for scientific research.
2. They provide solutions to various business problems.
3. They help in the proper allocation of resources, saving the businessman time and cost.
4. They support inventory management, i.e. the planning and control of inventory in the organization.
5. Decision making is an essential part of the management process, and quantitative techniques facilitate it.
6. They reduce costs and minimize waiting time.
7. They help in the selection of an appropriate strategy.
8. They enable proper deployment of resources.
9. They are helpful in marketing and financial management.
10. They help management decide when to buy and what the procedure of buying should be.

Role of QT in Finance:

 Investment decisions
 Predicting the trends

Characteristics of Quantitative Techniques:


1. Decision Making:
Quantitative techniques aid the decision-making process by identifying and quantifying the factors that influence a decision, making it easier to resolve the complexity of decision making. Some quantitative techniques, such as decision analysis and simulation, work best on complex decisions.

2. Scientific Approach:
Like any other research, operations research emphasizes the overall approach and takes into account all the significant effects on the system, understanding and evaluating them as a whole. It takes a scientific approach to reasoning, involving defining the problem, formulating it, and testing and analysing the results obtained.
3. Objective Oriented Approach:
Operations Research not only takes an overall view of the problem, but also endeavours to arrive at the best possible (optimal) solution to the problem at hand. It takes an objective-oriented approach. To achieve this, it is necessary to have a defined measure of effectiveness
which is based on the goals of the organization. This measure is then used to make a
comparison between alternative solutions to the problem and adopt the best one.

4. Inter Disciplinary Approach:


No single approach is effective on its own; operations research is also inter-disciplinary in nature. Problems are multi-dimensional, and the approach requires teamwork. For example, managerial problems are affected by economic, sociological, biological, psychological, physical and engineering aspects. A team that plans to arrive at a solution to such a problem needs people who are specialists in areas such as mathematics, engineering, economics, statistics, management, etc.

Limitations of Quantitative Techniques:

1) Non-Quantifiable Factors:
One drawback of quantitative techniques is that they provide a solution only when all the elements related to a problem are quantified. Since all relevant variables may not be quantifiable, some do not find a place in QT models.

2) Dependence on an Electronic Computer:


The QT approach is mathematical in nature. QT techniques try to find an optimal solution to a problem by taking all the factors into consideration. Because these factors are so numerous, expressing them in quantity and establishing relationships among them requires enormous calculation, and the need for computers becomes unavoidable.

3) Expensive and time consuming:


Operations research is a costly affair. An organization needs to invest time, money and effort
into QT to make it effective. Professionals need to be hired to conduct constant research. For
better research outcomes, these professionals must constantly review the rapidly changing
business scenarios.
4) Wrong Estimation:
Certain assumptions and estimates are made for assigning quantitative values to factors
involved in QT, so that a quantitative analysis can be done. If such estimates are wrong, the
result can be misleading.

5) Difficulty in data analysis:


Quantitative study requires extensive statistical analysis, which can be difficult for researchers from non-statistical backgrounds to perform. Statistical analysis is based on scientific discipline and is hence difficult for non-mathematicians. Quantitative research is considerably more complex in the social sciences, education, anthropology and psychology, where effective responses depend on the research problem rather than a simple yes or no answer.

Uses, Scope and Applications of Quantitative Techniques:

1) Finance:
The accounting department of a business relies on quantitative analysis. Accounting
personnel use different quantitative data and methods, such as the discounted cash flow
model, to estimate the value of an investment. Products can also be evaluated based on the
costs of producing them and the profits they generate.

2) Production Planning:
Quantitative analysis also helps individuals make informed production-planning decisions.
Let’s say a company finds it challenging to estimate the size and location of a new production
facility. Quantitative analysis can be employed to assess different proposals for costs, timing,
and location. With effective product planning and scheduling, companies will be more able to
meet their customers’ needs while maximizing their profits.

3) Purchase and Inventory:


One of the greatest challenges that businesses face is being able to predict the demand for a
product or service. However, with quantitative techniques, companies can be guided on just
how many materials they need to purchase, the level of inventory to maintain, and the costs
they’re likely to incur when shipping and storing finished goods.
4) Project Management:
One area where quantitative analysis is considered an indispensable tool is in project
management. As mentioned earlier, quantitative methods are used to find the best ways of allocating resources, especially if these resources are scarce. Projects are then scheduled based
on the availability of certain resources.

5) Marketing:
Every business needs a proper marketing strategy. However, setting a budget for the
marketing department can be tricky, especially if its objectives are not set. With the right
quantitative method, marketers can easily set the required budget and allocate media
purchases. The decisions can be based on data obtained from marketing campaigns.

(Topic # 02)

Need for Quantitative Analysis in Economics

Quantitative analysis helps in economic decision making. Quantitative analysis uses symbols, numbers, and mathematical formulas and expressions to represent a model of reality.
The central aspects of economic theory can be stated as functional relationships among variables using mathematical and statistical tools. Mathematics and statistics have proved well suited to a scientific approach to economics.

Use of Statistics in Business:

1. Statistics helps in providing a better understanding and accurate description of nature's phenomena.

2. Statistics helps in the proper and efficient planning of a statistical inquiry in any field of
study.
3. Statistics helps in collecting appropriate quantitative data.
4. Statistics helps in presenting complex data in a suitable tabular, diagrammatic and graphic
form for an easy and clear comprehension of the data.
5. Statistics helps in understanding the nature and pattern of variability of a phenomenon
through quantitative observations.
6. Statistics helps in drawing valid inferences, along with a measure of their reliability about
the population parameters from the sample data.

Problem Solving and Decision Making:


Problem solving can be defined as the process of identifying a difference between the
actual and the desired state of affairs and then taking action to resolve this difference.

Process involves the following seven steps:

1. Define the problem.

2. Determine the set of alternative solutions.

3. Determine the criterion or criteria that will be used to evaluate the alternatives.

4. Evaluate the alternatives.

5. Choose an alternative.

6. Implement the selected alternative.


7. Evaluate the results to determine whether a satisfactory solution has been obtained.

Decision Making:

Decision making involves the selection of a course of action from among two or more
possible alternatives in order to arrive at a solution for a given problem.

Decision Making consists of five steps:

1. Define the problem.


2. Identify the alternatives.
3. Determine the criteria.
4. Evaluate the alternatives.
5. Choose an alternative.

Quantitative Analysis Model Development:

Models are representations of real objects or situations and can be presented in various
forms. For example, a scale model of an airplane is a representation of a real airplane.
Similarly, a child’s toy truck is a model of a real truck.

Iconic Model:
In modelling terminology, physical replicas are referred to as iconic models. The model
airplane and toy truck are examples of models that are physical replicas of real objects.

Analog Model:
The models which are physical in form but do not have the same physical appearance as the object being modelled are referred to as analog models. The
speedometer of an automobile is an analog model; the position of the needle on the dial
represents the speed of the automobile. A thermometer is another analogue model
representing temperature.

Mathematical Model:
A model which is includes representations of a problem by a system of symbols and
mathematical relationships or expressions. Such models are referred to as mathematical
models and are a critical part of any quantitative approach to decision making.

Constraint: A restriction or limitation imposed on a problem.


Objective function: The mathematical expression that defines the quantity to be
maximized or minimized.

Data Preparation:

Another step in the quantitative analysis of a problem is the preparation of the data
required by the model. Data in this sense refer to the values of the uncontrollable inputs to
the model. All uncontrollable inputs or data must be specified before we can analyse the
model and recommend a decision or solution for the problem.

Uncontrollable Input: The factors that cannot be controlled by the decision maker.
Controllable Input: The decision alternatives that can be specified by the decision
maker.

Models of Cost, Revenue, and Profit:

These models have significant value in business. Some of the most basic quantitative models in business and economic applications involve the relationships among a volume variable, such as production volume or sales volume, and cost, revenue, and profit. Through these models a manager can determine the projected cost, revenue, or profit associated with a planned production quantity or a forecasted sales volume. Financial planning, production planning, sales quotas, and other areas of decision making can benefit from such cost, revenue, and profit models.


Cost and Volume Models:


The cost of manufacturing or producing a product is a function of the volume produced.
This cost can usually be defined as a sum of two costs: fixed cost and variable cost. Fixed
cost is the portion of the total cost that does not depend on the production volume; this
cost remains the same no matter how much is produced. Variable cost, on the other hand,
is the portion of the total cost that depends on and varies with the production volume. To
illustrate how cost and volume models can be developed, we will consider a
manufacturing problem faced by Newline Plastics. Newline Plastics produces a variety of
compact disc (CD) storage cases. Newline’s bestselling product is the CD-50, a slim
plastic CD holder with a specially designed lining that protects the optical surface of each
CD. Several products are produced on the same manufacturing line, and a setup cost is
incurred each time a changeover is made for a new product. Suppose the setup cost for
the CD-50 is $3000; this setup cost is a fixed cost and is incurred regardless of the
number of units eventually produced. In addition, suppose that variable labour and
material costs are $2 for each unit produced. The cost–volume model for producing x
units of the CD-50 can be written as
C(x) = 3000 + 2x
Where,
x = production volume in units
C(x) = total cost of producing x units
Once a production volume is established, the model in equation can be used to compute the
total production cost. For example, the decision to produce x = 1200 units would result in a
total cost of C(1200) = 3000 + 2(1200) = $5400.
Marginal cost is defined as the rate of change of the total cost with respect to production
volume; that is, the cost increase associated with a one-unit increase in the production
volume. In the cost model of equation (1.3), we see that the total cost C(x) will increase
by $2 for each unit increase in the production volume. Thus, the marginal cost is $2. With
more complex total cost models, marginal cost may depend on the production volume. In
such cases, we could have marginal cost increasing or decreasing with the production
volume x.
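The cost–volume model above can be sketched in a few lines of Python; the $3000 setup cost and $2 variable cost are the figures given in the text:

```python
# Cost–volume model for the CD-50: C(x) = 3000 + 2x
FIXED_COST = 3000      # setup cost, incurred regardless of volume
VARIABLE_COST = 2      # labour and material cost per unit

def total_cost(units):
    """Total cost of producing the given number of units."""
    return FIXED_COST + VARIABLE_COST * units

print(total_cost(1200))                      # 5400, as computed in the text
print(total_cost(1201) - total_cost(1200))   # marginal cost: 2
```

The difference between successive volumes recovers the $2 marginal cost directly.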

Revenue and Volume Models:


Management of Nowlin Plastics will also want information about projected revenue
associated with selling a specified number of units. Thus, a model of the relationship
between revenue and volume is also needed. Suppose that each CD-50 storage unit sells
for $5. The model for total revenue can be written as
R(x) = 5x
Where
x = sales volume in units
R(x) = total revenue associated with selling x units
Marginal revenue is defined as the rate of change of total revenue with respect to sales
volume, that is, the increase in total revenue resulting from a one-unit increase in sales
volume. In the model of equation (1.4), we see that the marginal revenue is $5. In this
case, marginal revenue is constant and does not vary with the sales volume. With more
complex models, we may find that marginal revenue increases or decreases as the sales
volume x increases.
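Likewise, the revenue–volume model R(x) = 5x can be sketched as:

```python
# Revenue–volume model for the CD-50: R(x) = 5x
PRICE = 5  # selling price per unit, from the text

def total_revenue(units):
    """Total revenue from selling the given number of units."""
    return PRICE * units

print(total_revenue(1200))                         # 6000
print(total_revenue(1201) - total_revenue(1200))   # marginal revenue: 5
```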

Profit and Volume Models:
One of the most important criteria for management decision making is profit. Managers
need to know the profit implications of their decisions. If we assume that we will only
produce what can be sold, the production volume and sales volume will be equal. We can
then combine equations (1.3) and (1.4) to develop a profit–volume model that determines
profit associated with a specified production-sales volume. Total profit is total revenue
minus total cost; therefore, the following model provides the profit associated with
producing and selling x units:
P(x) = R(x) - C(x) = 5x - (3000 + 2x) = -3000 + 3x
Thus, the model for profit P(x) can be derived from the models of the revenue–volume
and cost–volume relationships.

Break Even Analysis:


Using equation, we can now determine the profit associated with any production volume
x. For example, suppose that a demand forecast indicates that 500 units of the product can
be sold. The decision to produce and sell the 500 units results in a projected profit of
P(500) = -3000 + 3(500) = -1500
In other words, a loss of $1500 is predicted. If sales are expected to be 500 units, the
manager may decide against producing the product. However, a demand forecast of 1800
units would show a projected profit of
P(1800) = -3000 + 3(1800) = 2400
This profit may be sufficient to justify proceeding with the production and sale of the
product. We see that a volume of 500 units will yield a loss, whereas a volume of 1800
provides a profit. The volume that results in total revenue equalling total cost (providing
$0 profit) is called the breakeven point. If the breakeven point is known, a manager can
quickly infer that a volume above the breakeven point will generate a profit, whereas a
volume below the breakeven point will result in a loss. Thus, the breakeven point for a
product provides valuable information for a manager who must make a yes/no decision
concerning production of the product. Let us now return to the Nowlin Plastics example
and show how the profit model in equation can be used to compute the breakeven point.
The breakeven point can be found by setting the profit expression equal to zero and
solving for the production volume. Using equation, we have
P(x) = -3000 + 3x = 0
3x = 3000
x = 1000
With this information, we know that production and sales of the product must exceed
1000 units before a profit can be expected.
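The profit model and breakeven computation above translate directly into a short sketch:

```python
# Profit–volume model: P(x) = R(x) - C(x) = 5x - (3000 + 2x) = -3000 + 3x
def profit(units):
    """Profit from producing and selling the given number of units."""
    return 5 * units - (3000 + 2 * units)

# Breakeven point: solve -3000 + 3x = 0
breakeven = 3000 // 3

print(profit(500))    # -1500 (a loss)
print(profit(1800))   # 2400 (a profit)
print(breakeven)      # 1000 units
```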

(Topic # 03)
Introduction to Probability

“Probability is the science of decision making with calculated risks in the face of
uncertainty”.
“The probability of a given event is an expression of likelihood or chance of occurrence
of an event”.
The subject of probability is of great importance these days, and it is applied in almost all branches of science and technology. Probability is a mathematical discipline extensively applied in the development of various sciences in contemporary academic life. The scope of probability is so wide that it can be applied in any situation where we need to say “it is”, “may be”, or “possibly”.
Possibility, chance, likelihood, etc. come under the fold of probability.
Examples:
 Probability it will rain tomorrow.
 There is a chance of getting more medals in the next world cup.
 Probably he is right.
 Possibly the prices of oil will come down next month.

Experiment:
An act or the process of obtaining an observation is called experiment. Performing an
experiment is called trial.
An operation which can produce a result but that cannot be predicted exactly is called an
experiment.

Sample Space:

An experiment gives rise to a series of events. The set of all possible outcomes of an experiment is called the sample space.

Example: If a coin is tossed twice, the possible outcomes are HH, TT, HT, TH. The sample space is the set of all these outcomes. Thus S = {HH, TT, HT, TH}.
The sample space is also known as the exhaustive set of events because it consists of all the possible outcomes of a particular experiment.
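The two-toss sample space can be enumerated programmatically; a minimal sketch:

```python
from itertools import product

# Enumerate the sample space for tossing a coin twice
S = {"".join(outcome) for outcome in product("HT", repeat=2)}
print(sorted(S))  # ['HH', 'HT', 'TH', 'TT']
```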

Classical Approach:
“A method of assigning probabilities that is based on the assumption that the
experimental outcomes are equally likely”
The classical approach is the first theory of probability, on which later theories were built. It is simple to understand, and probability is easy to measure because it rests on simple principles and assumptions. The basic assumption of the classical approach is that the outcomes of a random experiment are equally likely.
The classical method is based on three conditions:
 Equally likely events.
 Collectively exhaustive events.
 Mutually exclusive events.

Relative Frequency Method: A method of assigning probabilities based on


experimentation or historical data.

The relative frequency theory of probability holds that if an experiment is repeated an extremely large number of times and a particular outcome occurs a certain percentage of the time, then that percentage is close to the probability of that outcome.

Example: if a machine produces 10,000 widgets one at a time, and 1,000 of those widgets
are faulty, the probability of that machine producing a faulty widget is approximately
1,000 out of 10,000, or 0.10.
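The widget example translates directly into code:

```python
# Relative frequency estimate, using the figures from the widget example
faulty, total = 1_000, 10_000
probability_faulty = faulty / total
print(probability_faulty)  # 0.1
```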

Events and their probabilities:

An event is a collection of sample points (experimental outcomes). For example, in the


experiment of rolling a standard die, the sample space has six sample points and is
denoted S= {1, 2, 3, 4, 5, 6}. Now consider the event that the number of dots shown on
the upward face of the die is an even number. The three sample points in this event are 2,
4, and 6. Using the letter A to denote this event, we write A as a collection of sample
points:

A = {2, 4, 6}

Thus, if the experimental outcome or sample point were 2, 4, or 6, we would say that the
event A has occurred. Much of the focus of probability analysis is involved with
computing probabilities for various events that are of interest to a decision maker. If the
probabilities of the sample points are defined, the probability of an event is equal to the
sum of the probabilities of the sample points in the event.
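The die example can be written out, computing P(A) as the sum of the probabilities of the sample points in the event:

```python
from fractions import Fraction

# Rolling a standard die: six equally likely sample points
sample_space = {1, 2, 3, 4, 5, 6}
p = {point: Fraction(1, 6) for point in sample_space}

# Event A: the upward face shows an even number
A = {2, 4, 6}
P_A = sum(p[point] for point in A)  # sum of the sample-point probabilities
print(P_A)  # 1/2
```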

(Topic # 04)

Some Basic Relationships of Probability


Some basic relationships used in computing probabilities are the complement of an event, the addition law, conditional probability, and the multiplication law.

Complement of an Event:
The complement of an event is the subset of outcomes in the sample space that are not in
the event. This means that in any given experiment, either the event or its complement
will happen, but not both. By consequence, the sum of the probabilities of an event and its
complement is always equal to 1. For an event A, the complement of event A is the event
consisting of all sample points in sample space S that are not in A. In any probability
application, event A and its complement Ac must satisfy the condition.
P (A) + P (Ac) = 1
Solving for P(A), we have
P (A) = 1 - P (Ac)

Addition law:
The addition rule for probabilities describes two formulas, one for the probability for
either of two mutually exclusive events happening and the other for the probability of two
non-mutually exclusive events happening. The first formula is just the sum of the
probabilities of the two events.
A probability law used to compute the probability of a union: P (A∪B) = P (A) + P (B) –
P (A∩B). For mutually exclusive events, P (A∩B) = 0, and the addition law simplifies to
P (A∪B) = P(A) +P(B)
Multiplication law:
If events A and B come from the same sample space, the probability that both A and B occur is equal to the probability that event A occurs times the probability that B occurs given that A has occurred.
A probability law used to compute the probability of an intersection: P(A∩B) = P(A|B)P(B), or equivalently P(A∩B) = P(B|A)P(A). For independent events, this simplifies to P(A∩B) = P(A)P(B).

Conditional probability:
Conditional probability is defined as the likelihood of an event or outcome occurring,
based on the occurrence of a previous event or outcome. Conditional probability is
calculated by multiplying the probability of the preceding event by the updated
probability of the succeeding, or conditional, event.
The probability of an event given that another event has occurred. The conditional probability of A given B is P(A|B) = P(A∩B)/P(B).
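The complement, addition, conditional-probability, and multiplication relationships can all be checked numerically; a minimal sketch using a single roll of a fair die:

```python
from fractions import Fraction

# One roll of a fair die: equally likely outcomes
S = set(range(1, 7))

def P(event):
    """Classical probability: favourable outcomes over total outcomes."""
    return Fraction(len(event), len(S))

A = {2, 4, 6}   # even number
B = {4, 5, 6}   # greater than three

assert P(S - A) == 1 - P(A)                # complement: P(Ac) = 1 - P(A)
assert P(A | B) == P(A) + P(B) - P(A & B)  # addition law
P_A_given_B = P(A & B) / P(B)              # conditional probability P(A|B)
assert P(A & B) == P_A_given_B * P(B)      # multiplication law
print(P_A_given_B)  # 2/3
```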

Bayes’ Theorem:
Bayes' theorem is a way of finding a probability when we know certain other probabilities. It describes the probability of occurrence of an event related to a given condition, and is a case of conditional probability. Bayes' theorem is also known as the formula for the probability of causes. For example, suppose we must calculate the probability of drawing a blue ball from the second of three different bags, where each bag contains balls of three colours: red, blue, and black. Here, the probability of the event depends on another condition, so it is a conditional probability.

The formula is:

P(A|B) = P(A) P(B|A) / P(B)

Where:

• P(A|B) – the probability of event A occurring, given event B has occurred

• P(B|A) – the probability of event B occurring, given event A has occurred

• P(A) – the probability of event A

• P(B) – the probability of event B

Example:

Imagine you are a financial analyst at an investment bank. According to your research of
publicly-traded companies, 60% of the companies that increased their share price by more
than 5% in the last three years replaced their CEOs during the period.

At the same time, only 35% of the companies that did not increase their share price by
more than 5% in the same period replaced their CEOs. Knowing that the probability that
the stock prices grow by more than 5% is 4%, find the probability that the shares of a
company that fires its CEO will increase by more than 5%.
Before finding the probabilities, you must first define the notation:

• P(A) – the probability that the stock price increases by 5%
• P(B) – the probability that the CEO is replaced
• P(A|B) – the probability that the stock price increases by 5%, given that the CEO has been replaced
• P(B|A) – the probability that the CEO is replaced, given that the stock price has increased by 5%

Using Bayes' theorem, we can find the required probability:

P(B) = P(B|A)P(A) + P(B|Ac)P(Ac) = 0.60(0.04) + 0.35(0.96) = 0.36
P(A|B) = P(B|A)P(A) / P(B) = 0.024 / 0.36 ≈ 0.0667

Thus, the probability that the shares of a company that replaces its CEO will grow by
more than 5% is 6.67%.
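The calculation can be verified in a few lines, using the figures from the example:

```python
# Bayes' theorem with the values from the investment-bank example
p_A = 0.04            # P(A): stock price grows by more than 5%
p_B_given_A = 0.60    # P(B|A): CEO replaced, given the stock grew
p_B_given_Ac = 0.35   # P(B|Ac): CEO replaced, given the stock did not grow

# Total probability of a CEO replacement
p_B = p_B_given_A * p_A + p_B_given_Ac * (1 - p_A)   # 0.36
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))  # 0.0667
```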

The Tabular Approach:

The tabular approach is helpful in conducting the Bayes’ theorem calculations


simultaneously for all events Ai. Such an approach is shown in Table 2.4. The
computations shown there involve the following steps.

Step 1. Prepare three columns:

Column 1—the mutually exclusive events for which posterior probabilities are desired

Column 2—the prior probabilities for the events

Column 3—the conditional probabilities of the new information given each event

Step 2. In column 4, compute the joint probabilities for each event and the new
information B by using the multiplication law. To get these joint probabilities, multiply
the prior probabilities in column 2 by the corresponding conditional probabilities in
column 3—that is, P(Ai ∩ B) = P(Ai)P(B | Ai).

Step 3. Sum the joint probabilities in column 4 to obtain the probability of the new
information,

P(B). In the example there is a 0.0130 probability that a part is from supplier 1 and is bad
and a 0.0175 probability that a part is from supplier 2 and is bad. These are the only two
ways by which a bad part can be obtained, so the sum 0.0130 + 0.0175 shows an overall
probability of 0.0305 of finding a bad part from the combined shipments of both
suppliers.

Step 4. In column 5, compute the posterior probabilities by using the basic relationship of conditional probability: P(Ai | B) = P(Ai ∩ B) / P(B).
Note that the joint probabilities P(Ai ∩ B) appear in column 4, whereas P(B) is the sum of the column 4 values.
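The four steps above can be sketched in Python. The prior and conditional probabilities below are assumed illustrative values, chosen only so that the joint probabilities match the 0.0130 and 0.0175 quoted in the text:

```python
# Tabular Bayes computation for the two-supplier example.
# Priors and conditionals are assumed values, consistent with the
# joint probabilities 0.0130 and 0.0175 given in the text.
priors = [0.65, 0.35]   # column 2: P(A1), P(A2)
conds  = [0.02, 0.05]   # column 3: P(B|Ai), chance a part from supplier i is bad

joints = [p * c for p, c in zip(priors, conds)]  # column 4: P(Ai ∩ B)
p_B = sum(joints)                                # P(B): sum of column 4
posteriors = [j / p_B for j in joints]           # column 5: P(Ai|B)

print([round(j, 4) for j in joints])       # [0.013, 0.0175]
print(round(p_B, 4))                       # 0.0305
print([round(q, 4) for q in posteriors])   # [0.4262, 0.5738]
```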

(Topic # 05)

Random Variables

A random variable is a variable whose value is unknown or a function that assigns values
to each of an experiment's outcomes. Random variables are often designated by letters
and can be classified as discrete, which are variables that have specific values, or
continuous, which are variables that can have any values within a continuous range.

A random variable is a rule that assigns a numerical value to each outcome in a sample
space. Random variables may be either discrete or continuous. A random variable is said
to be discrete if it assumes only specified values in an interval. Otherwise, it is
continuous. We generally denote random variables with capital letters such as X and Y. When X takes only values such as 1, 2, 3, …, it is a discrete random variable.

“A random variable is a numeric description of the outcome of an experiment”.

Types of Random Variables:

There are two types of random variables:

 Discrete Random Variables

 Continuous Random variables

Discrete Random Variables:

A random variable which can assume finitely many or countably infinitely many values is called a discrete random variable.

“A random variable that may assume only a finite or countably infinite sequence of values”

Example:

When two coins are tossed the random variable “No. of heads” can take only the values
0, 1, 2.
Discrete probability distribution:

A table listing all the possible values that a discrete random variable can take on, along with the associated probabilities, is called its probability distribution.

Suppose X is a random variable which assumes the possible values x1, x2, x3, …, xn with probabilities P(x1), P(x2), P(x3), …, P(xn); this pairing is the probability distribution of the random variable X.

Expected Value:

Expected value is exactly what you might think it means intuitively: the return you can
expect for some kind of action, like how many questions you might get right if you guess
on a multiple choice test.

“Weighted average of the values of the random variable, for which the probability
function provides the weights. If an experiment can be repeated a large number of times,
the expected value can be interpreted as the “long-run average.”

E(x) = μ = ∑xf(x)

The expected value of a random variable is the mean, or average, value. For experiments
that can be repeated numerous times, the expected value can be interpreted as the “long-
run” average value for the random variable.
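The formula E(x) = ∑xf(x) can be sketched in a couple of lines, here using the two-coin "number of heads" distribution as illustrative weights:

```python
# Expected value E(x) = sum over x of x * f(x), using the two-coin
# "number of heads" distribution as the probability weights.
distribution = {0: 0.25, 1: 0.50, 2: 0.25}

expected_value = sum(x * p for x, p in distribution.items())
print(expected_value)  # 1.0 head on average in the long run
```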

Binomial Probability Distribution:

In this section we consider a class of experiments that meet the following conditions:

1. The experiment consists of a sequence of n identical trials.

2. Two outcomes are possible on each trial. We refer to one outcome as a success and the
other as a failure.

3. The probabilities of the two outcomes do not change from one trial to the next.

4. The trials are independent (i.e., the outcome of one trial does not affect the outcome of
any other trial).

Experiments that satisfy conditions 2, 3, and 4 are said to be generated by a Bernoulli
process. In addition, if condition 1 is satisfied (there are n identical trials), we have a
binomial experiment. An important discrete random variable associated with the
binomial experiment is the number of outcomes labeled success in the n trials. If we let x
denote the value of this random variable, then x can have a value of 0, 1, 2, 3, . . . , n,
depending on the number of successes observed in the n trials. The probability
distribution associated with this random variable is called the binomial probability
distribution.
Expected Value and Variance for the Binomial Distribution:

We can use the expected value equation to compute the expected number of customers
making a purchase. Note that we could have obtained this same expected value simply by
multiplying n (the number of trials) by p (the probability of success on any one trial):

E(x) = np = 3(0.30) = 0.9
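The agreement between the weighted-sum definition of E(x) and the shortcut E(x) = np can be checked numerically. A minimal Python sketch, using the n = 3, p = 0.30 values from the example above:

```python
from math import comb

def binomial_pmf(x, n, p):
    """f(x) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 3, 0.30  # values from the customer-purchase example

# The probabilities over x = 0..n sum to 1.
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))
assert abs(total - 1) < 1e-12

# E(x) two ways: the weighted sum of x * f(x), and the shortcut np.
ev = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))
print(round(ev, 4), round(n * p, 4))  # both 0.9
```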

(Topic # 06)

Normal Probability Distribution

The Normal Probability Distribution is very common in the field of statistics.

Whenever you measure things like people's height, weight, salary, opinions or votes, the
graph of the results is very often a normal curve.

The Normal Distribution:

A random variable X whose distribution has the shape of a normal curve is called
a normal random variable.

This random variable X is said to be normally distributed with mean μ and standard
deviation σ if its probability distribution is given by:

f(x) = [1 / (σ√(2π))] e^(−(x − μ)² / (2σ²))
Properties of a Normal Distribution:

1. The normal curve is symmetrical about the mean μ;

2. The mean is at the middle and divides the area into halves;

3. The total area under the curve is equal to 1;

4. It is completely determined by its mean and standard deviation σ (or variance σ2)

Standard Normal Distribution:

The standard normal distribution is a special case of the normal distribution
where μ = 0 and σ² = 1. It is often essential to normalize data prior to analysis. A random
normal variable with mean μ and standard deviation σ can be normalized via the
following:

z = (x − μ) / σ

The standard normal distribution is a normal distribution with mean μ = 0 and standard


deviation σ = 1. The letter Z is often used to denote a random variable that follows this
standard normal distribution. One way to compute probabilities for a normal distribution
is to use tables that give probabilities for the standard one, since it would be impossible to
keep different tables for each combination of mean and standard deviation. The standard
normal distribution can represent any normal distribution, provided you think in terms of
the number of standard deviations above or below the mean instead of the actual units
(e.g., dollars) of the situation.

Probabilities for Normal Distribution:

Suppose that we have a normal distribution with μ = 10 and σ = 2. Note that, in addition
to the values of the random variable shown on the x axis, we have included a second axis
(the z axis) to show that for each value of x there is a corresponding value of z. For
example, when x = 10, the corresponding z value (the number of standard deviations
away from the mean) is z = (x − μ)/σ = (10 − 10)/2 = 0. Similarly, for x = 14 we have z = (x
− μ)/σ = (14 − 10)/2 = 2.
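The x-to-z conversion above is a one-line calculation; a small Python sketch for the μ = 10, σ = 2 example:

```python
# z = (x - mu) / sigma: number of standard deviations from the mean,
# for the example with mu = 10 and sigma = 2.
mu, sigma = 10, 2

def z_score(x):
    return (x - mu) / sigma

print(z_score(10))  # 0.0 (x is exactly at the mean)
print(z_score(14))  # 2.0 (x is two standard deviations above the mean)
```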
(Topic # 07)
Poisson Distribution

The Poisson distribution arises from either of two models. In one model quantities, for
example bacteria are assumed to be distributed at random in some medium with a uniform
density of λ (lambda) per unit area. The number of bacteria colonies found in a sample
area of size A follows the Poisson distribution with a parameter μ equal to the product of
λ and A.
In terms of the model over time, we assume that the probability of one event in a short
interval of length t1 is proportional to t1 that is, Pr{exactly one event} is approximately
λt1. Another assumption is that t1 is so short that the probability of more than one event
during this interval is almost zero. We also assume that what happens in one time
interval is independent of the happenings in another interval. Finally, we assume that λ is
constant over time. Given these assumptions, the number of occurrences of the event in a
time interval of length t follows the Poisson distribution with parameter μ, where μ is the
product of λ and t.
The Poisson probability mass function is

f(x) = (μ^x e^(−μ)) / x!,  x = 0, 1, 2, …

Poisson Probability Distribution:


We will consider a discrete random variable that often is useful when we are dealing with
the number of occurrences of an event over a specified interval of time or space. For
example, the random variable of interest might be the number of arrivals at a car wash in
1 hour, the number of repairs needed in 10 miles of highway, or the number of leaks in
100 miles of pipeline. If the following two assumptions are satisfied, the Poisson
probability distribution is applicable:
1. The probability of an occurrence of the event is the same for any two intervals of equal
length.
2. The occurrence or non-occurrence of the event in any interval is independent of the
occurrence or non-occurrence in any other interval.

Note that equation (3.8) shows no upper limit to the number of possible values that a
Poisson random variable can realize. That is, x is a discrete random variable with an
infinite sequence of values (x = 0, 1, 2 . . .); the Poisson random variable has no set upper
limit.

An Example Involving Time Intervals:


Suppose that we are interested in the number of arrivals at the drive-in teller window of a
bank during a 15-minute period on weekday mornings. If we assume that the probability
of a car arriving is the same for any two time periods of equal length and that the arrival
or non-arrival of a car in any time period is independent of the arrival or non-arrival in
any other time period, the Poisson probability function is applicable. Then if we assume
that an analysis of historical data shows that the average number of cars arriving during a
15-minute interval of time is 10, the Poisson probability function with λ = 10 applies:

If we wanted to know the probability of five arrivals in 15 minutes, we would set x = 5
and obtain

f(5) = (10^5 e^(−10)) / 5! = 0.0378

Although we determined this probability by evaluating the probability function with
λ = 10 and x = 5, the use of Poisson probability distribution tables often is easier. These
tables provide probabilities for specific values of x and λ. We included such a table as
Appendix C. To use the table of Poisson probabilities, you need to know only the values of
x and λ. Thus, the probability of five arrivals in a 15-minute period is the value in the row
corresponding to x = 5 and the column corresponding to λ = 10. Hence, f (5) = 0.0378.
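Instead of a table lookup, the Poisson probability function can be evaluated directly. A short Python sketch reproducing the arrivals example:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    """f(x) = lam**x * e**(-lam) / x!"""
    return lam**x * exp(-lam) / factorial(x)

# Probability of five arrivals in 15 minutes with lam = 10 (from the text).
print(round(poisson_pmf(5, 10), 4))  # 0.0378, matching the table value
```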

An Example Involving Length or Distance Intervals:


Suppose that we are concerned with the occurrence of major defects in a section of
highway one month after resurfacing. We assume that the probability of a defect is the
same for any two intervals of equal length and that the occurrence or non-occurrence of a
defect in any one interval is independent of the occurrence or non-occurrence in any other
interval. Thus, the Poisson probability distribution applies.
Suppose that major defects occur at the average rate of two per mile. We want to find the
probability that no major defects will occur in a particular 3-mile section of the highway.
The interval length is 3 miles, so λ = (2 defects/mile)(3 miles) = 6 represents the expected
number of major defects over the 3-mile section of highway. Thus, by using the equation or
Appendix C with λ = 6 and x = 0, we obtain the probability of no major defects of 0.0025.
Thus, finding no major defects in the 3-mile section is very unlikely. In fact, there is a 1 -
0.0025 = 0.9975 probability of at least one major defect in that section of highway.
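The highway-defects probabilities can be verified the same way, including the complement rule used for "at least one defect":

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    return lam**x * exp(-lam) / factorial(x)

lam = 2 * 3  # 2 defects per mile over a 3-mile section gives lam = 6
p_none = poisson_pmf(0, lam)
print(round(p_none, 4))      # 0.0025: no major defects
print(round(1 - p_none, 4))  # 0.9975: at least one major defect
```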
Continuous Random Variables:
A continuous random variable is a random variable whose statistical distribution is
continuous. Continuous random variables describe outcomes in probabilistic situations
where the possible values some quantity can take form a continuum, which is often (but
not always) the entire set of real numbers R. They are the generalization of discrete
random variables to uncountable infinite sets of possible outcomes.
Continuous random variables are essential to models of statistical physics, where the
large number of degrees of freedom in systems means that many physical properties
cannot be predicted exactly in advance but can be well modelled by continuous
distributions. In particular, quantum mechanical systems often make use of continuous
random variables, since physical properties in these cases might not even have definite
values.

Applying the Uniform Distribution:


The uniform distribution (continuous) is one of the simplest probability distributions in
statistics. It is a continuous distribution, which means that it takes values within a specified
range, e.g. between 0 and 1. The general formula for the probability density function of the
uniform distribution taking values in the range a to b is:

f(x) = 1 / (b − a) for a ≤ x ≤ b, and f(x) = 0 elsewhere

This distribution is defined by two parameters, a and b:


 a is the minimum.
 b is the maximum.

Area as a Measure of Probability:


Refer to Figure 3.4 and consider the area under the graph of f (x) over the interval from
120 to 130. Note that the region is rectangular in shape and that the area of a rectangle is
simply the width times the height. With the width of the interval equal to 130 - 120 = 10
and the height of the graph f (x) = 1/20, the area = width * height = 10(1/20) = 10/20 =
0.50. What observation can you make about the area under the graph of f (x) and
probability? They are identical! Indeed, that is true for all continuous random variables.
In other words, once you have identified a probability density function f (x) for a
continuous random variable, you can obtain the probability that x takes on a value
between some lower value a and some higher value b by computing the area under the
graph of f (x) over the interval a to b. With the appropriate probability distribution and the
interpretation of area as probability, we can answer any number of probability questions.
For example, what is the probability of a flight time between 128 and 136 minutes? The
width of the interval is 136 − 128 = 8. With the uniform height of 1/20, P(128 ≤ x ≤ 136)
= 8/20 = 0.40.
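The width-times-height calculation for the flight-time example is easy to sketch in Python (the interval [120, 140] with density 1/20 is taken from the text):

```python
# Uniform flight-time example: f(x) = 1/20 on the interval [120, 140].
a, b = 120, 140
height = 1 / (b - a)

def uniform_prob(lo, hi):
    """P(lo <= x <= hi): area under f(x), i.e. width * height."""
    lo, hi = max(lo, a), min(hi, b)  # clip to the support of f(x)
    return max(hi - lo, 0) * height

print(round(uniform_prob(120, 130), 2))  # 0.5
print(round(uniform_prob(128, 136), 2))  # 0.4
```

Note that clipping to [a, b] also makes the probability of any single point zero, as the text's second observation requires.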

1. We no longer talk about the probability of the random variable taking on a particular
value. Instead we talk about the probability of the random variable taking on a value
within some given interval.
2. The probability of the random variable taking on a value within some given interval is
defined to be the area under the graph of the probability density function over the interval.
This definition implies that the probability that a continuous random variable takes on any
particular value is zero because the area under the graph of f (x) at a single point is zero.

(Topic # 08)
Decision Analysis:
Decision analysis can be used to develop an optimal strategy when a decision maker is faced
with several decision alternatives and an uncertain or risk-filled pattern of future events.
In some cases, the selected decision alternative may provide good or excellent results. In
other cases, a relatively unlikely future event may occur, causing the selected decision
alternative to provide only fair or even poor results.
A good decision analysis includes careful consideration of risk. Through risk analysis the
decision maker is provided with probability information about the favourable as well as the
unfavourable consequences that may occur.

Problem Formulation:
A factor in selecting the best decision alternative is the uncertainty associated with the chance
event concerning the demand for the condominiums.

 A decision problem is characterized by decision alternatives, states of nature, and


resulting payoffs.

 The decision alternatives are the different possible strategies the decision maker can
employ.

 The states of nature refer to future events, not under the control of the decision maker,
which may occur. States of nature should be defined so that they are mutually
exclusive and collectively exhaustive.
In decision analysis, the possible outcomes for a chance event are referred to as the states of
nature. The states of nature are defined so they are mutually exclusive (no more than one can
occur) and collectively exhaustive (at least one must occur); thus one and only one of the
possible states of nature will occur.

Influence Diagrams:
An influence diagram is a graphical device that shows the relationships among the decisions,
the chance events, and the consequences for a decision problem.
The nodes in an influence diagram represent the decisions, chance events, and consequences.

 Squares or rectangles depict decision nodes.

 Circles or ovals depict chance nodes.

 Diamonds depict consequence nodes.

 Lines or arcs connecting the nodes show the direction of influence.

Payoff tables:
In decision analysis, we refer to the consequence resulting from a specific combination of a
decision alternative and a state of nature as a payoff.

 The consequence resulting from a specific combination of a decision alternative and a


state of nature is a payoff.

 A table showing payoffs for all combinations of decision alternatives and states of
nature is a payoff table.

 Payoffs can be expressed in terms of profit, cost, time, distance or any other
appropriate measure.

Decision Tree:
A decision tree provides a graphical representation of the decision-making process.

 A decision tree is a chronological representation of the decision problem.

 Each decision tree has two types of nodes; round nodes correspond to the states of
nature while square nodes correspond to the decision alternatives.

 The branches leaving each round node represent the different states of nature while
the branches leaving each square node represent the different decision alternatives.
 At the end of each limb of a tree are the payoffs attained from the series of branches
making up that limb.

 Decision tree algorithm falls under the category of supervise learning.

 It represents a function that takes as input a vector of attribute and returns a decision
that is single output value.
A point in a network or diagram at which lines intersect is known as a node. A decision tree
reaches its decision by performing a sequence of tests; it is essentially a classifier (model)
with a tree structure.
Decision trees are used to solve:

 Regression
 Classification Problems

(Topic # 09)
Decision Making with Probabilities:
In many decision-making situations, we can obtain probability assessments for the states of
nature. When such probabilities are available, we can use the expected value approach to
identify the best decision alternative.
Let
N = the number of states of nature
P(sj) = the probability of state of nature sj
Because one and only one of the N states of nature can occur, the probabilities must satisfy
two conditions:
P(sj) ≥ 0 for all states of nature ……….… (1)
∑P(sj) = P(s1) + P(s2) + … + P(sN) = 1 ………….. (2)

Expected Value of a Decision Alternative:

 The expected value of a decision alternative is the sum of weighted payoffs for the
decision alternative.

 The expected value (EV) of decision alternative di is defined as:


EV(di) = ∑ P(sj) Vij, with the sum taken over j = 1, …, N

where: N = the number of states of nature


P(sj ) = the probability of state of nature sj
Vij = the payoff corresponding to decision
alternative di and state of nature sj
Expected Value approach:
• If probabilistic information regarding the states of nature is available, one may
use the expected value (EV) approach.
• Here the expected return for each decision is calculated by summing the
products of the payoff under each state of nature and the probability of the
respective state of nature occurring.
• The decision yielding the best expected return is chosen.
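The EV approach can be sketched in a few lines of Python. The payoff table and state probabilities below are hypothetical values chosen purely to illustrate the calculation:

```python
# Expected value approach on a hypothetical payoff table (profits):
# rows = decision alternatives, columns = states of nature s1, s2.
payoffs = {"small": [8, 7], "medium": [14, 5], "large": [20, -9]}
probs = [0.8, 0.2]  # assumed P(s1), P(s2); must sum to 1
assert abs(sum(probs) - 1) < 1e-12

# EV(di) = sum over j of P(sj) * Vij
ev = {d: sum(p * v for p, v in zip(probs, vals)) for d, vals in payoffs.items()}
best = max(ev, key=ev.get)

print({d: round(v, 1) for d, v in ev.items()})
print(best)  # the decision with the best expected return
```

With these assumed numbers, the expected values are 7.8, 12.2, and 14.2, so the "large" alternative would be chosen.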

Expected Value of Perfect Information:


The expected value of perfect information (EVPI) is the price that one would be willing to
pay in order to gain access to perfect information. A common discipline that uses the EVPI
concept is health economics.
There is always some degree of uncertainty surrounding the decision, because there is always
a chance that the decision turns out to be wrong.
 EVPI is calculated as the difference in the monetary value of health gain associated with a
decision between therapy alternatives when the choice is made with perfect information
versus with currently available information.

 Frequently information is available which can improve the probability estimates for
the states of nature.
 The expected value of perfect information (EVPI) is the increase in the expected
profit that would result if one knew with certainty which state of nature would occur.
 The EVPI provides an upper bound on the expected value of any sample or survey
information.

The expected value of perfect information (EVPI) is used:


 To measure the cost of uncertainty as the perfect information can remove the
possibility of a wrong decision.
 It provides a criterion to evaluate ordinary forecasters that are imperfectly informed.
 It can be determined simultaneously with the elimination of uncertainty of all
parameters that were involved in model-based decision-making.

Formula:
It is the difference between the expected payoff under certainty (with perfect information)
and the expected monetary value of the best decision made without it:

EVPI = (expected value with perfect information) − (expected value without perfect information)

EVPI Calculation:
• Step 1:
Determine the optimal return corresponding to each state of nature.
• Step 2:
Compute the expected value of these optimal returns.
• Step 3:
Subtract the EV of the optimal decision from the amount determined
in step (2).

The major advantage of EVPI is that it is very easy and simple to compute. The
probability of an uncertain event occurring must equal the probability attached to
the corresponding perfect test result. Hence EVPI is easy to calculate and
can be determined directly.
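The three-step recipe above translates directly into code. A minimal Python sketch on a hypothetical payoff table (the numbers are assumed for illustration, not taken from the text):

```python
# EVPI following the three steps: rows = decisions, columns = states s1, s2.
payoffs = {"small": [8, 7], "medium": [14, 5], "large": [20, -9]}
probs = [0.8, 0.2]  # assumed state-of-nature probabilities

# Step 1: optimal return under each state of nature (best value per column).
best_per_state = [max(vals[j] for vals in payoffs.values()) for j in range(2)]

# Step 2: expected value of these optimal returns.
ev_with_perfect_info = sum(p * v for p, v in zip(probs, best_per_state))

# Step 3: subtract the EV of the optimal decision made without the information.
ev_without = max(sum(p * v for p, v in zip(probs, vals))
                 for vals in payoffs.values())
evpi = ev_with_perfect_info - ev_without
print(round(evpi, 2))
```

With these assumed numbers, EV with perfect information is 17.4, the best EV without it is 14.2, and EVPI = 3.2.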

Risk Analysis:
A decision alternative and a state of nature combine to generate the payoff associated with a
decision

 Risk analysis helps the decision maker recognize the difference between:
o the expected value of a decision alternative

and
o the payoff that might actually occur
 The risk profile for a decision alternative shows the possible payoffs for the decision
alternative along with their associated probabilities.

Sensitivity Analysis:
Sensitivity analysis can be used to determine how changes in the probabilities for the states of
nature or changes in the payoffs affect the recommended decision alternative
 Sensitivity analysis can be used to determine how changes to the following inputs
affect the recommended decision alternative:
o probabilities for the states of nature

o values of the payoffs

 If a small change in the value of one of the inputs causes a change in the
recommended decision alternative, extra effort and care should be taken in estimating
the input value.
Decision Analysis without Probabilities:
The decision maker must understand the approaches available and then select the specific
approach that, according to the judgment of the decision maker, is the most appropriate.
 Three commonly used criteria for decision making when probability information
regarding the likelihood of the states of nature is unavailable are:
o the optimistic approach

o the conservative approach

o the minimax regret approach.

Optimistic Approach:
The optimistic approach evaluates each decision alternative in terms of the best payoff that
can occur. The decision alternative that is recommended is the one that provides the best
possible payoff.

 The optimistic approach would be used by an optimistic decision maker.

 The decision with the largest possible payoff is chosen.

 If the payoff table was in terms of costs, the decision with the lowest cost would be
chosen.

Conservative Approach:
The Conservative approach evaluates each decision alternative in terms of the worst payoff
that can occur.

 The conservative approach would be used by a conservative decision maker.

 For each decision the minimum payoff is listed and then the decision corresponding to
the maximum of these minimum payoffs is selected. (Hence, the minimum possible
payoff is maximized.)

 If the payoff was in terms of costs, the maximum costs would be determined for each
decision and then the decision corresponding to the minimum of these maximum costs
is selected. (Hence, the maximum possible cost is minimized.)

Minimax Regret Approach:


In decision analysis, regret is the difference between the payoff associated with a particular
decision alternative and the payoff associated with the decision that would yield the most
desirable payoff for a given state of nature.

 The minimax regret approach requires the construction of a regret table or an


opportunity loss table.

 This is done by calculating for each state of nature the difference between each payoff
and the largest payoff for that state of nature.

 Then, using this regret table, the maximum regret for each possible decision is listed.

 The decision chosen is the one corresponding to the minimum of the maximum
regrets.
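The optimistic, conservative, and minimax regret criteria can all be computed from the same payoff table. A Python sketch using a hypothetical profit table (assumed values for illustration):

```python
# Three no-probability criteria on a hypothetical profit payoff table:
# rows = decision alternatives, columns = states of nature s1, s2.
payoffs = {"small": [8, 7], "medium": [14, 5], "large": [20, -9]}

# Optimistic (maximax): the decision with the largest possible payoff.
optimistic = max(payoffs, key=lambda d: max(payoffs[d]))

# Conservative (maximin): the decision whose worst payoff is best.
conservative = max(payoffs, key=lambda d: min(payoffs[d]))

# Minimax regret: regret = (column best) - payoff; minimize the max regret.
col_best = [max(vals[j] for vals in payoffs.values()) for j in range(2)]
regret = {d: [col_best[j] - vals[j] for j in range(2)]
          for d, vals in payoffs.items()}
minimax_regret = min(regret, key=lambda d: max(regret[d]))

print(optimistic, conservative, minimax_regret)
```

Note that the three criteria can disagree: with these numbers the optimistic choice is "large", the conservative choice is "small", and the minimax regret choice is "medium".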

(Topic # 10)
Decision analysis with Sample Information:
EVSI Calculation
Step 1:
Determine the optimal decision and its expected return for the possible outcomes of
the sample or survey using the posterior probabilities for the states of nature.
Step 2 :
Compute the expected value of these optimal returns.
Step 3 :
Subtract the EV of the optimal decision obtained without using the sample
information from the amount determined in step (2).
• Efficiency of sample information is the ratio of EVSI to EVPI.
• As the EVPI provides an upper bound for the EVSI, efficiency is always a number
between 0 and 1.
Influence Diagram:
The two decision nodes correspond to the research study and the complex-size decisions. The
two chance nodes correspond to the research study results and demand for the condominiums.
[Influence diagram: decision nodes (market survey, store size), chance nodes (market
survey results, average number of customers per hour), and a consequence node (profit).]

An influence diagram was used to describe the complex structure of the decision analysis
process.

Decision Tree:
The decision tree for the PDC problem with sample information shows the logical sequence
for the decisions and the chance events.
Analysis of the decision tree and the choice of an optimal strategy require that we know the
branch probabilities corresponding to all chance nodes.

 At each decision node, the branch of the tree that is taken is based on the decision
made.
 At each chance node, the branch of the tree that is taken is based on probability or
chance.

Decision Strategy:
A decision strategy is a sequence of decisions and chance outcomes where the decisions
chosen depend on the yet to be determined outcomes of chance events.
The approach used to determine the optimal decision strategy is based on a backward pass
through the decision tree using the following steps:
 At chance nodes, compute the expected value by multiplying the payoff at the end of
each branch by the corresponding branch probabilities.
 At decision nodes, select the decision branch that leads to the best expected value.
This expected value becomes the expected value at the decision node.
The optimal decision for PDC is to conduct the market research study and then carry out
the following decision strategy:

 If the market research is favourable, construct the large condominium


complex.
 If the market research is unfavourable, construct the medium condominium
complex.

Risk Profile:
Risk profile shows the possible payoffs with their associated probabilities. In order
to construct a risk profile for the optimal decision strategy, we will need to compute the
probability for each of the four payoffs. Each payoff results from a sequence of branches
leading from node 1 to the payoff.
The probability of following that sequence of branches can be found by multiplying the
probabilities for the branches from the chance nodes in the sequence.
Probability assessments were made concerning both the technical risk and market risk at each
stage of the process. Net present value provided the consequence and the decision-making
criterion.

Expected Value of Sample Information:


The market research study is the sample information used to determine the optimal decision
strategy. For minimization problems, the expected value with sample information is always
less than or equal to the expected value without sample information.

EVSI = |EVwSI − EVwoSI|


Where,
EVSI = expected value of sample information
EVwSI = expected value with sample information about the states of
nature
EVwoSI = expected value without sample information about the states of
nature
EVSI is the magnitude of the difference between EVwSI and EVwoSI.

Efficiency of Sample Information:


The market research report would not obtain perfect information, but we can use an efficiency
measure to express the value of the market research information.
With perfect information having an efficiency rating of 100%, the efficiency rating E for
sample information is computed as follows:

E = (EVSI / EVPI) × 100


Low efficiency ratings for sample information might lead the decision maker to look for other
types of information. However, high efficiency ratings indicate that the sample information is
almost as good as perfect information and that additional sources of information would not
yield substantially better results.
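The EVSI and efficiency formulas can be sketched numerically. The three input values below are assumed for illustration only:

```python
# EVSI and efficiency with assumed illustrative values.
ev_with_sample = 15.93     # EVwSI (assumed)
ev_without_sample = 14.20  # EVwoSI (assumed)
evpi = 3.20                # EVPI, the upper bound for EVSI (assumed)

evsi = abs(ev_with_sample - ev_without_sample)
efficiency = evsi / evpi * 100  # expressed as a percentage

print(round(evsi, 2))        # 1.73
print(round(efficiency, 1))  # about 54%: roughly half the value of perfect info
```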
Computing Branch Probabilities:
The branch probabilities for the PDC decision tree chance
nodes were specified in the problem description. Bayes’ theorem can be used to compute
branch probabilities for decision trees.

F= Favourable market research report


U= Unfavourable market research report
s1= Strong demand (state of nature 1)
s2=Weak demand (state of nature 2)
In performing the probability computations, we need to know PDC’s assessment of the
probabilities for the two states of nature, P(s1) and P(s2). The preceding probability
assessments provide a reasonable degree of confidence in the market research study.
A favourable market research report given that the state of nature is weak demand is often
referred to as a “false positive,” while the converse (an unfavourable market research report
given that the state of nature is strong demand) is referred to as a “false negative”. A potential
buyer’s initial favourable response can change quickly to a “no thank you” when later faced
with the reality of signing a purchase contract and making a down payment. The tabular
probability computation procedure must be repeated for each possible sample information
outcome.
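The tabular Bayes' theorem procedure (priors, conditionals, joints, then posteriors) can be sketched as follows. The prior and conditional probabilities here are assumed values for illustration:

```python
# Posterior (branch) probabilities via Bayes' theorem, with assumed
# priors and survey reliabilities.
priors = {"s1": 0.8, "s2": 0.2}          # strong / weak demand (assumed)
p_f_given_s = {"s1": 0.90, "s2": 0.25}   # P(F | sj), assumed survey quality

# Joint probabilities P(F and sj); P(F) is their sum (as in the table method).
joint = {s: priors[s] * p_f_given_s[s] for s in priors}
p_f = sum(joint.values())

# Posterior probabilities P(sj | F) = P(F and sj) / P(F).
posterior = {s: joint[s] / p_f for s in joint}

print(round(p_f, 2))                              # probability of report F
print({s: round(p, 3) for s, p in posterior.items()})
```

With these assumed inputs, P(F) = 0.77 and a favourable report revises P(s1) upward from 0.8 to about 0.935. The same computation would be repeated for the unfavourable report U.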
Lines showing the alternatives from decision nodes and the outcomes from chance nodes are
known as branches.
(Topic # 11)
Time Series Analysis and Forecasting:

Time Series:
“A set of observations at the equal interval of time”
“A time series is a sequence of observations of a variable measured at successive points in
time or over successive periods of time”.
“A set of observations taken at specified time-usually at equal intervals”
A time series plot is a graphical presentation of the relationship between time and the time
series variable; time is represented on the horizontal axis and values of the time series
variable are shown on the vertical axis.

Mathematically:
A time series is a set of observations taken at specified times, usually at equal
intervals. Mathematically, a time series is defined by the values Y1, Y2, … of a
variable Y at times t1, t2, …. Thus, Y = f(t).
 A time series may be identified by the values Y1, Y2, Y3, …, Yn.
 Y may be any variable, e.g. sales or production.
 Y occurs at different intervals of time T1, T2, T3, …, Tn.
Hence Y = f(t)
 Here we mean equal intervals of time.

Time Series Analysis is used for many applications such as:


 Economic Forecasting.
 Sales Forecasting.
 Budgetary Analysis
 Stock Market Analysis
 For cost identification
 Inventory level Appraisal of Business sales
 Yield Projection
 Marketing staff trends etc.
 Processing of quality Control

Time series forecasting:


“process of analysing time series data using statistics and modelling to make predictions and
inform strategic decision-making”.

A time series in general is supposed to be affected by four main components, which can be
separated from the observed data.
These four components are:
 Secular trend, which describes the long-term movement of the series;
 Seasonal variations, which represent seasonal changes;
 Cyclical fluctuations, which correspond to periodical but not seasonal variations;
 Irregular variations, which are the remaining, residual sources of variation in the series.

Secular Trend:
The general tendency of a time series to increase, decrease or stagnate (deteriorate) over a
long period of time is termed as Secular Trend or simply Trend.
Thus, it can be said that a trend is a long-term movement in a time series. For example,
national income, agricultural production, and exports & imports show an upward trend,
whereas a downward trend can be observed in series relating to mortality rates, epidemics, etc.
• Either trend would be increased or decreased
• Secular trend depends on long period of time.
Seasonal variation(s)
Seasonal variations in a time series are fluctuations within a year during the season.
The important factors causing seasonal variations are: climate and weather conditions,
customs, traditional habits, etc. For example sales of ice-cream increase in summer,
sales of woollen cloths increase in winter. Seasonal variation is an important factor for
businessmen, shopkeeper and producers for making proper future plans.

 Short term periodical movement and regular in nature


 Involves patterns of change within a year that tend to be repeated from year to year.
 Its regularity (a fixed proportion of increase or decrease) makes it easy to forecast.

Cyclical Variations:
Any change in economic activity that is due to some regular and/or recurring cause, such as
the business cycle or seasonal influences. The cyclical variation in a time series describes the
medium-term changes in the series, caused by circumstances, which repeat in cycles.
For example a business cycle consists of four phases, viz.
 Prosperity,
 Decline,
 Depression
 Recovery
I. Trend line behaviour:
The cyclical (trend) variations may take positive or negative signs
depending on whether they are above or below the trend line.
II. Zero effect:
The positive value of cyclical variations during upswings cancels the
negative value during downswings, so that the net effect over the cyclical
period will be zero.

Selecting a Forecasting Method:


A time series plot should be one of the first analytic tools employed when trying to determine
which forecasting method to use. If we see a horizontal pattern, then we need to select a
method appropriate for this type of pattern. Similarly, if we observe a trend in the data, then
we need to use a forecasting method that is capable of handling a trend effectively.
Forecast Accuracy:
The key concept associated with measuring forecast accuracy is the forecast error. If we denote
Yt and Ft as the actual and forecasted values of the time series for period t, respectively, the
forecast error for period t is
et = Yt – Ft ………….. (1)
That is, the forecast error for time period t is the difference between the actual and the
forecasted values for period t.
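As a quick numerical illustration of equation (1), the snippet below computes the forecast errors for a few hypothetical periods and summarizes them with the mean absolute error (all figures are invented for illustration):

```python
# Forecast error for period t: e_t = Y_t - F_t
actuals = [110, 115, 120, 118]    # Y_t: actual values (hypothetical)
forecasts = [108, 116, 117, 121]  # F_t: forecasted values (hypothetical)

errors = [y - f for y, f in zip(actuals, forecasts)]
print(errors)  # [2, -1, 3, -3]

# Mean absolute error (MAE): the average size of the errors, ignoring sign
mae = sum(abs(e) for e in errors) / len(errors)
print(mae)  # 2.25
```

A small MAE indicates that the forecasts track the actual values closely; comparing MAE across methods is one simple way to choose between them.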
Moving Averages:
The moving averages method uses the average of the most recent k data values in the time
series as the forecast for the next period.
The term moving is used because every time a new observation becomes available for the
time series, it replaces the oldest observation in the equation and a new average is computed.
To use moving averages to forecast a time series, we must first select the order k, or number
of time series values to be included in the moving average.
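The method can be sketched in a few lines of Python (the sales figures below are invented for illustration):

```python
def moving_average_forecast(series, k):
    """Forecast the next period as the mean of the most recent k observations."""
    if len(series) < k:
        raise ValueError("series must contain at least k observations")
    return sum(series[-k:]) / k

sales = [20, 21, 19, 23, 22, 24]  # hypothetical monthly sales
# Order k = 3: average the three most recent values
print(moving_average_forecast(sales, k=3))  # (23 + 22 + 24) / 3 = 23.0
```

When the next actual value arrives, it is appended to the series and the window shifts forward by one period, which is what makes the average "moving".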
Weighted Moving Averages:
In the moving averages method, each observation in the moving average calculation receives
equal weight. One variation, known as weighted moving averages, involves selecting a
different weight for each data value in the moving average and then computing a weighted
average of the most recent k values as the forecast.
A moving average forecast of order k is simply a special case of the weighted moving averages
method in which each weight equals 1/k; for example, a moving average forecast of order
k = 3 uses a weight of 1/3 for each of the three most recent values.
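A minimal sketch of the idea follows; the weights and sales figures are illustrative, with weights ordered oldest to newest and summing to 1:

```python
def weighted_moving_average_forecast(series, weights):
    """Forecast the next period as a weighted average of the last k observations.

    `weights` runs oldest to newest and should sum to 1.
    """
    k = len(weights)
    recent = series[-k:]
    return sum(w * y for w, y in zip(weights, recent))

sales = [20, 21, 19, 23, 22, 24]  # hypothetical monthly sales
# Heavier weight on the most recent observation
print(weighted_moving_average_forecast(sales, [1/6, 2/6, 3/6]))

# Equal weights of 1/k reproduce the ordinary moving average
print(weighted_moving_average_forecast(sales, [1/3, 1/3, 1/3]))  # ~23.0
```

Choosing larger weights for recent values makes the forecast respond faster to new information, at the cost of smoothing out less of the random fluctuation.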
Exponential Smoothing:
Exponential smoothing also uses a weighted average of past time series values as a forecast;
it is a special case of the weighted moving averages method in which we select only one
weight: the weight for the most recent observation. The weights for the other data values are
computed automatically and become smaller as the observations move farther into the past.
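A minimal sketch of simple exponential smoothing, using the standard update F(t+1) = α·Yt + (1 − α)·Ft and seeding the first forecast with the first observation (the value of α and the data are illustrative):

```python
def exponential_smoothing_forecast(series, alpha):
    """Return the forecast for the period following the series.

    Update rule: F_{t+1} = alpha * Y_t + (1 - alpha) * F_t,
    with the first forecast seeded as F_2 = Y_1.
    """
    forecast = series[0]
    for y in series[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast

sales = [20, 21, 19, 23, 22, 24]  # hypothetical monthly sales
# A larger alpha reacts faster to recent observations
print(exponential_smoothing_forecast(sales, alpha=0.2))
print(exponential_smoothing_forecast(sales, alpha=0.8))
```

With α = 1 the forecast is simply the last observed value; a smaller α smooths more heavily, because the implicit weights on older observations decay more slowly.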