
Prescriptive Analytics

Prescriptive analytics is the use of advanced processes and tools to analyze data and content to
recommend the optimal course of action or strategy for moving forward.

Simply put, it seeks to answer the question, “What should we do?”

Prescriptive analytics is the frontier of business analytics and is used to identify the best action
(or optimal solution) for a problem. Operations Research (OR) techniques are frequently used for
finding the optimal solution to a problem. Descriptive analytics analyses past data to understand
trends present in the data (it answers the question: What happened?). Predictive analytics techniques
predict what will happen to key performance indicators in the future. Prescriptive analytics assists
the decision maker in identifying the best action (optimal solution), given the problem context. That is,
prescriptive analytics, as the name suggests, prescribes the best solution or decision/action for the
problem. Note that decisions or actions can be derived from descriptive and predictive analytics
as well. For example, using predictive analytics, retailers such as Amazon and Flipkart can predict what
a customer is likely to buy in the future and design product recommendations. The difference between
decisions arrived at using descriptive/predictive analytics and prescriptive analytics is that a prescriptive
analytics algorithm tries to arrive at the best decision (optimal solution) based on an objective function
(sometimes more than one objective function) and a list of constraints. Operations research
techniques such as linear programming, integer programming, goal programming, non-linear
programming, and meta-heuristics are used to prescribe the optimal solution to a problem.

Optimization model

• An optimization model is a mathematical or intelligence-based approach used to solve complex problems by analyzing various objectives and constraints to achieve optimal solutions.

LINEAR PROGRAMMING

Linear programming is one of the important techniques in operations research and prescriptive
analytics. Linear programming is used when the objective function and the constraints of the problem
can be expressed as linear functions of the decision variables. The use of linear programming dates back
to World War II, during which manpower and logistics related problems were encountered by the
US military, and attempts to solve these problems were carried out using linear programming techniques
(Dantzig, 1963). Immediately after World War II, several commercial applications of linear programming
were identified, which triggered further development of the field and of solution approaches.

LINEAR PROGRAMMING (LP) MODEL BUILDING

The first stage in LP is formulating the problem as an LP problem. The following steps are used in formulating
a problem as a linear programming problem (LPP):

1. Identify decision variables: Given a problem, we have to first identify the decisions to be
taken by the decision maker. The decisions to be taken are expressed through decision variables.

2. Identify the objective function: The primary goal of the decision maker is expressed through the
objective function, which is a linear function of the decision variables. The goal is either to minimize or
maximize the objective function value.

3. Identify constraints: Constraints are restrictions, such as the availability of resources, that a linear
programming problem should satisfy.

4. Identify implicit constraints: Implicit constraints are conditions that the model has to satisfy. For
example, the number of products to be produced cannot take negative values, so this variable
can take only non-negative values. Also, all variables need to be non-negative in the simplex algorithm.

5. Solve the problem: Once the objective function and constraints are identified, the problem can be
solved using algorithms such as the simplex algorithm and interior point algorithms.

6. Perform sensitivity analysis: The values of the objective function coefficients and resource availability
may change due to several factors such as market conditions. It is important to understand the impact
of changes in the objective function coefficients and resource availability on the optimal solution; this
is achieved through sensitivity analysis.

• Mathematically, linear programming optimizes (minimizes or maximizes) a linear objective function of several variables, subject to given conditions/constraints that the variables must satisfy.

• LP problems can be solved using different techniques such as the graphical method, the simplex method, and Karmarkar's method.
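
As a minimal sketch, an LP can also be solved programmatically, for example with SciPy's linprog routine. The product-mix numbers below are hypothetical and are not taken from the text; they simply illustrate an objective function, two resource constraints, and the implicit non-negativity constraints.

```python
# A minimal sketch of solving a small, hypothetical LP with SciPy.
# Maximize 40*x1 + 30*x2 (profit) subject to
#   x1 + x2 <= 40    (resource 1 availability)
#   2*x1 + x2 <= 60  (resource 2 availability)
#   x1, x2 >= 0      (implicit non-negativity constraints)
from scipy.optimize import linprog

c = [-40, -30]                   # linprog minimizes, so negate the profit coefficients
A_ub = [[1, 1],                  # resource usage per unit of x1 and x2
        [2, 1]]
b_ub = [40, 60]                  # resource availabilities
bounds = [(0, None), (0, None)]  # non-negativity of the decision variables

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Optimal plan:", res.x)        # x1 = 20, x2 = 20
print("Maximum profit:", -res.fun)   # 1400 (negate back to a maximization value)
```

Changing the objective coefficients or the right-hand-side availabilities and re-solving is a simple way to explore the sensitivity questions raised in step 6 above.
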

Important terms

• Decision variables are the variables in terms of which the objective function is expressed. Their values determine the output, and the decision maker controls the value of the objective function by choosing the values of the decision variables.

• Constraints are a set of restrictions or situational conditions. Constraints restrict the values that the decision variables can take.

• The objective function is a profit or cost function that is to be maximized or minimized. It is the main target of the decision-making process.

• The optimal solution is the feasible solution that gives the best value of the objective function.

Decision Theory
Decision theory deals with methods for determining the optimal course of action when a number of
alternatives are available and their consequences cannot be forecast with certainty.

The scientific approach to the study of managerial problems incorporates a problem solving process.
The text presents a seven-step process:

1. Identify and define the problem.

2. Determine the set of alternative solutions.

3. Determine the criterion or criteria that will be used to evaluate the alternatives.

4. Evaluate the alternatives (using an appropriate quantitative method or model).

5. Choose an alternative.

6. Implement the selected alternative (the decision).

7. Evaluate the results and determine if a satisfactory solution has been obtained.

Decision-making is concerned with the first five steps. The first three steps are concerned with
structuring the problem, and the next two with analysis. When the manager adds qualitative
considerations to the selection process, and follows through with implementation and evaluation,
the problem solving process is complete.

Elements of the problem

Possible alternatives (actions, acts),

Possible events (states, outcomes of a random process),

Probabilities of these events,

Consequences associated with each possible alternative-event combination (payoffs), and

The criterion (decision rule) according to which the best alternative is selected

Decision analysis is concerned with selecting an option or alternative course of action (the decision)
given prior knowledge of its outcome (called a payoff) for various future scenarios (called states of
nature or events). The decision-maker has control over the process of selecting an alternative course
of action, but not over the states of nature, at least not in the short run. Let's illustrate these terms
with an example.

Suppose a manufacturer has three alternative courses of action for the next production period. They
can make their product (we will use the symbol d1 to represent the first decision alternative), buy
their product from another manufacturer and sell it to their customers (d2), or do nothing in
the next production period (d3). Suppose further that the manufacturer has a simple forecast for the
next production period: demand will be low (state of nature symbol s1) or high (s2). The final input
for the structure of the decision analysis problem is the outcome or payoff resulting from each state
of nature/decision combination.

A convenient structure for displaying the decision alternatives, states of nature and payoffs is a
payoff table.
Table 1.2.1

                          States of Nature
Decision Alternatives     Low Demand, s1     High Demand, s2

Make Product, d1          ($20,000)          $90,000
Buy Product, d2           $10,000            $70,000
Do Nothing, d3            $5,000             $5,000

The payoff table shows, for example, that making the product leads to a profit of $90,000 when the
demand turns out to be high, or a loss of $20,000 if demand is low.

To continue with the table, buying the product shows that the manufacturer can avoid a loss,
compared to making the product, if the demand turns out to be low since the manufacturer avoids
paying fixed production costs. If the demand is high, the manufacturer makes a profit, but not as
much as if they made the product since they miss the production economies of scale. If the
manufacturer does nothing, a small profit is earned from selling existing inventory that just meets a
low level of demand ( $5,000).

Another way to display the structure of the decision problem is with a decision tree.

Figure 1.2.1
In this decision tree, the node labeled A denotes a decision node, followed by its three decision
branches. The nodes labeled B, C and D denote state of nature nodes, each of which is followed by the
two state of nature branches. Payoffs are shown at the terminal end of each state of nature branch.

Structuring the decision problem, as in formulation of all quantitative models, establishes the first
major benefit of these formal methods of decision making. That is, the decision-maker is required to
formally consider many of the important aspects of a decision problem during the problem-
structuring phase. These aspects may be overlooked when the decision-maker uses "gut feel" or
some other purely qualitative technique to make decisions.

After the problem is structured, the next step is selecting one of the possible decision alternatives according to
a predetermined selection criterion. There are two major approaches in decision analysis, depending on
the availability of information on the states of nature. One approach is called decision making
without probabilities, and the other, decision making with probabilities.

Decision Making without Probabilities

In this approach, the decision-maker has no information concerning the relative likelihood of each of
the states of nature. It is sometimes called "decision making under uncertainty." There are three classic
criteria used in decision making without probabilities, as described in the next three
subsections.

Optimistic Criterion
In this strategy, the decision-maker evaluates each decision alternative in terms of the best payoff
that can occur. The strategy is best illustrated in the payoff table repeated below.

Table 1.3.1

                          States of Nature                        Maximum
Decision Alternatives     Low Demand, s1     High Demand, s2      Payoff

Make Product, d1          ($20,000)          $90,000              $90,000
Buy Product, d2           $10,000            $70,000              $70,000
Do Nothing, d3            $5,000             $5,000               $5,000

Note that an extra column is added to record the maximum payoff for each decision alternative
(maximum payoff in each row of the table). The decision-maker employing the optimistic criterion
then selects the decision alternative associated with the maximum of the maximum payoffs. Since
$90,000 is the maximum of the maximum payoffs, d1 is the selected decision. This strategy is
sometimes called the maximax strategy. The "eternal optimist" would be one using this approach -
it captures the philosophy of decision-makers who accept the risk of large losses in order to make
substantial gains.
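
As a small sketch, the maximax choice can be computed directly. The payoff figures are those of Table 1.3.1; the Python representation of the table is ours.

```python
# Optimistic (maximax) criterion on the payoff table of Table 1.3.1.
payoffs = {
    "make (d1)":    {"low": -20_000, "high": 90_000},
    "buy (d2)":     {"low":  10_000, "high": 70_000},
    "nothing (d3)": {"low":   5_000, "high":  5_000},
}

# Best payoff per alternative, then the alternative with the largest best payoff.
maximax_choice = max(payoffs, key=lambda d: max(payoffs[d].values()))
print(maximax_choice)  # "make (d1)" -- $90,000 is the maximum of the maxima
```
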
Conservative Criterion
In this strategy, the decision-maker evaluates each decision alternative in terms of the worst payoff
that can occur. The payoff table is repeated here with a new column to record the minimum payoffs
for each decision alternative (minimum payoff for each row in the table).

Table 1.3.2

                          States of Nature                        Minimum
Decision Alternatives     Low Demand, s1     High Demand, s2      Payoff

Make Product, d1          ($20,000)          $90,000              ($20,000)
Buy Product, d2           $10,000            $70,000              $10,000
Do Nothing, d3            $5,000             $5,000               $5,000

This strategy, which is sometimes called the maximin strategy, then selects that decision alternative
associated with the maximum of the minimum payoffs. In this situation, the decision-maker would
select d2, buy the product, since $10,000 is the maximum of the minimum payoffs. Some believe this
strategy is associated with "eternal pessimists", but to be fair, it is actually a conservative strategy
used by decision-makers who seek to avoid large losses.
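
The same payoff table gives the conservative choice with one change: take each row's minimum instead of its maximum. A brief sketch, again using our own representation of the table:

```python
# Conservative (maximin) criterion on the same payoff table.
payoffs = {
    "make (d1)":    {"low": -20_000, "high": 90_000},
    "buy (d2)":     {"low":  10_000, "high": 70_000},
    "nothing (d3)": {"low":   5_000, "high":  5_000},
}

# Worst payoff per alternative, then the alternative with the largest worst payoff.
maximin_choice = max(payoffs, key=lambda d: min(payoffs[d].values()))
print(maximin_choice)  # "buy (d2)" -- $10,000 is the maximum of the minima
```
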

Compromise Minimax Regret Strategy


The third classic strategy for decision making without probabilities is called the minimax regret strategy, and it is
neither purely optimistic nor conservative. This approach starts by converting the payoff table
into a regret table. The regret table looks at each state of nature, one at a time, and asks, "If I knew
ahead of time that state of nature s1 would occur, what would I do?" The answer, to maximize profit,
would be "buy the product (d2)", since that leads to the highest profit, $10,000. If the decision-
maker selected d2 and s1 occurred, there would be no regret. On the other hand, if the decision-
maker selected d3, there would be a regret or opportunity loss of $5,000 (the $10,000 that could have
been gained minus the $5,000 that was gained). Similarly, there would be a regret of $30,000 if the
decision-maker selected d1 and state of nature s1 occurred (the $10,000 that could have been gained
minus the $20,000 loss that was incurred).

The regret numbers for s2 are prepared in a similar fashion. "If I knew ahead of time that state of
nature s2 would occur, what would I do?" The answer, again to maximize profit, is "make the product
(d1)," since that leads to the highest profit for s2, $90,000. If the decision-maker selected d1 and
s2 occurred, there would be no regret. On the other hand, if the decision-maker selected d2, there
would be an opportunity loss or regret of $20,000 ($90,000 that could have been gained minus
$70,000 that was gained). Likewise, if the decision-maker selected d3 there would be a regret of
$85,000 ($90,000 minus $5,000). Table 1.3.3 illustrates the completed regret table.
Table 1.3.3

                          States of Nature                        Maximum
Decision Alternatives     Low Demand, s1     High Demand, s2      Regret

Make Product, d1          $30,000            $0                   $30,000
Buy Product, d2           $0                 $20,000              $20,000
Do Nothing, d3            $5,000             $85,000              $85,000

We then create a new column, "Maximum Regret," to record the maximum regret value associated with each
decision alternative (the maximum regret in each row of the regret table). The minimax regret strategy
then selects the decision alternative associated with the minimum of the maximum regrets. In this
situation, the decision-maker would select d2, buy the product, since $20,000 is the minimum of the
maximum regrets. Some believe this strategy follows a "middle of the road" approach of minimizing losses.

Decision Making with Probabilities

In this approach, the decision-maker has information concerning the relative likelihood of each of the
states of nature. It is sometimes called "decision making under risk." The criterion used in
decision making with probabilities is to select the decision that maximizes the expected
value of the outcome. To illustrate the approach, let's refer again to the payoff table for our make-
buy example, this time adding a row for probabilities and a column for the expected value of the
decision alternatives.

Table 1.4.1

                          States of Nature                        Expected
Decision Alternatives     Low Demand, s1     High Demand, s2      Value

Make Product, d1          ($20,000)          $90,000              $51,500
Buy Product, d2           $10,000            $70,000              $49,000
Do Nothing, d3            $5,000             $5,000               $5,000

Probabilities             P(s1) = 0.35       P(s2) = 0.65

In decision analysis, it is assumed that the probabilities are long-term relative frequencies. Since they
are often simply the subjective judgment of the decision-maker, the technique is open to criticism
on account of this limitation.
The expected value (EV) for a decision alternative is the sum, over all states of nature, of the
probability of each state of nature times the corresponding payoff. For the "make product" decision:

EV(d1) = [0.35 * (-$20,000)] + [0.65 * $90,000] = $51,500

The EV of $51,500 represents the long run outcome of repeated "make product" experiments. That
is, if we could theoretically conduct the "make product" decision 100 times, 35 times we would lose
$20,000, and 65 times we would make $90,000. The weighted average of these outcomes is $51,500.
In reality, we do not conduct the experiment 100 times - we make the decision once and we are
either going to lose $20,000 or make $90,000. However, and this is very important, we use the
expected value approach to assist us in making the decision.

Note that, to abide by the laws of probability, each probability must be a real number between 0 and 1,
and the probabilities for the states of nature must sum to one. For this to hold, the
states of nature must be mutually exclusive and exhaustive - that is, there cannot also be a state of
nature called, for example, medium demand. If there were such a state of nature, it would have to be
added to the payoff table and accounted for with a third probability.

The expected values for the second and third decision alternatives are calculated in a similar fashion.
These are shown in Table 1.4.1. Following the criterion of selecting that decision alternative which
maximizes the expected value of the outcome, we select decision alternative d1 as our optimal or
best strategy.
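
A brief sketch of the expected value calculation for all three alternatives follows; the probabilities and payoffs are those of Table 1.4.1, while the code representation is ours.

```python
# Expected value of each decision alternative:
# EV(d) = sum over states of P(state) * payoff(d, state).
payoffs = {
    "make (d1)":    {"low": -20_000, "high": 90_000},
    "buy (d2)":     {"low":  10_000, "high": 70_000},
    "nothing (d3)": {"low":   5_000, "high":  5_000},
}
prob = {"low": 0.35, "high": 0.65}

ev = {d: sum(prob[s] * payoff for s, payoff in row.items()) for d, row in payoffs.items()}
print(ev)                   # {'make (d1)': 51500.0, 'buy (d2)': 49000.0, 'nothing (d3)': 5000.0}
print(max(ev, key=ev.get))  # "make (d1)" -- the choice under the expected value criterion
```
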

The expected value computations can be shown adjacent to the state of nature nodes in the decision
tree, as illustrated in Figure 1.4.1.

Figure 1.4.1
The decision tree shows the expected value computations next to each of the state of nature nodes
labeled B, C and D. Since the expected value for the "make" decision is the maximum, that decision
branch is selected as the optimum strategy. The "buy" and "do nothing" branches are pruned, as
indicated by the crosshatched lines. The decision tree does not provide additional information; it
simply presents a picture of the decision strategy with probabilities.
