
SHS Web of Conferences 144, 03009 (2022) https://doi.org/10.1051/shsconf/202214403009
STEHF 2022

A Comparison of Greedy Algorithm and Dynamic Programming Algorithm
Xiaoxi Chen*
High School Affiliated to Renmin University of China, Beijing, China

*Corresponding author. Email: 3408663616@qq.com

© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).

ABSTRACT: Greedy algorithms and dynamic programming are two standard approaches to optimization problems. Because of their simplicity, intuitiveness, and efficiency, they are frequently employed in a wide variety of circumstances. This paper compares the connection and the differences between the two algorithms by introducing their essential ideas. The knapsack problem, a classic problem in computer science, is used to illustrate the comparison: in solving the knapsack problem, the greedy algorithm is faster, but the resulting solution is not always optimal; dynamic programming yields an optimal solution, but solves the problem more slowly. The research compares the application properties and application scope of the two strategies, with the greedy approach being the better choice for the partial knapsack problem.

1. INTRODUCTION

Computational algorithms have developed rapidly to satisfy the need for large-scale data processing and the solution of a wide range of practical problems. Several models, including linear programming, dynamic programming, and the greedy strategy, have been applied to computation, resulting in efficient algorithms for a wide range of practical issues. Among computational algorithms, dynamic programming algorithms and greedy algorithms are key core design principles; they share some commonalities but also differ significantly. The purpose of this research is to give programmers a picture of the practical performance of both algorithms when they choose an optimized method to implement a function. With this support, programs using these two algorithms will have cleaner data organization and clearer logic.

The optimal decision of a process has the property that, regardless of its beginning state and initial decision, its future strategies must constitute an optimal strategy for the process that takes the state established by the first decision as its starting state. In other words, the sub-strategies of an optimal strategy must likewise be optimal with respect to their beginning and final states. In general, fine-tuning an algorithm is required to attain higher performance. In some circumstances, however, even after adjusting the algorithm the performance still falls short of the requirements, necessitating another way to tackle the problem.

The partial knapsack problem and the 0/1 knapsack problem are discussed in this work, as well as the differences between greedy and dynamic programming algorithms. Practitioners in the field of computing are constantly faced with choosing between different algorithms in their programming, and this paper helps them choose a proper and efficient algorithm to complete the tasks in their programming work.

2. STATING THE KNAPSACK PROBLEM

In the knapsack problem, you are given n items (one copy of each) and a knapsack. Item i has a weight of wi and a value of vi, and the knapsack has a capacity of C. The task is to choose items so that the objects placed in the knapsack have the greatest total value; loading a fraction xi of item i contributes a value of vi·xi. There are two types of knapsack problems, formalized below:
1. Partial knapsack problem. Items can be split into portions during the selection process, i.e., fractions 0 < xi < 1 are allowed (solved by a greedy algorithm).
2. 0/1 knapsack problem. Similar to the partial knapsack problem, except that each item is either not loaded or fully loaded, i.e., xi = 1 or 0 (solved by a dynamic programming algorithm) [1].
3. GREEDY ALGORITHMS

For problems of this type, a series of locally optimal choices, termed greedy choices, can lead to the global optimal solution (this is the main difference between greedy algorithms and dynamic programming).

3.1. Definition of greedy strategy

A greedy strategy is a method of solving a problem from its initial state by making a sequence of greedy choices to arrive at an optimal (or at least good) solution. The greedy strategy always makes the choice that seems optimal at the moment; that is, it does not consider the problem as a whole, but makes a choice that is only locally optimal in some sense. The characteristics of many problems nonetheless ensure that they can be solved optimally, or near-optimally, with the greedy strategy. (Note: the greedy algorithm does not produce an overall optimal solution for every problem, but it does for a fairly wide range of problems, and even when it does not, the solution is usually a good approximation to the optimal one.) [2].

By using a top-down, iterative approach to make successive greedy choices, each greedy choice reduces the problem to a smaller sub-problem. To determine whether a specific problem has the greedy-choice property, we must prove that the greedy choices made at each step ultimately lead to an optimal solution of the problem. It is often possible to first show that an overall optimal solution starts with a greedy choice, and that once the greedy choice is made, the original problem reduces to a similar sub-problem of smaller size; it is then shown by mathematical induction that every step of greedy choice leads to an overall optimal solution.

3.2. Practical application of greedy algorithms

3.2.1. The fundamental strategy of the greedy method

Start from a certain initial solution of the problem, and approach the given goal step by step to find a better solution as quickly as possible. When a step is reached at which no further progress can be made, the algorithm stops.

3.2.2. Problems with the greedy algorithm

The final solution is not guaranteed to be optimal.
It cannot be used to find a maximum or minimum solution in general.
It can only find feasible solutions that satisfy certain constraints.

Table 1. Example instance of the knapsack problem.

Items   A    B    C    D    E    F
Weight  10   30   40   20   10   20
Value   50   45   60   20   30   40
V/W     5    1.5  1.5  1    3    2

3.2.3. The process of implementing the algorithm

Start from an initial solution of the problem; find a solution element of a feasible solution whenever it is possible to move further towards the given overall goal; finally, combine all solution elements into a feasible solution of the problem. A generic sketch of this loop is given below.
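As an illustration only (this code is not from the paper), the three steps can be organized into a short Python sketch, where candidates, feasible, and select_best are hypothetical placeholders for the problem-specific parts:

def greedy(candidates, feasible, select_best):
    # Generic greedy skeleton: start from an (empty) initial solution,
    # repeatedly pick the locally best remaining element, and keep it
    # only if the extended partial solution is still feasible.
    solution = []
    remaining = list(candidates)
    while remaining:
        best = select_best(remaining)       # the greedy criterion
        remaining.remove(best)
        if feasible(solution + [best]):     # can we go further toward the goal?
            solution.append(best)           # accept this solution element
    return solution                         # combination of all accepted elements

For the knapsack example below, feasible would check the capacity constraint and select_best would implement one of the three metrics.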

3.2.4. Example of a knapsack problem (partial knapsack problem)

Suppose there are 6 items, n = 6, and W = 90. Table 1 gives the weight, value, and value per unit weight of the items.

To get the optimal solution, the knapsack must be loaded to its maximum capacity, i.e., the selected weights must sum to 90. To solve this problem with a greedy strategy, first choose a metric, i.e., the criterion to follow in each selection. This problem can be handled according to the criteria of maximum value, minimum weight, and maximum value per unit weight, respectively [3]. The analysis is as follows.

Metric according to maximum value priority:
The first choice among the remaining optional items is the one with the highest value, i.e., item C, whose weight of 40 is smaller than the total capacity of the knapsack; items A and B follow. The next item by value no longer fits in the remaining capacity, so it cannot be put in. The corresponding solution list is:

x = (1, 1, 1, 0, 0, 0)

Metric according to minimum weight priority:
Each time, select the item with the least weight among the remaining available items. That is, first select item 1 with a weight of 10, which is smaller than the total knapsack capacity of 90, and then select items 5, 4, 6, and 2 in turn. The total weight and total value of the selected items are, respectively:

C = 10 + 10 + 20 + 20 + 30 = 90
V = 50 + 30 + 20 + 40 + 45 = 185

The corresponding solution list is:

x = (1, 1, 0, 1, 1, 1)

After comparison, the total value obtained by selecting items according to the minimum-weight criterion is greater than the total value obtained according to the maximum-value criterion; that is, for this instance the weight criterion is superior to the value criterion.

Metric according to maximum value per unit weight priority:
Each time, choose among the remaining optional items the one with the largest unit value. After analysis, the order of selection is items 1, 5, 6, and 2, at which point the loaded weight of the knapsack is:

C = 10 + 10 + 20 + 30 = 70

Since this is less than the total knapsack capacity of 90, half of item C can also be put in, bringing the total weight to 90; the total value of the knapsack is then:

V = 50 + 30 + 40 + 45 + 30 = 195

After comparison, this method is optimal.

Therefore, the selection strategy for the partial knapsack problem is to prioritize items by value per unit weight: first sort the n items by unit value, then greedily take the items in decreasing order of unit value. A runnable sketch of this strategy is given below.
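To make the example concrete, here is a Python sketch of this strategy (the function name and the item tuples are our illustration, not part of the paper); run on the data of Table 1, it takes items 1, 5, 6, and 2 whole and then half of item C, reproducing the total value of 195:

def fractional_knapsack(items, capacity):
    # Greedy partial knapsack: take items in decreasing value-per-weight
    # order, splitting the first item that no longer fits completely.
    order = sorted(items, key=lambda it: it[2] / it[1], reverse=True)
    total_value = 0.0
    solution = {}                              # item name -> fraction taken
    for name, weight, value in order:
        if capacity <= 0:
            break
        take = min(1.0, capacity / weight)     # whole item, or the fitting fraction
        solution[name] = take
        total_value += take * value
        capacity -= take * weight
    return solution, total_value

# Data of Table 1: (name, weight, value); knapsack capacity 90.
items = [("A", 10, 50), ("B", 30, 45), ("C", 40, 60),
         ("D", 20, 20), ("E", 10, 30), ("F", 20, 40)]
print(fractional_knapsack(items, 90))
# ({'A': 1.0, 'E': 1.0, 'F': 1.0, 'B': 1.0, 'C': 0.5}, 195.0)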


4. DYNAMIC PROGRAMMING

4.1. Principles of dynamic programming

The basic setting of dynamic programming is the multi-stage problem: a problem that can be divided into multiple interconnected stages with a chain structure in a specific order. At each stage a decision must be made; the decision of the previous stage directly affects the state of the next stage, which depends on the result of the previous decision. The decisions of all stages eventually form a decision sequence, and solving a multi-stage decision optimization problem means finding a decision sequence that makes a certain indicator function of the problem optimal [5].

Figure 1. Multi-stage decision strategy.

As shown in Figure 1, solving problem A depends on solving several sub-problems of phase B, solving phase B depends on solving several problems of phase C, and so on until all problems are solved.

4.2. Scope of application of dynamic programming

The dynamic programming algorithm applies only within a certain range and under certain prerequisite conditions; outside these conditions it becomes ineffective. Decision optimization problems that can be solved by dynamic programming methods must satisfy the optimal substructure property (the optimality principle) and the non-aftereffect property of the state.

4.2.1. Optimal substructure property

The first step in solving a multi-stage decision problem with a dynamic programming algorithm is often to describe the structure of the optimal solution. A problem is said to satisfy the optimal substructure property if the optimal solution of the problem is composed of optimal solutions of its sub-problems. The optimal substructure property is the basis of dynamic programming algorithms; any problem whose solution structure does not satisfy it cannot be solved by dynamic programming methods. In general, the state transfer equation of the problem can be derived from its optimal substructure. Because the optimal solution is composed of the optimal solutions of sub-problems, it is important to ensure that the sub-problems used to construct the optimal solution are solved during the run.

4.2.2. Non-aftereffect property

When a multi-stage decision problem is divided into stages, the states of the stages preceding a given stage do not influence the decision made in the current stage. The current stage can make decisions about the future development of the process only through the current state, without depending on the states of previous stages; this is called the non-aftereffect (posteriority-free) property.

Therefore, the key condition for a problem to be solvable by a dynamic programming algorithm is that its states satisfy the non-aftereffect property. An effective way to check this is to model a graph with the phase states as vertices and the decision relationships between phases as directed edges, and then determine whether the graph can be topologically ordered. If it cannot, the graph contains loops, the non-aftereffect property fails between the states, and the problem cannot be solved by dynamic programming. A sketch of this test is given below.
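This test is a cycle check on the stage-state graph. A minimal Python sketch, assuming the graph is given as an adjacency list, can use Kahn's algorithm:

from collections import deque

def can_topologically_order(graph):
    # Kahn's algorithm: repeatedly remove vertices with no incoming edges.
    # The graph is loop-free (non-aftereffect holds) iff all vertices are removed.
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] = indegree.get(v, 0) + 1
    queue = deque(u for u, d in indegree.items() if d == 0)
    ordered = 0
    while queue:
        u = queue.popleft()
        ordered += 1
        for v in graph.get(u, []):
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return ordered == len(indegree)

# A chain of stages can be ordered; a back edge creates a loop.
print(can_topologically_order({"A": ["B"], "B": ["C"], "C": []}))     # True
print(can_topologically_order({"A": ["B"], "B": ["C"], "C": ["A"]}))  # False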

4.3. Characteristics of dynamic programming problems

The effectiveness of the dynamic programming algorithm depends on an important property of the problem itself: the sub-problem overlap property.

When an algorithm computes the same sub-problem repeatedly during a run, the sub-problems of the problem overlap. Dynamic programming takes advantage of this overlap by computing each sub-problem the first time it is encountered and caching its solution, so that when the algorithm encounters the same sub-problem again it does not recompute it but directly looks up the cached result. The key to the dynamic programming algorithm is that it stores the various states of the solution during the process, avoiding repeated computation of sub-problems. In contrast, the divide-and-conquer approach does not cache the solutions of sub-problems; it computes every sub-problem anew each time, so the sub-problems it solves efficiently must not overlap. If the sub-problems do overlap, divide-and-conquer performs a great deal of repeated calculation, making the algorithm inefficient in time. Overlapping sub-problems are not a necessary condition for applying dynamic programming, but the time efficiency of a dynamic programming algorithm depends on the degree to which sub-problems overlap, as the sketch below illustrates.
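The effect of caching is easy to demonstrate with the classic Fibonacci recursion (a standard illustration of overlapping sub-problems, not an example from the paper):

from functools import lru_cache

calls = 0

def fib_plain(n):
    # Naive recursion: overlapping sub-problems are recomputed many times.
    global calls
    calls += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Each distinct sub-problem is computed once, then read from the cache.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_plain(25)
print(calls)                            # 242785 recursive calls without caching
print(fib_cached(25))                   # 75025
print(fib_cached.cache_info().misses)   # only 26 distinct sub-problems computed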
4.4. General steps of dynamic programming algorithm design

Define the sub-problems: the problem is divided into several sub-problems based on its characteristics.


The divided sub-problems are sequentially related, and the current sub-problem can be solved once relatively smaller sub-problems have been solved.

Select a state: the objective situation of a sub-problem is represented by its state, which must satisfy the non-aftereffect property.

Determine the state transfer equation: state transfer is the process of determining the state of the current stage from the state of the previous stage and the decision of the current stage. Decisions are directly related to state transfer, and the state transfer equation for the problem can be written naturally once the range of decisions available at each stage is determined.

Find the boundary conditions: the initial or terminal conditions for iterating the state transfer equation.

There is no standard model of dynamic programming that can be used for all problems; the algorithmic model varies from problem to problem, so problem-specific analysis is needed. When designing an algorithm with dynamic programming ideas, it is not necessary to stick too closely to the design model; departing from it often yields unexpectedly good results.

4.5. Examples of Dynamic Programming Algorithms: 0/1 Knapsack Problem

When solving a real problem with a dynamic programming algorithm, the first step is to build a dynamic programming model, which generally requires the following steps:
Analyze the problem and define the characteristics of the optimal solution.
Divide the problem into phases and define the calculation objective of each phase.
Solve the phase conclusions, form a decision mechanism, and store the knowledge set.
Construct an optimal solution based on the information obtained when calculating the optimal value.
Design the program and write the corresponding code.

The optimal solution selects n items (0 ≤ n ≤ N) so that V is maximal. The knapsack problem is an N-stage problem with j sub-problems in each stage; the state (i, j) describes the sub-problem of deciding over the first N = i items with capacity C = j, and the decision function is f(i, j). The decision, which is the core of the algorithm, is shown in the equation below, where wi and vi are the weight and value of the ith item:

$$f(i, j) = \begin{cases} \max\{f(i-1, j),\ f(i-1, j-w_i)+v_i\}, & w_i \le j, \\ f(i-1, j), & w_i > j. \end{cases}$$

When wi ≤ j, f(i, j) takes the maximum of f(i − 1, j − wi) + vi and f(i − 1, j); when wi > j, the ith item cannot be put in, so f(i, j) = f(i − 1, j). In the equation, f(i − 1, j) and f(i − 1, j − wi) have already been solved, so f(i, j) can be calculated [6].
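A direct bottom-up Python implementation of this recurrence (a sketch under the notation above, tested on the data of Table 1 with capacity 90) might look as follows:

def knapsack_01(weights, values, capacity):
    # f[i][j]: maximum value achievable with the first i items and capacity j.
    n = len(weights)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(capacity + 1):
            if wi <= j:
                # Either leave item i, or take it and add its value.
                f[i][j] = max(f[i - 1][j], f[i - 1][j - wi] + vi)
            else:
                f[i][j] = f[i - 1][j]    # item i does not fit
    return f[n][capacity]

# Items A-F from Table 1: the 0/1 optimum is 185 (every item except C),
# below the 195 attainable in the partial problem, where half of C fits.
print(knapsack_01([10, 30, 40, 20, 10, 20],
                  [50, 45, 60, 20, 30, 40], 90))   # 185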
5. COMPARISON OF DYNAMIC PROGRAMMING ALGORITHM AND GREEDY ALGORITHM

Both dynamic programming algorithms and greedy algorithms are classical recursive algorithms for solving optimization problems, and both derive the global optimal solution from local optimal solutions, which makes them similar. However, there are significant differences between them.

Each decision step of the greedy algorithm cannot be changed and becomes a definitive step of the final solution, whose value is the accumulation of the values Vi fixed by the successive greedy choices, as in the equation below:

$$f(x_n) = \sum_{i=1}^{n} V_i$$

The global optimal solution of the dynamic programming algorithm must contain some local optimal solutions, but the optimal solution of the current state does not necessarily contain the local optimal solution of the previous state. Unlike the greedy algorithm, dynamic programming therefore needs to calculate the optimal solution of each state (each step) and save it for reference in subsequent state calculations.

The greedy algorithm outperforms the dynamic programming algorithm in both time complexity and space complexity, but the "greedy" decision rule (the decision basis, i.e., the selection of the Vi function) is difficult to determine, and different decision bases may lead to different conclusions, affecting the quality of the resulting solution.

The dynamic programming algorithm can solve eligible problems in limited time, but it requires a large amount of space because it must temporarily store computed results. Although a single sub-problem solution can be shared by all problems containing that sub-problem, the advantage of dynamic programming comes at the expense of space. The space pressure is heightened by the need for efficient access to existing results and by the fact that the stored data cannot easily be compressed. The high time efficiency of dynamic programming typically shows up on large test data. Therefore, how to resolve the space-overflow problem without affecting running speed is a key open issue for dynamic programming; one standard technique is sketched below.
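One standard space-saving technique (a general observation, not one proposed in this paper) exploits the fact that row i of the recurrence in Section 4.5 reads only row i − 1, so the whole table can be collapsed into a single array that is updated with the capacity index decreasing:

def knapsack_01_1d(weights, values, capacity):
    # One-row 0/1 knapsack DP: O(C) space instead of O(N*C).
    f = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate j downward so f[j - w] still holds the previous row's value.
        for j in range(capacity, w - 1, -1):
            f[j] = max(f[j], f[j - w] + v)
    return f[capacity]

print(knapsack_01_1d([10, 30, 40, 20, 10, 20],
                     [50, 45, 60, 20, 30, 40], 90))   # 185, in O(C) space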
6. CONCLUSION

As with greedy algorithms, in dynamic programming the solution to a problem can be viewed as the result of a series of decisions. The difference is that in a greedy algorithm an irrevocable decision is made every time the greedy criterion is applied, whereas dynamic programming also examines whether each optimal sequence of decisions contains an optimal subsequence. When a problem has an optimal substructure, we think of using dynamic programming to solve it, but some problems can be solved more simply and efficiently by always making the choice that seems best at the moment.


The choices made by the greedy algorithm can depend on previous choices, but never on future choices or on the solutions of sub-problems, which gives the algorithm a speed advantage in both coding and execution. If a problem can be solved by several methods, the greedy algorithm should be among the best choices. However, the greedy algorithm does not provide the overall optimal solution, or even the most desirable approximation, for every problem, and its area of application is much narrower than that of the backtracking method, so it is important to judge the right occasion for its application.
This paper has achieved some results on the time-efficient application of the dynamic programming algorithm and the greedy algorithm, but further improvement and refinement are needed. The next phase of research will focus on combining the dynamic programming algorithm and the greedy algorithm with other optimization algorithms, in order to design high-performance programming models that bring the advantages of both into play and enhance the computational efficiency of the algorithms, so that they can adapt to larger-scale computations. This is also the direction of improvement and development for these two strategies.

REFERENCES
1. Chu, P., & Beasley, J. (1998). A Genetic Algorithm for the Multidimensional Knapsack Problem. Journal of Heuristics, 4(1), 63–86. https://doi.org/10.1023/a:1009642405419
2. Martello, S., & Toth, P. (1987). Algorithms for Knapsack Problems. North-Holland Mathematics Studies, 132, 213–257. https://doi.org/10.1016/S0304-0208(08)73237-7
3. Vince, A. (2001). A framework for the greedy algorithm. Discrete Applied Mathematics. https://doi.org/10.1016/S0166-218X(01)00362-6
4. Wolsey, L. A. (1982). An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4), 385–393. https://doi.org/10.1007/bf02579435
5. Eddy, S. R. (2004). What is dynamic programming? Nature Biotechnology, 22(7), 909–910. https://doi.org/10.1038/nbt0704-909
6. Rahwan, T., & Jennings, N. R. (2008). An Improved Dynamic Programming Algorithm for Coalition Structure Generation. https://aamas.csc.liv.ac.uk/Proceedings/aamas08/proceedings/pdf/paper/AAMAS08_0192.pdf
