A Comparison of Greedy Algorithm and Dynamic Programming Algorithm
SHS Web of Conferences 144, 03009 (2022) https://doi.org/10.1051/shsconf/202214403009
STEHF 2022
ABSTRACT: Greedy algorithms and dynamic programming are two common approaches to optimization problems.
Because of their simplicity, intuitiveness, and efficiency in addressing problems, they are frequently
employed in a variety of circumstances. This paper compares the connections and differences between the two
algorithms by introducing their essential ideas. The knapsack problem is a classic problem in computer
science. When applied to the knapsack problem, the greedy algorithm is faster, but the resulting solution is
not always optimal; dynamic programming yields an optimal solution, but solves more slowly. The research
compares the application properties and application scope of the two strategies, with the greedy approach
being the better approach for the knapsack problem considered here.
© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0
(http://creativecommons.org/licenses/by/4.0/).
The greedy algorithm does not consider the problem as a whole, but makes a choice that is only locally optimal in some sense, while the characteristics of many problems determine that the problem can be solved optimally, or nearly so, using the greedy strategy. (Note: The greedy algorithm does not produce an overall optimal solution for all problems, but it does produce an overall optimal solution for a fairly wide range of problems. Even when it does not, the solution is usually a good approximation to the optimal solution.) [2].

By using a top-down, iterative approach to make successive greedy choices, each greedy choice reduces the problem to a smaller sub-problem. To determine whether a specific problem has the greedy-choice property, we must prove that the greedy choices made at each step ultimately lead to an optimal solution to the problem. It is often possible to first show that an overall optimal solution to the problem starts with a greedy selection and that, after the greedy selection is made, the original problem reduces to a similar sub-problem of smaller size. Then, it is shown by mathematical induction that each step of greedy choice leads to an overall optimal solution to the problem.

3.2. Practical application of greedy algorithms

3.2.1. The fundamental strategy of the greedy method

Starting from a certain initial solution to the problem, approach the given goal step by step to find a better solution as fast as possible. When a step in the algorithm is reached at which no further progress can be made, the algorithm stops.

3.2.2. Problems with the greedy algorithm

The final solution is not guaranteed to be the best.
It cannot in general be used to find the maximum or minimum solution.
It can only find the range of feasible solutions that satisfy certain constraints.

3.2.3. The process of implementing the algorithm

Start from an initial solution of the problem; find a solution element of a feasible solution whenever it is possible to go further towards the given overall goal; combine all solution elements into a feasible solution of the problem.

Table 1. Examples of knapsack problem

Items   A    B    C    D    E    F
Weight  10   30   40   20   10   20
Value   50   45   60   20   30   40
V/W     5    1.5  1.5  1    3    2

To get the optimal solution, the knapsack must be loaded to its maximum capacity, i.e., W = 90. To solve this problem with a greedy strategy, first choose a metric, i.e., what criterion to follow in each selection. After analysis, this problem can be approached according to the criteria of maximum value, minimum weight, and maximum value per unit weight, respectively [3]. The analysis is as follows.

Metrics according to maximum value priority:
Each time, choose from the remaining optional items the one with the highest value: first item C, whose weight of 40 is smaller than the total capacity of the knapsack, and then items A and B, for a total weight of 80 and a total value of 155. The next most valuable item, F, would exceed the capacity, so it cannot be put in. The corresponding solution list is:

x = (1, 1, 1, 0, 0, 0)

Metrics according to minimum weight priority:
Each time, select the item with the least weight from the remaining available items, in order. That is, first select item 1 (A) with a weight of 10, which is smaller than the total capacity of the knapsack of 90, and then select items 5, 4, 6, and 2 (E, D, F, B) in turn. The total weight and total value of the selected items are, respectively:

C = 10 + 10 + 20 + 20 + 30 = 90
V = 50 + 30 + 20 + 40 + 45 = 185

The corresponding solution list is:

x = (1, 1, 0, 1, 1, 1)

After comparison, the total value obtained by selecting items according to the criterion of minimum weight is greater than the total value obtained by selecting items according to the criterion of maximum value. That is, the weight criterion is superior to the value criterion here.

Metrics according to maximum value per unit weight priority:
Each time, choose from the remaining optional items the one with the largest value per unit weight, and continue in turn. After analysis, the order of selecting items is 1, 5, 6, 2 (A, E, F, B), at which point the load of the knapsack is:

C = 10 + 10 + 20 + 30 = 70

Since this is less than the total capacity of the knapsack, 90, half of item C can also be put in, bringing the total weight to 90. At this point the total value of the knapsack is:

V = 50 + 30 + 40 + 45 + 30 = 195

After comparing, this method is optimal.

Therefore, the selection strategy for this knapsack problem is to choose items according to maximum value per unit weight: first sort the n items by unit value, and then greedily select the item with the largest unit value among the remaining available items.
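The three criteria above can be sketched as follows. This is a minimal illustration (not code from the paper), using the Table 1 data with capacity 90; as in the paper's third criterion, the last item may be taken fractionally.

```python
def greedy_knapsack(items, capacity, key, fractional=False):
    """Take items in the order given by `key`; stop at the first item that
    does not fit, optionally taking a fraction of it. Returns total value."""
    total_weight = 0.0
    total_value = 0.0
    for name, weight, value in sorted(items, key=key):
        if total_weight + weight <= capacity:
            total_weight += weight
            total_value += value
        else:
            if fractional:
                # take only the part of the item that still fits
                total_value += value * (capacity - total_weight) / weight
            break
    return total_value

# Table 1: (name, weight, value)
items = [("A", 10, 50), ("B", 30, 45), ("C", 40, 60),
         ("D", 20, 20), ("E", 10, 30), ("F", 20, 40)]

by_value = greedy_knapsack(items, 90, key=lambda it: -it[2])          # 155
by_weight = greedy_knapsack(items, 90, key=lambda it: it[1])          # 185
by_unit = greedy_knapsack(items, 90, key=lambda it: -it[2] / it[1],
                          fractional=True)                            # 195
```

The three calls reproduce the three totals computed in the text: 155 for maximum value, 185 for minimum weight, and 195 for maximum value per unit weight with half of item C.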
4. DYNAMIC PROGRAMMING

4.1. Principles of dynamic programming

The basic strategy of dynamic programming targets the multi-stage problem: a problem that can be divided into multiple interconnected stages with a chain structure in a specific order. At each stage, a decision needs to be made; the decision of the previous stage directly affects the state of the next stage, which depends on the result of the previous decision. The decisions of all stages eventually form a decision sequence, and solving a multi-stage decision optimization problem means finding a decision sequence that makes a certain indicator function of the problem optimal [5].

The sub-problems used to construct the optimal solution must be solved during the runtime.

4.2.2. Non-aftereffect property

When a multi-stage decision problem is divided into stages, the states of the stages preceding a given stage do not influence the decision made in the current stage. The current stage can only influence the future development of the process through the current state, without depending on the states of the previous stages; this is called the non-aftereffect (posteriority-free) property.

Therefore, the key condition for a problem to be solvable by a dynamic programming algorithm is that the states of the problem satisfy the non-aftereffect property. To determine whether they do, an effective method is to model a graph with the stage states as vertices and the decision relationships between stages as directed edges, and then determine whether the graph can be topologically ordered. If the graph cannot be topologically ordered, then it contains loops, the states do not satisfy the non-aftereffect property, and the problem cannot be solved by dynamic programming.
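The topological-ordering test described above can be sketched with Kahn's algorithm. This is a generic illustration (not code from the paper): the graph is orderable exactly when every vertex can be removed in in-degree order, i.e., when there is no loop among the states.

```python
from collections import deque

def is_topologically_orderable(vertices, edges):
    """Kahn's algorithm: return True iff the directed graph has no cycle,
    i.e., the stage states satisfy the non-aftereffect property."""
    indegree = {v: 0 for v in vertices}
    successors = {v: [] for v in vertices}
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    ordered = 0
    while queue:
        u = queue.popleft()
        ordered += 1
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # every vertex was ordered => no loop => non-aftereffect holds
    return ordered == len(indegree)

# stage transitions forming a chain: dynamic programming applies
chain_ok = is_topologically_orderable("abc", [("a", "b"), ("b", "c")])
# a loop between states: the non-aftereffect property is violated
loop_ok = is_topologically_orderable("abc", [("a", "b"), ("b", "c"), ("c", "a")])
```

Here `chain_ok` is True and `loop_ok` is False, mirroring the criterion in the text.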
The divided sub-problems are sequentially related, and the current sub-problem can be solved given the solutions of relatively small sub-problems.

Selecting a state: the objective situation of the sub-problem is represented by the state, which must satisfy the non-aftereffect property.

Determining the state transfer equation: the process of determining the state of the current stage from the state of the previous stage and the decision of the current stage is called state transfer. Decisions are directly related to state transfer, and the state transfer equation for the problem can be written naturally once the range of decisions available in each stage is determined.

Finding the boundary conditions: the initial or end conditions of the iteration of the state transfer equation.

There is no standard model of dynamic programming that can be used for all problems; the algorithmic model varies from problem to problem, so problem-specific analysis is needed. When designing an algorithm using dynamic programming ideas, not sticking too closely to a design model often yields unexpectedly good results.

4.5. Examples of Dynamic Programming Algorithms: 0/1 Knapsack Problem

When solving a real problem with a dynamic programming algorithm, the first step is to build a dynamic programming model, which generally requires the following steps:
Analyze the problem and define the characteristics of the optimal solution.
Divide the problem into phases and define the calculation objective of each phase.
Solve the phase conclusions, form a decision mechanism, and store the knowledge set.
Construct an optimal solution based on the information obtained when calculating the optimal value.
Design the program and write the corresponding code.

The optimal solution is to select n items (0 ≤ n ≤ N) so that V is maximum. The knapsack problem is an N-stage problem with j sub-problems in each stage; the state is given by the remaining capacity C = j after deciding on the first N = i items. The decision function is f(i, j), and the analysis shows that the decision follows the equation below, where wi and vi are the weight and value of the ith item; this is the core of the decision:

f(i, j) = max( f(i − 1, j − wi) + vi, f(i − 1, j) ),  if wi ≤ j
f(i, j) = f(i − 1, j),                                if wi > j

When wi ≤ j, f(i, j) takes the maximum of f(i − 1, j − wi) + vi and f(i − 1, j); when wi > j, the ith item cannot be put in, so f(i, j) = f(i − 1, j).

In the equation, f(i − 1, j) and f(i − 1, j − wi) are already solved, so f(i, j) can be calculated [6].

5. COMPARISON OF DYNAMIC PROGRAMMING ALGORITHM AND GREEDY ALGORITHM

Both dynamic programming algorithms and greedy algorithms are classical recursive algorithms for solving optimization problems, and both derive the global optimal solution from local optimal solutions, which makes them similar. However, there are significant differences between them.

Each decision step of the greedy algorithm cannot be changed and becomes a definitive step in the final decision solution, as shown in the equation below:

f(xn) = Vi

The global optimal solution of the dynamic programming algorithm must contain some local optimal solution, but the optimal solution of the current state does not necessarily contain the local optimal solution of the previous state. Unlike the greedy algorithm, dynamic programming needs to calculate the optimal solution of each state (each step) and save it for reference in subsequent state calculations.

The greedy algorithm outperforms the dynamic programming algorithm in terms of time complexity and space complexity, but the "greedy" decision rule (decision basis) is difficult to determine, i.e., the selection of the Vi function, so different decision bases may lead to different conclusions, affecting the generation of optimal solutions.

The dynamic programming algorithm can solve eligible problems in limited time, but it requires a large amount of space because it needs to store the computed results temporarily. Although a single sub-problem solution can be shared by all problems containing the same sub-problem, this advantage of dynamic programming comes at the expense of space. The space pressure is heightened by the need for efficient access to existing results and by the fact that the data cannot easily be compressed and stored. The high time efficiency of dynamic programming is most apparent on large test data. Therefore, how to solve the space overflow problem without affecting the operation speed is a hot issue for dynamic programming in the future.

6. CONCLUSION

As with greedy algorithms, in dynamic programming the solution to a problem can be viewed as the result of a series of decisions. The difference is that in a greedy algorithm, an irrevocable decision is made every time the greedy criterion is applied, whereas in dynamic programming, it is also examined whether each optimal sequence of decisions contains an optimal subsequence. When a problem has an optimal substructure, we think of using dynamic programming to solve it, but some problems have simpler, more efficient solutions if we always make what seems to be the best choice at the moment. The choices made by the greedy algorithm can depend on previous choices, but never on future choices or on the solutions to sub-problems.
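As a concrete sketch of the comparison above (illustrative code, not from the paper): the recurrence f(i, j) = max(f(i − 1, j − wi) + vi, f(i − 1, j)) from Section 4.5, applied to the Table 1 data with capacity 90, yields the true 0/1 optimum of 185, at the cost of filling an (N + 1) × (C + 1) table, which is the space expense discussed in Section 5.

```python
def knapsack_01(items, capacity):
    """Dynamic programming for the 0/1 knapsack: f[i][j] is the best value
    achievable using the first i items with remaining capacity j."""
    n = len(items)
    f = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (name, w, v) in enumerate(items, start=1):
        for j in range(capacity + 1):
            if w <= j:
                # either put item i in (gaining v) or leave it out
                f[i][j] = max(f[i - 1][j - w] + v, f[i - 1][j])
            else:
                f[i][j] = f[i - 1][j]  # item i does not fit
    return f[n][capacity]

# Table 1: (name, weight, value)
items = [("A", 10, 50), ("B", 30, 45), ("C", 40, 60),
         ("D", 20, 20), ("E", 10, 30), ("F", 20, 40)]

best = knapsack_01(items, 90)   # 185, e.g. items A, B, C, E
```

Note that 185 matches the minimum-weight greedy result on this particular instance, while the 195 of the unit-value criterion is attainable only when items may be split.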
REFERENCES
1. Chu, P., & Beasley, J. (1998). A Genetic Algorithm
for the Multidimensional Knapsack Problem. Journal
of Heuristics, 4(1), 63–86.
https://doi.org/10.1023/a:1009642405419
2. Martello, S., & Toth, P. (1987). Algorithms for
Knapsack Problems. North-Holland Mathematics
Studies, 132, 213–257.
https://doi.org/10.1016/S0304-0208(08)73237-7
3. Vince, A. (2001). A framework for the greedy
algorithm. Discrete Applied Mathematics.
https://doi.org/10.1016/S0166-218X(01)00362-6
4. Wolsey, L. A. (1982). An analysis of the greedy
algorithm for the submodular set covering problem.
Combinatorica, 2(4), 385–393.
https://doi.org/10.1007/bf02579435
5. Eddy, S. R. (2004). What is dynamic programming?
Nature Biotechnology, 22(7), 909–910.
https://doi.org/10.1038/nbt0704-909
6. Rahwan, T., & Jennings, N. (2008). An Improved
Dynamic Programming Algorithm for Coalition
Structure Generation.
https://aamas.csc.liv.ac.uk/Proceedings/aamas08/pro
ceedings/pdf/paper/AAMAS08_0192.pdf